The Hidden Dangers of Vibe Coding: Why Businesses Should Think Twice Before Shipping AI-Generated Code
Vibe coding tools like Lovable and v0 have taken the development world by storm. The pitch is simple: describe what you want in plain English and let AI build it for you. No coding experience required. For startups, solo founders, and non-technical teams, it sounds like the ultimate shortcut.
But beneath the speed and convenience lies a growing body of evidence that vibe-coded applications are riddled with security flaws, architectural weaknesses, and hidden liabilities. If your business is considering putting AI-generated code into production, here is what you need to know.
The Numbers Do Not Lie
Veracode's 2025 GenAI Code Security Report analysed code produced by over 100 large language models across 80 real-world coding tasks. Their finding was stark: 45% of AI-generated code introduced security vulnerabilities classified within the OWASP Top 10, the definitive list of the most critical web application security risks. Perhaps more concerning, this figure has not improved over time. Newer, larger models are no more secure than their predecessors.
Java was the riskiest language tested, with a security failure rate exceeding 70%. Python, C#, and JavaScript were not far behind, with failure rates between 38% and 45%. Cross-site scripting defences failed in 86% of relevant code samples. These are not obscure edge cases. These are the vulnerabilities that attackers actively exploit every single day.
Lovable: A Case Study in What Can Go Wrong
In March 2025, security researcher Matt Palmer discovered a critical vulnerability in apps built with Lovable. By simply modifying REST API requests, he could access entire user databases without any authentication. The flaw stemmed from missing or misconfigured Row Level Security (RLS) policies in Lovable's integration with Supabase.
Palmer reported the issue responsibly. Lovable acknowledged receipt but failed to act. When a Palantir engineer independently discovered the same vulnerability in April and publicly demonstrated it, the scale of the problem became clear. A subsequent scan of 1,645 Lovable-built applications found that 170 of them had critical security flaws exposing personal data, API keys, payment records, and home addresses to anyone who cared to look.
The vulnerability was assigned CVE-2025-48757 with a CVSS score of 9.3 out of 10. Even after Lovable released a security scanning feature, researchers found it only checked whether RLS policies existed, not whether they actually worked. Follow-up testing showed that removing an authorisation header from requests still bypassed all access controls on sites maintained by Lovable's own employees.
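The probe the researchers describe is simple enough to sketch. The snippet below is a minimal, hedged illustration, not their actual tooling: it sends the same request with and without an Authorization header and reports whether access control actually holds. The URL and token are placeholders for your own API, and the fetcher is injectable so the check can be exercised against a stub as well as the real `fetch`.

```typescript
// Shape of the only fetch behaviour we rely on: a URL, optional headers,
// and an ok flag on the response.
type Fetcher = (
  url: string,
  init?: { headers?: Record<string, string> },
) => Promise<{ ok: boolean }>;

// Returns true only if the authenticated request succeeds AND the
// unauthenticated one is rejected.
async function authIsEnforced(
  url: string,
  token: string,
  fetcher: Fetcher,
): Promise<boolean> {
  const withAuth = await fetcher(url, {
    headers: { Authorization: `Bearer ${token}` },
  });
  const withoutAuth = await fetcher(url); // header stripped, as in the attack
  return withAuth.ok && !withoutAuth.ok;
}
```

Against a deployment with the flaw described above, both requests succeed and the check reports false. It is a five-minute test that would have caught every one of the 170 exposed applications.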
v0: Not Immune Either
Vercel's v0 has positioned itself as the more security-conscious option in the vibe coding space, and to their credit, they have invested in security scanning that has blocked over 100,000 insecure deployments. However, the platform has had problems of its own.
In July 2025, Okta's threat intelligence team reported that attackers were using v0 to rapidly generate convincing phishing pages, hosting them on Vercel's own infrastructure to exploit the trust associated with the platform. The tool that was supposed to democratise development was being weaponised for credential theft at scale.
Even setting aside malicious use, v0 itself acknowledges that generated code is untrusted. Their own documentation states that generated output may contain bugs, be incomplete, or be unsuitable for your use case. The responsibility for evaluating and securing the output falls entirely on the user.
Why AI-Generated Code Is Inherently Risky
The fundamental problem is not that these tools are poorly built. It is that large language models are optimised to produce code that works, not code that is secure. They learn from billions of lines of public code, much of which contains outdated patterns, insecure practices, and known vulnerabilities. When an LLM generates a database query, it can reproduce a textbook SQL injection flaw just as readily as a secure, parameterised query.
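The difference between those two outputs is only a few characters, which is exactly why it slips through. A contrived sketch of the two patterns side by side (the `users` table is hypothetical, and the `$1` placeholder style is the convention used by drivers such as node-postgres):

```typescript
// Unsafe: user input is interpolated straight into the SQL string,
// so it becomes part of the query itself.
function findUserUnsafe(email: string): string {
  return `SELECT * FROM users WHERE email = '${email}'`;
}

// Safe: the query text and its parameters travel separately; the driver
// treats the input as inert data, never as SQL.
function findUserSafe(email: string): { text: string; values: string[] } {
  return { text: "SELECT * FROM users WHERE email = $1", values: [email] };
}

const hostile = "' OR '1'='1";
// Unsafe: the WHERE clause now matches every row in the table.
console.log(findUserUnsafe(hostile));
// Safe: the hostile string never touches the query text.
console.log(findUserSafe(hostile).text);
```

Both functions "work" on well-behaved input, so nothing in a quick demo distinguishes them. Only a reviewer who knows what to look for catches the first one.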
Common issues found in vibe-coded applications include:
- Hardcoded API keys and secrets embedded directly in frontend code, visible to anyone who opens browser developer tools
- Missing input validation, leaving applications wide open to injection attacks and cross-site scripting
- Client-side authentication logic that can be trivially bypassed by modifying requests
- Outdated or fictitious dependencies, with AI models sometimes referencing libraries that do not exist, creating opportunities for supply chain attacks through a technique known as slopsquatting
- No logging or error handling, making it nearly impossible to detect or investigate a breach
- Missing security headers such as Content Security Policy, HSTS, and X-Frame-Options
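The last item on that list is also one of the cheapest to fix. As a hedged sketch, here is how those headers can be applied on a bare Node server; the header values are illustrative defaults, and a real Content-Security-Policy has to be tuned to the assets your application actually loads:

```typescript
import { createServer } from "node:http";

// Illustrative values only; tighten or relax per application.
const securityHeaders: Record<string, string> = {
  "Content-Security-Policy": "default-src 'self'",
  "Strict-Transport-Security": "max-age=63072000; includeSubDomains",
  "X-Frame-Options": "DENY",
  "X-Content-Type-Options": "nosniff",
};

// Works with anything that exposes setHeader, including Node's ServerResponse.
function applySecurityHeaders(res: {
  setHeader: (name: string, value: string) => void;
}): void {
  for (const [name, value] of Object.entries(securityHeaders)) {
    res.setHeader(name, value);
  }
}

// Usage: attach the headers to every response.
const server = createServer((_req, res) => {
  applySecurityHeaders(res);
  res.end("ok");
});
```

Frameworks offer the same thing more declaratively (middleware in Laravel, the headers configuration in Next.js), but in our experience vibe-coded output rarely includes any of it unless explicitly prompted.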
A study from METR, an organisation that evaluates AI models, found that experienced open-source developers were actually 19% slower when using AI coding tools, despite predicting beforehand that the tools would make them 24% faster. The perception of productivity does not always match reality.
The Real Cost of Cutting Corners
The appeal of vibe coding is understandable. Why spend weeks building something when you can have a working prototype in an afternoon? But a prototype and a production application are fundamentally different things. The gap between "it works" and "it is secure, maintainable, and fit for purpose" is where businesses get into trouble.
Consider what a data breach actually costs. Under GDPR, fines can reach up to 4% of global annual turnover or 20 million euros, whichever is higher. Beyond the financial penalty, there is the reputational damage, the loss of customer trust, and the operational disruption of incident response and remediation. The money saved by skipping proper development is a rounding error compared to the cost of getting it wrong.
There is also the question of maintainability. Vibe-coded applications are notoriously difficult to maintain because no one fully understands the code. When something breaks, and it will, debugging becomes a guessing game. Knowledge silos form instantly because the only "documentation" is a series of prompts that may or may not reproduce the same output.
What Businesses Should Do Instead
None of this means AI has no place in software development. Used responsibly, AI coding assistants can be valuable tools in the hands of experienced developers who understand what the code is doing and can evaluate it critically. The key distinction is between using AI as a typing assistant and blindly trusting it to make architectural and security decisions on your behalf.
If you are building software that handles customer data, processes payments, or powers any part of your business operations, here is our advice:
- Work with developers who understand the stack. We build with Laravel, React, and NextJS because we know these frameworks inside out. Every line of code is written with intent and reviewed with care.
- Treat all AI-generated code as untrusted. It should go through the same rigorous review process as any third-party code before it gets anywhere near production.
- Invest in security from day one. Bolting security on after the fact is always more expensive and less effective than building it in from the start.
- Do not confuse a prototype with a product. Vibe coding tools can be useful for exploring ideas and building throwaway demos. They are not a substitute for proper software engineering.
- Keep your software up to date. Security patches exist for a reason. Falling behind on updates is one of the most common ways businesses expose themselves to attack.
The Bottom Line
Vibe coding tools are impressive in what they can produce quickly, but speed without security is a liability, not an advantage. When nearly half of all AI-generated code contains known vulnerabilities, "prompt and ship" is not a development strategy. It is a risk your business cannot afford to take.
At The API Guys, we build APIs and web applications for businesses that take their software seriously. If you would like to talk about your next project, or if you are concerned about the security of code you have already shipped, get in touch.
