AI coding tools like Cursor, Lovable, Bolt, and Replit Agent can scaffold an entire application in minutes. That's genuinely impressive. But "it works" and "it's safe" are two very different bars to clear.

In the vibe-coded apps I've reviewed, the same security issues keep showing up. These aren't edge cases. They're patterns: consistent blind spots in how AI generates code.

Here are the seven that come up most often.

1. Exposed API Keys and Secrets

AI tools frequently place API keys, database connection strings, and service credentials directly in client-side code. Sometimes they end up in environment variables that get bundled into the frontend build. Sometimes they're hardcoded in the source.

The model is focused on making the feature work. An API key in the code means the API call succeeds. It doesn't reason about what happens when someone opens DevTools.
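
The safer pattern keeps the secret on the server and has the browser call your own backend, which talks to the third-party API. A minimal sketch, assuming a Node-style backend; WEATHER_API_KEY and the upstream URL are illustrative names, not from any specific app:

```typescript
// The browser calls /api/data on your backend; only the backend builds the
// upstream request. The key lives in server env vars and never reaches the bundle.
function buildUpstreamRequest(
  env: Record<string, string | undefined>
): { url: string; headers: Record<string, string> } {
  const key = env["WEATHER_API_KEY"]; // read server-side only
  if (!key) {
    throw new Error("WEATHER_API_KEY must be set on the server");
  }
  return {
    url: "https://api.example.com/v1/data",
    headers: { Authorization: `Bearer ${key}` }, // key travels server-to-server
  };
}
```

One caveat worth knowing: bundler conventions like `NEXT_PUBLIC_` or `VITE_` prefixes deliberately inline those variables into the frontend build, so putting a secret in one of those is the same as hardcoding it.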

2. Missing or Broken Authentication

AI-generated auth is often either missing entirely or only implemented on the frontend. The AI builds features in isolation — prompting "add a dashboard page" gets you a page, but not necessarily any check for who should see it.

Client-side auth checks are a UX convenience, not a security boundary. If the server isn't verifying tokens, the API is open.
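
The boundary that matters is a server-side check on every protected route. A minimal sketch, assuming an opaque session-token store; in a real app this would be JWT verification or a session lookup, and all names here are illustrative:

```typescript
// token -> userId; stands in for a real session store or JWT verifier.
const sessions = new Map<string, string>([["tok-abc", "user-a"]]);

type Response = { status: number; body: string };

function handleDashboard(authHeader: string | undefined): Response {
  const token = authHeader?.replace(/^Bearer /, "");
  const userId = token ? sessions.get(token) : undefined;
  if (!userId) {
    // The server, not the client, decides who is logged in.
    return { status: 401, body: "Unauthorized" };
  }
  return { status: 200, body: `dashboard for ${userId}` };
}
```

Hiding the dashboard link in the UI is fine for UX, but the API handler itself has to refuse unauthenticated requests like this.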

3. No Authorization (Everyone's an Admin)

Even when authentication exists, authorization is almost always missing. Any logged-in user can access any other user's data. This is Insecure Direct Object Reference (IDOR), part of Broken Access Control, the top category in the OWASP Top 10.

The AI doesn't understand business rules. It doesn't know that User A shouldn't see User B's invoices. It builds the query, returns the data, and moves on.
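
The fix is to scope every lookup to the requesting user, so rows belonging to anyone else effectively don't exist for that query. A sketch with an in-memory store standing in for the database; all names are illustrative:

```typescript
type Invoice = { id: string; ownerId: string; total: number };

const invoices: Invoice[] = [
  { id: "inv-1", ownerId: "user-a", total: 120 },
  { id: "inv-2", ownerId: "user-b", total: 300 },
];

// Vulnerable: looks up by id alone, so any logged-in user can fetch any invoice.
function getInvoiceInsecure(invoiceId: string): Invoice | undefined {
  return invoices.find((inv) => inv.id === invoiceId);
}

// Fixed: the lookup is scoped to the requester; guessing ids gets you nothing.
function getInvoice(requesterId: string, invoiceId: string): Invoice | undefined {
  return invoices.find(
    (inv) => inv.id === invoiceId && inv.ownerId === requesterId
  );
}
```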

4. Supabase RLS Disabled or Misconfigured

This is especially common in the Lovable/Bolt/v0 ecosystem. Supabase Row Level Security enforces access rules at the database level — it's the last line of defense. But AI tools routinely create tables with RLS disabled, or write policies so permissive they don't actually protect anything.

A SELECT policy whose USING clause is simply true means any authenticated user can read every row in the table. That's not security. That's a checkbox.
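
For contrast, here's a sketch of a scoped policy in Supabase's Postgres dialect. The invoices table and user_id column are illustrative names; auth.uid() is Supabase's built-in helper returning the current user's id:

```sql
-- New tables need RLS enabled explicitly:
alter table invoices enable row level security;

-- Too permissive: every authenticated user can read every row.
-- create policy "read all" on invoices for select using (true);

-- Scoped: a user can only read rows they own.
create policy "read own invoices"
  on invoices for select
  using (auth.uid() = user_id);
```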

5. No Input Validation

AI-generated code tends to trust all input. Form data, URL parameters, API request bodies — it flows straight into database queries and business logic without validation or sanitization.

The AI optimizes for the happy path. It doesn't generate code for the adversarial path — the one where someone sends malformed or malicious input. This opens the door to SQL injection, XSS, and other attacks that have been in the OWASP Top 10 for over a decade.
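
The fix is to validate at the boundary, before anything reaches a query or business logic. A hand-rolled sketch; in practice a schema library is more common, and the Signup shape here is illustrative:

```typescript
type Signup = { email: string; age: number };

// Returns a typed, validated object or null; nothing unvalidated passes through.
function parseSignup(body: unknown): Signup | null {
  if (typeof body !== "object" || body === null) return null;
  const { email, age } = body as Record<string, unknown>;
  if (typeof email !== "string" || !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email)) {
    return null;
  }
  if (
    typeof age !== "number" ||
    !Number.isInteger(age) ||
    age < 0 ||
    age > 150
  ) {
    return null;
  }
  return { email, age };
}
```

Parameterized queries still matter for SQL injection, but rejecting malformed input up front shrinks the attack surface for everything downstream.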

6. Broken Error Handling

Error handling in vibe-coded apps tends to be either completely absent or overly verbose. Both are problems.

No error handling means crashes and leaked stack traces. Verbose error handling returns database column names, file paths, or query details to the client. Either way, it gives anyone poking around a clearer picture of your internals than they should have.

7. Insecure Deployment Configuration

The code might be fine, but the deployment configuration creates gaps: wide-open CORS, missing security headers, debug mode in production, no rate limiting, default credentials.

Deployment config is outside the scope of most AI coding prompts. Nobody asks the AI to make sure CORS is locked down. These issues tend to be invisible until something goes wrong.


The Bigger Picture

None of these issues are unique to AI-generated code. Human developers make these mistakes too. But vibe coding means they accumulate faster, across more surface area, with less review.

The solution isn't to stop using AI tools. They're productive and they're here to stay. The solution is to pair that speed with a review step before shipping. Build fast, then verify.

Ready for a Professional Review?

I offer comprehensive vibe code reviews with a 48-hour turnaround. Get a detailed security and quality report with prioritized fix recommendations.