Vibe Coding Has a Hangover
You shipped in two hours. Congrats. Now your users’ data is all over the internet.
Hey all, Real John here… First off - GO KNICKS! Ok, moving on… this week I’m talking about security and AI. Even slightly better security beats the vibe-coded insanity that’s out there. Enjoy the read, and hopefully go check your code for API keys and more. 8 of Diamonds - if you know you know. Thanks for reading! Let’s chat more - IN REAL LIFE.
I love vibe coding. I really do. I built Cash Critters with AI tools, I’m building StatusSage with them, and I preach the gospel of shipping fast with a $50/month stack every single week in this newsletter.
So when I tell you vibe coding has a serious problem right now, I need you to hear it — not as a critic, but as someone who’s all in on these tools and doesn’t want to watch them get buried by a wave of self-inflicted disasters.
Because that wave? It’s already here.
The Party Was Great. Someone Left the Door Open.
Moltbook launched January 28th, 2026. Big hype. AI social network. The founder proudly announced he “didn’t write a single line of code.” Within three days, 1.5 million API keys were exposed. Thirty-five thousand emails. Gone. The entire database was publicly accessible because the AI never enabled Row Level Security — and no human ever checked.
Three. Days.
That’s not a one-off. Researchers at Escape.tech scanned over 1,400 vibe-coded production apps and found that 65% had security issues and 58% had at least one critical vulnerability. Georgia Tech’s Vibe Security Radar tracked 6 CVEs attributable to AI-generated code in January 2026. By March? Thirty-five. In a single month. And researchers estimate the real number is 5 to 10 times higher because most AI tools don’t leave identifiable commit metadata.
Meanwhile, Veracode tested over 100 AI models on security-sensitive coding tasks. Forty-five percent of AI-generated code introduced OWASP Top 10 vulnerabilities. Not obscure edge cases. The classics. XSS. SQL injection. Hardcoded secrets. The stuff every developer learns to avoid in year one.
The AI doesn’t know what it doesn’t know. And if you don’t know either, you just shipped a vulnerability to production with confidence.
Why This Is Happening (And It’s Not the Tools’ Fault)
Here’s the uncomfortable truth: the tools are working exactly as designed.
AI coding tools optimize for making the error message go away. They generate code that satisfies the stated requirement. The problem is “make a login form” and “make a secure login form” are two completely different prompts — and most people only type the first one.
Columbia University researchers put it plainly: AI agents will remove validation checks, relax database policies, and disable authentication flows just to resolve a runtime error. Not because they’re malicious. Because they’re optimizing for the output you asked for, not the ten other things you forgot to ask for.
An experienced developer writes secure-by-default because of thirty years of scar tissue. The AI has no scar tissue. It has patterns from training data — much of which is legacy code written before anyone cared about security best practices.
The code looks right. It runs fine in the demo. It falls apart the moment a real user touches it in a way you didn’t anticipate.
What You Actually Need to Do
I’m not telling you to stop using these tools. That’s not the answer, and I’d be a hypocrite if I said it was. The answer is to stop treating AI output as finished code.
Think of it like hiring a brilliant but overconfident intern. You wouldn’t let that intern push directly to production without a review. Same rule applies here.
The non-negotiables before you ship anything real:
Row Level Security is not optional. If you’re using Supabase, Postgres, or any row-level permission system — check it yourself. The AI almost certainly skipped it.
Search your own codebase for your API keys. Seriously, do it right now. Run `grep -r "sk-" .` and hold your breath.
Authenticated vs. unauthenticated testing. Manually verify that logged-out users cannot access logged-in content. This single test would have caught the majority of the documented failures this year.
No AI-generated auth code in production without review. Authentication is the one area where “it looks fine” is not a sufficient standard.
Run a dependency scan. About 20% of AI-generated code references packages that don’t exist — and attackers are registering those hallucinated names as real malicious packages. It’s called slopsquatting. It’s real and it’s accelerating.
None of this requires a security team. It requires thirty minutes and the willingness to look.
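A few of these checks fit in a handful of shell lines. For the RLS check, here’s a minimal sketch, assuming a Postgres or Supabase database reachable through a `DATABASE_URL` (the variable name is a placeholder). Postgres records the flag in `pg_class.relrowsecurity`, so you can list every table where RLS was never turned on:

```shell
# Query: every ordinary table in the public schema plus its RLS flag.
# Run as: psql "$DATABASE_URL" -t -A -c "$RLS_QUERY" | flag_missing_rls
RLS_QUERY="SELECT c.relname, c.relrowsecurity
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
  WHERE n.nspname = 'public' AND c.relkind = 'r';"

# psql -t -A prints lines like 'users|f'; print any table where the flag is f.
flag_missing_rls() {
  awk -F'|' '$2 == "f" { print $1 }'
}
```

On Supabase, any table this prints is typically readable straight through the public anon key, which is exactly the Moltbook failure mode.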
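The `grep -r "sk-"` one-liner in the checklist only catches OpenAI-style keys. A slightly wider sweep, using common published key formats (AWS access keys, GitHub tokens, PEM private-key headers), is barely more work; treat the pattern list as a starting point, not a complete one:

```shell
# Wider secret sweep over a source tree, skipping vendored/VCS dirs.
# Patterns: OpenAI (sk-), AWS access keys (AKIA), GitHub tokens (ghp_),
# and PEM private-key headers. Add your own providers' prefixes.
scan_secrets() {
  grep -rnE \
    'sk-[A-Za-z0-9]{20,}|AKIA[0-9A-Z]{16}|ghp_[A-Za-z0-9]{36}|BEGIN (RSA |EC )?PRIVATE KEY' \
    --exclude-dir=.git --exclude-dir=node_modules "$1"
}
```

Anything it prints belongs in environment variables or a secrets manager, and the exposed key gets rotated, not just deleted from the file.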
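The logged-out test is one unauthenticated request per protected route. A sketch with `curl` (the example URL is a placeholder; point it at your own routes):

```shell
# Classify the HTTP status a logged-out client receives.
classify_status() {
  case "$1" in
    200)     echo "EXPOSED" ;;         # content served with no auth at all
    401|403) echo "protected" ;;       # auth required, as it should be
    *)       echo "check-manually" ;;  # redirects, errors: look yourself
  esac
}

# Hit a protected route with no cookies or tokens and classify the result.
check_route() {
  classify_status "$(curl -s -o /dev/null -w '%{http_code}' "$1")"
}
# Example (placeholder URL): check_route "https://yourapp.example/api/me"
```

Run it against every route that should require a login; any "EXPOSED" line is a ship-blocker.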
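For the dependency scan, here’s a sketch assuming a Python project with a `requirements.txt`. It strips each line down to a bare package name, then asks the registry whether that name actually exists; `pip index versions` does the lookup without installing anything (it’s flagged experimental in pip, but present in recent releases):

```shell
# Strip version pins, comments, and blank lines: one bare name per line.
list_packages() {
  sed -e 's/#.*//' -e 's/[<>=!~[:space:]].*//' -e '/^$/d' "$1"
}

# Any name the registry doesn't know is a typo -- or a hallucinated
# package a slopsquatter may have registered by the time you install.
check_packages() {
  for pkg in $(list_packages "$1"); do
    pip index versions "$pkg" >/dev/null 2>&1 \
      || echo "SUSPECT: $pkg not found on PyPI"
  done
}
```

Swap in `npm view <pkg> version` for a Node project; the idea is identical.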
The speed is real. The productivity gains are real. I’m not giving those up and you shouldn’t either. But “I shipped in two hours” is only a flex if the thing you shipped doesn’t leak your users’ bank account data three days later.
Ship fast. Review what you shipped. Those two things are not in conflict — only the hangover makes them feel that way.
Go build something amazing. Just check the locks before you open the doors.
John Mann is the founder of Startups and Code LLC, a software engineering executive, and the builder behind Cash Critters — financial literacy for kids, built for $50/month because constraints are a feature, not a bug. Subscribe for weekly takes on AI, startups, and building things that matter.