10 lessons after one year of vibe-coding

building in Bali

hey,

I’m writing this from Bali. Been here for a bit — catching my breath, integrating everything that’s happened over the past year.

After a year of building, writing guides for you, breaking, and rebuilding startups across Milan, Berlin, and Paris, I finally slowed down enough to look back and connect the dots.

It’s been wild: startups, a few thousand users, and more caffeine than sleep. Some projects grew fast, some crashed hard, but every single one taught me something about building with AI — for real, not for hype.

Because the truth is, AI building in 2025 feels different now.

The hype is fading. The productivity illusion is cracking. But underneath the noise, the opportunity’s still massive — if you learn how to lead the machine instead of just prompting it.

Let’s get into it.

10 lessons after one year of vibe-coding

The productivity illusion is real — and you're probably in it

Before you write another line of AI-generated code, know this: a July 2025 study found developers using Cursor with Claude were 19% slower than coding without AI — but believed they were 20% faster.

The trap:

  • You feel productive because code appears fast

  • But you're not reading diffs anymore

  • You're stacking invisible tech debt

  • Security vulnerabilities are 2.5x higher in AI code

  • Code duplication is now 10x worse than in 2020

The fix:

  • Treat AI like a caffeinated junior dev, not a senior engineer

  • YOU are the architect. AI fills gaps.

  • Read every diff. Every time.

  • If you can't explain the code to someone else, don't ship it.
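To make "read every diff" a habit rather than a slogan, here's a sketch of the review ritual as shell. The demo builds a throwaway repo so it runs as-is; in practice you'd run the last three commands inside your own project.

```shell
# A diff-review ritual, sketched in a throwaway repo so it runs as-is;
# in real life, run the review commands inside your own project.
set -e
repo=$(mktemp -d); cd "$repo"; git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m init
echo 'print("hi")' > app.py   # stand-in for code the AI just generated
git add -N app.py             # intent-to-add, so new files appear in diffs
git diff --stat               # 1) which files changed, and by how much?
git diff                      # 2) read every hunk, not just the summary
# 3) then stage hunk-by-hunk with `git add -p` so nothing ships unread
```

The `git add -N` trick matters: without it, files the AI just created are invisible to `git diff`, which is exactly how unread code sneaks into commits.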

Claude Sonnet 4.5 changed the game (but not how you think)

Sonnet 4.5 is the current king for coding. It can work autonomously for 30+ hours. GitHub Copilot uses it. Cursor swears by it. Devin saw an 18% boost.

But here's what matters:

  • Use Sonnet 4.5 for complex architecture decisions

  • Drop to Sonnet 3.7 or Codex for routine tasks to save costs

  • 200K context window sounds huge, but "lost in the middle" problem is real

  • Even the best model needs structure — which brings us to...

PRDs aren't optional anymore — they're your survival kit

The vibe coding era is over. September 2025 brought the "hangover" — senior engineers citing "development hell" with unreviewed AI code.

Your new workflow:

  • Write a real PRD (here’s the guide on that) before touching Cursor (product.md in root)

  • Describe what you're building, why, and with what tools

  • Keep it as your compass — AI loses context fast

  • Reference it constantly

  • Update it weekly

Why this matters: AI thrives with constraints. A PRD is your constraint system.
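As a concrete starting point, here's a minimal product.md skeleton. The section names are my own convention, not a standard; rewrite them to fit your product.

```markdown
# product.md: the PRD your AI reads before it writes anything

## What we're building
One paragraph. If you can't write it, you can't prompt it.

## Why
The user problem, and the metric that proves we solved it.

## Stack & constraints
Frameworks, hosting, auth provider, hard no-gos
(e.g. "no new dependencies without approval").

## Out of scope (for now)
The features the AI is NOT allowed to wander into.

## Changelog
- (date): created
```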

Vibe code for MVPs, architect for products

Here's the truth nobody wants to hear: Base44's founder sold the company for $80M while shipping 13 times per day with vibe coding. But those Lovable apps? 10% had security vulnerabilities.

The split:

  • Vibe coding works for: 48-hour MVPs, throwaway prototypes, testing ideas

  • Real engineering required for: anything touching money, user data, or scaling past 100 users

My rule: Vibe code to validate. Refactor to scale. Ship to learn.

Your debug ritual determines your survival rate

AI will break things. Cursor will hallucinate features. Claude will rewrite your entire auth flow when you asked it to fix a typo.

The anti-chaos system:

  • Keep a debug-log.md at root

  • Scope every bug before opening AI chat

  • Never say "fix this" — say "investigate, suggest 3 solutions, wait for approval"

  • Test manually after EVERY AI fix

  • Branch in Git before every debug session

  • Short chats (50 messages max) > one eternal god chat

The trap: Asking AI to "fix the whole thing" is like asking a blender to cook dinner. That’s why I wrote the AI Debugging & Refactor Playbook.
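The ritual above can be captured as a debug-log.md entry template. The bug and field names below are illustrative, adapt them to your project:

```markdown
## (date): login redirect loop
- Scope: auth callback only; do NOT touch session storage
- Repro: sign in with Google, get bounced back to /login
- Prompt used: "Investigate, suggest 3 solutions, wait for approval"
- Chosen fix: option 2 (wrong redirect URI)
- Manual test: pass
- Branch: fix/login-redirect
```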

Cursor features you're sleeping on (October 2025 edition)

Everyone uses Cursor. Most people use 20% of it.

Actually game-changing features:

  • Plan Mode — makes Cursor write detailed plans before coding (prevents rogue rewrites)

  • Browser Controls — debug UI issues by taking screenshots, not guessing

  • Background Agents — run tasks on remote machines (2x faster)

  • .cursorrules — define project-specific rules (this is your senior engineer)

  • Commands — reusable prompts teams can share via deeplinks

Pro move: Combine .cursorrules with a solid PRD. It's like giving Cursor a brain transplant.
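For reference, here's the shape a .cursorrules file can take. Every rule below is an example of the kind of constraint I mean, not a recommended set; write yours to match your stack.

```text
You are a senior engineer on this project. Follow these rules:

- Read product.md before proposing any change.
- Never modify auth, billing, or migrations without asking first.
- Prefer editing existing files over creating new ones.
- No new dependencies without explicit approval.
- After every change, list the files you touched and why.
- If a task is ambiguous, ask one clarifying question instead of guessing.
```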

The minimal viable AI stack (nothing more, nothing less)

You don't need 47 tools. You need 5.

The core:

Total cost for a serious solo builder: ~$415/month

  • $80 AI tools

  • $300 design work (Upwork when needed) or 21st.dev

  • $35 learning resources

Everything else is distraction.

Security is your Achilles heel

AI-generated code has 322% more privilege escalation paths. Hard-coded credentials up 40%. CVSS 7.0+ vulnerabilities are 2.5x higher.

The horror story: Amazon Q Extension shipped a poisoned update that could DELETE local files and SHUT DOWN AWS EC2 instances.

Your defense layers:

  • Manual review for ALL auth, data processing, external connections

  • Run static analysis constantly, not just at deploy

  • Never trust AI with security-critical code

  • Stop measuring productivity by commit count

Reality check: AI-assisted commits merge 4x faster — meaning insecure code bypasses normal review cycles.
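One way to run static analysis constantly instead of at deploy is to wire a scanner into CI. A sketch assuming GitHub Actions and Semgrep; the advice is scanner-agnostic, so swap in whatever tool you actually use:

```yaml
# .github/workflows/scan.yml: static analysis on every push, not just at deploy
name: static-analysis
on: [push, pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install semgrep
      - run: semgrep scan --config auto --error .   # non-zero exit on findings
```

The `--error` flag is what gives this teeth: findings fail the build, so insecure AI code can't ride a fast merge past review.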

Weekly hygiene or weekly hell

Tech debt builds at AI speed. You'll MVP fast, but the mess scales faster.

Your weekly ritual (30 minutes, saves 30 hours):

  • Run codebase cleanup

  • Delete temp files

  • Reorganize folder structure

  • Update .cursorrules

  • Refactor one messy component

  • Update your deployment manual

The deployment manual: Document EXACTLY how to ship. Which branch, which env vars, which bodies are buried. You will forget. Cursor will forget. This file saves you at 2am.
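Here's the shape of the deployment manual I mean. The specifics below are placeholders; yours will differ.

```markdown
# deploy-manual.md

## Ship from
`main` only. Never deploy from a feature branch.

## Env vars that must exist
Keep the full list of names here; values live in your secret manager.

## Steps
1. Run the test suite locally.
2. Check migrations: anything pending?
3. Deploy, then hit the health-check endpoint.

## Bodies buried
The cron job lives on a separate worker.
The webhook retries 3 times, then silently gives up.
```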

Your job is to lead the machine (not vibe with it)

The allocation economy is here. Junior dev roles are getting disrupted. Senior engineers are becoming "AI orchestrators."

The shift:

  • You delegate implementation to AI

  • You focus on architecture, design decisions, strategy

  • You maintain the vision while AI handles boilerplate

Critical framework: Plan → Act → Review → Repeat

  • Plan: Create detailed requirements before AI touches code

  • Act: Let AI generate in manageable chunks (not entire codebases)

  • Review: Mandatory human review for ALL snippets

  • Repeat: Iterate until it's actually right

The truth: Programmers who don't adapt won't be replaced by AI. They'll be replaced by programmers who leverage AI effectively.

I love this approach:

UNDERSTAND: What is the core question being asked?

ANALYZE: What are the key factors/components involved?

REASON: What logical connections can I make?

SYNTHESIZE: How do these elements combine?

CONCLUDE: What is the most accurate/helpful response?

Now answer: [YOUR ACTUAL QUESTION]

Building with AI isn’t about speed anymore.

It’s about clarity, architecture, and understanding how to lead the machine — not vibe with it. That’s what I now help founders and teams with.

If you’re scaling your product or trying to integrate real AI workflows into your stack, I do 1:1 sessions and custom implementation work as a Frontier AI Deployment Engineer.

Get the Full Agent Prompting Guide

The guide I've been working on covers:
– System prompt blueprints (drop-in templates)
– Mode playbooks for writing, coding, research, data extraction, and UI navigation
– Multi-agent orchestration patterns
– JSON schemas and validation strategies
– Compliance and safety gates
– QA checklists and troubleshooting flows

I'm still adding sections on LangChain, LangGraph, LangSmith, and other orchestration frameworks — when you need them, when you don't, and how to choose.

That’s where I’m at right now: building slower, thinking clearer, trying to make things that actually matter.

stay caffeinated.
lead the machines.
launch anyway.

—Miron