
how to build agents, memory wars, and a hackathon

hey,

so I disappeared for a bit. not dead — just deep in Milan, fighting off a cold at a hackathon, building startups, writing a guide on agent prompting, and watching the AI landscape shift faster than I can debug.

but we're back. and there's a lot to unpack.

this issue: Lovable actually got good, agent prompting that doesn't make you cry, and my slightly unhinged theory about why every app will soon have a "Login with Claude" button.

let's go.

Lovable: Hackathon in Milan (While Sick, Naturally)


I've been exploring the Italian tech scene — meeting founders, VCs, checking out what's happening here. Then I stumbled on a hackathon by Lovable. The format? Five hours. Solo. Recreate the core functionality of Shopify, Airbnb, or Slack using only Lovable.

I almost didn't go — I was sick as hell, barely standing. But I'm not built to skip things like this.

So I picked Slack. Built the whole thing in four hours: video calls, auth, channels, notifications, the full stack. Spent the fifth hour playing pool.

Did I win? No. Top 14 out of 100. And honestly? I thought it was kind of a scam at first. Out of the three winners, one project was legitimately impressive — solid tech, clean design, my respect. The second had awful design. The third literally didn't launch on stage.

Plot twist: a few days later, they told me I made top 14 and now I've got a call with Vento, one of Italy's biggest VC funds. So maybe losing isn't losing. Maybe it's just a longer game.

The real takeaway? Lovable actually impressed me. It's come a long way: fast, clean, functional, great for prototyping. But Cursor is still 100 steps ahead if you're building something scalable and production-ready. Lovable feels like a closed ecosystem. You can export the code, sure, but you don't get much control over what's happening under the hood. It's a prototyping tool, not your main dev environment.

Still — if you tried it six months ago and bounced, maybe give it another shot. It grew up.

Agent Prompting: You're the CEO, Not the Intern

I just finished a new guide on multi-agent workflows and agent architecture. It's dense, it's practical, and it's built from actual projects — not Twitter threads.

Here's the short version:

Agents aren't magic. They're just prompts with memory, tools, and a little autonomy. But that "little autonomy" is where people lose their minds.

The trick? You don't prompt agents like you prompt ChatGPT. You architect them.

  • Define their role clearly (e.g., "You are a debugging agent. You only investigate code issues. You don't refactor.")

  • Give them constraints (e.g., "Never change files outside /src/components")

  • Chain them carefully (Agent A researches → Agent B drafts → Agent C reviews)

Think of it like hiring a team. You wouldn't hire five "generalists" and hope for the best. You hire specialists and give them clear lanes.
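The role/constraints/chain pattern above can be sketched in a few lines of Python. A minimal sketch, not the guide's actual code: all the names here are hypothetical, and `call_model` stands in for whatever LLM client you use.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """One specialist: a role, hard constraints, one clear lane."""
    name: str
    role: str
    constraints: list[str] = field(default_factory=list)

    def system_prompt(self) -> str:
        # Role first, then each constraint on its own line
        return "\n".join([self.role, *(f"Constraint: {c}" for c in self.constraints)])

# Agent A researches -> Agent B drafts -> Agent C reviews
researcher = Agent("A", "You are a research agent. You only gather sources.",
                   ["Cite title + URL for every claim."])
drafter = Agent("B", "You are a drafting agent. You only write from A's notes.")
reviewer = Agent("C", "You are a review agent. You only critique B's draft.",
                 ["Never rewrite; list issues only."])

def run_chain(task: str, agents: list[Agent], call_model) -> str:
    """Each agent's output becomes the next agent's input."""
    context = task
    for agent in agents:
        context = call_model(agent.system_prompt(), context)
    return context
```

The point isn't the plumbing — it's that each agent's prompt is built from an explicit role plus explicit constraints, so the lanes are enforced by construction instead of vibes.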

Here's a quick example from the guide on mode blocks — how to structure task-specific directives:

modes:
  research: { directive: "Cite sources with title + URL; prefer primary sources." }
  coding: { directive: "Code first; then why/trade-offs; add test; state complexity." }
  data: { directive: "Return strict JSON per schema; never include prose." }
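One way to wire those mode blocks into an actual prompt builder, sketched under my own assumptions (the guide's real implementation may differ):

```python
# Mode directives as plain data: look up the mode, prepend its directive.
MODES = {
    "research": "Cite sources with title + URL; prefer primary sources.",
    "coding":   "Code first; then why/trade-offs; add test; state complexity.",
    "data":     "Return strict JSON per schema; never include prose.",
}

def build_prompt(mode: str, task: str) -> str:
    if mode not in MODES:
        raise ValueError(f"Undefined mode: {mode!r}")  # fail loudly, don't guess
    return f"[MODE: {mode}]\n{MODES[mode]}\n\nTask: {task}"
```

Keeping directives as data instead of hardcoded strings means adding a mode is a one-line change, and an undefined mode fails immediately instead of silently running without its rules.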

Another key principle: deterministic ambiguity. If the agent is <90% confident it understands the task, it should return an UNCLARIFIED stub with 2 questions instead of guessing.

The full guide goes way deeper — with prompts, examples, JSON schemas, exhaustion protocols, deduplication strategies, and actual workflows I use. I'll drop the link at the end.

But the core lesson: lead the agents. Don't just unleash them and pray.

The Big Shift: We're Moving Into the Memory Era

Okay, this one's important.

I think we're watching a massive platform shift happen in real-time — and most people are too busy arguing about AGI timelines to notice.

Claude and ChatGPT are both racing toward the same endgame: becoming your central AI hub.

Not just a chatbot. Not just a coding assistant. Your entire AI operating system.

Here's what's happening:

  1. Memory layers are getting serious. Claude remembers your preferences. ChatGPT remembers your projects. They're building persistent context across sessions.

  2. Tool integration is exploding. Google Drive, calendars, email, task managers — all plugging directly into your LLM. Why use ten apps when one AI can orchestrate them all?

  3. Browsers are in play. ChatGPT has a browser. Perplexity has a browser. They all run on Chromium, so I don't think we'll see a new dominant player challenge Google here — but it's another layer of consolidation.

  4. Identity is next. I'm calling it now: within 18 months, you'll see "Login with ChatGPT" and "Login with Claude" buttons everywhere. Just like "Login with Google" — but now your entire conversational history, preferences, and context travel with you.

This isn't science fiction. It's already starting.

Why This Matters (And Why I'm Paranoid)

I used to work on a proof-of-identity blockchain. Layer 1, zero-knowledge proofs, the whole decentralized dream. The idea was simple: in the AI age, you need verifiable identity because everything else can be faked.

Too hard. Too early. Maybe too idealistic.

But I'm thinking about it again — because if your entire digital life is stored in Claude's memory or ChatGPT's context window, who owns that? Who secures it? What happens when it leaks? Or gets subpoenaed? Or just... disappears?

Right now, your AI memory lives in markdown files or some company's encrypted database. That's fine for demos. But when it's powering your calendar, your emails, your business decisions? When it's the source of truth for who you are online?

We're going to need better answers.

I don't have a solution yet. But I know the problem is coming. Fast.

Get the Full Agent Prompting Guide

The guide I've been working on covers:
– System prompt blueprints (drop-in templates)
– Mode playbooks for writing, coding, research, data extraction, and UI navigation
– Multi-agent orchestration patterns
– JSON schemas and validation strategies
– Compliance and safety gates
– QA checklists and troubleshooting flows

I'm still adding sections on LangChain, LangGraph, LangSmith, and other orchestration frameworks — when you need them, when you don't, and how to choose.

Forward this to someone who's wrestling with AI agents or wondering why their chatbot keeps forgetting everything. Let's keep building smarter.

And if you've got thoughts on any of this — memory layers, agent workflows, hackathon conspiracy theories, or why Lovable still isn't Cursor — hit reply. I read everything.

Stay caffeinated.
Lead the machines.

—Miron