how to prompt like a pro (or at least not like a bot)

alright.
this week’s drop is about prompt engineering — but without the fluff.

no “ultimate 500 prompt pack.”
just raw strategy, chaos-tested workflows, and what actually works when you’re solo-building with AI.

1. Prompting is interface design

a prompt isn’t a wish. it’s a wireframe in words.

bad: build me a dashboard with login, stats, and user settings

better: you’re my react assistant. we’re building a dashboard in next.js. start with just the sidebar. use shadcn/ui. don’t write the whole file. I’ll ask for parts one by one.

scope small. give context. assign roles.
don’t write essays. don’t vibe your way into confusion.
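
if you want that structure on tap, here's a tiny sketch of it as a template helper. the type and field names are just my illustration, not any real API:

  // hypothetical helper: turns role / context / task / constraints into one prompt string
  type PromptSpec = {
    role: string;          // who the model should act as
    context: string;       // stack, current state, what we're building
    task: string;          // the one small thing to do right now
    constraints: string[]; // what NOT to do
  };

  function buildPrompt({ role, context, task, constraints }: PromptSpec): string {
    return [
      `you're ${role}.`,
      context,
      `task: ${task}`,
      "constraints:",
      ...constraints.map((c) => `- ${c}`),
    ].join("\n");
  }

  // the "better" prompt above, expressed as data
  const sidebarPrompt = buildPrompt({
    role: "my react assistant",
    context: "we're building a dashboard in next.js with shadcn/ui.",
    task: "write just the sidebar component.",
    constraints: ["don't write the whole file", "I'll ask for other parts one by one"],
  });

  console.log(sidebarPrompt);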

2. Waterfall prompting = guided discovery

some of the best prompts? not prompts. they’re mini-interviews.

example:

q1: what is y combinator?
q2: do they publish startup lists?
q3: what’s the best way to find all W23 startups?
q4: what trends show up across batches?
q5: if I wanted to build a local clone of the best idea from S22, what would that look like?

you just walked ChatGPT from “what is YC” to “help me build a startup idea from YC trends.”

same thing applies to coding:

q1: which files touch this component?
q2: what logic is shared across them?
q3: if I wanted to add a feature without breaking anything else, what’s the cleanest way?
q4: okay, let’s implement your idea — scoped and testable.

prompt like you’re talking to a junior dev. walk them there.
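
and if you ever want to script that walk, the idea is just "keep the history, ask the next question." a minimal sketch, assuming the official openai node sdk; the model name and questions are placeholders:

  import OpenAI from "openai";

  // waterfall prompting: each answer stays in the history and becomes
  // context for the next question.
  const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

  const questions = [
    "what is y combinator?",
    "do they publish startup lists?",
    "what's the best way to find all W23 startups?",
    "what trends show up across batches?",
  ];

  async function waterfall() {
    const messages: { role: "user" | "assistant"; content: string }[] = [];

    for (const q of questions) {
      messages.push({ role: "user", content: q });
      const res = await client.chat.completions.create({
        model: "gpt-4o", // placeholder
        messages,
      });
      const answer = res.choices[0].message.content ?? "";
      messages.push({ role: "assistant", content: answer }); // carry context forward
      console.log(`\nq: ${q}\na: ${answer}`);
    }
  }

  waterfall();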

Stop 3 am bug-hunts → ship fixes in 30 min

most devs burn out post-MVP
the Debug & Implement guide helps you debug smart, refactor clean, and implement without fear

→ scoped debug rituals
→ implementation prompt flows
→ how to not let AI nuke your repo

3. Treat AI like a team

I don’t use GPT-4 for UX wireframes. I don’t ask Claude to debug SQL.
every model has a role.

inside one project in your LLM, run separate chats for:

→ planning, analysis, summarization
→ logic, iterative writing, heavy workflows
→ scoped edits, file-specific ops, PRs
→ layout, flow diagrams, structural review

you wouldn’t ask your UI designer to rewrite your backend. same rule applies here.
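
here's roughly how I think about the routing, written down as a throwaway config. the names and assignments are just my current habits, not a spec:

  // hypothetical routing table: which chat/model owns which kind of job
  const team = {
    planning: "chatgpt project chat",        // analysis, summarization, specs
    implementation: "claude / gemini chat",  // logic, iterative writing, heavy workflows
    inlineEdits: "cursor + bugbot",          // scoped edits, file-specific ops, PRs
    structure: "a separate layout chat",     // flow diagrams, structural review
  } as const;

  type Job = keyof typeof team;

  function whoHandles(job: Job): string {
    return team[job];
  }

  console.log(whoHandles("implementation")); // "claude / gemini chat"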

4. Short chats. always.

one feature = one chat.
one bug = one thread.
one idea = one prompt.

don’t open 300-line monologues and expect quality.
context drifts. hallucinations spike.
AI becomes a toddler with scissors.

5. Prompt iteratively (not magically)

inspired by Matt McCartney’s blog (worth the read here): LLMs aren’t search engines. they’re pattern generators.

so give them better patterns:

  • set constraints

  • define the goal

  • include examples

  • prompt step-by-step

the best prompt is often... the third one you write.
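
here's what pulling all four levers at once can look like. everything in it is illustrative; the file and feature are made up:

  // constraints + goal + example + steps, in one prompt
  const prompt = `
  goal: add a "last active" column to the users table component.

  constraints:
  - touch only users-table.tsx
  - keep the existing column order
  - no new dependencies

  example of the format I want back: a single code block, no prose around it.

  steps:
  1. tell me which props/types need to change
  2. wait for my ok
  3. then write the code
  `;

  console.log(prompt);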

6. Save your best prompts like code

your prompt library is as important as your repo.

I version mine like this:

  • feature_implementation_prompts.md

  • debugging_routines.md

  • prompt_snippets_ui_gen.md

build once, reuse everywhere.
save your gold.
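
and since it's just markdown, pulling a snippet back out is a ten-line script. a sketch, assuming each snippet sits under a "## snippet-name" heading inside those files; that's only my convention here:

  import { readFileSync } from "node:fs";

  // grab one named snippet out of a prompt library file
  function getSnippet(file: string, name: string): string | undefined {
    const text = readFileSync(file, "utf8");
    const sections = text.split(/^## /m).slice(1); // one chunk per "## " heading
    const hit = sections.find((s) => s.startsWith(name));
    return hit?.slice(name.length).trim();
  }

  // usage: paste straight into whatever chat you're in
  console.log(getSnippet("debugging_routines.md", "scoped-bug-hunt"));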

7. My personal stack right now

what I use most:

  • ChatGPT with Custom Instructions for writing and systems thinking

  • Claude / Gemini for implementation and iteration

  • Cursor + BugBot for inline edits

  • Perplexity Labs for product research

also: I write most of my prompts like I’m in a DM with a dev friend.
it helps.

8. Debug your own prompts

if AI gives you trash, it’s probably your fault.

go back and ask:

  • did I give it a role?

  • did I share context or just vibes?

  • did I ask for one thing or five?

  • did I tell it what not to do?

90% of my “bad” AI sessions came from lazy prompts, not dumb models.

Seen this week (stuff worth knowing)

some bits that hit my radar this week — signal only:

  • Grok just launched Tasks: track news, competitors, and complex topics on a schedule. finally starting to feel like a real assistant.

  • this Google engineer dropped a killer prompt playbook for devs: the prompt engineering guide

  • Sam Altman on OpenAI’s new podcast:

    • GPT-5 → coming this summer

    • debates over naming clarity (they’re worried we’re confused — we are)

  • Midjourney V1 = video gen is here. think: prompts → motion.

  • Google's Search Live now lets you talk to AI Mode by voice.

  • MIT brain scan study:

    • ChatGPT users forgot what they wrote 5 minutes later

    • neural connections dropped from 79 → 42

    • (the AI is smart. your brain? might get lazy.)

more coming next week.
until then:

stay caffeinated.
lead the machine.
launch anyway.

☀️ miron