Building with AI: Three Principles That Matter
Don’t Just Automate. Build with Intention, Context, and Flexibility
When you're building with AI, it's easy to get carried away by the excitement of new tools, shiny models, and clever workflows. But hype fades fast, and what you're left with are the choices you made. That's why having a few grounding principles matters.
This isn't about frameworks or tech stacks. It's about how you think. The kind of questions you ask before you open your editor. The instincts you trust when you're making tradeoffs.
In this piece, I’ve laid out three principles that have helped me build more intentionally. They aren’t commandments, just lenses through which to view decisions. But if you're experimenting, shipping, or scaling with AI, I hope they help you build things that don’t just run, but resonate.
#1 - Problem-First Prospecting
In the current wave of AI excitement, it’s tempting to start with the tech: which model to use, how fast it can be built, what clever workflows you can stitch together. But real value emerges when you flip that approach: start with a specific, real-world problem and work backwards. This is "problem-first prospecting", the mental model that reminds you: nobody pays for your AI agent because it uses GPT-4 or LangChain.
Much of the AI automation hype aimed at small businesses and lean teams comes from creators claiming six-figure success. Yet their income often comes not from delivering real solutions, but from monetizing YouTube content, paid communities, or courses about automation itself.
Take a personal trainer who spends hours each week manually following up with leads from their website. A simple automation that sends a personalized reply and booking link right after someone fills out the form can make a huge difference - not because it's fancy, but because it's timely. The tech stack could be anything - n8n, Make, Zapier, OpenAI, Claude - what matters is that it plugs a costly leak in the business.
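That follow-up can be sketched in a few lines. The form fields and booking URL here are hypothetical stand-ins for whatever your site actually collects; wire the function to the form's webhook and any email API:

```python
def build_follow_up(lead: dict, booking_url: str) -> dict:
    """Turn a raw form submission into a personalized reply.

    `lead` is assumed to carry `name`, `email`, and an optional
    `goal` field from the website form (hypothetical schema).
    """
    name = lead.get("name", "there").strip() or "there"
    goal = lead.get("goal")
    goal_line = f"You mentioned wanting to work on {goal}. " if goal else ""
    body = (
        f"Hi {name}, thanks for reaching out! {goal_line}"
        f"Grab a time that suits you here: {booking_url}"
    )
    return {"to": lead["email"], "subject": "Let's get you booked in", "body": body}

# The value is in the timing, not the stack.
reply = build_follow_up(
    {"name": "Sam", "email": "sam@example.com", "goal": "strength"},
    "https://example.com/book",
)
```

Ten lines of glue like this won't impress anyone on a demo day, but it answers the lead while they're still interested.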
Problem-first prospecting means spotting the specific bottlenecks people face - like a gym owner missing trial leads in DMs, or a B2B sales team losing deals for lack of timely follow-ups. Whether it’s a solo founder or an enterprise team, your goal isn't to automate broadly; it's to relieve something repetitive, painful, or expensive. One focused win beats a generic system with 10 cool features.
Takeaway: So instead of asking, "What can I build with AI this week?" ask, "Whose week can I make meaningfully better, and how?"
#2 - Design with Humans in the Loop
Good AI products aren't just fully automated black boxes. They're smart loops that include people at the right points.
As Ge Wang puts it in his piece Humans in the Loop: The Design of Interactive AI Systems: “it's tempting to think of AI as a “Big Red Button” — a technology that reliably delivers the right answers while hiding the process that leads to them… the ideal solutions often exist somewhere in between, as a duality between automation and human interaction, between autonomous technology and the tools we wield.”
This idea is called "human-in-the-loop." Instead of removing humans, it's about choosing when and how to include them. Sometimes, you want the AI to assist and suggest, and you step in only for review. Other times, you need the human to take control at key moments - like when sensitive decisions or judgment calls are involved.
Think of it like three checkpoints:
Assist: The AI drafts or speeds things up, but the human shapes it.
Takeover: The AI hands off control when stakes are high (payments, legal, etc.).
Audit: The AI finishes a task, but a human reviews or approves it.
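One way to make these checkpoints concrete is a routing function that decides how much human involvement a task gets. The thresholds, task categories, and confidence field below are illustrative assumptions, not a prescription:

```python
from dataclasses import dataclass

@dataclass
class Task:
    kind: str          # e.g. "email_draft", "payment", "contract"
    confidence: float  # model's self-reported confidence, 0..1 (assumed available)

# Hypothetical high-stakes categories where the human takes control
HIGH_STAKES = {"payment", "contract", "legal"}

def checkpoint(task: Task) -> str:
    """Route a task to one of the three human-in-the-loop checkpoints."""
    if task.kind in HIGH_STAKES:
        return "takeover"   # AI hands off control entirely
    if task.confidence < 0.8:
        return "assist"     # AI drafts, human shapes the result
    return "audit"          # AI finishes, human reviews before it ships
```

The point isn't these particular rules; it's that the loop is a design decision you make explicitly rather than a default you inherit.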
The trick is to design for flexibility. A slider that controls how much AI rewrites a legal document may seem like a minor feature, but it transforms a passive tool into something interactive and responsive. Similar design choices show up everywhere - tone sliders in writing tools, image enhancement controls in photo apps, or toggles that let users turn AI suggestions on or off.
Other examples include dropdowns for selecting tone or style, and drag-and-drop blocks that let users reorganize or rephrase AI-generated content. In coding assistants, users can accept, tweak, or discard AI-generated lines of code in real time. These interaction points, small as they are, put the user in charge. They turn automation into collaboration.
Takeaway: Whether you’re building for a small shop or a big company, human-in-the-loop systems often perform better than full automation. They bring in context, meaning, and judgment - and every time a human steps in, the AI can learn from it.
#3 - Multiplicity in AI Agents
Most of the systems we’ve built over the last couple of decades in business tools, websites, or workflows have followed a pretty straight path. Think step A → step B → step C → done. That worked well when everything was predictable, but the moment something goes off-script (which it often does), those systems either break or stall.
Enter AI agents: autonomous programs that reason and act toward goals. They let us design for multiplicity - the ability to handle multiple paths and change course dynamically. This hinges on three interconnected properties that make agents more flexible than rigid systems.
Branching happens when an agent reaches a decision point and needs to evaluate what to do next. It doesn’t rely on pre-set rules but reads the current context, just like a support bot deciding whether to answer directly or escalate based on the customer’s query. It allows systems to shift paths on the fly instead of locking into a single outcome.
Looping kicks in when the agent tests its own output. Say it's generating a summary or an email: it doesn't stop at the first try. It reviews, critiques, and revises internally, improving quality before a human even sees it. Think of it like a smart intern who drafts and redrafts before sharing the final version.
Adapting ties everything together. As situations evolve, the agent isn’t locked into a rigid path. It can remember what’s already been attempted, adjust its plan accordingly, and even call on different tools or sub-agents mid-process - like changing tactics halfway through an onboarding flow because a vendor’s documentation was incomplete.
Imagine customer support. Old-school automation would have a fixed tree of options. Agents, on the other hand, can handle multiple moving parts: responding to a customer’s tone, escalating only if needed, retrieving personalized answers, even asking clarifying questions on the fly.
Takeaway: Don’t build AI for straight lines; design for detours. Agent workflows branch, loop, and adapt like living systems, turning brittle processes into resilient, many-path flows.
So the next time you're tempted to chase the latest model or ship the flashiest feature, pause. Ask yourself: where's the pain, who's in the loop, how does that loop integrate smoothly, and how does this thing adapt?