#ai #agents #software-development #opinion

Build How You Want

Agents are changing how software gets built, and individuals now have more leverage than ever. Build in the mode that fits your context, and do it intentionally.

A photo of a street with sidewalk chalk that says 'There is still time'

TL;DR

  • There are multiple valid ways to build right now: manual, hybrid, and full agentic.
  • I move between those modes constantly, sometimes in the same day, sometimes in the same project.
  • Individual leverage has jumped, and that is a big deal.
  • Choices on this spectrum have consequences, so intentionality matters more than ideology.
  • The goal is reliable, maintainable, clear systems that create real-world value.
  • As generation accelerates, architecture, evaluation, review, governance, and communication matter more, not less.
  • Explore these tools with curiosity and play so you and your team can choose what is sustainable, high quality, and genuinely useful.

There are a lot of serious, loud takes about what software engineering "is" right now.

Opinions are useful, but they are not commandments from on high.

The social dynamics can get intense, and the hype cycles are algorithm-friendly in a way that can be nervous-system exhausting. It's easy to feel puppeteered into a position or a mode of operation.

So this is a string-snipping post. :D

This post is about practical choices, not purity. There are multiple valid ways to build software in the age of AI, and you get to choose your relationship to the tools.

There's also a quieter fear underneath some of the louder takes: if you're not fully agentic already, you're falling behind in a way you can't recover from.

I don't buy that. If you already know how to build software, the barrier to entry here is a text box (yes, that’s an oversimplification — there’s real engineering behind high-quality agent workflows, but the conversational interface lowers the activation energy dramatically). You can collaborate with these systems and realize meaningful gains on day one, because the underlying fundamentals are still yours.

The interface got easier, but responsibility did not. The hard part is still setting architecture and system boundaries, defining what "good" looks like in evaluation, and making tradeoffs you can explain to teammates and clients.

This is a bridgeable shift: a new interface layered on top of skills many of us already have.

A practical ramp-up path: read documentation from major labs, take a few Hugging Face learning courses, watch YouTube walkthroughs, follow transparent builders who share real workflows, and talk with peers about experiments and tradeoffs. Reading docs and watching good videos is a great way to discover what's possible and how people and teams are actually leveraging these tools. (Shameless aside: I share Applied AI notes here on this blog too 🫣)

I've taught developers and non-developers across the experience spectrum, and things tend to click quickly before they're off to the races (which I watch, proudly, with a tiny tear in my eye). For me, that's where play comes in: learning is one of my play styles, and when people find their own version of fun with these tools, they usually level up faster in a way that actually feels like them. Peter Steinberger (creator of OpenClaw) said it best on OpenAI’s Builders Unscripted series: “Just play.”


Agents Are Real

Agents are a genuine shift in how software gets built. I've watched non-technical people pair with an agent and ship functional, polished products. Not toy demos, but real systems that solve real problems.

That matters because the population of people who can build just expanded, and the activation energy to learn and execute has dropped in a way I haven't seen before.

The real shift goes beyond the existence of agents: the individual has never had more leverage.

As code generation speeds up, the center of gravity shifts up the stack. Architecture decisions, testing strategy, code review quality, risk modeling, and governance discipline become even more important.

This moment has real momentum, and thoughtful engagement can compound quickly. Urgency and panic are different things. You can take this seriously without burning yourself out.

For me personally, this has been the biggest career accelerant so far. AI has also been a mentorship multiplier: ask a question, try an approach, get feedback, start again. It opens doors into unfamiliar domains without waiting for perfect timing, formal mentorship, or permission from the room.

It's also genuinely fun, which matters more than we admit; the friction between curiosity and execution is lower than it's ever been, and that sense of play often becomes part of the engine rather than a side quest.

At the same time, outcomes still vary based on approach. Depth of understanding, maintainability, team learning, and long-term resilience are shaped by how you build.

Think of it like two ways to learn a city. You can take the subway everywhere and arrive efficiently, or you can walk the streets and occasionally get lost. Both get you there. The walker knows where the side streets are when the subway is down. Neither approach is wrong. They just leave you with different maps in your head, and different resilience when something breaks.


"Aren't You a Director of Applied AI? What Kind of Take Is This?"

Fair question.

I love this tooling and I actively encourage people to use it. I also spend a meaningful part of my time on change management, and that means I care as much about people as I do about capability.

For context, I'm not observing from the sidelines and shouting a milquetoast take. I build agent tooling. I build tools that help other people build their own agents. I'm actively exploring human-agent hybrid systems for organizational knowledge that are meant to benefit humans and agents working together.

My philosophy is leading with empathy backed by deep technical ability.

I like riding the frontier for play, learning, and extractable insight. Then I slow down and decide which patterns are actually pragmatic enough to scale. That stance lets me support people who want to push hard at the edge while still holding space for people entering this shift at a different pace.

Some of the most resistant people I've introduced this stuff to later told me they appreciated a human-centered introduction that met their workflow where it already lived. "Join me in the fun and play and see how this can make your day-to-day more enjoyable" tends to work better than "You're behind; follow me to catch up."

You can be enthusiastic and pragmatic at the same time. That's the sweet spot.


What Are You Optimizing For?

This is where it gets practical for me. I move across the spectrum constantly: full agentic, hybrid, manual. Sometimes in the same day, and sometimes in the same project.

I'm not just choosing how code gets written; I'm choosing how decisions get made, reviewed, and shared across the team over time.

I might have an agent scaffold a feature in the morning, pair with it line-by-line in the afternoon, then hand-write a critical slice at night because I want full tactile understanding of the tradeoffs.

There is no moral ranking here, because these are modes of working rather than identities to perform.

When velocity matters most, agents are incredible. I'm amazed daily by what people are shipping with agents. When reliability, maintainability, and team clarity matter most, I stay closer to the implementation through a hybrid approach. When learning matters most, I flip the dynamic and use the model as tutor while I do the typing and debugging.

I also don't enjoy backseat development, whether it's coming from humans or from tools. The builder should choose the collaboration style (delegate / pair / drive solo), and that choice should be intentional.

Code review still matters here, but its role expands. It's quality control, yes, and it's also alignment and knowledge transfer so the team keeps a shared mental model instead of fragmented context.

Some days that means serious heads-down execution, and some days it means playful exploration; both can ship real work.

One person's priorities won't map cleanly onto another's, and that's fine.


The Room You're In

How you work with an agent is personal. Two developers can use the same model, the same codebase, and the same deadline, then collaborate with it in very different ways. One dictates intent and reviews output. The other works alongside it line by line, treating the agent like a pair rather than a ghostwriter. Both can produce excellent results.

These choices also don't exist in a bubble. If your team has gone all-in on agent-driven development, your preference for a hybrid or manual approach still has to interface with that context. Review workflows change when most code is generated, velocity expectations shift when the baseline assumes agent speed, and shared understanding can drift when fewer people wrote core parts by hand.

Team cognition is a real engineering concern. As generation accelerates, it becomes more important to intentionally distribute understanding across humans and tools so that the system remains explainable, maintainable, and resilient over time. Platforms like OpenAI Frontier are early signals of where this is heading: organizational intelligence infrastructure that keeps shared context, onboarding, and governance legible as agent density increases. The tooling is catching up to the problem.

None of this means you should abandon your approach to match the room; it means the room is part of the calculation, even when your working style is personal.


Choices Have Consequences

I think there is clearly something happening beyond simple tool adoption. High-velocity, multi-agent workflows can change how you think and how you work. Instead of coordinating with your team and maybe one assistant, you're suddenly managing many parallel conversations, and that has a real human cost. Attention is still finite, and pretending the human bottleneck doesn't exist is usually where quality starts to wobble.

It helps to name the spectrum directly:

0 = completely manual coding.

People and teams can still ship great software and meet customer needs without AI at all. This is still common, and the renewed interest in coding by hand makes sense to me.

1 = completely agentic coding, with no human code written and no human code review.

A solo operator can build and iterate one-shot software from a spec at incredible speed.

Both ends are real. Neither is fake, and neither automatically wins. The question is where the sustainable band sits for your context.

For most teams, that sustainable band is where leverage increases while judgment and shared understanding stay active: clear boundaries, and governance that matches the impact of what you're shipping.

If you ignore these tools, that also has consequences you may not fully anticipate. AI fluency and judgment may become an expectation in more teams and roles over time.

If you lean all the way in, there are different consequences that are easier to miss. Early research and field reports suggest possible skill atrophy in some workflows, but this is still an emerging area of study. One study does not make for a general insight, and your mileage may genuinely vary based on workflow, environment, and individual habits. I still think anecdata matters, because people often feel a shift before we have precise language or strong datasets for it.

Client communication and explainability also get more important as generation accelerates. If we cannot clearly explain why a system is safe, maintainable, and fit for purpose, the engineering work is not done.

This is where I land: experimentation should stay open, and outcomes should stay practical. Some people should absolutely push the frontier and move fast. The artifact of that work should be shareable practices that improve reliability, maintainability, clarity, and delivery for both humans and agents across the team.

If you're feeling pressure from industry noise to burn hotter than is sustainable, it's okay to take your time. You are not behind. There is still time.

These choices are made by individuals inside teams, and they evolve. Hand-coding, hybrid, and full agentic are all valid when the outcomes are high quality, the process is sustainable, and the work remains enjoyable.


Learn the Field, Keep Agency

Everyone building software should explore these tools, and that exploration should stay curious rather than devotional.

Collective literacy gives us collective influence, and that influence matters as these systems become more embedded in everyday life.

If builders don't understand how these tools behave, where they fail, and who controls their constraints, we don't get much say in how they're used.

This whole spectrum will keep evolving, so the balance that feels right today may feel wrong two years from now as tooling, hosted-versus-local tradeoffs, pricing, and access all shift. Refusing to explore these tools is also a bet. Adaptability remains the durable skill.

Exploration should include process, not just prompts: evaluation habits, review norms, and lightweight risk modeling. That's how teams stay pro-experimentation without outsourcing judgment.

You don't have to mirror every doomer take or hype take to grow. You can ignore the noise, approach these tools with curiosity, and find what levels up you and your people.


Build How You Want

There are a lot of valid routes up this mountain: full agent mode, manual craft, and everything in between. Pick based on what you're building, who it's for, and what you want to learn along the way.

Move intentionally, explore aggressively, and ship great things you're proud of!

Build how you want, keep your strings cut, and leave room to dance.

"I had strings, but now I'm free. There are no strings on me."

Image of Pinocchio breaking free from his strings.