Two years ago I was writing code in IntelliJ, and Cursor had just popped up on the market. I remember being impressed by the autocomplete. The way I’d use AI back then—and the way I think everyone used AI—was to write a comment describing what you wanted the code to do and then hope for a decent autocomplete suggestion. It was mostly bad. The vast majority of code was still written by hand.
Two years ago: the old loop
The development lifecycle was straightforward:
You worked on one feature at a time. All the code was written by hand. Work was single-threaded. If you were deep in a feature and a hotfix came in, you either stashed everything or you context-switched and lost your flow. The bottleneck was typing speed and domain knowledge, and the ceiling was however fast a single human could think and type.
One year ago: the hybrid
Fast forward a year. I’d say about 50% of the code was written in Cursor; the other 50% was still written by hand. Work was still single-threaded, but the development speed was noticeably faster.
I remember the moment Claude 3.5 Sonnet came out. It was really, really good—and fast. I started using it for basically everything. Cursor felt magical. But I’d still write a lot of code by hand, and I wouldn’t exceed the Cursor basic plan’s monthly usage limit.
The lifecycle was the same shape—linear, one feature at a time—but the “write code” step got compressed. What used to take a day now took half a day. The AI was a speed multiplier on the same process. Everything else was unchanged.
The night everything changed
The landscape shifted completely when GPT-5 came out.
I remember the release night. I didn’t go to sleep. I just sat there prompting in Cursor, and by the end of it I had submitted a multi-thousand-line PR—all written by GPT-5, all reviewed by me. Line by line. I was checking for dumb mistakes, verifying logic, making sure nothing was hallucinated.
That night was also when I started learning about context engineering: adding files like .cursorrules and CLAUDE.md to steer the agent’s behavior, giving it the project’s conventions and constraints so it could work within them instead of against them.
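To make “context engineering” concrete: a CLAUDE.md (or .cursorrules) is just a markdown file of conventions and constraints the agent reads before it touches the code. A minimal sketch—every file name, rule, and path below is made up for illustration, not from any real project—might look like:

```markdown
# CLAUDE.md — project conventions for the agent

## Stack
- TypeScript in strict mode; React frontend, Node backend.

## Conventions
- Use the existing logger (src/lib/logger.ts); never console.log.
- Every new endpoint needs an integration test before the PR is opened.

## Constraints
- Never edit files under src/generated/.
- Ask before adding a new dependency.
```

The point is less the specific rules than that the agent now works within the project’s constraints instead of rediscovering (or violating) them on every prompt.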
That was the moment I went from “AI assists me” to “I direct the AI.”
Now: the agent era
I don’t write code by hand anymore. Everything is written by AI agents—primarily Claude Code.
My day-to-day now involves multiple git worktrees running in parallel—features, bug fixes, and investigations, all driven by AI agents. All architecture decisions, all product requirement docs, all RFCs are drafted by Claude Code. I use MCP (Model Context Protocol) servers for everything: browser automation, Figma integration, database introspection. The shipping speed is 10x what it was one year ago.
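For anyone who hasn’t used them: git worktrees give each task its own full checkout on its own branch, which is what lets several agents edit the same repository at once without stepping on each other. A minimal sketch—the repo, branch names, and paths here are a throwaway demo, not my actual setup:

```shell
set -e
# Throwaway demo repo; in practice you'd run the worktree commands in your real project
rm -rf /tmp/worktree-demo && mkdir -p /tmp/worktree-demo && cd /tmp/worktree-demo
git init -q main-repo && cd main-repo
git config user.email demo@example.com && git config user.name demo
git commit -q --allow-empty -m "initial commit"

# One worktree per parallel task: each is a separate checkout on its own branch,
# so one agent can build a feature while another handles a bug fix
git worktree add ../feature-auth -b feature/auth
git worktree add ../bugfix-cart -b bugfix/cart

# Show all active worktrees and their branches
git worktree list
```

Each agent gets pointed at one worktree directory; when its PR merges, `git worktree remove` cleans it up.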
The lifecycle isn’t linear anymore. It’s parallel and agent-first. I describe what needs to happen, provide the right context, and the agent does the work. Then I review.
The bottleneck shifted
The biggest bottleneck right now is reviewing the agent’s output. Making sure the code is actually correct, that it handles edge cases, that it doesn’t introduce subtle regressions. The role flipped: I went from being the person who writes code to the person who verifies it.
And it’s not just my own agents I’m reviewing. It’s my teammates’ agents too. Everyone on the team is working this way now. The trust problem compounds: I need to trust my agent, trust that the code is correct, trust my teammates’ agents, and trust their code. Every PR is agent-generated. Every review requires the same vigilance.
I’ve even implemented a way for agents to test the app themselves—using Playwright MCP to drive a real browser and verify functionality. But I don’t feel completely satisfied with it. It’s not 100% foolproof or complete.
The next billion-dollar tool
I think the next massive dev tool will solve exactly this problem: reliable verification of agent-generated code.
Something that guarantees we can test the agent’s output in a trustworthy way and safely take humans out of the loop—or at least shrink the loop to the decisions that actually need human judgment. Not another code generation tool. A code verification tool.
The short version: right now, agents can produce code 10x faster than any human. But if a human still has to read every line, the throughput gain collapses back down. The tool that bridges that gap—that makes agent output trustable at machine speed—will be enormous.
The junior engineer question
Here’s the concern that everyone in engineering is talking about, and I don’t think any of us have a good answer yet.
I’m able to work this way—running multiple agents in parallel, catching their mistakes, steering them toward good architecture—because I have more than twelve years of experience in software development. I know what good code looks like. I know where the traps are. I can smell a bad abstraction from the diff alone.
Given that I can now produce 10x the output I could one year ago, and the same goes for other senior engineers, there’s a real question: what happens to junior engineers?
If experienced engineers can do 10x the work, the demand for junior roles drops. And if junior roles drop, the pipeline that creates experienced engineers dries up. It’s a paradox. The tools that make senior engineers incredibly productive might also eliminate the career path that produces senior engineers.
This is almost certainly not going to happen, but: if the models stagnated at their current level and stopped improving right here, then in ten years there would be very few experienced engineers left who could actually harness their power. Everyone who knows how to work with these tools will have learned on the job, in the transition. There won’t be a new generation coming up behind them in the same way.
I don’t know what’s next, but I’m optimistic
I don’t know exactly where this goes. Nobody does. The pace of change is too fast for confident predictions. What I do know is that this career I chose over a decade ago looks nothing like it did when I started, and it will look nothing like this in another year.
The development lifecycle went from single-threaded and manual to parallel and agent-driven in about two years. The tools are getting better every quarter. The role is shifting from “person who writes code” to “person who directs agents and verifies outcomes.” That’s a fundamentally different job, and it requires a fundamentally different skill set.
I’m optimistic about it. Not because I think everything will work out perfectly—there are real structural problems with the junior pipeline, real trust issues with agent output, real questions about what this profession even is anymore. But the trajectory is toward making software development more accessible and more powerful than it’s ever been. More people building more things faster.
Our careers are changing drastically. They might be unrecognizable in six months, or maybe even a week. That’s terrifying if you need stability. But if you’re the kind of person who got into engineering because you like building things and solving problems, this is the most exciting time to be alive.