I've been using AI every day for over a year. And on paper, I'm faster and more efficient than ever. Features that used to take weeks now take days. Debugging unfamiliar parts of the system is more approachable. Thorny refactors I've been deprioritizing for months happen in a morning. Shipping has never been easier, yet I feel busier and more distracted than ever.
Early on in my AI journey, a typical day would include an agent scoping a new feature in one window, another handling technical discovery on an unfamiliar part of the codebase, another wiring up an endpoint, and another updating the client that consumes it. PRs sat at different stages of review. Slack was open in the background. Other work in other tabs: internal projects, DMs from teammates. I'm sure this sounds familiar. The day was mostly spent moving between these contexts, responding to prompts, nudging things forward, putting up PRs, then babysitting them through review.
Traditional metrics told me I'd had a great day. Code was written. PRs went up. Progress moved forward. By any reasonable measure, that's what productivity looks like. But that's not what it felt like. What it felt like was burnout. The kind of fatigue you feel when you're in motion all day without ever quite being engaged in any meaningful way. I'd close the laptop knowing I'd moved a lot of things forward, without being able to point to a moment where I'd really been deep in the work.
To be clear, AI is genuinely useful. And the teams using it thoughtfully are not only shipping fast, they're also producing quality work. I don't see a world where we go back. But there's a cost hiding underneath all that speed, and if you're not careful, it compounds.
The cost is that AI makes it really easy to stop thinking. And it does so subtly, without you even noticing. In Thinking, Fast and Slow, Daniel Kahneman shows that our brains are wired with two modes: a fast, intuitive one and a slow, deliberate one. And given the chance, we'll lean on the fast one every time. We've always offloaded the deliberate work onto intuition when we could get away with it. AI gives us a new place to offload to. An agent will produce a plausible answer to almost any question you pose, fast enough that the path of least resistance is to accept it and move on. Meanwhile, the hard part gets quietly skipped. The part where you stay with the problem long enough to actually understand it. The part where you notice all the little things that don't quite fit. The part where you wrestle with two approaches and pick one because you've genuinely weighed the tradeoffs. You're left with code that works, but at the end of the day you can't point to a single decision that was yours.
And even when you do recognize the need to push back, doing so is harder because of all the context-switching. Effective critique requires being immersed in the problem. You need the constraints in your head. You need a feel for the tradeoffs. You need to know which parts of the system will be impacted. When you're juggling four threads, you're rarely immersed in any of them. You glance at a diff, sense that something is off, and then have to decide whether to spend ten minutes trying to understand the full context or just trust the agent and move on. Most of the time, under pressure, you move on. The instinct to question is there, but the conditions to act on it are not.
The same thing happens at the product level, and it's arguably worse there. AI makes it trivially easy to build all the things. Every feature anyone has asked for. Every edge case. Every "wouldn't it be cool if the product did this" request. The constraint that used to force good judgment was that writing code was expensive, so you had to be honest about what was actually worth doing. That constraint is mostly gone. What's left, if you're not careful, is a product with more features than reasons to exist. The agents, harnesses, and workflows amplify whatever judgment you bring to them. If the judgment is "build it because we can," you get a product that reflects that. The restraint has to come from somewhere, and increasingly the only place it can come from is you, willing to hold the problem long enough to know what not to build.
Parallelism makes this worse. Before agents, the cost of running four things at once was prohibitive; you couldn't hold four problems in your head well enough to make progress on any of them, and the work serialized itself. Now an agent can hold a chunk of context for me while I hold a chunk for someone else, and the day fans out into four or five partial threads that all move forward a little. Each thread gets just enough attention to keep moving, and none get enough to settle in.
Looking back, I can see what was actually going on. The days were full, but the work was shallow. Code I didn't really read. Decisions I didn't really make. Problems I didn't really wrestle with. Work that didn't feel like mine because I wasn't fully engaged with it. This is adjacent to what Addy Osmani has been calling comprehension debt, the gap between code that's been generated and code that's been internalized. That kind of technical debt is real and worth taking seriously. But there's another debt that runs alongside it: the personal debt of not being fully engaged with the work. You burn out faster. The work feels hollow. The days blur. And the motivation that used to be there slowly starts to fade.
When I started pushing back against this, the practice that worked best was almost embarrassingly simple: focus on one task at a time. While the agent is working, I stay in the problem space instead of jumping to the next thread. I read what the model produces carefully enough to form opinions about it. I disagree with choices. I ask why it took one approach over another. I use the conversation to interview myself about what I actually want. I spend what would have previously felt like an unreasonable amount of time on the polish: the copy, the spacing, the edge case nobody will hit, the refactor that makes the next feature easier. The work takes about as long on the calendar as the parallel version would have, but it feels completely different. The thinking comes back, and I'm actually in the work again.
And here's the best part. Working this way is not meaningfully slower. Even doing one thing at a time, with real attention, I'm still dramatically faster than I would have been before AI. The gains are so large that you can spend them on focus instead of throughput and still come out ahead. The parallel approach trades quality for raw output. It trades design. It trades maintainability. It trades the things that don't show up in a PR count. The single-threaded approach refuses that trade. You still get the productivity gains; you just don't pay for them with the parts of the work that matter most.
That changes what I think AI is actually for. The dominant pitch right now (across tools, vendors, and most of the writing on this topic) is that AI enables you to move faster. Ship more features. Close more tickets. Run more agents in parallel. That option is real, and I've been taking it. But lately, a more interesting option is starting to emerge from folks in the industry: use the same calendar time to do bigger, better, and more ambitious things.
The features we used to cut from six weeks down to three because we couldn't afford the depth? We can afford them now. The polish we used to defer because the schedule was tight? We can build it in. The architectural choice that would have been "good enough" because the better version was a week longer? We can take the better version. Same time, more ambition, more craft. More, but better.
I don't think this is nostalgia. The version of AI usage being sold to most teams right now emphasizes throughput, parallelism, and output. That version is less interesting, and if you're not careful, it will quietly cost you the parts of the work that matter most. The more interesting version emphasizes depth, quality, and value: focusing on one task at a time, longer than feels reasonable, with the model in the room as a collaborator rather than a fleet of workers to dispatch.
If you're working through something like this on your team, or you've found a different shape of practice that's working, we'd love to hear about it.