I saw a tweet from Andrej Karpathy that's been sitting with me: he's never felt this behind as a programmer. I've been thinking about it through the marshmallow challenge, where kindergartners beat MBAs. The kids just build and iterate. Most of us are the MBAs right now with AI tools.
Some of it is that the radical acceleration in productivity isn't real. See Brooks's "No Silver Bullet". You certainly have those moments where you describe a bug, ask the AI if it understands it, and get an answer in two minutes, but when you consider everything that goes into the "definition of done", 10x just isn't realistic.
My take at work is that I'm not running much faster, but I am getting better quality. Some of it is my attitude, but with AI I am more likely to go back and forth asking questions until I really understand what is going on, write tests even when they are a hassle to write, ask the IDE about the dependencies I use so I can really understand how they work, try two or three possible solutions and pick the best, and so on.
When it comes to things like that memory leak, it is very hit and miss. If you give it a try it might solve it, or it might not. It's worth trying, but you can't count on something like that working all the time.
I think you're right that 10x isn't realistic for most work, and Brooks is still mostly correct. The "No Silver Bullet" argument holds because most of software development isn't typing code faster.
But you're describing exactly the shift that matters. You're not running faster, you're getting better quality. You're more likely to understand dependencies, write tests, try multiple solutions. That's the actual productivity gain.
The marshmallow challenge point isn't about whether AI makes you 10x faster. It's about the mindset shift. The MBAs didn't lose because they were slower. They lost because they spent their time planning the perfect approach instead of iterating.
The memory leak example from Boris Cherny isn't about AI being reliable. It's about his coworker not having the baggage of "this is how you debug memory leaks." They just tried asking Claude first. Sometimes it works, sometimes it doesn't. But the willingness to try it first is what creates the gap.
Personally I think it's that I know how to do software development and I always have my eyes on getting the project done.
I don't think about AI tools a lot. I use Junie because it is integrated with my favorite IDE and I like sending more money JetBrains' way. I don't read blogs or tweets about AI coding.
What I do do is try little things that are always oriented to the work in front of me. I work for an MBA who is great at what he does, but when he tried "vibe coding" he got nowhere and felt that familiar puzzlement so many people express at the gap between the results they get and the results influencers say they are getting. I've learned AI-assisted coding by doing; from square one I realized it was going to work some of the time and fail some of the time, and I have always prioritized not getting stuck. It's certainly fair to make some wild (but well-formed) request and see what you can get.