7 comments

  • ngburke 2 hours ago
    Spot on. All those years of slinging code and debugging gave me and others the judgment and the eye to check all the AI-generated code. I now often wonder what hiring looks like in this new era. As a small startup, we just don't need junior engineers to do the day-to-day implementation.

    Do we instead hire a small number of people as apprentices to train them on the high-level patterns: spotting trouble areas, developing good 'taste' for clean software? Teach them what well-organized, modular software looks like on the surface? How to spot redundancy? When to push the AI to examine an area for design issues, testability, or security gaps? I'm not sure how to train people in this new era and would love to hear other perspectives.

  • jjk166 23 minutes ago
    > and those tasks were never just tasks. They were the mechanism that built judgment, intuition, and the ability to supervise the systems we now delegate to AI.

    Bullshit. The busywork wasn't being done by low-level engineers to train them up; they were doing it because it needed doing, it was undesirable, and they were lowest on the totem pole.

    Jobs are self-training. Sure, doing other jobs may give you some intuition that can be applied to new jobs. Manually writing code and fixing your human-created mistakes obviously carries over to debugging AI-written code. But people who start their careers with AI-written code will also learn how to debug AI code. You don't learn how to architect a system by coding a system somebody else architected. At best you might pick up some common patterns by osmosis, but this often breeds worse engineers who do things as they have been done in the past, without understanding why and without regard to how they really ought to be done. True understanding of why A was chosen in one case and B works better in another comes from actually doing the high-level work.

    Indeed, if AI is like any other tool that has come before it, those who grow up using it will be much more adept with it in practice than those who adopt it now after spending a lifetime learning a different skillset. We don't exactly lament how much worse software engineers have gotten since they stopped learning how to sort their punch cards after dropping them.

    Even if you are of the opinion that the tasks junior engineers do, which AI can now do, are fundamental to becoming competent at higher-level skills, that's no problem. You can train people without them doing value-added work: have engineers code the old-fashioned way for training purposes. It's no different from doing math problems despite calculators existing. This is a problem only if you want to extract underpaid labor from junior engineers with the lie that they are being paid in experience.

    • hungryhobbit 18 minutes ago
      >> and those tasks were never just tasks. They were the mechanism that built judgment, intuition, and the ability to supervise the systems we now delegate to AI.

      > Bullshit. The busywork wasn't being done by low-level engineers to train them up; they were doing it because it needed doing, it was undesirable, and they were lowest on the totem pole.

      Why not both? It was work that needed doing AND it taught people to be better engineers.

      • jjk166 13 minutes ago
        > it taught people to be better engineers.

        It generally does not.

        And if it does, they can still do those tasks as exercises.

  • bearfox 1 hour ago
    This fits my bias so well that I'm skeptical, but I can't refute it. The title also reminds me of an SF story I always come back to when thinking about the effect of AI on society: The Plateau, by Christopher Anvil.
  • datadrivenangel 1 hour ago
    Demand for software is large, and as the cost goes down we'll want more of it, so there will be demand to keep training people.
  • DGAP 41 minutes ago
    There are going to be very, very, very few engineers.
  • sublinear 36 minutes ago
    Haven't we been saying similar things about all the other aspects of software engineering as they have changed over time? Writing code is just one responsibility amongst many.

    I don't want code from someone/something that doesn't know the needs of the business, cannot find where to compromise effectively, does not understand the deployment environments their app will run in, would not know how to respond to an incident with their application in production, etc.

    I don't think writing code with AI is relevant to career progress at all. What matters is that I can hold someone accountable for the code we have in prod, and they'd better have answers or they don't have a job.

    Only if they are dependable there can they be trusted with more responsibility. That's all we're really talking about. You get paid to be accountable; you do not get paid to do one narrow thing well. It should not take you a decade to learn to read and write code quickly and effectively. I'd argue that should have happened in high school and college (as it did for everyone in upper management right now).

    I feel like the quality of new hires has gotten progressively worse over the years, and we have made so many concessions to remedy it (AI included), all of which are just making the problem worse.

  • wolttam 3 hours ago
    I feel like some of the data in this is horrendously out of date. They're referencing articles from the end of 2024.

    There was a massive step-change in the capability of these models towards the end of 2025.

    There is just no way that an experienced developer should be slower using the current tools. Doesn't match my experience at all.

    The title of the article, though - absolutely true IMO

    • Esophagus4 2 hours ago
      Yeah…

      > For tasks that would take a human under four minutes—small bug fixes, boilerplate, simple implementations—AI can now do these with near-100% success. For tasks that would take a human around one hour, AI has a roughly 50% success rate. For tasks over four hours, it comes in below a 10% success rate

      Opus 4.6 now does 12-hour tasks with 50% success. The METR time-horizon chart is insane… exponential progression.
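
      For a sense of scale, here's a back-of-envelope sketch of what "exponential progression" implies if the trend holds. The starting horizon and doubling time below are illustrative assumptions, not METR's published figures:

        # Back-of-envelope projection: if the 50%-success time horizon
        # doubles every `doubling_months` months, see how fast it grows.
        # Both constants are assumptions for illustration, not measured data.
        def projected_horizon_hours(start_hours, doubling_months, months_out):
            return start_hours * 2 ** (months_out / doubling_months)

        for months in (0, 7, 14, 21, 28):
            h = projected_horizon_hours(12.0, 7.0, months)  # assume 12 h today, ~7-month doubling
            print(f"+{months:2d} months: ~{h:.0f} h at 50% success")

      Whether the curve actually stays exponential is, of course, exactly what's being debated elsewhere in this thread.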

      • indoordin0saur 2 hours ago
        Really depends on what you're working in. I work with a lot of data frameworks that are maybe underrepresented in these models' training sets, and the models still tend to get things wrong. The other issue is that business logic is complex to describe in a prompt, to the point where giving the model all the context and business logic it needs to succeed is almost as much work as doing the task myself. As a data engineer, I still only find models useful for small chunks of code or for filling in tedious boilerplate to get things moving.
        • blonder 2 hours ago
          Agreed. For common use cases like creating a simple LMS, Opus is shockingly good, saving hours upon hours of reinventing the wheel. For other things, like simple queries to and interactions with our ERP system, it is still quite poor, and it increases development time rather than shortening it.
      • alistairSH 2 hours ago
        How is success defined in those metrics? Is success "perfect - can deploy to prod immediately" or "saved some arbitrary amount of engineering time"?

        Anecdotal experience from my team of 15 engineers: we rarely get "perfect", but we do get enough for massive time savings across several common problem domains.