The author of "Choose boring technology" regretted the choice of the word "boring" [1].
Anyway, boring is bad. Boring is what spends your attention on irrelevant things. Cobol's syntax is boring in a bad way. Go's error handling is boring in a bad way. Manually clicking through screens again and again because you failed to write UI tests is boring in a bad way.
What could be "boring in a good way" is something that gets things done and gets out of your way. Things like HTTPS, or S3, or your keyboard once you have leaned touch typing, are "boring in a good way". They have no concealed surprises, are well-tested in practice, and do what they say on the tin, every time.
New and shiny things can be "boring in the good way", e.g. uv [2]. Old and established things can be full of (nasty) surprises, and, in this regard, the opposite of boring, e.g. C++.
What you describe is the difference between tedious and simple.
Boring is good. I don't want to be excited by technology, I want to be bored in the sense that it's simple and out of my way.
Same for KISS. I tend to tell people to not only keep things simple, but boring even. Some new code I need to read and fix or extend? I want to be bored. Bored means it's obvious and simple.
The difference? There are many complex libraries. By definition they are not simple technology.
For example a crypto library. Probably one of the most complex tasks. I would consider it a good library if it's boring to use/extend/read.
> The author of "Choose boring technology" regretted the choice of the word "boring"
Well, yes, but only in the sense that people kept giving him beef about how boring is a bad word in their mind, not because it was a bad word for this context per se. Which is somewhat ironic given your comment!
I suppose what you're getting at is the difference between boring, and "boooooriiiiiing".
What you just described fits my definition of boring, which is some function of (time passed, individual at keyboard)
Cobol was (and for some, still is) exciting at first, but _becomes_ boring once you master it, and the ecosystem evolves to fix or work around its shortcomings. Believe it or not, even UX/UI testers can deal with and find happiness in clicking through UIs for the ten thousandth time (granted, the last time I saw such a tester was around 2010).
This doesn't mean the technology itself becomes bad or stays good. It just means the understanding (and usage patterns) solidifies, so it becomes less exciting, hence: "boring".
But you can't sell a book with the title "Choose well-established technology". Because people would be like, no sht, Sherlock, I don't need a book to know that.
Why conflate boring with old? "Boring" in this context means: proven and stable. Yes, that would take some time to become apparent, but the converse is not necessarily the case: a tech does not become "boring" in a good way simply because it is old.
All this was my understanding before, so not sure why you think "boring" was meant to be equivalent to "old"
I do not want my browser to be exciting. I do not want it to change every week. Say, moving buttons to different places. Changing how the address bar operates. Maybe trying new shortcut keys...
Same goes for most useful software. I actually do want it to be dull: doing its job, not getting in the way, and not making my day more interesting by forcing me to fight against it.
Well known and mature tools are still sharp and lots of them are not tedious to use.
I picked "sharp" not "exciting".
A dull knife doesn't do its job; you want tools that do the job efficiently. "Boring", it seems, was interpreted as picking tools that don't do the job efficiently, which is most likely why the original author felt "Choose boring technology" was misunderstood.
LLMs are useful in contexts where fuzzy and hazily accurate is acceptable. A developer trying to hack together some solution through trial and error for example.
They are less useful in contexts where accuracy is expected or legally required. An audit log for example.
Many businesses have made bad judgements of where the distinction is, some don't even recognise a distinction. This will improve over time.
I tend to think that the reason people over-index on complex use-cases for LLMs is actually reliability, not a lack of interest in boring projects.
If an LLM can solve a complex problem 50% of the time, then that is still very valuable. But if you are writing a system of small LLMs doing small tasks, then even 1% error rates can compound into highly unreliable systems when stacked together.
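To put rough numbers on the compounding point (a back-of-the-envelope Python sketch; the 1% per-step error rate and the step counts are just illustrative assumptions, and steps are treated as independent):

    # Chance that a pipeline of n steps all succeed, if each step
    # is right with probability p and failures are independent.
    def pipeline_success(p: float, n: int) -> float:
        return p ** n

    for n in (1, 5, 10, 20, 50):
        print(n, round(pipeline_success(0.99, n), 3))
    # 1 0.99, 5 0.951, 10 0.904, 20 0.818, 50 0.605
    # A "99% reliable" step stacked 50 deep fails about 40% of the time.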
The cost of LLMs occasionally giving you wrong answers is worth it for answers to harder tasks, in a way that it is not worth it for smaller tasks. For those smaller tasks, usually you can get much closer to 100% reliability, and more importantly much greater predictability, with hand-engineered code. This makes it much harder to find areas where small LLMs can add value for small boring tasks. Better auto-complete is the only real-world example I can think of.
>If an LLM can solve a complex problem 50% of the time, then that is still very valuable
I'd adjust that statement - if an LLM can solve a complex problem 50% of the time and I can evaluate correctness of the output, then that is still very valuable. I've seen too many people blindly pass on LLM output - for a short while it was a trend in the scientific literature to have LLMs evaluate the output of other LLMs. Who knows how correct that was. Luckily that has ended.
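The "and I can evaluate correctness" part is what makes the 50% number usable at all. Where a cheap checker exists (a test suite, a schema validator, a proof checker), you can wrap the model in a generate-and-verify loop. A minimal Python sketch, where generate() and passes_checks() are hypothetical stand-ins for the model call and the verifier:

    # Keep sampling until a candidate passes the checker, up to a budget.
    # With an independent 50% success rate per attempt, all five attempts
    # fail with probability 0.5**5, roughly 3%, so a weak-but-checkable
    # model is still useful. Without the checker you are just trusting it.
    def solve(generate, passes_checks, max_attempts=5):
        for _ in range(max_attempts):
            candidate = generate()        # hypothetical LLM call
            if passes_checks(candidate):  # hypothetical verifier
                return candidate
        return None  # give up and hand it to a human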
> I've seen too many people blindly pass on LLM output
I misread this the first time and realised both interpretations are happening. I've seen people copy-paste out of ChatGPT without reading, and I've seen people "pass on" or reject content simply because it has been AI generated.
True! This is what has me more excited about LLMs producing Lean proofs than written maths proofs. The Lean proofs can be proved to be correct, whereas the maths proofs require experts to verify them and look for mistakes.
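For anyone who hasn't seen it: "can be proved to be correct" here means the proof checker either accepts the proof term or rejects the file, with no expert judgement involved. A toy Lean 4 example of what a machine-checked statement looks like (nothing LLM-specific, and the theorem name is made up):

    -- Lean only accepts this if the term really proves the stated type;
    -- a hand-wavy "proof" is a compile error, not a matter of opinion.
    theorem my_add_comm (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b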
That said, I do think there are lots of problems where verification is easier than doing the task itself, especially in computer science. I think it is easier to list tasks that aren't easier to verify than to do from scratch actually. Security is one major one.
Even there it's risky. LLMs are good at subtly misstating the problem, so it's relatively easy to make them prove things which look like the thing you wanted but which are mostly unrelated.
I feel that with LLMs and AI, people are furiously trying to argue the reality they desire into existence. I've never read more articles predicting the future than on this topic (I am guilty of it, too.)
Predicting AGI and all those "complete replacement of jobs" claims are boring examples of this. What's more annoying to me are the people who do the same arguing-into-existence for the reality they want, where AI is completely useless and fake and can't do anything and makes you stupid, and ten more of the loudest clickbait titles of the past year.
- LLM's are too limited in capabilities and make too many mistakes
- We're still in the DOS era of LLM's
I'm leaning more towards the 2nd, but in either case Pandora's box has been opened and you can already see the effects of the direction our civilization is moving towards with this technology.
I like this article, and I didn't expect to, because volumes have been written about how you should be boring and how building things in an interesting way just for the hell of it is bad (something I don't agree with).
Small models doing interesting (boring to the author) use-cases is a fine frontier!
I don't agree at all with this though:
> "LLMs are not intelligent and they never will be."
LLMs already write code better than most humans. The problem is we expect them to one-shot things that a human may spend many hours/days/weeks/months doing. We're lacking coordination for long-term LLM work. The models themselves are probably even more powerful than we realize, we just need to get them to "think" as long as a human would.
> LLMs already write code better than most humans.
If you mean better than most humans considering the set of all humans, sure. But they write code worse than most humans who have learned how to write code. That's not very promising for them developing intelligence.
The issue is one that's been stated here before: LLMs are language models. They are not world models. They are not problem models. They do not actually understand the world, or the underlying entities represented by language, or the problems being addressed. LLMs understand the shape of a correct answer, and how the components of language fit together to form a correct answer. They do that because they have seen enough language to know what correct answers look like.
In human terms, we would call that knowing how to bullshit. But just like a college student hitting junior year, sooner or later you'll learn that bullshitting only gets you so far.
That's what we've really done. We've taught computers how to bullshit. We've also managed to finally invent something that lets us communicate relatively directly with a computer using human languages. The language processing capabilities of an LLM are an astonishing multi-generational leap. These types of models will absolutely be the foundation for computing interfaces in the future. But they're still language models.
To me it feels like we've invented a new keyboard, and people are fascinated by the stories the thing produces.
Is it bullshitting to perform nearly perfect language to language translation or to generate photorealistic depictions from text quite reliably? or to reliably perform named entity extraction or any of the other millions of real-world tasks LLMs already perform quite well?
I think this is, essentially, a wishful take. The biggest barrier to models being able to do more advanced knowledge work is creating appropriately annotated training data, followed by a few specific technical improvements the labs are working on. Models have already nearly maxed out "work on a well-defined puzzle that can be feasibly solved in a few hours" -- stunning! -- and now labs will turn to expanding other dimensions.
OT: Since the author is a former Apple UX designer who worked on the Human Interface Guidelines, I hope he shares his thoughts on the recent macOS 26 and iOS updates - especially on Liquid Glass.
The investment fund that acquired the company that acquired our company requests that all companies it owns go big on cloud and AI, no matter what, because this raises valuation and they can sell them for bigger profits.
I have nothing against cloud or AI per se, but I still believe in the right tool for the right job and in not doing things just for the sake of it. While raising valuation is a good thing, raising costs, delaying more useful features and adding complexity should also be taken into account.
I also agree that boring is good, but in our current society you won't get a job for being boring, and when you get a job, it is guaranteed you are not being paid to solve problems.
> but in our current society you won't get a job for being boring,
One can argue that every other field of engineering outside of Software Engineering, specializes in making complex things into boring things.
We are the unique snowflakes that take business use cases and build castle in the clouds that may or may not actually solve the business problem at hand.
[1]: https://boringtechnology.club/#30
[2]: https://github.com/astral-sh/uv
As much as I get the idea, I can see how promoting the use of tedious and dull tools is something that really misses the mark.
The emperor's new clothes ...
If he means they will never outperform humans at cognitive or robotics tasks, that's a strong claim!
If he just means they aren't conscious... then let's not debate it any more here. :-)
I agree that we could be in a bubble at the moment though.
https://jenson.org/about-scott/
um, a dynamo is a generator, it takes mechanical energy and turns it into electricity.
That's just your experience, based on your geolocation and chain of events.
... and if it all falls down, don't blame us - you clicked the EULA /s