No, they didn't raise $122B as the HN title implies. A big chunk of that $122B is a "maybe" that depends on various things that need to happen in the future.
Oh, man... I can't wait to see where this is going. Might not be pretty after all.
It just makes comparing funding rounds hard to understand, since money in the bank is money in the bank, while a lot of the "committed capital if you reach a milestone" is capital that would have been easy to get once you reached that milestone anyway. If the milestone is far enough out and the commitment has enough outs, you may as well have just raised another round in the future.
$2b/month which is $24b/year. Not as much as I expected considering they were at $20b by end of 2025.[0] They only added $4b since?
Anthropic had $19b by end of February 2026 and they added $6b in February alone.[1] This means if they added another $6b in March, they're higher than OpenAI already.
However, I heard that OpenAI and Anthropic report revenue in a different way. OpenAI takes 20% of revenue from Azure sales and reports revenue on that 20%. Anthropic reports all revenue, including AWS's share.[2] Not exactly sure how this works. Anyone know?
[0] https://www.reuters.com/business/openai-cfo-says-annualized-...
[1] https://finance.yahoo.com/news/anthropic-arr-surges-19-billi...
[2] https://x.com/EthanChoi7/status/2036638459868385394
And that is revenue only. For the past 15 or so years, most US companies (and especially startups) have talked about revenue only, whereas only profit should matter.
E.g. what good is 20 billion per year when "OpenAI is targeting roughly $600 billion in total compute spending through 2030". That is $150 billion per year?
It's not as much as you think. Google is spending $185b on data centers this year alone. Amazon is spending $200b this year.
Since everyone is trying to get compute from anywhere they can, including OpenAI going to Google, it's hard to tell what is used internally vs externally.
For example, it's entirely possible that Google's internal roadmap for Gemini sees it using $600b of compute through 2030 as well. In that case, OpenAI needs to match since compute is revenue.
Why should only profits matter? If I had a killer product today that I just need to sell tomorrow, wouldn't you still invest today, knowing I'll probably only start to make money tomorrow (or perhaps next week)?
The expectation is that they'll eventually make money; they can't raise forever. Startups are only unprofitable for their first few years, and most companies that have been around for a long while have been profitable.
And since they're expected to make a LOT of money, everyone wants a piece of that future pie, pushing the valuation and the amount raised up to admittedly somewhat delusional levels, like here.
> Today, we closed our latest funding round with $122 billion in committed capital at a post money valuation of $852 billion.
A couple of things stand out to me here: the phrase "committed capital", which sounds like a promise that could break under various circumstances, and the fact that the valuation keeps changing, which makes it sound like a maximum rather than the valuation every investor actually invested at.
Probably a lot? It would be much more tax-advantageous to do it this way; $50B worth of credits != $50B worth of spend on Amazon's part, and they might meet in the middle on how much equity that translates to.
That’s typical. Large funding rounds usually aren’t delivered as one single giant lump sum into the bank account. The capital is committed in stages that can depend on hitting milestones or goals.
This is done even in smaller startup funding rounds sometimes.
> The broad consumer reach of ChatGPT creates a powerful distribution channel into the workplace
They mention this line in different forms a couple of times in the article. It’s clear they’re pretty rattled about Anthropic’s momentum in enterprise, I wonder how confident they really are in this rationale.
Kind of makes me wonder how 'accelerated' the timeline of publishing this article was based upon the Claude Code leak today. Considering everyone has gotten a sneak peek at what Anthropic is working on OpenAI might be a little worried. This could also just be coincidence, but this piece really does read like self-encouraging fluff.
I'm old enough to remember when companies worth $1 billion were called "unicorns." Now we have a company raising 122 times that? Valued at nearly 1000 times that...?
At least they're throwing consumers a bone via the ARK deal. It's crazy how little AI exposure is available to anyone who isn't already wealthy and/or connected.
I think this is a reality-distortion field rivaling that of Jobs, and a crisis of faith. Nobody apparently believes that capital is worth investing into anything but AI.
> Nobody apparently believes that capital is worth investing into anything but AI.
This is the main reason we see this insane investment into AI imo. If you imagine having lots of money, where should you invest that currently?
Housing market: Seems very overvalued (at least in Germany). Also, with the current uncertainty and inflation, it's hard to make an investment that pays back over 20-30 years. So building is also difficult.
Stocks: Very volatile currently, and not only since Iran. It seems to me that since the 2008 financial crisis, investors don't enjoy stocks the way they did before.
Gold: Only if you are paranoid about the collapse of society. It doesn't make sense to invest in something that pays no interest.
Crypto: Same as gold, but better if you like gambling. I would assume most people who are very rich don't gamble with most of their fortune.
Looking around, and especially forward, it would be military tech, e.g. [1], and its supply chain, e.g. [2] :-\ Valuations are not as crazy, but I bet there's going to be a lot of demand in the coming decade, unfortunately.
Chip production, too, of course, but it's apparently overflowing with money already. It's still growing, though, because there are real, actual shortages of stuff like RAM and SSDs; there's money to be made immediately if you can. Chinese RAM manufacturers are building out like crazy.
[1]: https://www.ultimamarkets.com/academy/anduril-stock-price-ho...
[2]: https://www.marketscreener.com/quote/stock/RHEINMETALL-AG-43...
It's the result of too much echo chambered bullshit floating around daily about how capable LLMs really are. It's literally crypto/blockchain all over again. It's one big lie that a lot of people have bought into which causes it to self-perpetuate, like religion.
> At least they're throwing consumers a bone via the ARK deal. It's crazy how little AI exposure is available to anyone who isn't already wealthy and/or connected.
It is deliberate. Period.
It's always been known that you make money in the private markets and pre-IPO companies, and that retail is the final exit for insiders and early investors.
Retail is not allowed in early on these companies (because that would ruin the point of being an insider), and this "exposure" has to come near the top.
This announcement completes the betrayal of their founding principles.
"Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return."
- Not advancing digital intelligence
- While locking people into a superapp
- Because they are further constrained to generating financial returns
This all smells fishy. They didn’t “raise” $122B. Raise means someone put funds in your bank account and said send us the next quarterly report to tell us how our investment is doing.
They have pieces of paper from folks saying they may put up funds or goods and services in that amount. But it's important to remember that:
1. While they are "raising" commitments, others are backing out of deals (see Disney, various data center things). Big deals announced to major fanfare are falling through.
2. They slashed future capital expenditure after previously boasting about all the commitments. It's turning into bonkers math of X + Y - X + Z + W - 1/2 of Y = ? when trying to keep track of what's actually "raised / real" versus what was PR puffery that folks later ran away from.
3. Circular financing still seems to be going on. There's a big difference between "here's cash, have fun" and the various "commitments" and balance-sheet games that still seem to be going on.
Net net this all still looks very scary and iffy at best.
Are we truly arguing semantics on HN, a news aggregator for startups, where everyone knows what a "raise" is and that it is obviously not funds in your bank account? I don't disagree with the rest of your comment, and the core thesis is valid that OpenAI is very much doing circular financing.
Edit: A raise comes with stipulations on what you can use the money for. I don't know if I was being too harsh in responding to the parent, but before you comment, just google what a raise entails.
I can't help but think building an "everything" app is both unbelievably ambitious and a folly. I am not personally convinced that people want all the things that this super app purports to do.
I am from a generation that still sits behind a desktop computer when making "big purchases." I can't even buy a flight on my phone. I am so much less likely to want to have an AI agent do that for me.
Then the idea that daily consumption of these products will drive people to use them more at work... I have a very different life outside of work. My use of AI outside of work is exceedingly different to what I use it for at work.
I sometimes feel wildly out of touch. But sometimes I view this as the VR moment. To me there are some things that I think may always be preferable to do outside of that ecosystem. And for me, a lot of tasks that 'agents' enable are small enough or important enough that I want to do them myself.
I don't think I'll ever be comfortable allowing an agent to call me a taxi, or order food on my behalf. Because the convenience of asking for food isn't worth the chance it'll mess up, and opening an app and looking at a menu is simpler.
I also think we're coming to a moment where we can start identifying the markers of AI-generated content on sight. And I think there's a growing animosity toward it. I might be comfortable asking AI something, but when I am looking for or searching for other content, seeing AI content markers makes me angry at this point.
To finish, I do just sort of straight up hate the idea that we're comparing this moment to the invention of electricity. It's on the face of it absurd.
I think you lack imagination. This is going to be the future because it is legitimately a step up from the previous ways of doing things. I can do things that were way more difficult before.
It doesn't have to be AI all the way - no one's asking AI to book things on its own and make the payments on their own. What does work is, make AI do the research and you verify and you do the payment. Human in the loop.
To me this is clearly the future - AI has access to all the data sources and can translate your intent by accessing these tools in a loop and use intelligence to automate things.
Maybe there's a scenario where that is useful. But again, I don't know why I'd want an AI to do this research for me. I hop on Skyscanner. I type my location, and where I'd like to go. It presents me with a list of options, and I can then use the filters to find times that work best for me.
I see a flight that isn't in my time frame, but is actually like 400 euros cheaper. And I decide in that moment that waking up at 5am is worth the savings.
I'd have not typed that into a prompt. I made that decision at the moment I saw the possibility. I didn't even know that it was an option prior to that moment.
Then I go look at hotels. I have a list of requirements, but I see that one of the hotels that I just glanced at has a really nice long pool, and the amenities look nicer from the images. I change my mind at that exact moment, I can walk 15 minutes more to the beach.
Now it should be even clearer why this is important for food.
>> To finish, I do just sort of straight up hate the idea that we're comparing this moment to the invention of electricity. It's on the face of it absurd.
Do you feel that any technology is comparable in its impact?
Most of modern medicine, by which I mean each discovery and invention in its own right, stands alongside electricity. Particularly vaccines.
AI isn’t there yet. You could turn off AI tomorrow and there’d be a shock but people would quickly switch back. You could not do the same for electricity, medicine, combustion engines (or steam engines/turbines), computers, the internet, modern building materials, etc. You try to swap back off any of those and the modern world (literally and figuratively) collapses. Turn off AI, and there’d be a financial collapse but afterwards everything would return relatively easily to an earlier way of doing things (ye know, the way from just 4 years ago, and which is still 99% of how people do things :) )
I think the Internet is the more apt analogy. But even with electricity, you could have taken it away within the first couple decades of its popularity and society would have shrugged it off. Once they got used to that telegraph thing, not so much.
Yeah, I agree, but AI isn’t there yet. It’s too early to call it one way or the other. There’s plenty else that’s as important as electricity in my view, and maybe AI will join those ranks in 15 years or so when it’s gone through the hype loop and when the economy has recovered from the now-basically-inevitable AI- and war-fueled turmoil of the next decade.
That's primarily a function of the time for adoption, though, not the utility of the technology. In 20 years, people would not be able to so easily say that they could turn off AI with no impact.
That..what..no. The question was whether there are any comparable to electricity, of which I have put forth a number of examples. And also offered my opinion that it is too early to judge whether AI will be as significant or not.
There are loads of technologies that, despite being decades old, do not qualify. So, no, it’s not “primarily a function of time”. It absolutely is about the utility. We can only be in a position to judge utility when sufficient time has passed, and AI ain’t had enough time yet to prove its utility. Given enough time, it might prove as useful as electricity, or it might just sit alongside computer operating systems - never quite making it onto anyone’s “this changed the world” list, even if it has as much utility as an OS.
I hate to read this line when academics and graduate students who work in the basic and hard sciences have their funding cut. The grant funding that pays minimum wage to grad students is treated as a burden on this society, yet a company that took all its valuable data from sources that never got credit raises billions of dollars. "Open" is in the name, but closed is how it operates. Sorry for this rant, but the priorities of this world suck.
It feels like an insult to readers to pretend that their revenue-per-month growth is comparable to Google's or Apple's when the funding behind it is absurdly different, not to mention inflation itself.
I am very much on board with AI within my workflow. I just don't really see a future where OpenAI/Anthropic are the absolute front runners for devs, though. Maybe OpenAI does have the better vision by targeting the general public instead, competing to become the next Google before Google can just stay Google?
What is their next step to ensure local models never overtake them? If I could use Opus 4.6 as a local model instead and wrap it in someone else's CLI tool, I'd 100% do it today. Are the future models going to be so far beyond in capability that this sounds foolish? The top models are more than enough to keep up with my own features before I can think of more... so how do they stretch further than that?
A side note I keep thinking about: how impossible is a world where open source base models are collectively trained, similar to a proof-of-work-style pool, and then smaller companies simply spin off their own finishing touches based on that base model? Am I thinking of things too simplistically? Is this not a possibility?
Anthropic is definitely gaining ground on OpenAI in the business world. Cowork is the absolute hotness right now, and it even prompted MSFT to drop their own variant yesterday.
Codex and Gemini CLI seem 1-2 months behind Claude Code. They will catch up. This race will eventually be won by whoever can come up with the cheapest compute.
> how impossible is a world where open source base models are collectively trained similar to a proof of work style pool
Current multi-GPU training setups assume much higher bandwidth (and lower latency) between the GPUs than you can get with an internet connection. Even cross-datacenter training isn't really practical.
LLM training isn't embarrassingly parallel, not like crypto mining is for example. It's not like you can just add more nodes to the mix and magically get speedups. You can get a lot out of parallelism, certainly, but it's not as straightforward and requires work to fully utilize.
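To put rough numbers on why, here's a back-of-envelope sketch; every figure in it is an illustrative assumption (a 70B-parameter model, fp16 gradients, a plain ring all-reduce with no gradient compression), not a measurement:

    # Back-of-envelope: time for one gradient synchronization over home internet
    # vs. a datacenter fabric. All numbers are illustrative assumptions.
    PARAMS = 70e9              # hypothetical 70B-parameter model
    GRAD_BYTES = PARAMS * 2    # fp16 gradients: ~140 GB exchanged per optimizer step

    def sync_seconds(link_gbps: float) -> float:
        # Ring all-reduce moves roughly 2x the gradient volume per node; latency ignored.
        return (2 * GRAD_BYTES * 8) / (link_gbps * 1e9)

    print(f"1 Gbit/s home uplink:   ~{sync_seconds(1.0) / 60:.0f} minutes per step")
    print(f"400 Gbit/s fabric link: ~{sync_seconds(400.0):.1f} seconds per step")

Gradient compression and infrequent-sync schemes can shave that down, but the gap to what dense, synchronous training assumes is still orders of magnitude, which is why a crypto-mining-style pool doesn't translate directly.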
> What is their next step to ensure local models never overtake them?
As someone who experiments with local models a lot, I don’t see this as a threat. Running LLMs on big server hardware will always be faster and higher quality than what we can fit on our laptops.
Even in the future when there are open weight models that I can run on my laptop that match today's Opus, I would still be using a hosted variant for most work because it will be faster, higher quality, and not make my laptop or GPU turn into a furnace every time I run a query.
Though I think these companies are wildly overvalued, I don't see LLMs as a service going away in the future. The value in OpenAI is that it provides extra compute, data access, etc. My money is on local AI becoming more of a thing, while services like OpenAI still exist for local AIs to consult with. If a local model can somehow know that it's out of its depth on a question/prompt, it can ask an OpenAI model if one is available, but otherwise still work locally if OpenAI fails to respond or goes out of business. To me that makes a lot more sense than the future being either-or.
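A minimal sketch of that local-first idea, purely illustrative: local_generate, remote_generate, and the confidence threshold are all hypothetical stand-ins, and the genuinely hard part is getting a trustworthy "out of its depth" signal in the first place:

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class LocalResult:
        text: str
        confidence: float  # e.g. derived from token log-probs or a self-check pass

    def answer(prompt: str,
               local_generate: Callable[[str], LocalResult],
               remote_generate: Optional[Callable[[str], str]] = None,
               threshold: float = 0.7) -> str:
        local = local_generate(prompt)
        if local.confidence >= threshold or remote_generate is None:
            return local.text               # confident enough, or no hosted model configured
        try:
            return remote_generate(prompt)  # consult the hosted model
        except Exception:
            return local.text               # hosted side unavailable or gone: degrade gracefully

Either/or goes away in this picture: the hosted model becomes an optional dependency rather than a single point of failure.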
It's hard to train models in the open. All the big players are using lots of "dodgy" training data. Like books, video, code, destinations. If you did that in the open, the lawyers would shut you down.
Isn't it weird that there is no attribution to a human here? I mean, eventually they have to dropkick sama and install GPT itself as king, right? EOQ seems as good a time as any.
> Within a year of launching ChatGPT, we reached $1B in revenue. By the end of 2024 we were generating $1B per quarter. We are now generating $2B in revenue per month.
They raised $122B.
$122B / ($2B/month × 12) ≈ 5 years to get your money back (I simplify, I know revenue <> profit).
They are so big that almost no one can afford to acquire them. It is similar to someone trying to acquire MSFT or AAPL.
The title is incorrect. The $122B includes previous promises. They raised an additional $12B of promises:
"The round totaled $122 billion of committed capital, up from the $110 billion figure that the company announced in February. SoftBank co-led the round alongside other investors, including Andreessen Horowitz and D. E. Shaw Ventures, OpenAI said."
This IPO, if anyone underwrites it, is going to fleece retail so hard. Better make it a SPAC with the help of Chamath and Cantor Fitzgerald.
Anthropic doesn't have anything else other than the Claude models.
But notice that there is not a single mention of DeepSeek, which tells me they are preparing to scare everyone again. Which is why Dario continues to scare-monger about local models.
Sometimes you do not need hundreds of billions of dollars for inference when it can be done locally with efficient software, and Google proved that. But where is the money in that? So the flawed belief continues that you scale by infinitely buying GPUs, which is exactly what Nvidia needs you to do.
It's only a matter of time for local models to reach Opus level. We are one or at most two years behind that, and Anthropic knows it.
I'm seeing diminishing returns, though in fairness we have no idea yet how to integrate properly with existing good practices and principles. I suspect improvement is going to come mainly from improved tool usage rather than more impressive models.
I feel that too, every technology has its limits.
I use AI daily. But I can’t see the “intelligence“.
All I see is fine tuning and bigger datasets.
Yesterday I asked Claude to fix the color issues of a graph. It failed miserably.
Opus 4.6 wasn’t able to figure out why the text was grey. It made something up, instead of realizing the problem was simple, oklch wrapped inside a hsl color. hsl(oklch(…))
I easily figured this out by just looking at the css and adding some logs to js.
This is not intelligence. This is a tool that’s smart. Not sentient. AGI won’t be achieved by scaling alone.
They have to focus on the distant future (where they are frankly unlikely to exist) because they are falling further and further behind in the immediate future.
Their latest desperate bid for relevance is a plugin for Claude Code that uses Codex as a second opinion. Please clap.
No mention of "AGI" this time. Since we all knew it was a scam. But this is the most damning of them all:
> The OpenAI flywheel is simple. More compute drives more intelligent models. More intelligent models drive better products. Better products drive faster adoption, more revenue and more cashflow.
FTX had a "flywheel". It fell off. Being saddled with hundreds of billions of debt makes this situation ten times worse.
> The OpenAI flywheel is simple. More compute drives more intelligent models. More intelligent models drive better products. Better products drive faster adoption, more revenue and more cashflow. That gives us the ability to reinvest and deliver intelligence more efficiently to consumers, enterprises, and builders around the world.
-x-
In short, the musical chairs are still playing... Keep on walkin' round, y'all, till the music stops.
Personal estimation too; the motivation was to create something faster than a transformer, since transformers were absurdly slow on my CPU, and it's very obviously faster. I get it, LLM hype will come at you ad infinitum regardless...
Fckin lmao. It's all about continuing the hype in the run-up to the IPO to fix a good share price. Are you seriously this naive?
profit isn't a function of having a killer product, it's a function of having no competition
https://www.ark-funds.com/funds/arkvx
The fund is invested in most of the hot tech companies.
https://thedeepdive.ca/openai-locked-up-40-of-global-ram-wit...
The valuation seems odd though, you'd expect $840B post-money from that earlier round?
iykyk
Edit: Why did this go from their press release to a news story?
Admittedly OpenAI is in a better position to do it, but not by much.
Everyone wants to be WeChat in China. No user wants that from them.
Best they can do is to somewhat reliably react to objective signals that they've failed at something (like test failures).
The market for local models is always gonna be a small niche, primarily for the paranoid.
Have you ever heard of industrial espionage? Or privacy regulations? Or military applications?
(Also, the US military runs Claude as a local model.)
I do not; I self-host. My current client also got rid of AWS, banking nice savings as a result.
WCGW?
"The round totaled $122 billion of committed capital, up from the $110 billion figure that the company announced in February. SoftBank co-led the round alongside other investors, including Andreessen Horowitz and D. E. Shaw Ventures, OpenAI said."
This IPO, if anyone underwrites it, is going to fleece retail so hard. Better make it a SPAC with the help of Chamath and Cantor & Fitzgerald.
Last announcement before the IPO and the inevitable collapse, I reckon.
Can confirm. Kimi K2.5 is pretty intelligent and most of the time there's no difference between Opus and Kimi.
If anything there's a plateau between each model release.
What??
Doesn't really strike me as the kind of statement that comes out of a company that can sustain a ~$1T market cap...
/s
I am so sick of AI writing.