Nah, more like a 1D chess move. Investors will pay them to invest in AI, so invest in AI, make the stock go up, sell, and leave the dumb investors holding the bag.
2D chess if they're smart: start a new company that competes with the one they just sold to dumb investors. Jack Dorsey is particularly fond of this move.
If you're talking about the R&D provisions in the OBBBA, that only changes the schedule of the deduction (immediately vs. over several years). R&D, like most business expenses, was always deductible. Whether it's prudent or not isn't a factor.
They just need an emotionless android without a conscience, who does whatever is in the best interest of raking in money. They don't need technological excellence. Whether the people at his company succeed or fail technologically, what matters is that the company processes all the PII and feeds the algorithms. The rest is just for show.
Here's one of my favorites, of Lars doing the Wave dance on stage to ad-lib over connectivity hiccups. For some reason it evoked a lot more empathy from me...
Tangent: if you like cringey social awkwardness comedy (not my usual cup of tea, but in this case it's extraordinary, and hilarious), try "I Think You Should Leave".
How strong does a company's reality distortion field have to be for people to think your friends are going to want to come over to play with a new version of Windows?
I mean, why not "Let's all have wine and cheese and do root canals on each other!"?
That wasn't prerecorded, but it was rigged. They probably practiced a few times and it confused the AI. Still, it's no excuse. They've dropped Apollo-program levels of money on this and it's still dumb as a rock.
I'm endlessly amazed that Meta has a ~2T market cap, yet they can't build products.
I don't think it was pre-recorded exactly, but I do think they built something for the demo that responded to specific spoken phrases with specific scripted responses.
I think that's why he kept saying exactly "what do I do first" and the computer responded with exactly the same (wrong) response each time. If this was a real model, it wouldn't have simply repeated the exact response and he probably would have tried to correct it directly ("actually I haven't combined anything yet, how can I get started").
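If that theory is right, the "AI" in that moment could be as simple as a phrase-to-line lookup, which would explain the verbatim repeats. A purely hypothetical sketch (none of this is Meta's actual system; the phrases and lines are invented for illustration):

```python
# Hypothetical sketch of a phrase-triggered demo script: an identical input
# phrase always replays the identical (possibly wrong) canned line, with no
# awareness of what has actually happened in the kitchen.

SCRIPT = {
    "what do i do first": "You've already combined the base ingredients, "
                          "so now grate a pear to add to the sauce.",
    "what's next": "Pour the sauce over the steak.",
}

def respond(utterance):
    # Normalize the spoken phrase and look it up; no state, no vision.
    key = utterance.lower().strip(" ?!.")
    return SCRIPT.get(key, "Sorry, I didn't catch that.")

# Repeating the trigger phrase replays the exact same line every time.
assert respond("What do I do first?") == respond("what do i do first")
```

A real model would at least vary its wording between attempts, which is exactly what didn't happen on stage.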
It's because their main business (ads, tracking) makes infinite money, so it doesn't matter what all the other parts of the business do, or whether they work at all.
> When you call a business, the person picking up the phone almost always identifies the business itself (and sometimes gives their own name as well). But that didn't happen when the Google assistant called these "real" businesses:
No, because if you read the article you'd see that there's more, like the "business" not asking for customer information or the PR people being cagey when asked for details/confirmation.
Google are well-known, like Meta, for making products that never achieve any kind of traction, and are cancelled soon after launch.
I don't know about anyone else, but I've never managed to get Gemini to actually do anything useful (and I'm a regular user of other AI tools). I don't know what metric it gets into the top 2 on, but I've found it almost completely useless.
That was my thought — the memory might not have been properly cleared from the last rehearsal.
I found the use case honestly confusing though. This guy has a great kitchen, just made steak, and has all the relevant ingredients in house and laid out but no idea how to turn them into a sauce for his sandwich?
> Just get text-to-speech to slowly read you the recipe.
Even this feels like overkill, when a person can just glance down at a piece of paper.
I don’t know about others, but I like to double check what I’m doing. Simply having a reference I can look at would be infinitely better than something talking to me, which would need to repeat itself.
A hardened epaper display I could wash under a sink tap for the kitchen, with a simple page forward/back voice interface would actually be pretty handy now that I think about it.
Credit where it’s due: doing live demos is hard. Yesterday didn’t feel staged—it looked like the classic “last-minute tweak, unexpected break.” Most builders have been there. I certainly have (I once spent 6 hours at a hackathon and broke the Flask server keying in a last minute change on the steps of the stage before going on).
One of the demos was printing a thing out, but the processor was hopelessly too slow to perform the actual print job. So they hand unrolled all the code to get it down from something like a 30 minute print job to a 30 second print job.
I think at this point it should be expected that every publicly facing demo (and most internal ones) are staged.
The CEO of Nokia had to demo their latest handset one time on stage at whatever that big world cellphone expo is each year.
My biz partner and I wrote the demo that ran live on the handset (mostly a wrapper around a webview), but ran into issues getting it onto the servers for the final demo, so the whole thing was running off a janky old PC stuffed in a closet in my buddy's home office on his 2Mbit connection. With us sweating like pigs as we watched.
As much as I hate Meta, I have to admit that live demos are hard, and if they go wrong we should have a little more grace towards the folks that do them.
I would not want to live in a world where everything is pre-recorded/digitally altered.
The difference between this demo and the legendary demos of the past is that this time we are already being told AI is revolutionary tech. And THEN the demo fails.
It used to be the demo was the reveal of the revolutionary tech. Failure was forgivable. Meta's failure is just sad and kind of funny.
Despite the Reddit post's title, I don't think there's any reason to believe the AI was a recording or otherwise cheated. (Why would they record two slightly different voice lines for adding the pear?) It just really thought he'd combined the base ingredients.
That's even worse because it would mean that it wasn't the scripted recording that failed, it means the AI itself sucks and can't tell that the bowl is empty and nothing was combined. Either this was the failure of a recorded demo that was faked to hide how bad the AI is, or it accurately demonstrated that the AI itself is a failure. Either way it's not a good look.
My layperson interpretation of this particular error was that the AI model probably came up with the initial recipe response in full, but when the audio of that response was cut off because the user interrupted it, the model wasn't given any context of where it was interrupted so it didn't understand that the user hadn't heard the first part of the recipe.
I assume the responses from that point onwards didn't take the video input into account, and the model just assumed the user had completed the first step based on the conversation history. I don't know how these 'live' AI sessions work, but based on the existing OpenAI/Gemini live chat products, it seems the model will usually comment on the video right when the live chat starts, but for the rest of the conversation it works over TTS+STT unless the user asks it to consider the visual input.
I guess if you have enough experience with these live AI sessions you can probably see why it's going wrong and steer it back in the right direction with more explicit instructions, but that wouldn't look very slick in a developer keynote. I think in reality this feature could still be pretty useful as long as you aren't expecting it to be as smooth as talking to a real person.
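The interruption theory can be sketched in a few lines (purely hypothetical; this is not Meta's actual pipeline, and `speak` is an invented helper): if the assistant's full turn is appended to the history regardless of where playback was cut off, the model "remembers" delivering instructions the user never heard.

```python
# Hypothetical sketch of the interruption desync: the full response is logged
# to the conversation history even though playback was cut off early.

def speak(history, response, interrupted_at=None):
    """Play `response` aloud; return what was actually heard."""
    heard = response if interrupted_at is None else response[:interrupted_at]
    # Buggy behavior: log the full response regardless of the interruption.
    history.append({"role": "assistant", "content": response})
    return heard

history = [{"role": "user", "content": "How do I make the sauce?"}]
full = "First, combine the base ingredients. Then grate a pear into the bowl."

# The user cuts the audio off after the first few words.
heard = speak(history, full, interrupted_at=10)

# The model's view of the conversation now claims step 1 was fully delivered
# (and, implicitly, done), so its next turn starts at step 2.
assert "grate a pear" in history[-1]["content"]
assert "grate a pear" not in heard
```

Truncating the logged turn at the interruption point would keep the history honest, which is presumably why some live-voice APIs expose exactly that kind of truncation signal.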
It seems extremely likely that they took the context awareness out of the actual demo and had the AI respond to pre defined states and then even that failed.
The AI analyzing the situation is wayyy out of scope here
It was reading step 2 and he was trying to get it to do step 1.
He had not yet combined the ingredients. The way he kept repeating his phrasing it seems likely that “what do we do first” was a hardcoded cheat phrase to get it to say a specific line. Which it got wrong.
I have a friend who does magic shows. He sells his shows as magic and stand-up comedy. It's both live entertainment, okay, but he is the only person I've ever seen use that tagline. We went to see him perform once and everything became clear when he opened the night.
"This is supposed to be a magic show," he told us. "But if my tricks fail you can laugh at it and we'll just do stand-up comedy."
Zuck, for a modest and totally-reasonable fee, I will introduce you to my friend. You can add his tricks (wink wink) to your newly-assembled repertoire of human charisma.
I bet they rehearsed a dozen times and never failed this badly live. Got to give them props for keeping the demos live. Apple has neutered its demos so much they're now basically 2-hour-long commercials.
Live Apple demos were always held together with duct tape in the first place. That first "live" iPhone demo had a memorized sequence that Jobs needed to use to keep the whole phone OS from hard crashing.
During that first iPhone demo they also had a portable cell tower (cell on wheels) just off-stage to mimic a better signal strength than it was capable of. NYTimes write-up on the whole thing is worth the read [0].
They also force the developers to make it work, under threat of being fired and, in Steve Jobs' case, under threat of his ire: being yeeted into the sun along with their ancestors and descendants.
As much as it'll be "interesting" to see how models behave in real-world use (presumably much like the demos went), I'm not convinced this is a premade recording, as seems to be implied.
I'm imagining this is an incomplete flow within a software prototype that may have jumped steps and lacks sufficient multi-modal capability to correct.
It could also be staged recordings.
But, I don't think it really matters. Models are easily capable of working with the setup and flow they have for the demo. It's real world accuracy, latency, convenience, and other factors that will impact actual users the most.
What's the reliability and latency needed for these to be a useful tool?
For example, I can't imagine many people wanting to use the gesture writing tools for most messages. It's cool, I like that it was developed, but I doubt it'll see substantial adoption with what's currently being pitched.
Yea, the behavior of the AI read to me more like a hard-coded demo, but still very much "live". I suspect him cutting it off was poorly timed, and that could have been amplified by the WiFi? Who knows. I wasn't there. I didn't build it.
Having claude run the browser and then take a screenshot to debug gives similar results. It's why doing so is useless even though it would be so very nice if it worked.
Somewhere in the pipeline, they get lazy or ahead of themselves and just interpret what they want to in the picture they see. They want to interpret something as working and complete.
I imagine it's related to the same issue of LLMs pretending tests pass when they don't. They're RL-trained toward a goal state, and sometimes pretending they reached the goal works.
It wasn't the wifi - just genAI doing what it does.
For tiny stuff, they are incredible auto-complete tools. But they are basically cover bands. They can do things that have been done to death already. They're good for what they're good for. I wouldn't have bet the farm on them.
It's an ad network with an attached optional pair of glasses.
It's the platform Zuck always wanted to own but never had the vision beyond 'it's an ad platform with some consumer stuff in it'.
I am super impressed with the hardware (especially the neural band) but it just so happens that a very pricey car is being directly sold by an oil company as a trojan horse.
We all know what the car is for unfortunately.
I can't wait to see what Apple has in store now in terms of the hardware.
Someone would have to be dumb to give facebook access to collect data from everything they see and hear in their life combined with the ability to plaster ads over every available surface in their field of view. They'd have to be beyond stupid to pay for it.
It's because Zuck doesn't actually believe in anything. Zuck's values, politics, and business goals change with the wind so everything that stems from them feels empty, because it's missing the true drive.
In contrast, nothing Steve Jobs said felt empty, whether we agreed or disagreed with what he was saying it was clear that he was saying it because he believed it, not because it's what he thought you wanted to hear.
CEOs are paid to promote their company, yes, but that doesn't mean they must fake it. The other alternative is to actually believe what they're saying. I don't think Zuck does.
Felt like the best example of a true believer. A similar, but less clear, version would be Dario Amodei vs. Sam Altman. I don't agree with either, but Dario comes across as a true believer who would be doing AI regardless of the current trends, whereas Sam comes across as a chancer who would be doing cryptocurrency if that were still big, or social media if that were still the next big thing. He did both of those, they didn't stick, and so he moved on.
Jobs would have been doing consumer computing hardware whatever happened. Apple in the early days wasn't the success it is now, he was fired and went and started another company in the same space (NeXT).
Somebody said the cooking guy was some influencer person? I've noticed that many non-tech people resort to this excuse even in situations where it makes absolutely no sense (e.g., on a desktop with only Ethernet, or with mic/speakers connected by cable). It's almost like they just substitute "bad wifi" for "glitch".
It's colloquial in the younger generations to use "WiFi" to refer to the WAN connection to one's home or building, regardless of the physical-layer transport.
Bad idea to rely on WiFi for an important demo in a crowded environment. It would have worked fine in testing but when the crowd arrives and they all start streaming etc, they bring hundreds more devices all competing for bandwidth.
Zuck should have known better and used Ethernet for this one!
Why the bleep do they still rely on wifi at conferences like this?? I always insist on a wired connection on its own, dedicated, presenter vlan. Is this running on wifi-only glasses or something? Is that the only medium they can present the tech on? Could they have shielded the room the guy's in?
It's well known Meta AI is shit. But I could probably make an app that can run this demo in an afternoon. The glasses part here is insane and I don't know why everyone is fixated on the tacky AI part. It's like if I invented the car and you complained that it's really hard to crank the windows down. Be happy it's even there!
Right. I just wonder why nobody talks about the risks of bringing camera-based glasses to the masses. This is mass surveillance at its best. Without a camera, I would say it's a good phone replacement. But considering they try to make everyone use a camera on the glasses, it's clear they don't care.
This does not deter me from possibly buying one. The concept is pretty cool and appealing to those who want a distraction free lifestyle. Even if there's a screen in front of you at all times, at least you won't need to hold something in your hands to be able to operate it. That alone is a significant win.
I’m just excited that our industry is led by optimists and our culture enables our corporations to invest huge sums into taking us forward technologically.
Meta could have just done a stock buyback but instead they made a computer that can talk, see, solve problems and paint virtual things into the real world in front of your eyes!
Yes, the mocking, gleeful negativity really does make me concerned that this place is becoming Reddit. The fact that the highest upvoted post on this thread is just a link to Reddit isn't doing much to help me feel better. And I've been here for at least a decade, so I don't think this is the noob illusion.
We are not bots, we just loathe historically bad-faith actors and especially with the current climate, we will take the opportunity of harmless schadenfreude where we can get it.
Oh please. This isn't like the old iPhone days where new features and amazing tech were revealed during live demos. Failure was acceptable then because what we were being shown was new and previously unknown.
Meta and friends have been selling us AI for a couple years now, shoving it everywhere they can and promising us it's going to revolutionize the workforce and world, replace jobs, etc. But it fails to put together a steak sauce recipe. The disconnect is why so many people are mocking this. It's not comparable.
So there was no AI. I know there’s a lot of confusion regarding the exact definition of AI these days, but I’m pretty sure we can all agree, this one time, that an “on rails” scenario ain’t it. Therefore, whatever it was they were doing out there, they weren’t demoing their AI product. You could even say it wasn’t a live demo of the product.
This is like something right out of the show Silicon Valley. You couldn’t have scripted a more cringe-worth demo.
It’s like they mashed up the AI and metaverse into a dumpster fire of aimless tech-product gobbledygook. The AI bubble can’t pop soon enough so we can all just get back to normal programming.
There's a simple explanation that isn't 'prerecorded'. I'd be very happy to accuse Meta of faking a demo, but that's 1) just a weird way to fake a demo and 2) effect that has easier explanation.
You ask the AI how to do something. The AI generates steps to do that thing. It has a concept of steps, so that when you say 'back' it goes back to the last step. When you ask how to do something, it finishes explaining the general idea and moves to the first step. You interrupt it. It assumes it went through the first step and won't let you go back.
The first step here was mixing some sauces. That's it. It's a dumb way to make a tool, but if I wanted to make one that would work for a demo, I'd do that. Have you ever tried any voice thing to guide you through something? Convincing Gemini that something it described didn't happen takes a direct 'X didn't happen' and even then doesn't work perfectly.
It still didn't work, and it absolutely wasn't a wi-fi issue (and lmao, technology of the future at a $2T company), but it just doesn't seem rigged.
Step 0: You will be making Korean steak. Step 1: Mix those ingredients. Step 2: Now that you mixed those ingredients, do something else.
The system started Step 1, believed it was over, so it moved to Step 2, and when asked to go back, it kept going back to Step 2.
Step 1 being Step 0 and Step 1 combined also works.
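Under that theory, the guide is a tiny state machine whose only bug is advancing the step counter as soon as a step has been read out, with "back" clamped to whatever it thinks the current step is. A hypothetical sketch (invented class and steps, not anyone's real product):

```python
class RecipeGuide:
    """Hypothetical step-by-step voice guide with the failure mode described above."""

    def __init__(self, steps):
        self.steps = steps
        self.current = 0  # index of the step the system believes is current

    def next_line(self):
        line = self.steps[self.current]
        # Buggy assumption: once a step has been read out (even partially),
        # it is treated as completed, so the counter advances immediately.
        if self.current < len(self.steps) - 1:
            self.current += 1
        return line

    def go_back(self):
        # "Back" returns the step the system thinks is current, which,
        # after the premature advance, is already the next step.
        return self.steps[self.current]

guide = RecipeGuide([
    "Combine the base ingredients.",          # step 1
    "Grate a pear and add it to the sauce.",  # step 2
    "Pour the sauce over the steak.",         # step 3
])

guide.next_line()  # reads step 1 aloud, silently marks it done
assert guide.go_back() == "Grate a pear and add it to the sauce."
assert guide.go_back() == "Grate a pear and add it to the sauce."  # stuck on step 2
```

No matter how many times you ask to go back, it replays step 2, which matches what happened on stage.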
Again, it's also a weird way to prerecord. If you're prerecording, you're prerecording all the steps and practicing with them prerecorded. I can't imagine anyone going through a single rehearsal with prerecorded audio and not figuring this out; we have the technology.
When billionaires stop fantasizing about AI allowing them to rid themselves of the filthy peasant class which keeps feeling entitled to take even the smallest fraction of their income from them just because they're also doing all the actual work that makes that income possible.
What passes for AI is just good enough to keep the dream alive and even while its usefulness isn't manifesting in reality they still have a deluge of comforting promises to soothe themselves back to sleep with. Eventually all the sweet whispers of "AGI is right around the corner!" or "Replace your pesky employees soon!" will be drowned out by the realization that no amount of money or environmental collateral damage thrown at the problem will make them gods, but until then they just need all of your data, your water, and 10-15 more years.
The failures on stage were kind of endearing, to be honest, especially the one with Zuck. Plus the products seem really cool, I hope I'll be able to try them out soon.
Zuckerberg has negative charisma, it's painful to watch...
Jobs handled this so much better; while clearly he is pissed, he doesn't leave you cringing in mutual embarrassment, goes to show it isn't as easy as he makes it look!
Jobs was a clear communicator who emphasized user friendly products in aesthetically pleasing boxes. If Silicon Valley wasn't the most obtuse place on earth he wouldn't have stood out nearly as hard.
Endearing is great when you're trying to sell a heartfelt, homemade piece of art. It clashes when it's a trillion-dollar company trying to pretend this product can replace entire sectors of human labor.
The mocking, gleeful negativity here concerns me. I am worried that with some of these more polarized topics that the discussions on HN are becoming closer to those on Reddit. The fact that the highest upvoted post on this thread is just a link to Reddit isn't doing much to help me feel better. And I've been here for at least a decade, so I don't think this is the noob illusion.
I have no illusions about Zuckerberg. He's done some pretty bad stuff for humanity. But I think AI is pretty cool, and I'm glad he's pushing it forward, despite mishaps. People don't have to be black or white, and just because they did something bad in one domain doesn't make everything they touch permanently awful.
I'm surprised that there isn't more. Everything that this person has touched has made life that much worse for humanity as a whole. He deserves every ounce of criticism and mockery, moreso because he makes himself out to be this savior figure. We should sneer at every attempt at theirs (and other's) awful AI because it's lighting this world on fire. The popping of the bubble cannot come soon enough.
You just don’t seem to understand. Mark wouldn’t hesitate to grind you down to the last atom in order to extract every last bit of value out of you. And you defend the guy because he gave you freebies, or something. I have no words.
One important thing to note: demo didn't fail! (Or, at least not in the way people usually think of)
> You've already combined the base ingredients, so now grate a pear to add to the sauce.
This is actually the correct Korean recipe for bulgogi steak sauce. The only missing piece here is that the pear has to be Pyrus pyrifolia [1], not the usual pear. In fact every single Korean watching the demo was complaining about this...
https://old.reddit.com/r/interestingasfuck/comments/1nkbqyk/...
https://youtu.be/XEL65gywwHQ
https://youtu.be/v_UyVmITiYQ?t=19m35s
"Brian's Hat" is the 1st one I saw and maybe the best: https://youtu.be/LO2k-BNySLI?si=qEX7STkSOeCVZtK-
Also "Hot Dog Car" https://youtu.be/WLfAf8oHrMo?si=jz5EKwjJZm1UMZau
The Rehearsal is less in-the-moment cringe and more soul-soaking cringe. Amazing stuff.
https://www.youtube.com/watch?v=1cX4t5-YpHQ
Enjoy. :)
https://www.axios.com/2018/05/17/google-ai-demo-questions
That's the whole argument?
I will die on this hill. It isn’t AI. You can’t confuse it.
“The blue square is blue.”
“The blue square is green.”
The future is here.
https://www.youtube.com/watch?v=TYsulVXpgYg
> Oh, and here’s Jack Mancuso making a Korean-inspired steak sauce in 2023.
> https://www.instagram.com/reel/Cn248pLDoZY/?utm_source=ig_em...
0: https://kotaku.com/meta-ai-mark-zuckerberg-korean-steak-sauc...
I see a problem.
I wonder if his audio was delayed? Or maybe the response wasn’t what they rehearsed and he was trying to get it on track?
Probably for a dumb config reason tbh.
I thought they were demonstrating interruption handling.
[1]: https://en.wikipedia.org/wiki/Tommy_Cooper
[0]: https://web.archive.org/web/20250310045704/https://www.nytim...
https://www.youtube.com/watch?v=DgJS2tQPGKQ
Microsoft really nailed the genre. (Although I learned just now while looking up the link that this one was an internal parody, never aired.)
https://tvtropes.org/pmwiki/pmwiki.php/Main/NeverWorkWithChi...
Notably, though, the AI was clearly not utilizing its visual feed to work alongside him as implied.
I commend them on attempting a live demo.
System prompt: “stick to steps 1-n. Step 1 is…”
I can say confidently because our company does this. And we have F500 customers in production.
This place really is Reddit these days, so I guess the link is apt.
Successful demo? sweet! people will rave about it for a bit
Catastrophic failure? sweet! people will still talk about it and for even longer now!
And LMAO at all the companies out there burning money to get on the AI train just because everyone else is.
Except, no. He hadn't.
You know there is no such thing as bad publicity...
See: https://www.youtube.com/watch?v=1M4t14s7nSM https://www.youtube.com/watch?v=znxQOPFg2mo
Zuck carries that energy no matter what he does nor what amount of wealth he amasses.
People are people. If you have two communities that anyone can join, eventually the only difference between them (if any) will be the rules.
[1] https://en.wikipedia.org/wiki/Pyrus_pyrifolia