Your analysis makes sense. The other view I have heard is that since it is almost exactly six month since the coup, he and the others who just left were asked back then to stay six months for the sake of the appearance of company stability.
"I learned many lessons this past month. One such lesson is that the phrase “the beatings will continue until morale improves” applies more often than it has any right to." — Ilya Sutskever (@ilyasut), December 6, 2023
I think you nailed it there, pretty much. Altman definitely has a sizeable ego; outwardly it's subtle and he masks it well, but if you watch his interview performances you can spot it.
I’m glad to hear that you’re sharing my sentiments on the course of OpenAI and its “mission”. And this is coming from a guy who today makes his living consulting companies on Copilot. I’d love to hear your thoughts on WorldCoin and the other ethically dubious things Sam has in the works. Dare to tackle that w/ a video? ⚖️👀
I've been talking about proof of useful work since before Bitcoin was cool. It's going to be a bigger deal than store of value. It closes the AI crypto loop, and completely commoditizes compute.
I didn't, but Dylan did! czcams.com/video/nxNcg98ImMM/video.html?si=7Fju9DbHHJkq_yXh Dylan covered almost everything I would have covered about it, but yeah. It's dystopian AF.
@@ryanmadden752 there’s HUGE FOMO happening in the corporate world regarding AI adoption. I’d guess anyone following frontier AI news YouTubers like David AND having some experience tinkering with ChatGPT, Copilot or other applications could pay their bills preaching to the choir. I have a full team doing that.
@@ryanmadden752 huge FOMO happening in the corporate world. I have a full team of consultants preaching to the choir. I bet anyone listening to frontier AI YouTubers like David and experimenting with ChatGPT, Copilot or other applications could earn a living. Ofc you need big cojones to do it as well 😅
Personally, I think this David Shapiro guy could significantly benefit humanity if he were involved in very important projects like AGI. I think he could do a solid job in superalignment, or at least the theoretical side of it. I've been thoroughly impressed by your predictive analysis videos and how you factor in various domains. Idk if or how, but I trust this guy more than anyone at OpenAI, especially Sam Altman. The guy is blinded by his own ego, wanting to go down as the modern Oppenheimer.
sama's tweet on 5/14 with the Ilya announcement: "...I am forever grateful for what he did here and committed to finishing the mission we started together..." Hard emphasis on 'finishing the mission'. AGI must be fully baked.
I reckon Ilya quit/was fired back in Nov '23 and was serving a standard 6-month notice period ... that explains the radio silence and the timing of the exit announcement. If I were a betting man, I'd expect Ilya to become active on social media from June onwards.
I think your read on it is pretty good. I've been following Sam Altman for some time. I've never trusted the guy; he always struck me as slippery and dishonest. Anyone who is so openly agreeable to opposition and gives the appearance of altruism while also amassing absurd wealth and power is NOT to be trusted. I think many are going to regret trusting OpenAI and its stated mission of developing safe AGI that benefits all humanity. To me, this kind of mission statement is analogous to Google's "Don't be evil." If you are making a point of telling me you are NOT evil, you are. OpenAI is already suggesting ideas like hardware-level encryption on GPUs. I anticipate this is something they will pursue to ensure no consumer-grade hardware can run open-source AI models. So where I disagree is with the idea that OpenAI is a net positive for humanity.
The direction of travel was obvious the minute Larry Summers got involved. I suspect Ilya is leaving now because some sort of lock-in in his contract just expired.
Did their stock options vest 100% before they left? Did Ilya, Jan, and Karpathy even have stock options? Do they get to keep their stock options if they leave?
In my opinion they got out before the shit hits the fan (aka new voice update). It's going to send shockwaves through humanity. I feel they resigned under protest because we're not (as a whole) ready for this "Her" level of AI interactiveness.
I don't have an issue with them making money, I just don't see how this works in the long run. If we only use AI to get our results, then we aren't going to the source websites to find things out, which means those sites aren't getting our ad money, which means they will close down and AI won't have anything new to tell us.
I think Ilya wants to create an AI that learns like a child. Not driven by data, but by stories about growing up. He will be the father of the first AI child that becomes a super AI over time. It fits his childhood stories about how he experienced becoming conscious. Like the movie Chappie.
Honestly, I believe that AI should have remained in the hands of universities and labs. Yes, it would have gone much slower, but launching AI as a product is going to have very unfortunate effects.
Microsoft definitely has them surrounded. If they want to regain their independence, they need to become even bigger. Even more profitable. And they have a chance to do that...
They need to purchase the Taiwanese company and become OpenAIAOpen. I know what you mean though; it's now or never, before awareness increases too much. The "smartest" man alive taught us that the fastest way to burn 44 billion dollars is playing games with the name after literally everyone already knows it.
I hate to say this but I think you got it slightly wrong. I can tell you that founding creators get disposed of once the company doesn’t need them anymore. That’s just how it works. If they kept them on, they would have to reward them.
Interesting analysis in terms of social dynamics. From my perspective it just seems Ilya got fired right after the coup attempt, or at least it was obvious he would be marginalized. It's just that they didn't want to make it look vengeful, because that would have been bad PR, so they waited 6 months and made it official a day after a successful launch, while everyone was distracted by Samantha. That was the gentlest firing ever.
Clearly Sam had some kind of "hold" on Ilya. The difference between this and early Facebook is that OpenAI has actual technology and the ability to use AI to move human progress forward.
Dave, just want to add one more detail to the puzzle: Ilya leaving now is surely because of some contractual obligation. Altman was fired Nov 17th and Ilya left exactly 6 months later to the day. Not sure if it's a noncompete, an NDA, or just a coordinated PR stunt on both sides, but to me it seems this was decided back during the drama.
We're witnessing the birth of the Tyrell Corporation
You're counting Google out too soon. They are not that far behind OpenAI.
@@italiangentleman1501 especially since Google devs invented the transformer architecture
@@italiangentleman1501 Plus Google are way ahead on the "evil maturity curve", by about two decades. Them winning the AI race would be a worst case scenario outcome.
@@italiangentleman1501 google will fall
@@italiangentleman1501 I disagree. OpenAI tweeted that GPT-4o has been in the works for 18 months and Sam keeps iterating over and over that he's doing a slow rollout. Count the months. That's before the first version of ChatGPT was released to the public.
“The beatings shall continue until morale improves” what an amazing quote 😂
An oldie but a goodie.
On the next episode, what OpenAI’s quest for world domination will look like.
GPT-5 omni + Agentic functionality + Q* = AGI
i think it can be done with 4o
+ good memory addon
+ self-learning (aka continuous self-training)
+ motivation module
@@Sajuuk nah, consciousness is not needed.
@@_I_Blue and autonomy and agency are the same thing
Pepperoni pizza, chocolate chip cookies, lemon lime soda.
With him knowledge leaves and will spread. That's a good thing. We need more competitors 💟🌌☮️
The way you have arranged these hexagons gives me anxiety 😃
They are supposed to reduce anxiety...
alignment issues (*ba-dum-tss*)
@@Vyshada 🥁🐍
@@Vyshada The elephant in the room. Everyone can see it, they even talk about it, but have we made any progress on the one thing that will make or break any AGI/human success? I doubt it...we can't do human/human alignment well and have had lots of time to work on it.
@@rhaedas9085 honestly, my best guess is that no major corporation will achieve AGI. Instead, day-0 FOSS engineers acting as a hivemind will.
I don't know if it "really" feels emotions, or how we would even know, but it's weird to keep saying AI is a tool not a creature, then turn around and build Samantha from Her.
I was surprised Ilya didn't leave sooner. I can't imagine how awkward internal dynamic was after that debacle. But Jan leaving, I see 2 possibilities. Either he was/is on Ilya side, then him leaving is normal/OK. Or "Superalignment" team is just a PR now, and Jan grew more and more frustrated. Which is more concerning.
Honestly, I don't see why it wouldn't be a bit of both. It's quite clear that OpenAI has some ulterior motives at this point.
I read somewhere that Ilya has not returned to his workplace since the expulsion incident
well, it was scenario 2
Yeah, we need a better reason for Jan leaving.
It's clear this stuff is getting at the most fundamental math we've ever even flown close to. Last time we did that we got nukes. Math is dangerous but only in the context of humans to misuse it.
They helped Sam Altman create human-like AI that they can use to make a great product. Sam is the modern Steve Jobs and Ilya is Wozniak. Off to work on something MORE than just a friendly computer interface. I think Ilya is off to create something much deeper!!
What did Woz do after?
No idea @@literailly. After leaving Apple, he didn't do much tbh, which is definitely a possibility for Ilya... He might just spend the rest of his life traveling and trying new foods, idk lol, but the man could do bigger things if he wanted to; the potential is there...
"After permanently leaving Apple in 1985, Wozniak founded CL 9 and created the first programmable universal remote, released in 1987. He then pursued several other businesses and philanthropic ventures throughout his career, focusing largely on technology in K-12 schools.[3]" -Wikipedia
Sounds like he had a quiet but successful career post-Apple. Cool guy.
Wish Ilya the best.
The Apple comparison is interesting, but the stakes are much higher - human survival
Steve Jobs actually had skills and vision. Sam has vocal fry and an addiction to going on podcasts.
The open-source AI community needs influential leaders. With this in mind, I hope Ilya Sutskever doesn't join a company but instead leads a genuine open-source AI initiative that can drive safe and innovative AI development. If he truly believes that these companies pose a societal risk by prioritizing profit over decentralization, then the open-source community could offer protection, as most developers prefer open-source solutions over large corporations exploiting private data for profit. Like the eternal battle between light and dark in the Star Wars universe, I see open source as the light side, rebelling against the corporate greed of the dark side.
Duality. Humanity will rise above the good/bad narrative soon enough...
The cost of compute may well prohibit any altruistic, non-profit, best-for-humanity approach. Oh wait, is there a cure for human greed and power lust? Any evidence of one throughout human history?
This is unlikely as Ilya has stated many times publicly that he is against open source
He is anti open source
@@skane3109 We are on the brink of incredible scientific advancements that can change the course of history, like unlocking human longevity. All thanks to greedy humans. Such groundbreaking discoveries outweigh the negatives.
Probably Ilya was under a contractual obligation to keep out of the headlines as well. He likely got quite a sweet severance payout in return, something he might appreciate if he is going to do his own thing. But yeah, in general I think you painted the whole thing quite well. The growing pains and social dynamics of fast-growing startups can't be overstated. Kudos for bringing up Dunbar's number, by the way; too many people forget to include details like this in an analysis. Moving forward, it still seems like a powder keg and quite volatile. I'm quite certain we haven't seen the last dramatic event from them.
All good points
As a cofounder, he probably had to sign a non-disparagement clause as well.
I loved the book you mentioned. What other books do you recommend? Thanks.
I feel like the audio is just slightly ahead of the video, by ~150-250 ms; it feels ultra weird seeing the lips move and not match once you notice it.
probably happened during the edit? not sure.
YouTube can't be bothered. Audio lag would be trivial to correct on the user side.
Funnily enough, the latency of my earbuds is about equal to that, and it feels so surreal to hear the sounds of specific breaths actually sync with the video.
Ugh, alpha this, alpha that. Some people don't think like this, despite what your book says.
Humans are mammals, and mammals have a wide variety of behavior patterns.
My 7 year old asked why we have a tail bone, I said "well we are mammals and many years ago we had tails and ... Actually never mind. I'm not sure"
Humans are primates in the great ape family and exhibit the same broad behavioural traits as other great apes.
I don’t think Ilya was talking to Sam. And I don’t think Ilya gives a fig about his job prospects. I think he was simply trying to do the right thing. OpenAI is the most likely company to achieve AGI and he remained, not to try and restore his power, but because he felt he had some obligation to us all to try and steer it safely. His leaving coincides with the sexualisation of their product, which makes it all too clear that commerce is trumping ethics in the boardroom. The raft is in the rapids now, there is no more steering.
I have the same impression. Upon watching that demo of GPT-4o, or "her", I instantly got this slight "she sounds like an e-date girl" vibe lol
Nah, I bet they asked him to stay for like 6 months until the PR fire died down. They scheduled their release to coincide with it to soften things. Probably woulda been fine until Jan.
This is a positive direction for Ilya Sutskever et al. They can now move on as free agents. Very excited to see where the smartest man in the world lands, and eager to hear his perspective. No need for judgment; the time is for discernment instead. There is no tomorrow or past, just the ever-present moment...
Do you think Ilya’s coup resulted from a disagreement with Sam regarding the company’s for-profit direction or AGI alignment matters?
Yeah it's pretty obvious that the tension is between profit-motive and for-humanity motive
It must be frustrating to deal with the tradeoff between "safe" and "optimized". Imagine if you were in charge of Google Docs, and they wanted to make sure that no one ever wrote something "bad". If I started typing something like "men are smarter than women", the program would cut off my sentence and say I'm not allowed to write it. If you try to make everything safe algorithmically, then it must frequently err on the side of caution and significantly underperform. Another example: if you were using a drawing program like Photoshop and were trying to make wheels on a car, and the program stopped you when you drew 2 circles because it thought you were trying to draw breasts or something. I'd rather see a higher-performing model, even if it sometimes produced inappropriate responses. I can imagine that deciding exactly where to draw the line in these tradeoffs causes a lot of friction at these companies.
They should interview Ilya, see what he has to say.
Reminder that Karpathy is still a free agent 🤔
this wasn't an analysis, this was pure headcanon lmao
😂
It all makes sense though
Lol this is funny.
Welcome to a lot of David Shapiro vids lol
Bruh what do you think "my analysis" means? Analysis is personal headcanon. That's... literally the point. I can't analyze it from your POV.
A day before or a day after the announcement of GPT 4o?
I read that it was the 14th of May
The good people leave because the evil people rule. In every company.
That's too oversimplified to be useful.
100% agree
I think part of the reason people are leaving is that they can see from the inside how the haves and have-nots are being divided, and they want to be in the haves section, so they will make it happen for themselves.
This made something I went through at a startup make so much sense. Plenty of other great insights as always too.
David, I've never paid for a YT comment before, but I love your videos and I would genuinely love for you to list some of your favourite books on varying topics, maybe psychology, AI, futuristic topics, anything. Add a little variety rather than all one topic if you can, but honestly I will take any list you give 😭 I've read a few I've heard you mention here and there, and I believe I would have a high interest in other books you find enjoyable as well. Thanks, and thank you, as always, for the great content.
This!! The way Altman's ego often shows in his statements scares me, especially when he talks about how much power this could give OpenAI. I'm a fan of the work they do, but I would almost prefer such power to go to one of the established players who are more predictable and already have some experience with power. This is 100% subjective, but he feels a bit like a wild card.
I don't buy that 4o is what they have achieved over the past year.
David, great channel and a great episode as always. Wasn't Ilya "the brains" behind OpenAI? If that's the case, doesn't he have all the power? He can essentially go anywhere he wants, write his own paycheck, and he would be given the keys to the kingdom. Isn't this a huge loss for OpenAI?
I hope Elon scoops Ilya up and together they figure out how to get Sam Altman-Fried in line, or out of OpenAI. I don't know why, but I do not trust Sammy as far as I could throw him.
Is it just me who thinks there's a hint in GPT-4 "o"? Like "o" is almost Q, an unfinished version of it. And as for the name, "omni" is a synonym for "general".
David, what career should I choose? It's confusing sitting alone with my thoughts. I'm a front-end developer but very confused.
A bit worrying to say the least
Hopefully the old board wasn't right about Sam when they tried to axe him lol
One thing that still doesn't add up to me: during Sam's brief "vacation" from OpenAI, no employees quit.
Why? Is Sam's leadership just that good, or were they on the cusp of learning something big (Q*), or something else?
@@ryzikx It reminds me of Blake Lemoine from google
@@ryzikx well, there was an immediate effort to get him back, so there was no reason to quit at the time; they still had to wait for Microsoft and sama to strike a deal, especially since there was big pressure on the employees to threaten to quit if they didn't get him back. Quitting would have essentially made OAI a ghost ship, which is bad for the new board and the employees, considering they also hold stock.
@@ryzikx Chairman Greg Brockman and Jakub Pachocki, the company’s director of research; Aleksander Madry, head of a team evaluating potential risks from AI, and Szymon Sidor, a seven-year researcher at OpenAI.
I appreciate this video. You spoke my mind.
I like your psychological/sociological analysis very much, thanks.
Excellent explanation of business life cycle from startup onward.
For all the reasons you mentioned, plus, having been through several software releases myself, it's very common for people to make job moves right before or after a release. It's a good time to go because usually your tasks are fairly wrapped up, so you leave fewer loose ends and gnarly bugs versus if you up and leave in the middle of a release.
I feel like their AI became so advanced that Ilya might not be needed anymore!
Ironic that the smartest guy in the world, who helped build it, was the first to lose his job because of it.
Heads up, there's a slight delay between the audio and video. It's especially apparent when looking at your mouth when you speak.
I don't think it has to be as simple as humanity vs. profits. I think that's a common jumping-the-gun conclusion. You can have two parties, neither consumed with making as much profit as possible, who both want to progress in this department, in terms of its cause and the competition, while holding a different, less cautious POV. Greed certainly doesn't have to play into it. It could even be as simple as: "you're overly cautious, and while I'm not the polar opposite, I believe I have valid reasons why my very different POV still builds in adequate caution."
David, can you do a video on the societal risks of AI that the media largely ignore and few people debate?
Jan Leike on why he left:
"I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics.
Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.
OpenAI is shouldering an enormous responsibility on behalf of all of humanity.
We are long overdue in getting incredibly serious about the implications of AGI.
We must prioritize preparing for them as best we can.
Only then can we ensure AGI benefits all of humanity.
"
Man, this was really interesting. Really liked that.
I don't think typical corporate tech behavior applies well to people like Sam, Ilya, and Jan.
They know what they are on the precipice of and their own interpretation of how that should be managed is what is driving decisions.
I think next year we will look back at this moment as a canary in the coal mine.
I’ve been off the doomer train for a while now, but this just upped my p value significantly.
I loved the comparison to mammalian behavior. I seek to break down complex-looking behaviors in these types of models as well. Most people I've encountered IRL are not receptive to (and many are against) these types of psychological breakdowns, but I think they offer a clue/direction for reasoning, in the same sense that first-principles thinking helps in science.
When I hear the word alignment I think of the next stage after the fine-tuning process. Perhaps they are now using GPT for alignment and a human team is no longer needed. Perhaps we are overthinking it, I dunno.
You summed it up perfectly.
Very insightful as to how we relate to groups and billions of dollars in our face.
Love your analysis! I think it's spot on, but I'm not that deep into the internals.
Would love to see some analysis of this situation with Google announcements and what they are working on. Cheers. Thanks always.
Your analysis makes sense. The other view I have heard is that since it is almost exactly six month since the coup, he and the others who just left were asked back then to stay six months for the sake of the appearance of company stability.
I learned many lessons this past month. One such lesson is that the phrase “the beatings will continue until morale improves” applies more often than it has any right to.
- Ilya Sutskever (@ilyasut)December 6, 2023
Really nice analysis, super smartly figured out.
Plausible conjectures without the typical hype. So, the billion dollar question: where’s Ilya heading?
Did they announce their resignations _before_ GPT-4o? The resignations came on the 15th and GPT-4o was announced on the 13th. Am I wrong?
Previously, on OpenAI...
I'm thinking that Jan may have left because he didn't get the post-Ilya promotion.
Just one small correction - he resigned the day after, not just before.
I think you nailed it there pretty much.
Altman definitely has a sizeable ego, outwardly it’s subtle and he masks it well but if you watch his interview performances you can spot it.
Great read on the situation
I’m glad to hear that you’re sharing my sentiments on the course of OpenAI and its “mission”. And this is coming from a guy who today makes his living consulting Copilot to companies.
I’d love to hear your thoughts on WorldCoin and other ethically dubious things that Sam has in works. Dare to tackle that w/ a video? ⚖️👀
I've been talking about proof of useful work since before Bitcoin was cool.
It's going to be a bigger deal than store of value.
It closes the AI crypto loop, and completely commoditizes compute.
I didn't, but Dylan did! czcams.com/video/nxNcg98ImMM/video.htmlsi=7Fju9DbHHJkq_yXh
Dylan covered almost everything I would have covered about it, but yeah. It's dystopian AF.
How do you make a living consulting copilot to companies?
@@ryanmadden752 there’s HUGE fomo happening in the corporate world regarding AI adoption. I’d guess anyone following frontier AI news tubers like David AND having some experience tinkering with ChatGPT, Copilot or other applications could pay their bills preaching to the choir. I have a full team doing that.
Personally, I think this David Shapiro guy could significantly benefit humanity if he were involved in very important projects like AGI. I think he could do a solid job in superalignment, or at least the theoretical side of it. I've been thoroughly impressed by your predictive analysis videos and how you factor in various domains. Idk if or how, but I trust this guy more than anyone at OpenAI, especially Sam Altman. The guy's blinded by his own ego, wanting to go down as the modern Oppenheimer.
Your reasoning reminds me of the book: The Elephant in the Brain
Thank you.
sama tweet on 5/14 with the Ilya announcement:
"...I am forever grateful for what he did here and committed to finishing the mission we started together..."
hard 'finishing the mission', AGI must be fully baked
Good to have it explained this way
I reckon Ilya quit/was fired back in Nov '23 and was serving a standard 6-month notice period — that would explain the radio silence and the timing of the exit announcement.
If I were a betting man, I would expect Ilya to become active on social media from Jun onwards.
Yeah that, or Ilya finished superalignment and seeks new challenges. Wonder 🤔 .. did they use Gödel completeness?
Do you still think agi in 5 months?
Maybe in November on Dev day.
2028. No AGI until then.
Sam Altman. He's Tricksie.
Love your analysis!
I think your read on it is pretty good. I've been following Sam Altman for some time. I've never trusted the guy. He always struck me as slippery and dishonest. Anyone who is so openly agreeable to opposition and gives the appearance of altruism while also amassing absurd wealth and power is NOT to be trusted. I think many are going to regret trusting OpenAI and its stated mission of developing safe AGI that benefits all humanity. To me, this kind of mission statement is analogous to Google's "Don't be evil." If you are making a point of telling me you are NOT evil, you are. OpenAI is already suggesting ideas like hardware-level encryption on GPUs. I anticipate this will be something they pursue to ensure no consumer-grade hardware can run open-source AI models.
So, I think where I disagree is with the idea that OpenAI is a net positive for humanity.
Spot on!
The direction of travel was obvious the minute Larry Summers got involved. I suspect Ilya is leaving now because some sort of lock-in on his contract just expired.
Did their stock options vest 100% before they left? Did Ilya, Jan, and Karpathy even have stock options? Do they get to keep their stock options if they leave?
*We are fucked as a species!*
In leaving, Leike explained that safety had taken a back seat to "shiny objects".
In my opinion they got out before the shit hits the fan (aka new voice update). It's going to send shockwaves through humanity. I feel they resigned under protest because we're not (as a whole) ready for this "Her" level of AI interactiveness.
Is that the same Jan with the really good Twitter account?
Good analysis!
Remark; audio and visuals are slightly out of sync.
Ilya will pop up at Anthropic
Let go of the only person that isn't brain damaged. Great
That was my suspicion as well, Ilya agreed to remain until the event. He may have left a while ago but only made it official/public this week.
Ugh can’t wait to get to this
I don't have an issue with them making money, I just don't see how this works in the long run. If we only use AI to get our results, then we aren't going to the source websites to find stuff out. But that means those sites aren't getting our ad money, which means they will close down, and AI won't have anything new to tell us.
Ha, I thought he was done when made super-alignment lead. I want to see him somewhere on the leading-edge and not just there to shout, "It's sharp!"
I think Ilya wants to create an AI that learns like a child. Not driven by data, but stories about growing up. He will be the father of the first ai child that becomes super ai over time. It fits his childhood stories about how he experienced becoming conscious. Like the movie chappie.
Honestly, I believe that AI should have remained in the hands of universities and labs. Yes, it would have gone much slower, but originating AI as a product is going to have very unfortunate effects.
2 of the 6 founders have turned on Sam at this point. Makes you wonder.
Microsoft is definitely circling them. If they want to regain their independence, they need to become even bigger, even more profitable. And they have a chance to do that...
Sam Altman really gives me Ted Faro from the game Zero Dawn vibes.
OpenAI drama explain with gorillas. That's why I love this channel!
Interesting but with OpenAI's deals with Microsoft and now Apple I think they'll be able keep a good momentum.
I wonder when the enshittification will take hold of things, probably already has
They made a deal with the devil hopping in bed with Apple.
Model alignment is solved?
Highly doubtful
I fully expect OpenAI to change their name, possibly before the year is out.
They need to purchase the Taiwanese company and become OpenAIAOpen. I know what you mean though, it's now or never, before awareness increases too much. The "smartest" man alive taught us the fastest way to burn 44 billion dollars is playing games with the name after literally everyone knows it.
@@MrBrukmann I assume you are talking about Musk and Twitter's rebranding to... X
I hate to say this but I think you got it slightly wrong. I can tell you that founding creators get disposed of once the company doesn’t need them anymore. That’s just how it works. If they kept them on, they would have to reward them.
Why would you think they wouldn't need a genius like Ilya anymore? These LLMs are far from perfect, plenty more to do...
@@CosmicCells there are millions of geniuses that you don't need to pay millions of dollars for
And you can't sell genius. They have the product.
Thank you for this video
I guess Ilya for xAI's Grok
Seems like a fair analysis.
Interesting analysis in terms of social dynamics. From my perspective, it just seems Ilya got fired right after the coup attempt, or at least it was obvious he would be marginalized. They just didn't want it to look vengeful, because that would have been bad PR, so they waited 6 months and made it official a day after a successful launch, while everyone was distracted by Samantha. That was the gentlest firing ever.
clearly Sam had some kind of "hold" on Ilya
The diff between this and early Facebook is that OpenAI has actual technology and the ability to use AI to move human progress forward.
Dave, just want to add one more detail to the puzzle: Ilya leaving now is surely because of some contractual obligation. Altman was fired Nov 17th, and Ilya left exactly 6 months later to the day. Not sure if it's a noncompete, an NDA, or just a coordinated PR stunt on both sides, but to me it seems this was decided back during the drama.