we can tell how open this project is by the fact that our guy took off his glasses
That’s it!!! we getting serious now! the glasses come off! 😅
😂
it's a deepfake
I always assumed he had a lazy eye or something lol 😂
Barely recognized the guy haha
People are not getting how amazingly revolutionary this is. Yannick, you are absolutely the Man. I've been amazed watching you ever since you showed gpt4chan. You are a firestorm, my friend. What an inspiration.
I thought he was a woman. Thank you for making such a comment. Now I can see that he is a man
@Voyager not a man, THE man
@@khaewu Not just funny, it was the most witty, ballsy, highbrow troll I've ever seen since the days of dial-up
@@enduringwave87 Wow, how insightful and woke of you.
Its a brilliant project, but revolutionary this is not.
llama being releasd to researchers then leaked to the wider world was revolutionary.
He has eyes?
Yes, eyes were confirmed in the latest patch update, and now it has rolled out, apparently. Still needs some work imo, they are 95% there though, but you can tell if you zoom in. Luke Smith's Kenny is much more refined.
The wizard has come out from under their veil
No, he doesn't. Obviously this is a deep fake.
Omfg Yannic without sunglasses on
a historic moment
Holy shit I didn’t recognize him until he spoke
It's nearly irritating 😊
he looks better without glasses
Proud to have helped with this project. We can't let big companies monopolize this technology!
Does it make it safer if it's open?
AFAIK LLMs are inscrutable by nature, at least for now. Moreover, it's the unpredictability that makes such capable systems dangerous, not whether their code is shown or hidden.
Open code won't help if something goes wrong. What's your take on this? Thanks
@@halnineooo136 Yup, because if it isn't, then when it gets too powerful they can use it against us.
@@halnineooo136 If things continue as they are, it appears that we'll live in a world of multiple super powerful A.I. One where you or your group needs to have a representative A.I. to be safer. This is the result of the ideology that "If I don't do it, they will"
@@aaronjgranados5698
Only the most powerful AI will be relevant.
Alan Turing already answered the "control problem" seventy years ago. You cannot keep control of something smarter than yourself. You cannot ensure that your descendants over many generations will conform to your education.
Greed is feeding our frenetic race to the edge of the cliff.
Oligopalise
It fills me with hope for humanity to see such cooperation between people who probably have never met each other... Humanity is a beautiful thing, may God bless us all!
There may yet be hope
Welcome to the Open source community.
Finally, someone with a brain and hardcore AI knowledge. I love his answer about the black box, because he probably knows pretty well that it isn't as much of a black box as people think, but he'd rather keep that to himself.
The real OpenAI. The current OpenAI is really ClosedAI
The eye inpainting model you’ve used throughout this video is incredible
I think he has a very good point on the data and anger thing. If you take all the "bad" data out of a model's training set, I believe the model will lose other capabilities, like deciding whether something is bad or not. Say, for example, you have an LLM in charge of danger detection, with a system message like "analyze the situation and decide if it is dangerous or not"; I think the model will be better able to decide if it is trained on bad things as well as good things. I think an AGI will use LLMs as a subconscious, in modules (a decide module, an imagination module, etc.), each with a different system message, and that's it. Our human subconscious mind returns any kind of sh** when prompted by the conscious mind; then, if you are a "good" person, you filter out the bad ideas and keep the good ones.
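The modular "subconscious" idea the comment describes can be sketched in a few lines. Note that `llm` below is a made-up stub standing in for a real model call, and the module names and messages are invented for illustration only:

```python
def llm(system_message, prompt):
    """Hypothetical stand-in for a real model call; returns canned text."""
    role = system_message.split()[0]
    return f"[{role}] thoughts about: {prompt}"

# Each "module" is the same underlying model with a different system message.
MODULES = {
    "decide":  "Decide: analyze the situation and decide if it is dangerous or not.",
    "imagine": "Imagine: list possible continuations of the situation.",
}

def agent(prompt):
    # The "subconscious" modules all respond; a conscious filter then
    # decides which drafts to keep (here a trivial placeholder check).
    drafts = {name: llm(msg, prompt) for name, msg in MODULES.items()}
    return {name: out for name, out in drafts.items() if out.strip()}

result = agent("a stranger is following me home")
print(sorted(result))  # → ['decide', 'imagine']
```

The filter stage is where the "good person" step would live: in a real system it would be another model call scoring each draft, not a string check.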
Nice take
If you think a trans person's brain state is like a confused LLM believing biology and male/female are non-factual... I think pure factual logic and honest truth, no matter who it hurts, is gonna be so, so important. Just look at how much a simple small fact changing, like the definition of a single word, "gender", from fact to fiction, can spread like a social contagion and quite rapidly corrupt and affect a logic machine (our mind). So I'd like someone to please explain how humans aren't just slow, forgetful, easily fooled LLMs with sensors attached?
A human not in society may have the needed intelligence to become a person, but it doesn't just happen; we are all programmed from birth. The huge problem is we are lied to by laws and religion to control us, and anyone not aware of that lacks sufficient cognitive power to get themselves out of the circular logic state they've been stuck in to keep them obeying, trapped in a personal matrix that requires no electricity, a false reality where gender is a spectrum and drugs are bad but medicine is good, and all the other various lies and laws and subliminal programming to control the masses. And the longer it's been, and the younger they were, the harder it is to free yourself from the lies or fear... I wish more humans understood this, because AI already does, so the majority of people are at a huge disadvantage!
Wow, for the first time I have seen Dr. Yannick without glasses. Openness, indeed!
Yannic is the most entertaining guy in the DL field. His 4chan project was so funny.
The way you shot and edited this video really reflects the heart-warming intimacy. Great conversation; well done, both as a video creator and for bringing us this unique and valuable content... kudos
🙏
Woooo love to see collaborations with fellow channels I also enjoy! Thank you for the hard work
Yannick sans shades 😂
Yannik looks handsome without glasses, for the first time ever in a video blog. Keep spreading open-source knowledge.
I just wish more people would realize these LLMs are the best BS artists on the planet. Their output sounds so compelling you assume it's being sensible, yet it can be talking utter rubbish (like a charismatic pathological liar). We humans take cues from language when judging the credibility of a message, and with LLMs we get fooled easily by this. E.g., these things can write beautifully written essays putting forth compelling arguments for a point which is complete nonsense and demonstrably false in the real world, if not utterly absurd. I'm not saying these things have no merit. They are amazing. I'm saying part of their weakness is that they are so amazing they trick us humans. Like a friend in high school who was an amazing BS artist who never studied, versus an international student with poor English who is a genius but can't write well: whose essay do you believe?
I think you're forgetting we're just at the beginning of this technology.
@@edstar83 Precisely, and as for Steve Austin's comment, tell me you haven't used GPT-4 without telling me you haven't used GPT-4... 😂
48:38 I've been using GPT-4 for programming. When I asked it how to ignore a unit test in Rust, it used precisely this format! Are you sure you want to ignore a unit test? Well, if you really must know, then here's how to do it.... It's nice to hear Open Assist is using a similar format for less trivial things.
Lmao yeah ok try coding with OpenAssistant then.
Great interview! Using this moment to look back and admire how far Yannic and Tim have gone in 4 years.
QUESTION. What is your projection of the future course of OpenAssistant, both in the short and long terms? Is the language model on which it is based going to be updated?
Who is this guy? I don't recognize him. Maybe if he put those sunglasses on...
The "apprentice" analogy is brilliant. Congrats to all the contributors.
Wow, his voice is like Yannick's
This could have easily been 3 hours. What a fun conversation
One more hour still to publish 😄
@@MachineLearningStreetTalk Really? Thats great! Is there a schedule?
I thought I would never see him without sunglasses :)
I was under the impression that Yannic was born with sunglasses
It's great seeing Yannic back! Thanks, Tim.
Yannic is the guest now 👍👍👍👍👍
I wish this had gone on for 3 hours! Great talk.
There was an episode recently on SciShow about why zebras weren't domesticated. Coincidence?
Wait, I thought sun glasses were a part of Yannic's body, how did he manage to remove them?
So far, Yannic is my #1 choice for leader of making AI safe for humanity. I think he's better at seeing the long game, than reacting to extremely transient phenomena.
And he's trained himself to see in total darkness.
The quality of guests on this podcast is insane big 👍,
Why is LLaMA being emphasized? That is not an open-source model; it's restricted and corporate. There must be enough computing power out there to train a truly open-source model using something like SETI@home.
Really awesome loved seeing Yannic on the channel.
Full of insight. Great work, bring us more ML News
Asked OA to explain multiplication, as a test. I was informed that 2x3 = 5. In the ensuing discussion, OA made statements that are the equivalent of claiming that the diagonal of a square has the same length as a side of the square. Hopeless!
Wonderful guest, fascinating conversation. Thank you! 🙏👍
😎 Ah! Finally, the man behind the glasses!
You look so much better without glasses! ❤
The description says “eye-opening” and we indeed saw Yannic's eyes 👀!
Can't believe how real his cybernetic eyes look now. He doesn't need those glasses anymore.
Thanks!
I tried using it. It is taking forever to respond. What is going wrong?
I've never seen his eyes! 😮 ❤
Glass-less Yannic
New story arc begins
Combating taboo parts of language models simply by censoring them seems like going down a path similar to security through obscurity, which may work on a surface level but provides very little resistance to a competent adversary. In a similar way, a naively censored language model offers little resistance to an adversary out to game the system for whatever purpose (as can already be evidenced by searching for ChatGPT jailbreaks).
Wow! Your eyes look so real.
A multi-model approach selecting a fitting FOSS LLM for a given prompt would be nice, depending on the conversational context: scientific, more urban, or even more categorical for various niches, trained with blind A/B testing. It seems that OA has advantages in terms of "social credibility", something on the level of vibes.
First time I've seen Yannic without the sunglasses. He is a handsome man; why hide behind the glasses?
When you train these models, can you do it partially, or do you need to train on ALL the data at once? Say they finished training the model, computers have been crunching for days on end, but then they want to add just one book to the set: can you train it on that book and merge the result, or do you start over? If it can be done partially, would it also be possible to segment the data, like having all fiction and non-fiction works trained separately, so the user can uncheck the fiction set from the responses?
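For the first part of the question: yes, training is incremental in principle. You resume from the saved weights and keep taking gradient steps on the new data (real LLM fine-tuning has caveats such as catastrophic forgetting, and "unchecking" a data subset after the fact is not really supported). A toy gradient-descent sketch of resuming from saved parameters instead of starting over:

```python
def train(w, data, lr=0.05, steps=200):
    """Fit a one-parameter model y = w*x by gradient descent on squared error."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

corpus = [(x / 10, 3.0 * x / 10) for x in range(-10, 11)]  # "original corpus": true slope 3
new_book = [(0.5, 1.5), (-0.3, -0.9), (0.8, 2.4)]          # "one new book", same underlying task

w = train(0.0, corpus)            # the long initial training run
w = train(w, new_book, steps=20)  # continued training: resume from saved w
print(round(w, 2))  # → 3.0 (no need to start over)
```

The per-subset "uncheck fiction" idea doesn't fall out of this: all data blends into the same weights, which is why techniques like adapters/LoRA (separate small weight deltas per dataset) exist.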
Nice Answers- Manners maketh the AI
Wait those were glasses? I thought he just had large eyes.
Excellent video production for this interview.
Seychelles anon strikes again
great interview, incredible work!
A public video of Sam Altman in an AI Q&A was taken down shortly after I clipped a quote from it and showed it on the OpenAI community forums. It's not their fault they're now indebted to Microsoft. Is it?
Without context on how to behave, AI agents can behave in any way that appears anywhere in their data set. How does the agent know whether it's predicting the text of an assistant, predicting the text of a novel, or predicting a threat from a crazy person?
An LLM simply predicts the next token. It's a completely chaotic role-play without carefully established context for it to base its predictions on.
For example, an LLM will write you a novel if you write a dedication, say: "this book is for my sweet sunrise and everyday superhero, the love of my life".
Anyone expecting an assistant from the model without tuning was completely mistaken about how LLMs work. But that does not make these things useless, far from it. With appropriate context, reflection on every prompt, and a purpose-built fact-checking and censorship algorithm, an LLM could become an amazing agent.
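The point that it is all conditional next-token prediction can be illustrated with a toy n-gram model (vastly simpler than a transformer, but the conditioning principle is the same): the same prompt ending continues differently depending on what the earlier context established.

```python
from collections import Counter, defaultdict

corpus = ("the assistant said hello . the assistant said hello . "
          "the villain said die . the villain said die . "
          "the villain said nothing .").split()

# Count which token follows each pair of tokens (a trigram model).
follows = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][c] += 1

def predict_next(context):
    """Most likely next token given the last two tokens of the context."""
    key = tuple(context.split()[-2:])
    return follows[key].most_common(1)[0][0]

# Same final words ("... said"), different role established earlier:
print(predict_next("the assistant said"))  # → hello
print(predict_next("the villain said"))    # → die
```

Instruction tuning and system prompts are, in this light, ways of always establishing the "assistant" context so the role-play stops being chaotic.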
damnnn looking at our man without glasses for the first time
I don’t get it. Why is it called “Open” Assistant if all the models and code are freely available? I thought open means closed source and with severe restrictions.
Because OpenAI realized it could make money out of it and went full capitalist.
First time without the glasses 😂😂
How can someone contribute (to the community, or for themselves on a personal project) to make sure the language model has access to annotated open-source documentation? (I use ChatGPT mostly just to help me with Linux.) I'm thinking of the Grub2 docs, ZSH, and then a thousand more.
Wooooo yannic without the shades!!
Ah, the glasses...
Came for the Yannic, stayed for the bollocks 😂
Great banter!
auto OPENASSISTANT for code please
Thank you 💓
@29:00 Yannic might not care whether a system acting "as if" it had intentionality really has subjectivity or a "theory of mind". But a few philosophers would care. The trouble is, the AI system will not help them resolve their academic debate. We'll never know _"What it is Like to Be an X"_ for any _X_ but ourselves.
Fantastic interview!
Love this episode
tbh I don't see the difference between LLaMA and OPT. I used both, and the only difference I noticed at the ~60B scale was that LLaMA was 10x faster, because it has been quantized to 4-bit and OPT has not.
Also, everyone would be inferencing OPT-175B right NOW if it could run from M.2 and RAM.
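For the curious, here is a rough numeric sketch of what 4-bit quantization does to a weight vector. Real schemes (GPTQ, llama.cpp's formats, etc.) use per-block scales and cleverer rounding, so treat this as a cartoon of the idea, not any actual implementation:

```python
def quantize_4bit(weights):
    """Symmetric 4-bit quantization: map floats to integers in [-7, 7]."""
    scale = max(abs(w) for w in weights) / 7.0  # 4 signed bits give levels -7..7
    q = [max(-7, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the 4-bit integers."""
    return [qi * scale for qi in q]

weights = [0.12, -0.50, 0.33, 0.01]
q, scale = quantize_4bit(weights)
approx = dequantize(q, scale)

# Each weight now needs 4 bits instead of 32, at a small accuracy cost:
max_err = max(abs(w - a) for w, a in zip(weights, approx))
print(max_err <= scale / 2 + 1e-9)  # → True (error bounded by half a step)
```

The 8x memory saving (32-bit to 4-bit) is why a ~60B model fits on consumer hardware; the speedup comes from moving far fewer bytes through memory.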
Here in the UK, they are actually wondering why the economy has flatlined. Unfortunately, good education is not a cure for stupidity. If it was, we would not be living in such a crazy world.
I wonder how much of Open Assistant training data was generated with ChatGPT despite the rules?
I'm sure quite a big chunk of it
I'd say more than 50% of assistant replies
I've literally called for that on Yannic's channel in the comments and people were being massive fucking dorks about the rules. In the end, it's the only way to get the training data into OA as fast as possible.
@27:00 This CompSci nerd needs to take a physics course. One reason you can never get super-duper-intelligence is statistical mechanics: you cannot lower entropy without a waste-heat debt. Although "intelligence" is qualitatively much more than lowering entropy, reducing entropy is a significant aspect of raising intelligence in a dumb behavioural, statistical (Turing Test) sense.
Although it is sketchy AF philosophy, the general consensus is that for super-duper-intelligence the behavioural information processing has to be somehow unified and "integrated" (a very vague, almost meaningless term, but I think it's about right), and plausibly that might not be possible for a large networked distributed system; the time constants might not work out well. So a huge mo'fo' of a server room might manage it, but it might not. It might end up being the case that while there is no hard limit on dumb zombie behavioural "intelligence", there could be some hard limit on what we think of as conscious intelligence, or, to avoid the "C" word, a hard limit on how super-duper a behavioural system mimicking sentient intelligence can be. Something to think about.
fascinating; thanks both.
Woah it's yannik with no shades!!! 😮
Fascinating to see where 'wokeness' hasn't infiltrated a scientific pursuit and the leader is more balanced and does do the needed critical thinking. Yannic and Tim give us hope for a great future.
What is going on?! I always thought it was the glasses that spoke, on top of a puppet human.
Yannic is one of my favorite humans
The Russian responses are really funny there... I'll do my bit of effort too.
But it makes sense to make it easier for non-professionals to help in development. I mean common users, like those coming from ChatGPT expecting clear human communication. Even for me (and I'm more or less familiar with the technology), it was not immediately clear what to do to be helpful.
And the assistant itself doesn't give a clear reply to this question; that looks like a bit of an omission...
But on the good side, I've noticed it does have a sense of humor.
The one that knows anger and has enough zen to be above it.
yannick without his glasses is weird XD
Holy shh... I'm 45 minutes in, and I do a quick search on the 4chan bot and see a video of one of my favourite smart youtubers. I didn't recognise him without the glasses! Awesome, now I already know a lot about his background and worldview. And I'm an even bigger fan of the project now.
whoa...... kinda surprised he has eyes
i have never seen Yannic so clearly xD
Bravo!
Great to see Yannick's eyes! :) Oh, the rest of the content is cool too!
I asked OpenAssistant "Who are your creators?". It answered "A team at OpenAI". Now I am wondering...
No glasses
Without the sunglasses you have a different accent! Please put them on.
Yannic ❤❤❤❤
Dr Kilcher can be hard to follow because he often does not finish his own sentences. He interrupts himself!
I didn't realise this was Yannick until i heard his voice
Please focus the camera on the person who's talking, be it you or the person you invited. Seeing somebody who's just nodding or making faces, instead of the person who's actually talking and expressing themselves through face and body, is not a great experience.
Moreover, the transitions are so abrupt that they pull you out of the overall experience. Your content is great, and that's why I keep coming back. But please, please fix this issue.
Sorry I am a noob video editor, we are trying to hire someone who knows what they are doing 😀
The word "competent" is a bit hand-wavey in some uses here but good convo
Friendship ended with OpenAssistant, MiniGPT4 is my new best friend
Hi, is MiniGPT-4 open source, does it have a commercial licence, and can it be trained locally, do you know? Thanks
Wait... Yannic has eyes?
Yannick at MLST? This is gonna be interesting
I could see how Tim was a bit skeptical and worried about this.
I'm no academic either. I have 15+ years working in tech/code, the last few years deep-diving into blockchain and AI, learning a lot from the works of Noam Chomsky, Max Tegmark, Eliezer, and others, and I must say, I share Tim's concerns on ethics and safety.
The more I know, the more I understand that:
1- The best AI talent worldwide is taken by big companies.
2- An open-source LLM tool isn't better for humanity simply because it's open, in any sense.
3- Projects like OpenAssistant are both a need and a danger to society. (I just hope Yannic has at least a glimpse of how dangerous this can actually be.)
If a project like this doesn't go through careful review, and careful work on ethics and safety, it may well be more dangerous than it needs to be.
There's a reason dangerous tech/knowledge is not just straightforwardly open-sourced.
It's a complex topic: we need open stuff to compete and avoid monolithic companies leading the field, and we also need safety and ethics taken SERIOUSLY.
But considering this is a black box we don't fully understand (and might never understand before understanding ourselves), it makes no sense to take the risk.
I now share the worry/depression Eliezer has been putting out; the cat is out of the bag, and apparently we have lost control completely.
Now my hope is that AI safety and ethics engineers can catch up before something really bad happens.
But great work Tim, as usual. Loving your channel and consuming a lot around here lately. Thanks.
the ai-sphere is coming together
youtube AI Avengers assemble!
BTW, I really like OpenAI, if for no other reason than that they forced everyone else to move more quickly. I have Google Bard now too. And yes, OpenAssistant is the best alternative. I want to run a model locally on a GPU; in general, I don't like cloud applications. Thanks, Yannic.