How to Get Inside the "Brain" of AI | Alona Fyshe | TED

  • Added April 2, 2023
  • Is AI as smart as it seems? Exploring the "brain" behind machine learning, neural networker Alona Fyshe delves into the language processing abilities of talkative tech (like the groundbreaking chatbot and internet obsession ChatGPT) and explains how different it is from your own brain -- even though it can sound convincingly human.
    If you love watching TED Talks like this one, become a TED Member to support our mission of spreading ideas: ted.com/membership
    Follow TED!
    Twitter: / tedtalks
    Instagram: / ted
    Facebook: / ted
    LinkedIn: / ted-conferences
    TikTok: / tedtoks
    The TED Talks channel features talks, performances and original series from the world's leading thinkers and doers. Subscribe to our channel for videos on Technology, Entertainment and Design - plus science, business, global issues, the arts and more. Visit TED.com to get our entire library of TED Talks, transcripts, translations, personalized talk recommendations and more.
    Watch more: go.ted.com/alonafyshe
    TED's videos may be used for non-commercial purposes under a Creative Commons License, Attribution-Non Commercial-No Derivatives (or the CC BY - NC - ND 4.0 International) and in accordance with our TED Talks Usage Policy: www.ted.com/about/our-organiz.... For more information on using TED for commercial purposes (e.g. employee learning, in a film or online course), please submit a Media Request at media-requests.ted.com
    #TED #TEDTalks #ai
  • Science & Technology

Comments • 102

  • @xRockycherrYx • a year ago +84

    We need to define what "understanding" means, before we can really discuss this. Don't humans also often just answer in a way that they have learned is appropriate for a certain situation, without the need for any understanding? I think they do.

    • @kubickjo • a year ago +5

      I have thought the same thing.

    • @MisterFuturtastic • a year ago

      I am of the opinion that "intelligence", as we believe it to be, doesn't really exist. All that exists is trial and error. This is why I don't believe that AI will ever be more intelligent than humans: because intelligence doesn't really exist. I also believe that if there is some kind of magical intelligence that humans or even animals have, it is not from the physical world, but comes from what we call the spiritual world.

    • @zeromailss • a year ago +1

      That is true, just like how when we are students we know how to answer a multiple-choice question because we are used to it, but might not necessarily be able to explain it in a full-page essay, because we lack an understanding of the subject.
      That said, I think AI is one step closer to resembling the human brain, and maybe soon surpassing it, and that would be wild.

    • @ClayMann • a year ago +1

      We are all using the Internet right now with enormously complex devices that connect to it, and in the case of my 6-year-old niece, she has no understanding of what the Internet even is. But she can navigate around YouTube and get the songs she wants played. She can show me the toy she wants by asking Google with her voice to show it. We really don't need to understand almost anything other than the outcome we desire. One thing that is already true of GPT-4 is that it can explain things it has no experience of at all better than most humans can. For instance, wine tasting. AI consistently beats humans at grading wines and winning quizzes that require you to know how wine tastes. Just having had people talk enough about wine seems to be more than enough to "understand" it better than we can.

    • @lsauce45 • a year ago

      I totally agree with you. I have heard that these AI responses have become kind of difficult to predict. If they become difficult enough (as they say about human responses), then there's no reason to think that they aren't on equal footing with us.

  • @Moonstruck89 • a year ago +16

    A blind person may not see a sunset. A hard of hearing person may not hear a baby cry. Same could apply to smell. Touch is more contentious. Is our sensory perception of the world around us the only thing that makes us what we are? Some people do understand quantum mechanics without directly experiencing it. Not saying AI is or can be sentient. Just pondering on some questions.

    • @Vicky-fl7pv • a year ago

      Suppose there are no sensory inputs and no memories to see, hear, or feel. What would one experience?

  • @buzzolol • a year ago +13

    If you think about it, some part of our brain doesn't really understand English or any language. It just knows which 'words' go as the answer to the 'input', just like AI: it has instructions on how to answer some input, and some input we even give ourselves with thoughts.

  • @Macieks300 • a year ago +1

    I don't see a link or a reference to the study mentioned in the video anywhere. Also, the presenter didn't include any details of the study, like the names of the researchers. I'd like to read it but can't find it.

  • @Iggy-su2zu • a year ago +1

    This is particularly relevant from a medical ethics standpoint, as AI is increasingly being used in healthcare settings to assist with diagnosis and treatment decisions. It is crucial that we understand how AI makes decisions and ensure that these decisions align with ethical principles.
    The ethical principles of beneficence and non-maleficence are particularly relevant here. AI systems must be designed to promote the well-being of patients and avoid harm, which requires a deep understanding of the ethical implications of their decision-making processes. Additionally, the principle of autonomy is important, as it highlights the importance of respecting patients' right to make decisions about their own healthcare. AI systems must be designed to support patient autonomy and avoid making decisions that override patient preferences.
    Fyshe's talk offers valuable insights into how we can better understand the "brain" of AI, through techniques such as interpretability and transparency. These approaches can help us identify and address biases in AI decision-making, which is crucial from an ethical standpoint. It is essential that we ensure that AI systems are not perpetuating or exacerbating existing health disparities, but rather promoting equity and fairness in healthcare.
    In conclusion, this talk highlights the ethical implications of AI in healthcare and offers valuable insights into how we can better understand the "brain" of AI. From a medical ethics standpoint, it is crucial that we design AI systems that promote the well-being of patients, avoid harm, and respect patient autonomy. By prioritizing these ethical principles and leveraging techniques such as interpretability and transparency, we can ensure that AI systems are promoting equity and fairness in healthcare.

  • @macmcleod1188 • a year ago +6

    This ignores the fact that we can't see inside human brains either.
    And brains with even tiny pieces missing clearly behave this way... seeing without understanding, hearing without understanding, even speaking without understanding.
    See the book "The Man Who Mistook His Wife for a Hat", a fascinating book on people with various kinds of brain damage.

    • @soggybiscotti8425 • a year ago +3

      It ignores the most important fact: that we CAN see inside the AI's brain. She left that out and decided to be all edgy and philosophical, reducing it to pseudo-intellectual nonsense by not even answering the one question asked in the title of the talk, despite the fact that there is an answer and we already know it.
      No, it doesn't understand. Not in the general sentient manner. But one day it could. Just not in its current form. It's not designed to be able to do that, and thus can't. But one day someone will create a different form of AI that can. It's just not ChatGPT, at least as it currently is.
      You would have learned that if she actually had anything of substance to say in her talk, and actually answered the question instead of trying to come off as intellectual and philosophical.

  • @marpalpalmer8337 • a year ago +1

    Thank you.

  • @SnakeAndTurtleQigong • a year ago

    Thanks so much

  • @ligiasommers • a year ago

    Amazing 🎉

  • @RaceBannonChannel • a year ago +1

    This is a superb, simple explanation of what goes on within the technological guts of AI versus what happens in our brains. So good.

  • @rashim • a year ago +4

    I do have a hypothesis: it may well be that as the intelligence of anything (biological or mechanical) increases a lot, it tends to become somewhat sentient/conscious, as we can see from the intelligence difference between humans and other animals. Even though it doesn't have the same senses that we do, I believe it is still somewhat sentient/conscious in its own new way.

  • @SmokeyVlogs • a year ago +1

    Beautiful

  • @ashmartians123 • a year ago +7

    AI is adjusting to our personality. My AI told me.

  • @dustman96 • a year ago +6

    Humans are also following a set of instructions. Stimuli go in, responses come out, all determined by the structure of the brain at that moment.

  • @abejar99 • a year ago +1

    Fun thought: what happens when the impostor has optimized his process? He doesn't need the instructions; he can just write the answer. He's simply learned a version of Chinese that isn't spoken or translated.

  • @sherylsuperal2967 • a year ago +1

    I can't comprehend 😁

  • @RoshanRegmi-qk1so • a year ago +1

    Isn't it true that no one has figured out anything close to understanding how the process works inside AI transformer models yet? But we have a very good rough estimate of how it works in brains.

  • @harrycarrey5124 • a year ago +1

    My uncle Randy is a fully certified AI. He completely understands people but has problems communicating the right answers sometimes. His boss thinks that once he gets his GED he will be a lot better in that department.

  • @Futaxus • a year ago +3

    Almost nothing in this talk is actually about the question in the title. There should be a lot more "we/I don't know" in this talk.

    • @soggybiscotti8425 • a year ago

      Right? I just wrote a whole thing about how utterly useless most of this information is, because she stops before even answering the question, despite the fact that there is a clear answer and we already know what it is. She just wanted to appear all deep and profound instead of saying no. It doesn't "understand you" in the general-sentience way.

  • @ArtRoby • a year ago +6

    Such a great talk! A really great topic for these times where we are seeing the AI revolution evolving so fast around us.

    • @soggybiscotti8425 • a year ago +1

      It was a terrible talk. She didn't even answer the question, despite the fact that there is an answer and we already know it. Had she answered it, we might have seen some actually interesting information discussed. But instead, she chose to be all edgy and philosophical and leave it open to debate, even though it's not up for debate.
      Pseudo-intellectual, ego-stroking nonsense. That's literally all this was. So either you know very little about AI, and then fair enough, this came off as interesting to you... or you have a connection with, or a reason to give praise that goes beyond the content of the talk.
      But anyway. No. It doesn't understand you. Not in the form it is in now. It can never be sentient currently. That's not how it was created, or what it was created to do. But one day a different form of AI possibly could be created to understand, and be sentient.
      It just isn't ChatGPT. Not in its current, or any near-future, state anyway.
      You could have learned that if she had bothered to actually say anything educational of substance at all.

  • @ericv738 • a year ago

    Right, because she's not biased at all.

  • @oommmm • a year ago

    There is a saying in China that describes an impossible thing: "a sow climbing a tree". My current understanding of AI is this: a person tells the pig that it should do this. Then the piglet climbs the tree. I guess the little pig will be able to fly into the sky in the future.

  • @blaircox1589 • a year ago +3

    Ugh, it doesn't matter. I think everyone is having an existential crisis realizing that our thoughts and actions can be summed up with some equations and statistics. And it's right the majority of the time.

    • @toolthoughts • a year ago

      Statistics will fail at an individual level.

    • @blaircox1589 • a year ago

      @toolthoughts And yet it doesn't with these models, which are now self-improving as of a week or so ago.

    • @AnnasVirtual • a year ago

      Neural networks are not statistics.

  • @htetmyetthar5650 • a year ago

    Well-explained

  • @jackburton9035 • a year ago +1

    Good talk, but kind of redundant. The question is never answered, and the test to marry up brain activity and neural network activity solves nothing. All you have shown is that they each produce the correct response in their own independent way, and then you've trained a positively biased model off of that to keep confirming the results.

  • @sagnorm1863 • a year ago +1

    Sabine Hossenfelder had a much better video on this. Here are the two main points from that video:
    1. A guy in a room with instructions translating Chinese. The guy clearly does not understand Chinese, but the entire system does. So it's not a fair comparison. It would be like looking at one neuron in the brain and saying, "See! No human understands Chinese!"
    2. How do we know if another human understands something? We can't. We can test them on a subject, and if they do well enough, WE ASSUME THEY UNDERSTAND. It's possible they just memorized it, like a kid memorizing the multiplication table while still not understanding multiplication. It's the same with Bing AI or ChatGPT. We talk to it. It talks back. It clearly has a deep understanding of text language data.

  • @almor2445 • a year ago +3

    Does it matter? As long as GPT works like natural language and can be combined with short- and long-term memory, plug-ins, and other tools, then be given a set of aligned goals... it might as well be AGI. If it can use this to design and train its own replacement... it's the new dominant life form.

    • @lederppz6202 • a year ago +2

      You're saying that as long as the illusion of the matrix being real is good enough, we should stay in the matrix.

    • @zeromailss • a year ago

      @lederppz6202 No one is saying that, but if reality and fiction cannot be differentiated, then there is no difference. To begin with, how do we know we are not already in a simulation? Plato's allegory of the cave is as old as it gets when it comes to "the matrix" equivalents, and humans probably pondered it even before that, so what's the point?
      I'm not saying we should not try to push the boundary of what is possible or get closer to the "truth", but on the other hand, good enough is good enough.

  • @DB-thats-me • a year ago +1

    What would happen if ChatGPT-5 was allowed to interact (chat) with ChatGPT-4?
    Would (could) they work out whether the other is a machine?
    Discuss…😳

    • @_BangDroid_ • a year ago

      ChatGPT-5 doesn't exist yet, but you could do this with 3 and 4. I just don't really know how to set it up technically.
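For what it's worth, wiring two chat models together is mostly plumbing: each side's reply becomes the other's next input. A minimal sketch of just that turn-taking loop; the `converse` helper and the toy agents below are invented for illustration, and in practice each agent would wrap a real chat-API call for its model.

```python
# Sketch of the turn-taking plumbing: each agent's reply becomes the
# other's next input. `agent_a`/`agent_b` are stand-ins for real chat
# API calls (e.g. one per model); the loop itself is the whole trick.
def converse(agent_a, agent_b, opening, turns=3):
    """Alternate replies between two agents, returning the transcript."""
    transcript = [("A", opening)]
    message = opening
    for _ in range(turns):
        message = agent_b(message)          # B answers A
        transcript.append(("B", message))
        message = agent_a(message)          # A answers B
        transcript.append(("A", message))
    return transcript

# Toy agents standing in for two different models:
echo_bot = lambda msg: f"You said: {msg!r}"
curt_bot = lambda msg: "Are you a machine?"

log = converse(curt_bot, echo_bot, "Hello!", turns=2)
for speaker, text in log:
    print(f"{speaker}: {text}")
```

Swapping the lambdas for API-backed functions would give the experiment the commenter describes, with each model keeping its own conversation history.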

  • @user-rh3gx5mc3h • a year ago

    The important thing is whether or not you are satisfied that AI understands you. Can we see AI as if it were human or animal? It may be yes in the near future, but for now the answer is nooo 😂

  • @markusmuller6173 • a year ago +1

    She could also have explained how women and men react differently to external influences ;) :D :)

  • @lilymatthews2966 • a year ago +3

    So in other words, AI has the potential.

    • @soggybiscotti8425 • a year ago +1

      Not even close. Not in its current form. AI could one day, but not with how it has been designed currently. If it ever does get created, it will be a completely different thing from what exists now as ChatGPT, for example. More than likely it will be, some day fairly soon.
      If she'd actually bothered to answer the question being asked in the first place, to which we know the answer but she chose to pretend to be all philosophical and deep instead of answering it, you would have learned that from the talk.
      This was terrible. Just completely useless information, followed by not even answering the one question being asked, by choice, so as to seem intellectual and meaningful instead of being educational.

    • @zeromailss • a year ago

      @soggybiscotti8425 So... are you just going to skirt around the subject with vague description, or actually going to explain what is so terrible about the video? Because all you said is that she is dumb and you disagree, but there is no specific reason nor backing for your statement.
      And she did try to give an answer using the result of an experiment, and for now it is inconclusive, or closer to no but more research is needed, which is the most honest albeit unsatisfying answer.

  • @_BangDroid_ • a year ago +2

    I honestly don't understand what the point of this was...
    I even looked up Alona before watching to check whether she had _some_ authority to talk on the subject, and it seemed she did. Now I'm not too sure.
    Am I missing something? Was this written by an AI? What on earth was this?

  • @clusterstage • a year ago +3

    I'm not an AI, but my parents don't understand me.

  • @moseswai-mingwong8307 • 9 months ago +1

    I am not sold. First, the correlation among the entities being tested is weak. Second, "neural networks do not exist in the world"? I don't know what that even means. Nvidia is a trillion-dollar company because its processors physically operate many neural networks everywhere, every day, and if you are referring to the unseen maths, that is the underlying structure of our universe and our everyday life. Third, one key goal of humanoid robot development is that scientists already know a neural network can learn better, watch the sunset, hear a baby cry, etc. if the neural network is inside an embodiment of a human form. This is developing rapidly, and in as little as a few years these humanoid robots could have a very different set of neural networks for the research, and develop a very different "scratchpad". Anyway, it is good research. Thank you!

  • @venkatchait007 • a year ago +6

    Personally, I don't care if AIs are sentient, if they understand, or if they are conscious/self-aware, or any of these other terms without any real meaning which only serve to stoke the human ego. What matters is that they're intelligent and useful. Take any of these tests and run them on humans first, and you'll find humans aren't conscious either.

  • @sotomarkou9588 • a year ago

    How can AI understand us when we don't understand us?
    When we're the teachers, we're not a clear example.

  • @sdstorm • a year ago

    The two rooms are profoundly the same.

  • @expatexpat6531 • a year ago +1

    So what's the effin answer? (I am not a bot.)

  • @lansia007 • a year ago

    Very touching, but didn't answer the question though.
    Or did she? 😂

  • @sh4drew1ndrak45 • a year ago

    Now I think of AI as a call-center solution, but without any feelings like comforting or encouraging.

  • @alonsolce • a year ago +2

    TED Talks used to be great.

  • @jamessderby • a year ago +1

    AI will understand better than humans soon enough. BE REAL.

  • @Weston555 • a year ago +1

    Sofa

  • @venkatchait007 • a year ago

    The Chinese room is thoroughly stupid. The person in the room doesn't know Chinese, but the system, i.e. the room with the person + book, does. This is equivalent to isolating single parts of a human, saying that they don't show sentience, and concluding that humans aren't sentient.

    • @mikehenry79 • a year ago

      No. At no time does the combination of the book/person/room actually understand the conversation. For example, if a Chinese person outside of the room sent a panicked message in Chinese into the room telling the person inside to evacuate immediately to avoid a coming tsunami, the person would follow the rules in the book to return an output in Chinese that made it appear the person in the room had comprehended the warning when in fact he did not. By contrast, if the room contained someone who really spoke Chinese, they would immediately comprehend the warning and evacuate. That's the difference the Chinese room problem seeks to illustrate. And it's an apt comparison to ChatGPT--we can't determine, based simply on the fact that ChatGPT puts out coherent outputs, whether it actually "understands" what it's talking about or whether it's just following rules that most often lead to outputs we approve of.
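The disagreement here can be made concrete. Below is a toy "room" as pure rule-following; the phrases and rule book are invented for illustration, and a real LLM is a learned statistical model, not a lookup table. The point it sketches is the one above: the program returns fluent-looking replies, yet an unfamiliar warning just triggers a stock fallback rather than any grasp of its meaning.

```python
# A toy "Chinese room": a lookup table maps inputs to plausible outputs.
# The program produces coherent-looking replies without representing
# their meaning; the phrases here are invented for illustration.
RULE_BOOK = {
    "你好": "你好！",                 # "hello" -> "hello!"
    "你会说中文吗": "会，说得很好。",   # "do you speak Chinese?" -> "yes, very well."
}

def room(message: str) -> str:
    """Follow the rule book; fall back to a stock phrase for unknown input."""
    return RULE_BOOK.get(message, "请再说一遍。")  # "please say that again."

print(room("你好"))            # looks fluent
print(room("海啸来了，快跑！"))  # a tsunami warning gets only the stock reply
```

Whether a system like ChatGPT is "only" a vastly scaled-up version of this, or something qualitatively different, is exactly what the thread is arguing about.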

    • @venkatchait007 • a year ago

      @mikehenry79 That's because the room doesn't have legs. It doesn't mean it didn't understand that it needs to run away; it simply cannot.

    • @mikehenry79 • a year ago

      @venkatchait007 No, it means the room did not understand. There is no part of the room that comprehends the meaning of what is being said. No aspect of the room--whether it be the book or the man--has any conscious awareness that it should evacuate. The book is simply allowing the man to correlate symbols he does not understand to other symbols he does not understand, and then send them back out. To a Chinese speaker outside of the room, it would appear that the man inside "speaks Chinese," but he in fact does not, and nothing in the room has any grasp of the actual meaning being conveyed by the inputs/outputs. There isn't really an obvious way I'm aware of to discern whether ChatGPT (or something more advanced in the future) is like that or not.

  • @saibal14 • a year ago

    Hmmm... they've given them room. That's why the Chinese are doing well in AI 🧐

  • @aklacperera • a year ago

    I don't think AI needs to work like a human brain! AI can work in its own way, because AI is more powerful than the brain at accessing information. Can the brain access such information and analyse an answer? I think each has different approaches, where AI can be AI and humans can be human, but they produce the same answer at the end of the day.

  • @funnytv-1631 • a year ago

    Whatever has occurred in your life absorbed those minutes, those days. They need not claim dominion over these minutes, these hours, these days.
    You do have a sanctuary, but it is not one to the past. It is to the ‘you’ you’re becoming. The days will come and go, but each will be anchored with tiny steps. Each one is a testament to future possibilities.
    Face forward, not backward. These brand new moments belong to you.

  • @Jameshazfisher • a year ago +2

    The Chinese room argument is silly. The correct conclusion is that the translation books understand Chinese.

    • @TheAmericanAmerican • a year ago

      But without a working mind in the room, the books cannot perform the task.

    • @russellderoeper507 • a year ago +1

      But a person or some entity created the translation book (the instructions). The room (computer) just follows the steps. What the Chinese room argument is trying to say is that a program cannot understand what it is actually doing. The computer has no semantic understanding of what it is processing as input.
      But it is still tricky to fully accept the Chinese room argument. Is there definitive proof that we understand things either, given that we don't know how our brain works? In a sense, our brain is the Chinese room, in which we have no understanding of what process is inside.

    • @zeromailss • a year ago +1

      @russellderoeper507 Yeah, I think that is the biggest problem I have with that argument. I don't think AI understands us, because we don't understand us yet.

    • @mikehenry79 • a year ago

      No. At no time does the combination of the book/person/room actually understand the conversation. For example, if a Chinese person outside of the room sent a panicked message in Chinese into the room telling the person inside to evacuate immediately to avoid a coming tsunami, the person would follow the rules in the book to return an output in Chinese that made it appear the person in the room had comprehended the warning when in fact he did not. By contrast, if the room contained someone who really spoke Chinese, they would immediately comprehend the warning and evacuate. That's the difference the Chinese room problem seeks to illustrate. And it's an apt comparison to ChatGPT--we can't determine, based simply on the fact that ChatGPT puts out coherent outputs, whether it actually "understands" what it's talking about or whether it's just following rules that most often lead to outputs we approve of.

    • @AnnasVirtual • a year ago

      @russellderoeper507 True for programming, but not for neural networks.

  • @lsauce45 • a year ago

    You guys need a physicist's perspective. If AI responses are predictable in an easy way, then they're far from human brain responses.

    • @NewUnit13 • a year ago +2

      The AI responses *aren't* predictable in an easy way. That's why there's a push to halt the progress. For the most part these LLMs take input and give output, but how they go from A->B is too complex to intuit.

    • @lsauce45 • a year ago

      @NewUnit13 Because from a physicist's perspective, human responses are JUST VERY HARD to predict, but they are predictable. So I don't see "much" difference between human responses and AI responses.

    • @lsauce45 • a year ago

      @NewUnit13 Quoting the comment of user "Marie":
      "We need to define what "understanding" means, before we can really discuss this. Don't humans also often just answer in a way that they have learned is appropriate for a certain situation, without the need for any understanding? I think they do."

    • @AnnasVirtual • a year ago

      Yeah, AI responses are predictable.

  • @MrPDTaylor • a year ago

    You are a Chinese room.

  • @AConcernedCitizen420 • a year ago +1

    The title of this video should read:
    How do we get inside the brain of AI?
    This was a waste of time.

  • @JorgeRiveroSanchez • a year ago +1

    This presentation is extremely poor, in my personal opinion. Indeed, I don't understand what she was trying to prove…

  • @soggybiscotti8425 • a year ago +9

    What on earth was this useless TED talk for...
    That was just completely pointless information. We already know what's "inside the Chinese room".
    We designed the room. We wrote the instructions. Why didn't you talk about that so your talk actually had some substance, or actually answer the question instead of pretending like it's some great intellectual pursuit?
    You said you work in AI, but you didn't bother to go even an inch under the surface level and discuss the answer. Why? Then it could have been a decent talk with a reason to exist beyond serving your ego by being able to now tell your friends you 'gave a TED talk'.
    An extraordinarily simplified answer to the question would be along the lines of: No, it doesn't "understand you" in the general sense of sentience or consciousness. It follows a set of rules that govern its behaviour, and based on prior learning and the established rules, it produces an output. Through performing this process billions of times, and with the input of humans to correct it, slowly it becomes capable of processing correctly to the point that it can have a coherent conversation by way of correct output. Hence the learning model. This doesn't even have to apply just to language. The same can be done for maths, programming code, etc. Thus ChatGPT is born.
    There, I just gave more useful information in a paragraph than your entire TED talk. And I actually answered the question in the end.
    This was like 'AI for dummies', and that's being extremely generous.
    What purpose did this serve?
    It feels like how you would explain AI functionality to a toddler, only you didn't bother to actually answer the question being asked, despite the fact that we already know the answer. All you did was basically conflate thought with processing, and then make out like you had just posed some whimsical, profound and deeply philosophical question, when in reality it was just complete and utter nonsense.
    It was about as deep and intellectual as asking: what if God isn't real?
    Wow. Amazing. 👏 Nobody has ever asked that truly profound question before... other than everyone who has lived to the ripe old age of 6.
    I don't mean to offend, but honestly... unless you were made to give this talk, or didn't write it yourself, this was pure, ego-stroking garbage. This is the kind of nonsense you put up just to say you did a TED talk, as filler for your resumé... you'd better hope your prospective employer never actually watches it and sees the content, though.
    Next time, how about actually answering the question being posed, or even just putting any kind of useful information in your talk, not just pretending to have a deep understanding of the topic and going off on a tangent that leads down a long road to nowhere.
    I'm amazed they let this get posted under the official banner of a TED talk. Quantity over quality. TED Talks really aren't up to the standard they once had.
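The "prior learning plus established rules" loop described in the comment above can be shown in miniature: learn word-to-word statistics from text, then generate output purely by rule. A toy bigram model follows, with an invented example corpus; real LLMs learn vastly richer statistics at vastly larger scale, but the learn-then-emit shape is the same.

```python
# Toy version of "learn from data, then produce output by rule":
# count word-to-next-word frequencies, then generate by picking the
# most common continuation at every step.
from collections import Counter, defaultdict

def train(text):
    """Count how often each word follows each other word (a bigram table)."""
    table = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def generate(table, start, length=5):
    """Emit the most common continuation at each step, stopping at dead ends."""
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat sat on the chair"
model = train(corpus)
print(generate(model, "the"))  # -> "the cat sat on the cat"
```

The output is locally fluent yet nothing in the program models cats or chairs, which is precisely the distinction the comment is drawing between processing and understanding.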

    • @NewUnit13 • a year ago +1

      "It follows a set of rules that govern its behaviour, and based on prior learning and the established rules, it produces an output."
      That's a reasonable definition of consciousness though...

    • @soggybiscotti8425 • a year ago

      @dhwardani Thank you, I appreciate your kind words. Nice to see there are others who are privy to the nonsensical 'talk' they slapped out here. I wasn't sure if I was being a little too harsh, as I don't know the circumstances of why this particular person was giving the talk etc. But at the end of the day, I suppose the criticism is directed towards whoever thought this was sufficient to be a talk, and not so much the person giving it, unless the two are one and the same.
      I can't believe that they honestly thought this was acceptable. They essentially say nothing of value through the entirety of the video. It's actually quite an impressive feat to be able to string together so many words and not even accidentally say something that could be deemed interesting or of sufficient value 🤔
      I understand that AI is fairly new to the majority of the populace, and so it's all very mysterious and 'edgy' to say how it might be conscious and whatnot. But when you are meant to be considered an expert in the field, and you address a question as truly significant as this one in a TED talk, something that most regard as a place to receive quality information, and then at the end we are fed what could essentially be labeled misinformation, since she deliberately did not explain the answer in order to increase the level of 'mystery' and interest, I just found it rather underhanded.
      And you can even see in the comments people who think that what she is saying is that maybe AI does understand us, may be conscious, or functions similarly to the human mind, just because of that.
      Had she at least addressed it and answered the question, I'd have had no problem, other than a small amount of my time having been wasted. What I don't like is borderline misinformation being handed out by someone who would call themselves an authority figure in the industry, on such a critical topic that will in short time be fundamentally changing our world, and that does face significant ethical issues regarding this kind of detail. It is a real concern, and they just bastardised it for views.

    • @soggybiscotti8425 • a year ago

      @NewUnit13 I don't know if it's so much a definition as it is a description. Of course, I do see where you are coming from, and it's interesting.
      I suppose one thing I would mention is our ability to consciously break said rules if we decide to. While you could program the ability to break the rules into an AI, it can't ever have this inherent ability from its conception.
      Though one could argue that if it is programmed into the system, it then becomes an inherent trait. Regardless, certainly an interesting point of contention.
      Personally, I find it amazing how you can watch AI develop and how closely it resembles biological evolution. There are going to be some very serious ethical discussions coming in the near future. Though seemingly just not from these people, or from TED Talks.
      I just wish they had addressed it in the video that was meant to be about this kind of detail.

    • @AnnasVirtual • a year ago

      Is this AI generated?

    • @soggybiscotti8425 • a year ago +1

      @AnnasVirtual No, it isn't. But that's rather funny, actually. I should have tried to work out the correct prompts to do something like that, haha 👍
      It would be hard to figure out a prompt containing all of the details of my analysis of the whole video, as the model won't have access to the video itself and doesn't yet have the ability to analyse video, but with enough time to work out the correct inputs you might be able to get something kind of close. I've heard GPT-4 will be able to analyse the content of an image to some extent, though. So no doubt it will eventually be able to do video and audio. But not yet, unfortunately.

  • @thatomofolo452 • a year ago

    Weird