Generative AI is not the panacea we’ve been promised | Eric Siegel for Big Think+

  • Published 10 Sep 2024

Comments • 927

  • @charleshoward3157 · 21 days ago +53

    Sitting in front of the white backdrop, but a zoomed out shot is hilarious to me for some reason.

    • @clray123 · 14 days ago

      They made him sit on it to keep the dirt from his shoes and the spit from his mouth off the elegant floor.

    • @mpprof9769 · 21 hours ago +1

      It's nonsensical, as if it is supposed to make it look more "authentic".

  • @Joe29587 · 27 days ago +459

    This video is completely biased, given that the guy is the co-founder of a predictive AI model.

    • @vidak92 · 24 days ago +56

      Yeah but it's also completely true

    • @magnetsec · 23 days ago +10

      bro really just said y = wx + b, where b = +infinity for predictive AI
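The y = wx + b quip refers to plain linear regression, arguably the simplest form of "predictive AI". A toy sketch with invented data (pure Python, closed-form least squares):

```python
# The y = wx + b quip, taken literally: the simplest "predictive AI" is a
# one-feature linear model fit by least squares. All data below is invented.
xs = [1.0, 2.0, 3.0, 4.0]   # feature, e.g. months as a customer
ys = [2.1, 3.9, 6.2, 7.8]   # target, e.g. spend

n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
# Closed-form least-squares estimates of slope w and intercept b
w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - w * mx

def predict(x):
    return w * x + b   # y = wx + b

print(round(w, 2), round(b, 2), round(predict(5.0), 2))
```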

    • @MoreFootWork · 23 days ago +2

      BT is sht

    • @luizgustavoarantes · 22 days ago +13

      he literally said that most of it is hype lol

    • @therainman7777 · 22 days ago +1

      Yeah, what a joke.

  • @tonyxavier6509 · 23 days ago +59

    If you hear "this is just the beginning" this frequently, it's highly likely that you are hearing about a tech bubble.

    • @DaleIsWigging · 16 days ago +4

      The bubble is the investors, not the tech. There is no such thing as an open source bubble.
      They are more like bricks that never get used to build a house but might get repurposed later.

    • @sandponics · 5 days ago

      In the beginning was the word, the word was with God, and God was the word.

  • @ChefAndyLunique · 22 days ago +89

    He hasn’t given me confidence in the idea that AI isn’t going to completely change our lives.

    • @katehamilton7240 · 21 days ago +23

      Relax. Computers are limited because maths is limited. There are also physical limits (entropy, energy). AGI is a pipe dream.

    • @danieljames4050 · 21 days ago

      It already did change our lives ten years ago when social media giants utilised it to hijack our attention.

    • @munarong · 20 days ago +9

      For the upper class, yes, less will change; but for normal people, lower middle class and below, I believe it will change a lot. I myself lost my job partly because of AI. AI in, human out. (no joke)

    • @HouseRavensong · 18 days ago +1

      When it comes to the economy, the Fed is always the last to know. They are like a Victorian detective who explains the crime days after it happened.

    • @josersleal · 15 days ago

      @@munarong how did you lose your job? incompetence?

  • @crazycool1128 · 25 days ago +261

    The problem started when they rebranded machine learning as AI

    • @rocketman-766 · 24 days ago +9

      Ikr, every machine learning feature gets rebranded as AI now.
      "We use AI to reduce noise in your vocal recording" sounds fancier than "We use a machine learning algorithm to reduce noise in your vocal recording".

    • @matheussanthiago9685 · 24 days ago +18

      "The biggest trick linear algebra ever pulled was convincing people it was ever 'intelligent' "

    • @martiendejong8857 · 23 days ago +5

      But then what is AI?

    • @nickgirdwood3082 · 23 days ago +14

      The problem is you don't understand what you're talking about.

    • @wanderlust0120 · 23 days ago +2

      Idk why this is not discussed more. The most devious rebranding exercise in history.

  • @johndhoward · 27 days ago +507

    This is a nice commercial for his products, but it's interesting he criticized GenAI for not getting it right every time. Predictive AI doesn't either, its predictions are self evidently best guesses too. The difference isn't capability, it's expectations.
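The point above, that predictive AI's outputs are best guesses acted on via expectations, can be sketched in miniature; the scores and outcomes below are invented:

```python
# Minimal illustration: a predictive model emits probabilities (best guesses),
# and a business acts on them via a threshold. Scores and outcomes are invented.
scores = [0.92, 0.80, 0.65, 0.40, 0.30, 0.10]    # model's churn probability
actual = [True, False, True, False, True, False] # what actually happened

threshold = 0.5
decisions = [s >= threshold for s in scores]
accuracy = sum(d == a for d, a in zip(decisions, actual)) / len(actual)

print(accuracy)   # well below 100%, yet possibly still better than guessing
```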

    • @shivangchaturvedi237 · 27 days ago +10

      I agree!

    • @AndersRosendalBJJ · 27 days ago +26

      Yeah he seems very untruthful

    • @bsmithhammer · 27 days ago +10

      And it doesn't help when those expectations are grossly distorted by the media, pundits and advertisers.

    • @geoffwatches · 27 days ago +11

      I mean, predictions are inherently fallible and everyone knows this. Whereas the other day ChatGPT (which we expect to be pretty accurate) did a sum for me that was wrong; I asked it to recheck and it was like, oh sorry, I was wrong. Bizarre.

    • @thereal415er · 27 days ago +31

      He literally just said that in this video. Did you not watch the whole thing? From this comment it seems you rushed to put in your two cents before actually seeing the entire video. He clearly mentioned that predictive AI gets things wrong, but that in certain contexts, like UPS's, the benefits outweigh the downfalls to the tune of 350 million dollars saved per year. LISTEN.

  • @AdiSadalage · 27 days ago +387

    "Predictive AI", previously branded as "big data analytics" 🤦‍♂

  • @AlexWilkinsonYYC · 17 days ago +18

    Man confuses statistics, decision trees, analytics and algorithms with genuine AGI efforts (since 1991).

    • @EricSiegelPredicts · 16 days ago +5

      I'm the guy in the video. I would say since the 1960s!

    • @jimj2683 · 5 days ago

      @@EricSiegelPredicts Great video! Do you have a business idea for a fresh graduate (in robotics and ai)?

    • @theawebster1505 · 18 hours ago

      Everything is AI today. Even if it is pure statistics with a switch operator.

  • @adambuser6137 · 24 days ago +52

    Wow. Steve Martin really can do anything.

    • @EricSiegelPredicts · 23 days ago +3

      I'm the guy in the video. You're the second comment comparing me to Steve Martin. I grew up quoting him and will be passing your comment along to my old friends and family! :)

    • @tigretarot8947 · 22 days ago +2

      @@EricSiegelPredicts I was thinking an older Ryan Reynolds haha

    • @EricSiegelPredicts · 21 days ago

      @@tigretarot8947 Haha. Thanks, but that might be bad news. I'm a big fan of his, and I think he's good-looking, but strangely my wife has told me that she doesn't like his looks! 🤯

  • @myleshungerford7784 · 26 days ago +158

    Generative AI as good for "first drafts" is a pretty good description of where we're at now. But as the old saying goes, "this is the worst it's ever going to be."

    • @rahulbhatia4775 · 26 days ago

      Really bro, AI can't understand anything. Its human-like responses are created by designers and programmers. It doesn't understand anything and maybe never will. Cryptocurrency was the next thing after money, right? Did it take off? Countless businesses have professed revolutions, but most have failed.

    • @cactusdoodle8619 · 24 days ago +8

      but is there still that fundamental flaw in LLMs that will never allow the "100%" that is needed for AGI? I totally agree with where I think you're going, that it's going to improve and improve from here on out, but I have a feeling there will have to be some kind of MAJOR shake-up before the next huge leap like the one we all felt in the early months of 2023.

    • @myleshungerford7784 · 24 days ago +5

      @@cactusdoodle8619 Honestly I don't know. AGI is beyond my expertise. But as someone who builds predictive models, I can confirm generative AI hasn't helped with that. For writing and debugging code, however, it speeds things up immensely. Incremental improvements on that are a big deal even if we don't get AGI, which would change everything.

    • @cactusdoodle8619 · 24 days ago

      @@myleshungerford7784 yeah for sure. I'm using ChatGPT as a programming buddy; it's letting me get things done that would otherwise take weeks to figure out. I can't quite say it serves as a mentor, because I still need to know jussssssst enough to catch it when it misunderstands something. But the pace I'm working at with it is letting me really learn some amazing things. It blows my mind the most when it comes to brainstorming functions and test cases.

    • @joseahi349 · 24 days ago +10

      For me, AI is currently a tool that helps me a lot with coding, meaning I can develop software that helps with my repetitive job. But I have to find patterns in my tasks to make this useful. In addition to that, I try it for consultancy... and in this area it fails a lot. What is worse, it gives you confident answers that are false; it makes things up as if it were trying to please you rather than answer correctly.

  • @gummylens5465 · 27 days ago +359

    "I want AI to do my laundry and dishes so that I can do art and writing,
    not for AI to do my art and writing so that I can do my laundry and dishes."

    • @TomatoTomas20 · 27 days ago +5

      Haha you’re funny 😂

    • @onlyrick · 27 days ago +4

      @gummylens5465 - A most excellent observation! Be Cool.

    • @61zu · 27 days ago +13

      I want AI to do art and writing and laundry and dishes and everything else that I don't want to do.

    • @SOURCEw00t · 27 days ago +7

      You're better off doing your dishes yourself. It can be therapeutic and help you be more creative with your art and writing.

    • @ianmatejka3533 · 27 days ago +3

      It’s not that simple.
      Doing the dishes is an incredibly hard task from an AI standpoint.
      First you need mechanical robotics that have a sufficient level of balance and dexterity. This is incredibly difficult, the proof being we’ve worked on robotics for decades, yet do not have a single commercial general purpose robot yet.
      In conjunction with a general purpose robot, you need a sufficiently advanced computer vision system to allow the robot to perceive the environment. Tesla has been trying to implement self-driving cars for years, yet that last little bit of precision is difficult to achieve.
      On top of computer vision, you’ll need some form of general intelligence that is capable of planning to a degree. In my opinion ChatGPT is good enough for this task, but you’ll still need to embody it with RAG and a long term memory system.
      The point is, generative AI is a stepping stone to the kind of AI we all want. We cannot achieve robots that are capable of general tasks, without sufficient advancements in computer vision.

  • @litpapi1849 · 27 days ago +73

    Generative AI is literally built on predictive models, using them to create new content by predicting the next element in a sequence. It’s an evolution of predictive AI. It’s just predictive AI doing something more creative. So saying predictive AI is better makes no sense lol
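The "predicting the next element in a sequence" claim above can be sketched with a toy bigram model; the corpus and function names below are invented for illustration:

```python
import random

# Toy sketch: "training" a language model is learning which token tends to
# follow which, and "generating" is just running that prediction repeatedly.
corpus = "the cat sat on the mat the cat ran".split()

follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, n, rng):
    """Predict the next word n times, sampling among observed continuations."""
    out = [start]
    for _ in range(n):
        if out[-1] not in follows:   # dead end: no observed continuation
            break
        out.append(rng.choice(follows[out[-1]]))
    return out

print(" ".join(generate("the", 5, random.Random(0))))
```

Every generated word is a prediction conditioned on the previous one, which is the commenter's point in miniature.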

    • @RadiantNij · 27 days ago +3

      My thought exactly. Gen AI uses hypothetical situations (in a sense, hallucinating with a purpose) plus reasoning to know what to predict. I think arguing for a simpler predictive system makes more sense when you are talking about security and reliability. In such cases, more sophisticated expert systems should be in play.

    • @ultramegax · 26 days ago +5

      Yep, I agree. His argument makes little sense, if you have any understanding of how ChatGPT and its ilk work.

    • @JohnDoe-my5ip · 21 days ago +1

      It’s predictive in the same way that autocorrect is predictive. It is a fundamentally random process, like a Markov chain. This is so fundamentally different from how ML/predictive analytics works, that it’s almost the dual of genAI.

    • @litpapi1849 · 21 days ago +2

      Generative AI might introduce elements that seem random, like autocorrect or a Markov chain, but it's not purely random. It's guided by probabilistic models, which are structured forms of prediction. While there are differences, I wouldn't call them "dual" systems; generative AI builds on predictive principles, taking them in a more creative direction rather than opposing them.
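The "guided by probabilistic models, not purely random" distinction can be sketched as two uses of one model: deterministic argmax versus weighted sampling. The scores below are invented:

```python
import math
import random

# One probability model, two modes of use. Deterministic argmax is the
# "predictive" mode; weighted sampling is the "generative" mode.
logits = {"cat": 2.0, "dog": 1.0, "teapot": -3.0}

def softmax(scores):
    exps = {w: math.exp(v) for w, v in scores.items()}
    z = sum(exps.values())
    return {w: e / z for w, e in exps.items()}

probs = softmax(logits)

# Predictive use: always take the single best guess.
best = max(probs, key=probs.get)

# Generative use: sample in proportion to the model's probabilities;
# "teapot" is possible but rare, i.e. structured prediction, not noise.
rng = random.Random(0)
sample = rng.choices(list(probs), weights=list(probs.values()))[0]

print(best, round(probs["teapot"], 4))
```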

    • @animusadvertere3371 · 21 days ago +1

      Not

  • @JudgeFredd · 25 days ago +5

    The bubble will explode sooner or later…

  • @M1k3M1z · 22 minutes ago +1

    This is NOT generative AI at all. "Predictive AI" is no different than a super-smart autocomplete. No one has been able to develop true GenAI, because THAT will be the real turning point in AI. GenAI doesn't just predict, it creates NEW thought. Not new output, like people are claiming in the comments: a genuinely new thought. That's what everyone needs to fear.

  • @cruzilla6265 · 23 days ago +17

    What's with people pointing to the potential for generative ai to make errors but seemingly ignore the number of errors (particularly per unit time) that humans would make?

    • @Jupa · 21 days ago +5

      Depends entirely on the task.
      Humans use language to communicate an understanding. AI adapts natural language into codified algorithms. They are playing two different games. Tasks that require abstract thought, continuity, and compelling action would be better suited to a teenage girl than GPT9.9.

    • @toreon1978 · 21 days ago +1

      @@Jupa and you know this… why?

    • @uxjared · 20 days ago +3

      @@toreon1978 It's only predicting on a word-by-word basis. When it "aces" the bar and doctorate exams? It's searching its data scraped from the internet to predict answers. I'm pretty sure I could ace the bar exam with internet access. One of the main reasons this seems bigger than it is right now is that we are fooled by the language it generates, thinking that it can think and reason at a high level. It can't. The little errors or hallucinations it makes prove that even at a low level, it just can't cut the cake. Those errors mean you have to go over everything it does and that you can't trust it to complete the task. It's a word predictor and internet data pool. Which is still valuable.
      The problem is gen AI is inflated like dot-com in the 90s. It's not delivering revenue for any of the companies after billions have been poured in. It's not that useful right now. There are also huge bottlenecks in development and power consumption.

    • @Jupa · 20 days ago

      @@toreon1978 actually my name is Gideon, and I know everything.

    • @clray123 · 14 days ago +2

      It's more about the kind of errors. The kind of errors current LLMs make cause them to be nearly useless in my work.

  • @HAWXLEADER · 20 days ago +4

    What I love about generative AI is that it takes the place of boilerplate stuff.
    Summarize this, fluff this up, write a summary of what this code does, make this code more efficient using hashing, etc...
    I could do these myself, but since I don't have to and only need to proofread, I can do much more.
    I'm not afraid that it'll replace my job; it'll just make the job faster and hence more interesting.

  • @mrparkerdan · 26 days ago +74

    I used predictive AI to invest in the stock market... so far, I've lost 72% of my investments 🤦🏻‍♂️

    • @madalinradion · 24 days ago +18

      It predicted you'll lose your money, ai is working as intended boss 😂😂

    • @ivandejesusalvarez9313 · 23 days ago +1

      No you did not lose 72% of your investments. Were you NOT paying attention during the investment part of the whole thing? Were you making a sandwich?

    • @user-qv6fe9dy8l · 23 days ago +3

      Not that kinda prediction bro😂😂😂

    • @user-qv6fe9dy8l · 23 days ago

      ​@@madalinradion😂😂

    • @NealBurkard-ut1oo · 23 days ago +1

      Haha, if it worked, whoever created it would be using it, not selling it. I wonder if it accounts for all the other licensed users being fed the same info.

  • @epoyworld · 2 days ago +1

    If Ryan Reynolds and Mark Ruffalo had a son, he would be the guy in the video.

  • @mitchs6112 · 23 days ago +6

    I don't need generative AI to be "autonomous"; it's already 2 or 3x'ed my output as a Product Manager.
    Everything always gets boiled down to binaries: if it's not autonomous, then it's a failure? I don't understand that argument and can only assume that there are other motivations for this guy's negative perspective, like VC funding for his own company.

    • @EricSiegelPredicts · 22 days ago +1

      I’m the guy in the video. I didn’t say genAI is a failure (or not valuable)!

    • @jimmorrison2657 · 15 days ago +1

      "it's already 2 or 3x'ed my output as a Product Manager" - Has it really though?

    • @fofopads4450 · 8 days ago

      yeah, what did it do? make your bullet lists?

    • @mitchs6112 · 7 days ago

      You’ve basically described the peak of inflated expectations stage of the Gartner Hype Cycle.
      Generative AI helps me to synthesize larger docs into dot points, create templates for product requirement docs, explain technical terminology and things like that.

    • @jimmorrison2657 · 7 days ago +1

      @@mitchs6112 Yes, but after you've proofread and corrected it, I find it hard to believe it makes you two to three times more productive. I use it every day for my work too, and I love it, but I would say it makes me ten percent more productive.

  • @salmajaleel5800 · 12 days ago +1

    Predictive AI is just as much "hype" as GenAI. This video felt like an ad, and very biased. I say this because his speech was a constant critique of GenAI even as he kept admitting that predictive AI has the exact same limitations, and he never once gave actual facts; he just kept promoting.

    • @EricSiegelPredicts · 10 days ago

      I'm the guy in the video. I'm struck by seeing several comments like this accusing me of bias (i.e., ulterior motives). I'm used to being an educator who's trusted, so... here's the thing: I'm not saying that predictive AI should get at least as much attention as generative AI because I'm personally more invested in predictive AI; it's the other way around! And in fact, most of my writing and work leads with predictive AI's limitation: it isn't a crystal ball. However, a little prediction goes a long way; predicting better than guessing is generally more than sufficient to improve the effectiveness of large-scale operations.
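The "predicting better than guessing is more than sufficient" claim can be made concrete with back-of-envelope targeting arithmetic; every number below is invented for illustration, not taken from the video:

```python
# Back-of-envelope sketch: a model far short of a crystal ball can still flip
# a campaign's economics. All numbers here are invented for illustration.
customers = 1_000_000
response_rate = 0.01        # 1% of contacted customers buy
cost_per_contact = 2.0
profit_per_sale = 100.0

def campaign_profit(contacted, responses):
    return responses * profit_per_sale - contacted * cost_per_contact

# Guessing: contact everyone.
mass = campaign_profit(customers, customers * response_rate)

# Predicting slightly better than guessing: contact only the top-scored 25%,
# who (hypothetically) contain 60% of the eventual buyers.
targeted = campaign_profit(customers * 0.25, customers * response_rate * 0.60)

print(mass, targeted)   # the mass campaign loses money; the targeted one profits
```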

  • @Mr.Andrew. · 25 days ago +20

    I stopped the show at 1:10 just to say "duh": predictive AI is likely to benefit society more than stochastic generative AI. But it does come with the risk of reinforcing stereotypes, so we must be careful in how we apply it. For example, predicting what goods to ship where and when based on market demand is great for logistics efficiency at scale. But to assume that an individual wants a certain type of product, and to only present them with similar options, is to restrict potential and opportunity. So we must be careful in applying any AI, predictive or stochastic in nature.

    • @katehamilton7240 · 21 days ago

      Computers are limited because maths is limited. There are also physical limits (entropy, energy). AGI is a pipe dream.

    • @Mr.Andrew. · 21 days ago

      @@katehamilton7240 depends on the definition of "AGI". Everything has limits; that doesn't mean you can't create artificial intelligence that can "generalize" just because entropy exists. Nature and evolution did it with us despite this limit. The real question is whether we should keep trying, and what we should do with this technology once we have it. The cat is already out of the bag; it's just a matter of time and of decisions about how to use and control it.

  • @JohnTell · 27 days ago +71

    Thank you Big Think for surfacing this topic. I work in software sales, and my LinkedIn page is bombarded with AI hype and staged progress. Big media just looks at those staged AI visions and amplifies the hype, to the extent that most of the people I know now believe that AGI will arrive any minute. For a lot of people this is causing distress.

    • @olafsigursons · 27 days ago +4

      It's just starting. It's like looking at a Ford Model T and thinking the automobile is just hype. LOL.

    • @MarioTsota · 27 days ago +10

      @@olafsigursons GPT-4 was trained on almost all of the internet already. There is not much more data for AI to be trained on. If they choose to train AI models on AI outputs, studies show that they degenerate quickly. The models are most likely going to plateau, since the data and energy demands are exponential while their supply is not.

    • @xxgaelixsxx8151 · 27 days ago

      But let's be realistic: how much time until we get AGI?

    • @mistycloud4455 · 26 days ago

      AGI will be man's last invention

    • @williampatton7476 · 26 days ago

      I disagree. I'm not a big fan of it, because I think it will make the world boring. But while there is some hype, it's not all hype. And why should we be so sure we understand anything? When he says the AI doesn't "understand", what he's saying is that it isn't conscious. But it's in good faith that we assume that of anyone else. Why refuse it to an intelligence just because it's made of silicon? And it's in its early days too. To say that it's hype, while partly true, doesn't seem to capture the genuinely groundbreaking and actually interesting, mind-bending things it's presenting. But again, for all that, I hate what it is doing to human creativity and think it will make the world boring.

  • @jamesmonschke747 · 27 days ago +97

    I have been saying for a long time that "generative AI" / "large language models" are NOT "artificial intelligence". They are "imitation intelligence".
    They can only imitate the data that they were trained on, constrained by a query.
    Edit / expansion: Consider that intelligence is orthogonal to knowledge. I.e., a person can be intelligent but ignorant (due to lack of education), or can be knowledgeable (educated) but not intelligent. I would argue that LLMs / generative AI may be considered a form of knowledge representation, but without intelligence. If they had intelligence as well, then we might not see things like pizza recipes that use Elmer's glue.

    • @Will140f · 27 days ago +8

      They synthesize. They don't generate. There is nothing they can say that isn't based on training data.

    • @ianmatejka3533 · 27 days ago +16

      Their data is the entire internet of human writing.
      LLMs operate in an “embedding space”. The embedding space is a high dimensional vector representation of written words that was formed during the training process.
      When an LLM generates a token, it can be thought of as “walking a path” along the geometry of the embedding space.
      Although the embedding space is fixed after training, you can still “teach” it new things temporarily by providing examples within the context. The LLM will pick up on the pattern and “navigate” the embedding space differently
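The embedding-space description above can be sketched with toy vectors and cosine similarity; the 3-d vectors below are hand-made stand-ins, not real model embeddings (which have hundreds or thousands of dimensions):

```python
import math

# Toy sketch of an "embedding space": words as vectors, with related words
# closer together (higher cosine similarity). Vectors are invented stand-ins.
emb = {
    "king":   [0.90, 0.80, 0.10],
    "queen":  [0.88, 0.82, 0.12],
    "banana": [0.10, 0.05, 0.95],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# "king" sits far closer to "queen" than to "banana" in this space.
print(cosine(emb["king"], emb["queen"]), cosine(emb["king"], emb["banana"]))
```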

    • @bobsavage3317 · 27 days ago +12

      Back in the 70s we used to talk about "AI", but we collectively realized that was far too ambitious a goal, so we started acknowledging we were working on something "weaker" than true AI and switched to talking about "machine learning". Now tech bros talk about "AI" without having solved any of the "hard" problems associated with it. Why? Because $$$. It is a scam. We are no closer to "AI" (now rebranded as "GAI"). Hype -> $$$.

    • @darkevilbunnyrabbit · 27 days ago +13

      People keep saying this despite the fact that emergent thinking crops up with higher parameter counts. Humans also learn by analysing patterns in their environment (data set) and predicting the appropriate response to current stimuli. You could make humans out to be unintelligent machines with the same reductive reasoning.

    • @ManjaroBlack · 27 days ago

      Literally describing humans. We can only imitate and generate from the data we were trained on. No one is pulling new ideas out of thin air.

  • @MotocrossElf · 23 days ago +2

    He's full of hype too, just for his own work. Framing predictive and generative AI as if they're in conflict is fallacious. They're different use cases of sophisticated computing, so there ought to be room for both. And if we ever do get AGI, you'd better believe it'll be able to do both and know when to apply each in turn.

  • @HollywoodCameraWork · 25 days ago +13

    Lots of comments here seem to think that AI will improve linearly towards AGI, but this won't happen without some new fundamental discovery yet to be made, which could happen in 5 days or in 50 years. LLMs and diffusion models are maxing out and have nearly stopped improving, even with trillions of training examples. Yet a small child can learn from a single example, which shows that our brains' architecture is different in critical ways. Kudos to humanity for discovering a small piece of the puzzle, but that's all it is. All our work is in front of us, and it's not linear. We're on the plateau now.

    • @SEALCOOL13 · 23 days ago

      Bro, this reads like the speech the leader of the resistance gives before going on a last-ditch effort against the machines in a post-apocalyptic sci-fi movie. The last line in particular has that "Adam-McKay-movie-ending-about-how-some-global-tool-was-mishandled" type energy.

  • @capn_shawn · 3 days ago +1

    Every time I read something from Generative AI, it reminds me of Christian Bale speaking in "American Psycho".
    Lots of words, no depth or understanding.

  • @CharlF932 · 27 days ago +47

    Totally agree. The hype is so hyped that we've "decided" everything surrounding AI even though it's still in development. This is life today, where a movie is not yet in theaters but people already talk about how good or bad it is.

    • @darkevilbunnyrabbit · 27 days ago +1

      It's like people want AI to flop.

    • @cesar4729 · 26 days ago +1

      @@darkevilbunnyrabbit Which is neurologically explainable. Our subconscious brain is like a baby, uncomfortable with the threat of uncertainty. Faced with a phenomenon that threatens our very foundations, it is difficult to expect a wise and measured reaction.

    • @victorkaranja1420 · 23 days ago +1

      @@darkevilbunnyrabbit yeah, it's almost like we can already see the negative consequences from a mile away and would rather avoid them, like the crap we get from social media or the current-day internet (except somehow worse, because this realistically has less benefit for the layman).

    • @katehamilton7240 · 21 days ago

      IKR? Computers are limited because maths is limited. There are also physical limits (entropy, energy). AGI is a pipe dream.

  • @alvingalang5106 · 21 days ago +17

    I am a software engineer, and I used to be skeptical about AI. But then I used Copilot, the generative AI from Microsoft. It blew my mind that it can now produce a piece of working code. Simple? Yes, but we can then elaborate to add more features. That adding-features part is something an intelligent agent can do. I mean, creating simple code can be done as simply as grabbing code off the internet. But then modifying it to match our expectations? That's different.
    Maybe it's not that sophisticated now, but I am afraid that if it can code, then theoretically it can make itself better, in a good way or a bad way.
    Also, those of you who use gen AI are going to have more advantages than those who don't. It's nevertheless worth exploring.

    • @tiredfox2202 · 20 days ago +5

      meanwhile, Copilot suggests stuff like "x = condition ? true : false", so I'm not impressed.
      It also fails 80% of the time when you actually have a problem you need to solve.
      I feel like at the end of the day, it's just fancy autocomplete that needs to be carefully proofread.
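The suggested line in the comment above really is a no-op wrapper around the condition; in Python terms:

```python
# The complaint above: a comparison already yields a boolean, so wrapping it
# in a conditional is redundant. (Python spelling of the C-style ternary.)
condition = 5 > 3

x = True if condition else False   # what the suggestion amounts to
y = condition                      # idiomatic equivalent

print(x, y, x == y)
```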

    • @babybirdhome · 20 days ago

      I use ChatGPT for coding assistance all the time, but what I’ve found is that it quite often doesn’t pass thorough quality control tests. It needs numerous iterations to get things right for quite a lot of the things you ask it to do. But then, I’m also not a software developer - I only write code for utility to get things done in my job, not to write releasable software.
      But having used production software since the 1980s, I will say that the quality of what's released is in continual decline, and increasingly so in the most recent years. Yes, things like agile and devops do speed up getting fixes into the software, but I don't think anyone has actually stopped to do a deep analysis of the lost productivity of:
      1. Time lost to software not doing what it’s supposed to do the first time
      2. Time lost to constantly having to re-learn how to use the software every few months with constant new releases and updates changing things - often just for the sake of changing things, or because they weren’t thought through sufficiently in the beginning
      3. Frustration and having to constantly develop workarounds to software bugs or missing functionality because everything is released as minimum viable product instead of something carefully thought out and fully developed
      All of these inefficiencies are “improved” for software development companies because they’re not wasting time and money developing the wrong features or wasting time to get their product to the market, which, if done properly can get the most basic functionality into people’s hands earlier so they can begin to benefit sooner. However, everyone completely ignores the flip side of that coin - all the productivity lost due to the issues listed above multiplied across the entirety of humanity trying to get things done. And this also ignores the tangential negative impacts of:
      1. Drastically increased frustration of workers who used to be able to just learn how to do the job they needed to do and then become an expert in it and do it with maximum efficiency for themselves
      2. The fact that these are strictly limited resources (that can only be improved so far through effective education and human skills development) in the end - everyone cannot be Einstein. We all have limits to what we’re capable of learning and adapting to, which limits what we’re capable of accomplishing. The tools we create are typically applied to helping us accomplish more, but as we inadvertently dedicate more of those finite human development resources to learning and adapting to constantly changing tools and workflows, we’re taking those away from actual productive, beneficial learning and outcomes.
      3. Loss of capacity because there’s no time for anyone to become an expert in anything if they’re part of the ordinary working class, because everything they do and every tool they use to do it is in a constant state of flux and re-engineering and change - both necessary and unnecessary as dictated by marketing teams and data saying “we need to rebrand to stay relevant or popular and keep making line go up and number get big”
      4. The fact that the capacity of everyone is not equal, and we have likely reached a saturation point already in terms of who is capable of what and to what extent, meaning that further “improvements” to be made by using these paradigms is likely to have the opposite effect when stretched across industries and around the globe, but it will have become a steadfast religion by then, and you’ll never be able to convince anyone to change anything to produce better results - the same way agile and devops struggled to do so when they were first created and introduced.
      5. The knock-on effect of the above as their impact stretches out beyond the end user of the tools these paradigms are creating - in terms of a frustrated person getting off work and still dealing with and recovering from that stress and frustration as they interact with others in the world who may not be so directly impacted by those shortcuts and cut corners
      6. The health impacts of the above on the aggregate population of humanity
      7. The impact on how the next generation encounters the world, what it finds acceptable, the values that living in that world creates and necessitates without having ever considered what they would be, and the sociological problems those will create and the cost of mitigating or treating them
      And numerous others, if I wanted to spend the time to keep writing longer comments into the void of YouTube comments.
      In the end, I’m not positive, but am quite convinced and becoming more so as time goes on, that the benefits do not outweigh the negatives here, but we’re unlikely to find that out until it’s too late.
      Don’t get me wrong - generative AI is a useful tool when used judiciously and carefully. But history has taught us nothing if not that overestimating its value will lead to terrible and preventable results. It is likely to be like any other powerful tool - it will not make us better if we do not value and focus our efforts on making US better. It will likely only take our existing problems and multiply and magnify them to the extent that it can increase efficiency in anything. These tools cannot and will not make human beings better - they will only make human beings more of what we already are, and historically, that has not been a good thing for most people. It has managed to smooth out some of the spikes of human experience, but it has done so by leveling the peaks to a higher average of terrible. For example, wars are less common, but the quality of life is lower. We have a ton more productivity and “simplify life” tools than we’ve ever had before, but all we’re doing is having to work longer and harder for less. Again, the peaks are fewer and lower, but the aggregate average is still not better.

    • @andybaldman
      @andybaldman 19 days ago

      AI will replace coders. But that job had evolved into mostly googling and copying existing code anyway. It's also not exactly surprising that a system made of code would master coding first.

    • @nicholasmassie
      @nicholasmassie 19 days ago +1

      I'm a software engineer who has used GPT and Copilot from day one. I have built LLM apps that went to production. The limitations are numerous and they are not replacing devs anytime soon. Not to mention half the time it gets in my way and I was better off without it. We are at the end of this hype cycle.

    • @nicholasmassie
      @nicholasmassie 19 days ago

      @@andybaldman That is so untrue. Most devs are using google to read documentation not copy and paste entire modules of code. Maybe you are just thinking about some basic web dev stuff lol.

  • @HashemMasoud
    @HashemMasoud 25 days ago +3

    2:52 how can you expect people to proofread AI answers when those people are lazy and used AI for the purpose of reducing their mental efforts?!

  • @davidmead6337
    @davidmead6337 23 days ago +1

    I am so hopeful that AI can speed up medical research outcomes. There are so many questions in medicine where we cannot actually get to the sources of disease, particularly within the general area of the immune system. So many problems are just treating symptoms without really knowing the real causes. I am a retired M.D. On we go.

  • @howtoactuallyinvest
    @howtoactuallyinvest 27 days ago +26

    This prob won't age well. Meta used 16,000 Nvidia H100 chips for Llama 3... They have over 600,000 chips that are going towards future models and they're still buying as many chips as they can get their hands on. Then there are advancements on the algorithmic side that will continue to add step-change improvements.

    • @onlyrick
      @onlyrick 27 days ago +7

      @howtoactuallyinvest - The whole enterprise is advancing so rapidly that I expect nothing said today will be pertinent for long. Exciting times, indeed! Be Cool.

    • @amdenis
      @amdenis 27 days ago +2

      You are so correct. Sad how many "experts" with limited DL/NN R&D and related experience are repeatedly so wrong.

    • @felipebodelon3407
      @felipebodelon3407 27 days ago +11

      Well the point is not how much they are investing but how much value they can generate out of that investment. Right now it may be too early to tell, but the overhype is already here. You could even say the hype is proportional to the amount of money invested, tho hype doesn't assure value.

    • @darkevilbunnyrabbit
      @darkevilbunnyrabbit 27 days ago +9

      People are coping hard in a self-soothing kind of way. It's as if they think the rate of AI progress is suddenly going to flatline overnight and people aren't working toward new architectures or integrated applications. Are people really going to scream AI winter when it's only a few weeks between big AI announcements now?

    • @AdamJorgensen
      @AdamJorgensen 27 days ago +6

      Keep chugging that Kool Aid 😂

  • @toreon1978
    @toreon1978 21 days ago +1

    7:44 ha ha ha. Of course it’s about value. But what the heck does that have to do with genAI not being able to do all this if it gets more and more enhancements? No proof whatsoever. Only conjecture. Sorry, but this sounds so self-serving.

  • @bnjiodyn
    @bnjiodyn 27 days ago +29

    So, Gen AI will never get any better. Right... I wonder how much Big Think got paid for this ad

    • @squamish4244
      @squamish4244 26 days ago +1

      Yeah. All these contrarians always assume that AI will stop developing RIGHT NOW. Gen AI RIGHT NOW is not 100% perfect, therefore it will never get any better - which gets your attention for like a week.

    • @rahulbhatia4775
      @rahulbhatia4775 26 days ago +6

      ​@squamish4244
      If that's the case, then why have many great technologies failed, like nuclear energy? That is more important than this technology anyway. AI today is made by scraping the internet, which is unethical in the first place, and even then I've used ChatGPT extensively and it is just a realistic Jarvis from Iron Man. By that I mean it doesn't understand anything and just gives generic answers, which are useful by the way. The tech companies are lying to you and this will be abandoned in the future. It's just a modern-day Google Glass. Even VR sucks and it's been a decade since its inception.

    • @squamish4244
      @squamish4244 26 days ago +7

      @@rahulbhatia4775 Nuclear energy did not fail due to any technological problem. It failed for political reasons. Nuclear fearmongering and bureaucratic fuck-ups led to the cost of reactors soaring, as they had to be made ridiculously safe, way beyond the standards required of coal plants, for instance.
      And every Chernobyl, Three Mile Island and Fukushima was met with hysteria, even though coal smoke kills 800,000 people a year and contributes heavily to global warming. Nuclear is clean and had the building of plants in the 60s continued at that trend, today nuclear and hydro would power the entire USA and its emissions would be much lower. It would also power Europe and China and Putin would not have gotten the idea in his head to hold Europe hostage with gas and oil. India has its own vigorous nuclear program and many plants are being built. Research into thorium reactors continues.
      If anything, nuclear is a cautionary tale about f*cking up and blowing it on a wonder technology.
      As for ChatGPT, like many others on here have said, it is the dumbest AI will ever be. It's merely the beginning.
      Scraping the Internet is controversial, but I would argue that it is not unethical. We voluntarily put our information online. It was our choice. We didn't have to. We simply didn't care and decided the risk was worth it. NVIDIA is scraping a human lifetime's worth of data from YouTube a day, and YouTube has 14 billion videos - that WE put on there. YouTube is open source. Facebook is open source. Google Maps is open source. I think we should get paid for our data, and that is my argument. They are making money off our data, so we should get paid for it. But it's not illegal to scrape it.

    • @seonteeaika
      @seonteeaika 22 days ago +2

      People seem to have the idea that an LLM like ChatGPT is only ever going to be fed more data, and that's all the development it will ever get. Same faults with slightly more accuracy over time, but never perfected. It's as if it escapes their mind that companies also employ coders - and they're not the people who add or manage the data! They refine and redefine how to use it, and how to collect it. Do they even understand how new the whole concept still is, to think it would already be stagnating?

    • @clone3_7
      @clone3_7 21 days ago

      @@squamish4244 I disagree about the scraping-the-internet part. It is clear that most AIs seem to have access to books which otherwise would have required purchase online, and I doubt these AIs have paid a penny, yet they know books and can quote from them fairly easily.

  • @mahiaravaarava
    @mahiaravaarava 16 days ago +1

    It's important to acknowledge its limitations and challenges as well. I agree with Eric Siegel's perspective that we must temper our expectations and approach this technology with a balanced view.

  • @phantomoftheparadise5056
    @phantomoftheparadise5056 27 days ago +7

    Again, no vision; we can't think about tomorrow in a linear way. It is not because predictive AI has been the best option so far that it will always be the case. There is no use case of predictive AI that has not been replicated as a proof of concept with GPTs.
    The argument is always the same: we need human intervention to watch what the AI is doing - is it not the same with human employees? Let's talk about it again when autonomous AI agents hit the market EOY or 2025...

    • @lucacarey9366
      @lucacarey9366 25 days ago

      Nothing would be worse for the powers that be than if people had the ability to sit and think about things and the way the world is structured. That's why, even when we're dealing with technology that - though obviously imperfect - has gone from making meme content for yuks to seriously giving every category of thinking professional a run for their money in just a few short years, "cooler heads" must remind us it's not actually a big deal.

    • @englishsteve1465
      @englishsteve1465 11 days ago

      @@lucacarey9366 But we do have that ability. What we might not have is enough information to formulate a coherent "picture" and enough experience to tell good info from bad, or worse, the deliberately misleading. We also need enough honesty to recognise that what we don't fully understand is likely to change that "picture" enormously.

  • @NickRobinson-ri4hu
    @NickRobinson-ri4hu 22 days ago +1

    Look at where this guy is sat. Middle of the room, white drop background. This is nothing more than a dramatic take. If you actually listen to what is being said there is nothing particularly new. AI will get better, simple as that.

  • @SciTechVault
    @SciTechVault 26 days ago +14

    Nice advertisement. I am impressed. However, I also need to mention that even predictive AI can (and does) go wrong. This is what I don't like about marketing. People only showcase the benefits of the product/service they are trying to sell while conveniently ignoring its limitations. Not fair. Unethical.

    • @EricSiegelPredicts
      @EricSiegelPredicts 25 days ago +2

      Predictive AI is (only) better than guessing -- much better. That's valuable.

    • @ruanvermeulen7594
      @ruanvermeulen7594 23 days ago +1

      I hear what you're saying. However, predictive AI is indeed more promising to me on the basis of how it works fundamentally. Predictive AI produces wrong output, too, but it is actually working with something closer to the real problem; i.e. it's not just playing with words. But maybe we need to look further than predictive AI, too (hence avoiding the marketing pitfall). But generative AI is not all-powerful, and its limitations are being realised at last.
      Take a look at Sabine Hossenfelder's video here (I think she is spot on).

    • @EricSiegelPredicts
      @EricSiegelPredicts 22 days ago +1

      I'm the guy in the video. I do always work very diligently to be forthcoming about its limited ability to predict -- not like a magic crystal ball, but generally only better than guessing, which is more than sufficient to be valuable for most use cases. I call this The Prediction Effect (introduced in my earlier book, "Predictive Analytics").

    • @ruanvermeulen7594
      @ruanvermeulen7594 21 days ago +1

      @@EricSiegelPredicts Apologies if I came across as also saying you're just doing marketing. I just meant to say that, recognizing that anyone giving a talk would naturally like to promote their brand as well, I can still point out that there is more to what you said than just spreading your brand.
      I like to hear this coming from experts like you, because it looks like too many experts are still too invested in generative (especially LLMs). Although I found it fascinating to see how far they got with LLMs I think we can now see that these things have limits that are not going to be overcome by larger datasets, finetuning and agentic designs including mostly LLMs.

  • @burnindownthehouse
    @burnindownthehouse 7 days ago

    AI has progressed much faster than computer scientists who design it have predicted. So whenever you see a computer scientist say, "We're many years away from AI being ______," remember their predictions have always lagged behind the real progress of AI. AI will become self-aware much faster than we think it will. We will reach the singularity soon.

  • @olafsigursons
    @olafsigursons 27 days ago +27

    It's like saying in 1996 that the internet is not what we were promised. It's just starting.

    • @JohannPascual
      @JohannPascual 27 days ago +4

      Yeah. This video is kinda stupid.

    • @DoctorMandible
      @DoctorMandible 27 days ago +6

      Except that LLMs existed before 1996 and predate the internet by decades. ELIZA was in 1966! Markov models are older than you seem to think.

    • @jmg9509
      @jmg9509 26 days ago +1

      Yes, but the internet had and has a pivotal role in why AI is finally at the level it is at today. It wouldn’t be possible without the internet + sufficient time for sharing trillions of data points, because machine learning required massive amounts of data (which the internet finally provided) to train on. In other words, the massive amounts of data were the main thing missing in the +/- 90s.

    • @ultramegax
      @ultramegax 26 days ago +2

      @@DoctorMandible While ELIZA was incredibly impressive for the time, comparing current LLMs to precursors decades old makes little sense. ELIZA did not make use of reinforcement learning, neural nets, etc.

    • @matheussanthiago9685
      @matheussanthiago9685 24 days ago

      Or saying in 2014 that VR is never going to be the next smartphone
      Or it's like saying in 2021 that the metaverse is not really going anywhere
      Or saying in 2022 that blockchain and NFTs are useless and worthless in 97% of cases
      Or saying in 2010 that Elizabeth Holmes was full of Shit
      Or saying that Elon Musk is a conman and a vaporware salesman in any given year
      Or....

  • @rishidixit7939
    @rishidixit7939 23 days ago +1

    Well, Generative AI is also predicting the next word. But yes, the points are valid.
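
To make "predicting the next word" concrete, here is a toy bigram sketch (an illustration only - a real LLM learns a probability distribution with a neural network over long contexts, not raw pair counts; the corpus and function names here are made up for the example):

```python
from collections import Counter, defaultdict

# Toy "predict the next word": count bigrams in a tiny corpus and always
# predict the most frequent word seen immediately after the given word.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Most frequent word observed immediately after `word`."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # -> cat ("cat" follows "the" twice, others once)
```

The training objective of an LLM is the same next-token task; the difference is that the raw counts are replaced by a learned model that generalizes to contexts it has never seen.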

  • @jaysonp9426
    @jaysonp9426 27 days ago +21

    Lol, I work with AI all day every day. Good luck calling it hype

    • @rahulbhatia4775
      @rahulbhatia4775 26 days ago +12

      Bro I use ai all the time too and it's useless in most cases. Gives basic answers to complex questions. All its responses are what people have posted on the internet anyway. It's nowhere near the capability of humans. It is a good assistant tho cause it's free.

    • @jaysonp9426
      @jaysonp9426 26 days ago

      @@rahulbhatia4775 lol, if you're using it for free then you just invalidated everything you said. Like I said, good luck 👍

    • @user-ru4wv6fr4t
      @user-ru4wv6fr4t 25 days ago

      it's all about how u use it

    • @asaddat87
      @asaddat87 25 days ago +1

      I think this is elaborating on the differences between generative vs predictive AI. Even if you use AI every day, there are limits of generative AI that you might not know unless you are a hardcore developer. On the other hand, predictive AI has tangible applications in industry which promise to make our industrial endeavours more efficient. Nobody is saying AI is hype. They are saying generative AI is hype while predictive AI is more pragmatic.

    • @jaysonp9426
      @jaysonp9426 25 days ago +1

      @@asaddat87 and they'd still be wrong. They're saying generative AI is hype because they think ChatGPT is generative AI the same way people thought the light bulb was electricity. ChatGPT is a demo for one use case. Generative AI is electricity, not a light bulb. The people who are upset are saying "I want a light bulb to wash my clothes for me" instead of building a machine that uses electricity to wash their clothes for them.

  • @Vysair
    @Vysair 24 days ago +2

    We've only just unlocked the AI tech tree, similarly to when we first messed with nuclear energy.

  • @brianmelendy1194
    @brianmelendy1194 27 days ago +5

    AI is not your friend. The only people to benefit from it are CEOs & techies.

    • @katehamilton7240
      @katehamilton7240 21 days ago

      Relax. Computers are limited because maths is limited. There are also physical limits (entropy, energy). AGI is a pipe dream.

  • @tanyabodrova9947
    @tanyabodrova9947 14 days ago +1

    It's such a relief when the annoying music stops - but then it starts again.

  • @TheGuggo
    @TheGuggo 25 days ago +11

    The most detailed example of what predictive AI can do is about a large corporation making bigger profits.
    It all boils down to making rich people richer.

    • @NealBurkard-ut1oo
      @NealBurkard-ut1oo 22 days ago

      That's what people hear. It's much harder to quantify in terms of safety, traffic patterns, etc. Plus the savings can be passed through to the customer, which may give them a larger market share.

    • @clray123
      @clray123 14 days ago

      Like most of human activity overall

  • @JSDudeca
    @JSDudeca 21 days ago +1

    The biggest predictive AI company in the world? Tesla. Albeit, highly specialized but still the biggest.

  • @bujin5455
    @bujin5455 23 days ago +4

    That whole UPS story is what we've been using with superscalar processors since the 1990s, where we predispatch instructions to the CPU for processing, in a process called "pipelining." This requires branch prediction, where we have to guess what the next most likely instruction call is going to be, so that we can already have the instruction staged and processing by the time the calling instruction has finished executing. It's funny how these techniques get applied more widely, and all of a sudden it's novel again. lol
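
For readers unfamiliar with the branch prediction mentioned above, here is a minimal sketch of the classic 2-bit saturating-counter predictor (a textbook scheme; real CPUs layer far more sophisticated history-based predictors on top, so this is only a conceptual illustration):

```python
# 2-bit saturating counter: states 0-1 predict "not taken", 2-3 predict
# "taken". Each actual outcome nudges the counter one step, so a single
# surprise (e.g. a loop exit) doesn't immediately flip the prediction.
class TwoBitPredictor:
    def __init__(self):
        self.state = 2  # start weakly "taken"

    def predict(self):
        return self.state >= 2

    def update(self, taken):
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

# A loop branch: taken 9 times, then not taken once, repeated 3 times.
p = TwoBitPredictor()
outcomes = ([True] * 9 + [False]) * 3
hits = 0
for taken in outcomes:
    hits += (p.predict() == taken)  # guess first...
    p.update(taken)                 # ...then learn the real outcome
print(f"{hits}/{len(outcomes)} branches predicted correctly")  # 27/30
```

The point of the comparison holds: like the UPS example, the predictor only needs to be right most of the time for the speculative work to pay off on average.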

  • @bizsmartworld6137
    @bizsmartworld6137 23 days ago

    The term "AI" is just for marketing.
    Smart Software, Code, Scripts, Algorithms, and Prediction software are not new. Google is a good example.

  • @marklapis7569
    @marklapis7569 27 days ago +11

    Well Eric, I really hope you're wrong because I've bet my entire future on AI completely transforming the landscape soon. Dropped out of college late last year for software engineering because the job market is tough and most of what I'm learning should be automated soon, but have been waiting for the new normal of AGI to either resume or change career paths once everything stabilizes. I thought it was the right decision at the time but I'm kind of drowning in debt and can't hold out much longer like this...

    • @Niiwastaken
      @Niiwastaken 27 days ago

      Definitely made the decision too early but honestly you're not wrong

    • @Rudzani
      @Rudzani 27 days ago +4

      He probably is wrong, but he’s also incentivised to be wrong.

    • @larsfaye292
      @larsfaye292 27 days ago +13

      You completely destroyed your career and life over probabilistic plagiarism algorithms. It's like the Darwin Awards for careers. Well, more work for those of us that remain in the industry (which isn't going ANYWHERE). You truly lost the plot to make such a ridiculous decision over all this obvious hype.

    • @marklapis7569
      @marklapis7569 27 days ago +4

      @@larsfaye292 You can't really deny the tech industry is changing rapidly with the advent of AI tools, and they're getting better and better. So my plan was to give it a few years for everything to stabilize. It's temporary, I'll try to go back later (which will pause my student loan repayment) and reconsider my career path. I'm not that hopeless, hopefully.

    • @WhatIsRealAnymore
      @WhatIsRealAnymore 27 days ago

      ​@@marklapis7569don't listen to Lars. Like everyone else on the internet and in the real world he doesn't know what he doesn't know. LLM are about 2 years away (when memory and agency is introduced) from replacing most software work and most other work really once loaded into robust robotic mediums. There is absolutely guaranteed to be almost no work in most fields. Everyone studying today is wasting their time considerably. So well done on making an informed decision. No one will be able to pay student loans, home loans or any other finance tools. The entire earth's economy will collapse unless a large basic income is quickly distributed to avoid chaos and the end of our modern world. So studying today is a huge waste of energy and time. Rather go into the trade school work line so long as you can learn it quickly and get earning money as you wait for all this to play out. I do agree with Lars in that I think you might have pulled out a bit early. But who am I to say really. Please do something with your time as I say. The worst thing you can do is sit idle. Much love from sunny Cape town. ❤

  • @TimothyCollins
    @TimothyCollins 20 days ago

    Well, of course not. Generative AI simply takes what we have already said, remixes it and says it back to us. If anything it's an enhanced echo chamber. It's probably the most dangerous thing ever invented, just not for the reasons some people think when they hear "AI". We are just having things repeated back to us, and that is dangerous since we aren't seeing new ideas.

  • @sapphyrus
    @sapphyrus 27 days ago +17

    Person in 1905: "Wright Brothers' plane isn't the panacea we hoped for."
    Way too early to write it off, the rate of improvement can open up new possibilities.

    • @liwyatan
      @liwyatan 27 days ago

      The Wright brothers' plane flew. We don't know what "natural" intelligence is.
      I like to tackle problems in terms of energy and efficiency. In the past few years we have learned that our brain "runs" between 4,000 and 5,000 models. These models are "similar" to LLMs, with the exception that they are able to train themselves constantly, and they are bigger and far more complex. We do know that these models are not what makes us conscious. So it seems they are used for more "mundane" tasks (just look at how many trillions of cells live in our body). Our brain has to spend a lot of time and energy making us work - a task so complex that we have "other brains" as subsidiaries in other parts of our body. Returning to consciousness: it's incredibly complex (some theories say that our brains emulate quantum processes at room temperature using nanostructures in our neurons).
      To train one LLM, we use computers that consume around 100,000 kWh. They have to run for days. Our brain does this for thousands of more complex models all the time. It runs our consciousness and all of it using 20 W... what a MacBook Pro uses when it is doing nearly nothing...
      So, that's how far away we are from AI/AGI, whatever you wanna call it. It's, optimistically, hundreds of years away.
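
Taking the comment's own figures at face value (100,000 kWh per training run and a 20 W brain - both the commenter's estimates, not verified numbers), the energy gap is easy to check with a few lines of arithmetic:

```python
# Quick arithmetic on the figures claimed in the comment above.
training_energy_kwh = 100_000  # commenter's estimate for training one LLM
brain_power_kw = 0.020         # ~20 W human brain, expressed in kilowatts

hours = training_energy_kwh / brain_power_kw  # hours for a brain to use the same energy
years = hours / 24 / 365.25
print(f"~{years:.0f} years of brain-time per training run")  # ~570 years
```

In other words, under these assumptions one training run consumes roughly what a human brain uses in about 570 years of continuous operation.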

    • @dewithx
      @dewithx 23 days ago

      Don't extrapolate from one data point; that's dumb. Nobody knows where or when the next real advancements toward AGI will come.
      Maybe one day we'll find the cure for cancer, but that could happen in 5 years or 50.
      Real progress in any field is slow, non-linear and should not be taken for granted.

    • @tom_verlaine_again
      @tom_verlaine_again 22 days ago +5

      That's because their invention was useless in practice. Santos Dumont's one, however, took off (sorry I just had to).

  • @mattm597
    @mattm597 21 days ago +1

    A.I. = The same old computer technology from 40 years ago--just a little more sophisticated.
    (REAL A.I. does not exist yet, and it's uncertain if it is even possible.)

  • @7TheWhiteWolf
    @7TheWhiteWolf 27 days ago +17

    The problem is everyone wanted AGI and we wound up with image and text generators over saturating the internet. LLMs and Diffusion Models aren’t AGI and it’s silly to think so.
    The thing is marketing has to spin it so that they get more investors so they have to keep the hype going, it’s all about presentation.

    • @bsmithhammer
      @bsmithhammer 27 days ago +1

      People often want what they barely understand.

    • @darkevilbunnyrabbit
      @darkevilbunnyrabbit 27 days ago +3

      People have such short attention spans, AI has only been in the public eye for 2 years or so with genAI and people already demand AGI and dismiss all AI products as 'hype'. It speaks volumes about the actual rate of progress when expectations are this ludicrous that people are mad if AI isn't some omniscient societal paradigm shift that develops overnight.
      Things like generative music, video, images and personal assistants are decade-defining phenomena in themselves and they're still in their infancy. Hype is a silly word to describe such amazing inventions especially when uses in Biotech/healthcare are already pronounced (Alphafold 3 for instance).

    • @darkevilbunnyrabbit
      @darkevilbunnyrabbit 27 days ago +4

      No one claims these things are AGI, something doesn't have to be an AGI to be disruptive. A lot of current jobs can be automated without general intelligence.

    • @7TheWhiteWolf
      @7TheWhiteWolf 27 days ago +2

      @@darkevilbunnyrabbit Yeah, it’s been 2 years, and the internet has already been turned to crap and slop from gAI.
      It’s not AI, it’s a fake imitation chat bot.

    • @squamish4244
      @squamish4244 26 days ago

      @@darkevilbunnyrabbit All of these people have been careful to avoid taking cheap shots at AlphaFold (which is not generative AI), ESM, RoseTTaFold, or generative AI used in searching through research papers. They're too much of a net good, and anyway attacking medical AI would make you look like a terrible person.

  • @512Squared
    @512Squared 24 days ago

    From my own understanding, this is my list of AI limitations that people should be aware of:
    - AI cannot yet serve as true researchers: AI cannot independently set research goals, follow multi-step plans, or obtain and analyze data to answer specific research questions.
    - AI cannot yet self-improve: While AI can generate programs and could theoretically assist in improving its own cognitive architectures, it is not yet capable of autonomously enhancing its capabilities or collaborating in the development of AI alongside humans. This is an active area of ongoing research.
    - AI can apply a theory but cannot invent one: AI can extend and apply existing theories but lacks the capacity to create fundamentally new concepts or theories.
    - AI can generate a question, but it would not know if it was an interesting question: AI can be prompted to generate questions, but it lacks the awareness or understanding to determine the significance or relevance of those questions.
    - AI can recombine elements in novel ways: AI can combine existing ideas in ways that may seem new or innovative, but these are still based on pre-existing knowledge.
    - The map is not the territory: AI maps patterns found in written materials produced by humans but does not understand the underlying reality those patterns represent.

  • @zephyr4813
    @zephyr4813 27 days ago +62

    I think this video will be good comedy in 10-20 years

    • @JohannPascual
      @JohannPascual 27 days ago +15

      Try 3-5 years.

    • @zephyr4813
      @zephyr4813 27 days ago +6

      @@JohannPascual i hope so. That would be an even more exciting pace

    • @jayhu2296
      @jayhu2296 26 days ago +7

      it already is tho?

    • @hitesh6245
      @hitesh6245 26 days ago +7

      Let's see about that. Maybe it will humble a lot of CEOs (sorry if i offend anyone by saying this).

    • @Pratim-z7l
      @Pratim-z7l 25 days ago

      ​@hitesh6245 you don't have to say sorry mate

  • @michaelbindner9883
    @michaelbindner9883 22 days ago

    Generative AI with big data may emulate extraverted intuition and extraverted thinking, but it lacks the balance of sensory or the introverted intuition that works well with extraverted thinking. Introverted thinking also can balance extraverted thinking.
    Without sensory and feeling/values, AI is not cognitively complete or safe or ready for prime time.

  • @kure7586
    @kure7586 27 days ago +8

    That is really, really old news. What's going to be next? Don't tell me the Titanic actually isn't unsinkable! 😂

    • @clray123
      @clray123 14 days ago

      I think they sent him out as a means of "damage reduction"... maybe they're preparing to pull the rug from below the AI stock market...

  • @wadecodez
    @wadecodez 15 days ago

    My current problem with GPTs is the side effects of heavy use or misuse. The systems are designed to vomit data. Nothing else. There is no way to avoid the projectile data. It is mind numbing. There is a breaking point.
    So IMO there needs to be a way to change the conversation dynamic. Let the model ask me questions and I will be the one to dump thoughts. It's symbiotic. Whatever is prompting is training.
    If this becomes a feature, what happens when the human stops responding? Does AI behave like a well trained house pet or is this what causes all of humanity to become enslaved? Will Iron Man be able to save us?

  • @nlbm
    @nlbm Před 16 dny +1

    It’s useful to hear another voice amidst all the hype.

  • @Xtensionwire
    @Xtensionwire Před 26 dny +3

    "Hype = mismanaged expectations"
    Thank you for this.

    • @Kylo27
      @Kylo27 Před 25 dny

      lolwut… that's not what hype means at all...

  • @animeshbhatt3383
    @animeshbhatt3383 Před 25 dny +1

    So who is saying AI will replace everything? It's the folks from Microsoft, Nvidia, Amazon, ... They are actually pitching their own AI-based products.

  • @npaulp
    @npaulp Před 23 dny +10

    Generative AI represents far more than just a glorified chatbot prone to hallucinations. It marks a significant breakthrough in AI research. While it's true that the technology has been somewhat overhyped (common with any groundbreaking innovation), it undeniably opens up new possibilities. The debate among AI researchers regarding whether generative AI can eventually lead to Artificial General Intelligence continues, and only time will reveal the truth of this potential. However, from my vantage point, the prospects are incredibly promising.
    Generative AI appears to have uncovered a mechanism that mirrors the way the human brain operates more closely than previous AI technologies. While earlier AI milestones (chess-playing machines, IBM's Watson, self-driving cars, and virtual assistants like Alexa) were noteworthy, generative AI taps into something far more profound. This new frontier may well be the "panacea" that has long been anticipated in the realm of AI, and I remain optimistic about its future.

    • @katehamilton7240
      @katehamilton7240 Před 21 dnem +4

      Computers are limited because maths is limited. There are also physical limits (Entropy, energy) AGI is a pipe dream.

    • @npaulp
      @npaulp Před 21 dnem

      @@katehamilton7240 Your assertion that "maths is limited" doesn't quite apply here. Generative AI, particularly neural networks, operates in ways that aren't directly constrained by the mathematical limitations you suggest. While energy consumption presents real challenges, ongoing advancements in alternative energy sources offer promising solutions for the future. Regarding AGI being a "pipe dream," this perspective seems overly pessimistic, especially in light of the remarkable strides made in just the last few years. The progress we've seen in AI, developments that were almost unimaginable a decade ago, indicates that we've only begun to tap into its potential.

    • @LeonardoMarquesdeSouza
      @LeonardoMarquesdeSouza Před 21 dnem +1

      AGI is a dream; there are a lot of problems to solve first. Generative AI doesn't really create anything today; in fact, no generative AI can truly learn, and that's ONE problem to solve.

    • @unityman3133
      @unityman3133 Před 19 dny

      @@katehamilton7240 what do you think the human brain operates on? fairy dust and jesus energy?

  • @siriusfeline
    @siriusfeline Před 24 dny +1

    BUT, predictive AI/machine learning can ONLY ever be logical in its derivations. It can NEVER be intuitive, which is a very different reality when sensing into something and determining where it is headed, what might happen and what might be needed. I'll bet anything, most people reading this will have NO idea what this difference is, including the guy narrating the video.

    • @katehamilton7240
      @katehamilton7240 Před 21 dnem

      Computers are limited because maths is limited. There are also physical limits (Entropy, energy) AGI is a pipe dream.

  • @wastedaga1n
    @wastedaga1n Před 27 dny +8

    Eric Siegel forgot to mention he is Steve Martin's first son.

    • @mr.c2485
      @mr.c2485 Před 27 dny

      Love child ❤

    • @pmg6665
      @pmg6665 Před 25 dny

      Haha, I was hoping someone would mention that 😂

    • @rowanans
      @rowanans Před 25 dny

      oh really? Steve Martin of Only Murders in the Building?

  • @goodfellas5702
    @goodfellas5702 Před 7 dny

    Anyone else think it's completely off the wall to shoot a video in a beautiful room, then use a cyc as your backdrop, only to show the entire setup? What useful purpose is the cyc serving in this video? I'm clearly missing something.....

  • @bsmithhammer
    @bsmithhammer Před 27 dny +6

    Agreed. There are a lot of very fundamental misunderstandings about what GAI is, and just as importantly, what it isn't.
    And in general, anytime it seems like you're being offered a 'panacea,' be suspicious.

  • @jichaelmorgan3796
    @jichaelmorgan3796 Před 12 dny

    They are similar to junior workers. You just have to get an idea of what they can handle while making a minimum amount of mistakes and work with that, which will improve over time. It will be your responsibility to deal with "machine error" rather than human error.

  • @PlatoCave
    @PlatoCave Před 27 dny +8

    AI = Affordable Idiocy

  • @tansiewbee4292
    @tansiewbee4292 Před 23 dny

    Einstein said a long time ago that
    " there is a race between mankind and the universe.
    Mankind is trying to build bigger, better, faster and more foolproof machines.
    The universe is trying to build bigger, better, and faster fools.
    So far, the universe is winning". 😊😊😊

  • @user-tx9zg5mz5p
    @user-tx9zg5mz5p Před 25 dny +3

    Gemini couldn't figure out military time conversion. 😂

  • @christianmmatthews
    @christianmmatthews Před 20 dny

    The second I realized this guy was full of BS was when he said that this only works on the per-word level. That's not how a GPT works at all; that's how an RNN works. A GPT looks at the entire passage you sent, in parallel, to understand the whole thing. An attention mechanism like that is literally what sets an LLM apart from older techniques.

  • @paulocacella
    @paulocacella Před 24 dny +5

    The question is WHO is getting this efficiency gain. I've not observed a lowering in price of shipping. This kind of efficiency gain is USELESS for general public. The problem is WHO is getting the money.

  • @naturalyogi
    @naturalyogi Před 21 dnem

    WOPR was an AI that could control the world. This disproves his theory.

  • @hulqen
    @hulqen Před 26 dny +3

    A question that hits me over and over again, especially when I look at AI-generated images, is this: will the technology powering AI right now (i.e. LLMs) lead to something like 99.5+% accuracy, so that we can indeed trust it to do things like medical analysis, autonomous driving, etc., or is the technology in itself flawed and will only lead to a dead end?

    • @EricSiegelPredicts
      @EricSiegelPredicts Před 25 dny +3

      I'm the guy in the video. For certain limited domains, I think it can. Or rather, I believe it can be that accurate on an 80% portion selected by a predictive model (a hybrid).

    • @madalinradion
      @madalinradion Před 24 dny

      It will probably get to around that accuracy with new models and more compute thrown at them; ChatGPT-4 is already at 90% accuracy in some tests.

    • @JohnDoe-my5ip
      @JohnDoe-my5ip Před 21 dnem

      Autonomous driving has absolutely nothing in common with generative AI. It is a traditional search-based AI problem.
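
The hybrid idea raised in this thread, a predictive model deciding which portion of cases a system is allowed to handle automatically, can be sketched in a few lines. The confidence scores and the 0.8 threshold below are invented for illustration, not taken from any real system:

```python
# Hybrid routing sketch: a predictive model scores each case, and only
# high-confidence cases are automated; the rest go to a human.
# The scores and the 0.8 threshold are made-up example values.

def route(cases, threshold=0.8):
    automated, escalated = [], []
    for case_id, confidence in cases:
        (automated if confidence >= threshold else escalated).append(case_id)
    return automated, escalated

cases = [("a", 0.95), ("b", 0.62), ("c", 0.88), ("d", 0.41), ("e", 0.99)]
auto, human = route(cases)  # auto -> ["a", "c", "e"], human -> ["b", "d"]
```

Raising the threshold shrinks the automated portion but raises its accuracy, which is one way to read the "accurate on an 80% portion" claim.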

  • @edtyler6444
    @edtyler6444 Před 38 minutami

    Why is he looking to his left when the camera is directly in front of him?

  • @misterfunnybones
    @misterfunnybones Před 27 dny +6

    Pump & dump.

  • @jamesb2059
    @jamesb2059 Před 27 dny +1

    Excellent. Thank you. I feel I now understand some of the issues much better than I did.

  • @AMOCapital
    @AMOCapital Před 27 dny +8

    I mean AI is still new ,so let's give it some time and see 🤷‍♂️

    • @jettrink5810
      @jettrink5810 Před 27 dny

      AI is not new. It has been around since the 1950s, believe it or not.

    • @esdeath89
      @esdeath89 Před 27 dny

      ​@@jettrink5810 It wasn't the technology. Back then, computers were too slow to make AI possible, and even now computers are still too slow for true AGI. I think we will achieve true AGI in the next century.

    • @darkevilbunnyrabbit
      @darkevilbunnyrabbit Před 27 dny

      Current architectures are new, the technology itself is not.

  • @keffbarn
    @keffbarn Před 23 dny

    Predictive and generative AI aren't mutually exclusive; they can be used together. Starting off with that notion is disingenuous…

    • @EricSiegelPredicts
      @EricSiegelPredicts Před 22 dny

      I’m the guy in the video and I agree (except for the disingenuous part). You misread…

  • @MaetelL111
    @MaetelL111 Před 27 dny +4

    This is so spot on, but I’m also glad. These things take time to develop/be developed. You can’t just throw a lot of knowledge into it and expect it to emerge as a human-level intelligent, conscious being. There’s so much more to us than knowledge. There’s reasoning, morality, consciousness, memory, past experience, emotion, chemical receptors, gut feeling, etc. All of this results in our abilities, not just knowledge, especially impartial information. My main issue is having “beings” like this given abilities to kill and maim without that progress. If anything, that could be worse than a true AGI model with such abilities, as a lack of self-progress and ignorance are often what bring out the worst in people, so why not machines? Unfortunately, not everyone investing is doing so because they want a being or beings that can tidy the house and do math homework with their kids. They’re interested in the warfare aspect, as they have been in swords, guns, explosives, drones, bombs, and the like for millennia.

    • @larsfaye292
      @larsfaye292 Před 27 dny +2

      You're spot on. "Intelligence" isn't just billions of parameters and a sea of GPUs running algorithms over it.

    • @gurlakthedestroyer
      @gurlakthedestroyer Před 27 dny +1

      Naaa, people mainly do it for money...... Human greed is about an 80% driving factor (don't ask me how I estimated that amount 😀)

    • @MaetelL111
      @MaetelL111 Před 27 dny +1

      @@gurlakthedestroyer yeah, so tell me, how much is the weapons industry worth? A lot, especially right now with so many wars being fought. Fighting wars is also the ultimate form of greed, as it is usually waged to take land, power, and resources from others, often wrongfully or selfishly, but sometimes also in self defense against those who wage war, doubling the profits for arms sales by necessitating the use of similar arms usage on both sides. And don’t tell me corporations haven’t sold out their tech for the purpose of warfare even if they generally produce non-arms products for general use. It doesn’t necessarily need to be a weapon, either, it could be Starlink or AI used to create propaganda at faster speeds than ever. I couldn’t even tell an AI image of a person the other day from a real photo. Was shocked to see it was labeled AI after. Like I said, I’m less worried about AI replacing humans in the work force (it was supposed to replace me, and it just can’t at present, though it can speed up my job, making me more productive) than it being used for warfare.

    • @MaetelL111
      @MaetelL111 Před 27 dny +1

      @@larsfaye292 Yeah, especially when those machines are driving up emissions for no critical or essential purpose.

  • @afterthesmash
    @afterthesmash Před 5 dny

    The correct place for this to start was to look at the aspects of life which demand this level of quantified performance, and which areas do not. Quite a lot of life does not demand a high level of quantified performance, because quite a lot of modern life is tedious ditch digging.

  • @BenNixon32
    @BenNixon32 Před 21 dnem +4

    The ChatGPT 4o model is RARELY wrong in my experience. This is straight up bias.

    • @stefanandersson7519
      @stefanandersson7519 Před 20 dny +2

      I dunno, man - I have a subscription for it via my work, and I've used it many times for my work, and every single time I have to spend about as much time describing the task for it, and correcting the mistakes, as it would've taken me to just do it myself. Usually I just give up after a while and start over on my own. I've also tried using it to just spitball ideas for naming things, which... can be good, at least if it gets many attempts, but most of the time it just gives me the most generic, boring ideas that feel like they were clipped from a Buzzfeed listicle

    • @VivekPayasi
      @VivekPayasi Před 19 dny +1

      @@stefanandersson7519 True, I have seen the same to the point of frustration

    • @EricSiegelPredicts
      @EricSiegelPredicts Před 16 dny +1

      I’m the guy in the video. I'm struck by seeing several comments like this accusing me of bias (i.e., ulterior motives). I'm used to being an educator who's trusted, so... here's the thing: I’m not saying that predictive AI should get at least as much attention as generative AI because I’m personally more invested in predictive AI - it’s the other way around!

  • @Stuharris
    @Stuharris Před 9 dny

    When I first heard about the predictive AI functionality, my first thought was earthquakes: can we feed a predictive model all of the seismic data we've collected, have it analyze the readings, then hook it up to receive all the new incoming live data, allowing it to be cross-referenced with the past data in real time? It would then put together 'reports' consisting of anomalies and activity patterns that may likely precede earthquakes, possibly even being able to predict severity levels. I don't think this is going to be a months-or-years prediction, but days to hours, which when talking about an earthquake could potentially save hundreds of thousands of lives.

  • @handlesshouldntdefaulttonames

    I think it's insane how basically everyone is like "this is probably a bad idea" and we're still just letting them do this.

    • @mr.c2485
      @mr.c2485 Před 27 dny

      Sort of like splitting the atom. I don’t remember voting on that..😮

    • @mr.c2485
      @mr.c2485 Před 27 dny

      I know right? Kind of like splitting the atom. I don’t remember voting on that.
      Wait until CERN does its thing. Makes splitting atoms look like child's play.

  • @yuvrajsingh-gm6zk
    @yuvrajsingh-gm6zk Před 23 dny +1

    I really loved the reference to Ex Machina over Terminator in the video (cause in my opinion AVA is just a far more sophisticated AI than anything known to Hollywood!)

  • @NikoKun
    @NikoKun Před 27 dny +7

    I know enough about what's going on in AI that I really have to disagree with this guy. I think he's missing the bigger picture, and oversimplifying how LLMs and generative AI work, in order to appease the doubters and skeptics and to push his own way of doing things. He's exaggerating the concept of "hallucinations" and creating a false premise; there is no such thing as "seeming to understand". To appear to understand IS understanding. To be able to predict the next word and converse with humans on complex topics, everything that could occur in that conversation must be understood. Progress won't be linear; we merely need to create an AI agent capable of convincingly working on problems at the level of an AI software engineer, and then future improvements will come much quicker.

    • @Instaloffle
      @Instaloffle Před 27 dny +2

      I'll agree his explanations could be better, and it kinda feels like he might be appealing to skeptics, as you say. But...
      As an actual software engineer with some basic experience actually working with machine learning & AI programs... you're really misunderstanding how LLMs work, and I don't blame you. You're falling into a very common trap that our brains set for us: we seek signs of language, communication, and intent constantly, as a valuable evolutionary trait. The downside is sometimes we project.
      When we see text from an LLM, our brain can't help but seek intentionality and meaning behind the words, but there really isn't any. It's a stochastic parrot.
      "Appearing to understand is understanding" / "Everything... In that conversation must be understood"
      This here is the tricky part. Parrots don't need to understand to repeat something back. Likewise, programs don't need to understand to reproduce consistent and structured patterns of words. A calculator could tell you to cut a TV in half to split it between two people (1/2=0.5), meanwhile an AI can suggest adding glue to pizza because it has no idea what those things are.
      It could say "The dinosaurs were wiped out in 1949 by a meteor, in a flash that could be seen from New York to Tokyo."
      This is a completely valid line to an LLM because its primary goal is syntactically & grammatically correct English. But to us it's very obviously a ridiculous statement.
      This is the root flaw of hallucinations: the stochastic nature is the LLM's greatest strength and the cause of hallucinations. Not a bug, but a fundamental aspect of its architecture. We honestly shouldn't call them "hallucinations", because that only further convinces people that the model would otherwise "understand what it is saying".
      LLMs lack comprehension, intention, and understanding. They never truly know what they are talking about. That doesn't mean their ability to calculate patterns of words isn't an insanely valuable and cool tool.
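
To make the "stochastic" part of that argument concrete, here is a toy sketch of sampling from a next-token distribution. All of the probabilities are invented for illustration and do not come from any real model:

```python
import random

# Invented next-token probabilities for the prefix
# "The dinosaurs were wiped out" -- illustrative numbers only.
next_token_probs = {
    "by": 0.55,        # likely leads somewhere sensible
    "in": 0.25,        # can lead to a fluent but false date
    "around": 0.15,
    "yesterday": 0.05, # grammatical, absurd -- and still sampleable
}

def sample_next(probs, rng):
    # Weighted random draw: the model samples, it doesn't "decide".
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(42)
samples = [sample_next(next_token_probs, rng) for _ in range(1000)]
# Even a 5%-probability continuation shows up regularly over many draws.
```

Lowering the sampling temperature squeezes the distribution toward the top token, which reduces, but does not eliminate, those low-probability continuations.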

    • @NikoKun
      @NikoKun Před 27 dny +3

      @@Instaloffle You're making a lot of assumptions about what I know, and what I've worked with.
      No, none of these things can be described as "stochastic parrots", and frankly, using that term only shows me you're repeating elaborate talking points, rather than giving it the deep level of consideration I have, for almost 4 decades now. That way of thinking about it is nothing more than an attempt to dismiss what you find uncomfortable, by making human intelligence something impossibly special, but yet somehow something that could be "faked", a paradox. The very concepts of stochastic parrots or philosophical zombies, whatever you call it, do not exist outside hypothetical philosophical discussions that are merely used to help us try to understand the nature of our own consciousness. In the real world, they're an impossibility, and make no sense logically.

    • @matheussanthiago9685
      @matheussanthiago9685 Před 24 dny +2

      "trust me bro, one more AI on top of the AI will bring AGI"

    • @NikoKun
      @NikoKun Před 23 dny

      ​@@matheussanthiago9685 That is not the argument I'm making. I'm merely asserting that there is no such thing as faking understanding. For something to demonstrate functional understanding, it effectively MUST understand.
      But, since you bring it up, AI agents do have the ability to check their work, and when configured in groups checking each other, or given a feedback loop on their own output, they can indeed solve complex general tasks. Sadly, implementing that at this stage is still too costly.

  • @bernhardd626
    @bernhardd626 Před 4 dny

    It does NOT work on a "per word level". This is a dubious simplification. Attention is all you need.
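
For readers wondering what "attention over the whole context" means in practice, here is a minimal NumPy sketch of scaled dot-product attention. Dimensions and values are toy choices, purely illustrative:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Every query position scores against ALL key positions at once,
    # so each output token is a weighted mix of the whole context,
    # not a word-by-word chain like a classic RNN.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 tokens, embedding dim 8
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out, w = attention(Q, K, V)  # w[i] spans the entire 4-token context
```

Each row of `w` sums to 1 and distributes that weight across every token in the context, which is the sense in which generation is conditioned on the whole passage even though output is still emitted one token at a time.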

  • @TheShwiggityshwah
    @TheShwiggityshwah Před 27 dny +12

    AI is the new Crypto.

    • @anjijack5392
      @anjijack5392 Před 27 dny +5

      LMAO 🤣 No, no, it's not.

    • @TheShwiggityshwah
      @TheShwiggityshwah Před 27 dny

      @@anjijack5392 yes it is. Lots of power waste. Venture capital throwing money at it with reckless abandon. Lots of fraud. And introducing more problems than it solves.

    • @zane4240
      @zane4240 Před 27 dny

      AI scares people. Crypto makes government shit their pants.

    • @jaredvizzi8723
      @jaredvizzi8723 Před 25 dny

      Except there is tangible value to AI that is already making differences in real businesses.

  • @Skyking6976
    @Skyking6976 Před 22 dny

    We bought a Tesla model Y. We’ve had it three weeks now and the auto pilot with AI is incredible. We watch the thing learn. I use it 99% of the time and couldn’t care less if I ever drove again.

  • @3thinking
    @3thinking Před 27 dny +2

    Chances are generative AI will be creating the predictive AI models faster than any of your people could; it's one area where generative AI is strong (machine learning, data science, advanced coding).
    So expect your business to be bust in a few years.

    • @amdenis
      @amdenis Před 27 dny +1

      You are 100% correct. Like Noam Chomsky, Siegel is fairly clueless when it comes to what is happening in DL/NN based AI, and has bet on the wrong horse. This video will not age well at all!

    • @echorises
      @echorises Před 26 dny +1

      I don't see that happening, to be honest. Because if there is one thing generative AI fails at the most, it is coming up with solutions to problems such as "how can we train predictive AIs more efficiently."

    • @matheussanthiago9685
      @matheussanthiago9685 Před 24 dny

      Have you eaten at least one small rock today tho?

  • @djciregethigher
    @djciregethigher Před 11 dny +1

    Predictive AI is where we should invest resources and interest. However, we are too captivated by generative AI for its seemingly magical capabilities.
    This is akin to the invention of the internet, and how we were promised great things from being interconnected as a society. However, we ended up with most of the internet being used for porn 😂.
    I'm joking! With that said, predictive AI is also known as traditional machine learning. Generative AI uses relatively newer techniques such as deep learning.
    I agree with the statement that generative AI is great for first drafts.
    For spinning up boilerplate code when trying new paradigms, software, and languages, it's so good…
    What you do from there is on you though!!! It won't take you to the finish line alone!

    • @EricSiegelPredicts
      @EricSiegelPredicts Před 10 dny

      Indeed. Predictive is older -- but not old school! Most of its vast potential remains untapped.

  • @DrPhilby
    @DrPhilby Před 24 dny

    Yes. Hype is generated by those who profit from it....and yet even computer engineers 30 years ago were saying that what we see now wouldn't be available for another 100 years ....

  • @Walter-gi9bz
    @Walter-gi9bz Před 18 dny

    Unfortunately, while AI is getting smarter, human intelligence seems to go backwards. We have no choice but let machines make decisions for us - now and more so in the future.

  • @chebkhaled1985
    @chebkhaled1985 Před 12 dny

    I'm an expert with a PhD in the domain and I've seen this many times. People just refuse to believe, and search for any excuse, logical or not, to say it is not that good, that it doesn't understand, that it is just "predicting the next word". It's OK to pass through this phase for some time while figuring things out, but insisting, and going on platforms saying things you will regret later, is just cringe.

    • @jafetmorales9941
      @jafetmorales9941 Před 12 dny

      Agree, as if there weren't a thousand different architectures and algorithms to implement machine learning. The human brain is just one.

    • @EricSiegelPredicts
      @EricSiegelPredicts Před 10 dny

      I'm the guy in the video. I don't criticize genAI on the basis that it operates by "predicting the next word." In fact, I don't criticize genAI at all. Rather, I criticize the general hype.

  • @louisifsc
    @louisifsc Před 20 dny

    LLMs are essentially predictive AI for language, they are predicting the next token based on an input of tokens. Generative AI is just a stepping stone on the way to AGI. The techniques and specific technologies are evolving. He's mostly right that generative AI hasn't created that much value so far, but it is a huge unlock necessary before getting to the next major breakthrough.

  • @subhasish661411
    @subhasish661411 Před 27 dny +2

    GenAI is also predictive AI if it is doing next-word prediction. Many pre-transformer genAI models using RNNs or LSTMs did exactly that.

    • @amdenis
      @amdenis Před 27 dny +3

      You are totally correct, and as usual, Eric Siegel is wrong about most things in AI-- and he especially has no clue about the difference in growth and capabilities inherent to DL/NN vs traditional pre-DL Machine Learning that he touts. Sadly, many will believe people like him and completely miss a once in history chance to ride the largest technology wave to ever happen, which will affect virtually all aspects of society, business, and life in general.

    • @mintakan003
      @mintakan003 Před 27 dny +1

      The predictive AI he's describing was the AI we were all doing prior to ChatGPT. Maybe it was just called "machine learning" at the time. Supervised learning. Classification. (Deep neural nets, but also earlier machine-learning "fitting" regression techniques based on simpler math structures, the simplest of which, learned in grade school, is linear regression.) A very narrow task of making a call based on patterns of data. It's still there. It's just that generative AI has now gotten the attention of most people. (And it too is a form of predictive AI, as a hyped-up auto-complete engine.)
      It's self-evident by now that simply scaling up LLMs is not the pathway to AGI. It's a piece of it. It's a step function up in natural language understanding. But its limitations are also becoming self-evident. We're probably missing several more steps.
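
A minimal sketch of that classic "predictive" setup, using synthetic data and ordinary least squares as a stand-in for the whole supervised-learning family:

```python
import numpy as np

# Classic predictive AI: fit a model to labeled examples, then score
# new cases. The data here is synthetic, purely for illustration.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))          # 200 cases, 3 features each
true_w = np.array([2.0, -1.0, 0.5])    # the pattern hidden in the data
y = X @ true_w + rng.normal(scale=0.1, size=200)  # noisy labels

# Ordinary least squares: the grade-school end of the same spectrum
# that deep neural nets sit at the other end of.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

new_case = np.array([1.0, 0.0, -1.0])
prediction = new_case @ w_hat          # one call on one new case
```

Swapping the regression for a deep net changes the model family, not the basic shape of the task: labeled examples in, a single predictive call out.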

    • @amdenis
      @amdenis Před 27 dny

      @@mintakan003 Exactly. He's just mischaracterizing and mislabeling it as "generative" vs. "predictive". There is a lot more beyond scaling happening with the evolution of DL/NN, which is rapidly evolving across the scaling spectrum. I have just eight 8-way H100 servers here at my lab, but I am employing numerous techniques to achieve fairly broad college- to post-doc-level, low-temperature (zero-hallucination) KB systems. Given that we only have a maximum of about 5.2 TB GPU RAM across NVLink/NVConnect fabric, it's nowhere near scaling at all costs. In fact, for several research areas we took 64-96 small (sub-6B-param) LLMs and achieved better results with 100% unsupervised/largely unstructured and semi-supervised synthetic data sets. In any case, as you noted, there are so many areas for improvement using MHT LLMs, and we're all just getting started!

  • @omarelmaghat5050
    @omarelmaghat5050 Před 21 dnem

    Predictive AI has the potential to significantly improve various aspects of life by enabling better decision-making and optimizing operations across sectors like healthcare, finance, retail, transportation, and energy. It can forecast health risks, financial trends, consumer behavior, and more, leading to enhanced efficiency and personalized services. However, it also poses several challenges including privacy concerns, potential biases, dependence on high-quality data, potential job displacement, lack of transparency, and regulatory challenges. These issues highlight the importance of responsible AI development with a focus on ethics, transparency, and protecting individual rights to ensure the benefits of predictive AI are realized without compromising fundamental values.

  • @michaelbindner9883
    @michaelbindner9883 Před 22 dny +1

    UPS has no feelings. They don't air-condition their trucks.

  • @Hadrhune0
    @Hadrhune0 Před 5 dny

    I felt like somebody stole 8:27 minutes of my life. Thank you, Eric, for reminding me of the value of my time. If I were a barely acknowledged stakeholder or client, I wouldn't give you a single dollar.
    It really feels like a rant from somebody who stuck with an old technology because he had no money to invest in researching something new. :)