Computerphile
Has Generative AI Already Peaked? - Computerphile
Bug Byte puzzle here - bit.ly/4bnlcb9 - and apply to Jane Street programs here - bit.ly/3JdtFBZ (episode sponsor). More info in full description below ↓↓↓
A new paper suggests diminishing returns from larger and larger generative AI models. Dr Mike Pound discusses.
The Paper (No "Zero-Shot" Without Exponential Data): arxiv.org/abs/2404.04125
computerphile
computer_phile
This video was filmed and edited by Sean Riley.
Computer Science at the University of Nottingham: bit.ly/nottscomputer
Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharanblog.com
Thank you to Jane Street for their support of this channel. Learn more: www.janestreet.com
Views: 433,555

Video

How Branch Prediction Works in CPUs - Computerphile
48K views • a day ago
How does branch prediction speed up operations? Matt Godbolt continues the deep dive into the inner workings of the CPU computerphile computer_phile This video was filmed and edited by Sean Riley. Computer Science at the University of Nottingham: bit.ly/nottscomputer Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharanblog.com Thank you...
How AI 'Understands' Images (CLIP) - Computerphile
128K views • 14 days ago
With the explosion of AI image generators, AI images are everywhere, but how do they 'know' how to turn text strings into plausible images? Dr Mike Pound expands on his explanation of Diffusion models. computerphile computer_phile This video was filmed and edited by Sean Riley. Computer Science at the University of Nottingham: bit.ly/nottscomputer Computerphile is a sis...
CPU Pipeline - Computerphile
54K views • 21 days ago
How do CPUs make the most efficient use of their compute time? Matt Godbolt takes us through the pipeline - allowing the CPU to do work as many ticks of the system clock as possible! computerphile computer_phile This video was filmed and edited by Sean Riley. Computer Science at the University of Nottingham: bit.ly/nottscomputer Computerphile is a sister project to Brad...
ChatGPT Jailbreak - Computerphile
303K views • a month ago
With Large Language Models becoming used across all areas of computing, security researcher Dr Tim Muller explores how they can be used for all kinds of unintended purposes. computerphile computer_phile This video was filmed and edited by Sean Riley. Computer Science at the University of Nottingham: bit.ly/nottscomputer Computerphile is a sister project to Brady Haran's...
AES GCM (Advanced Encryption Standard in Galois Counter Mode) - Computerphile
75K views • a month ago
Your browser is using this system right now! (at time of typing!) - Dr Mike Pound explains this ubiquitous system! EXTRA BITS with some of the mathematics: czcams.com/video/7OZyHzYFSgI/video.html computerphile computer_phile This video was filmed and edited by Sean Riley. Computer Science at the University of Nottingham: bit.ly/nottscomputer Computerphile is a sister pr...
Wearable Tech Discussed - Computerphile
31K views • a month ago
The field of Human Computer Interaction has been transformed with wearables that are smaller, more powerful and more plentiful. We talked to HCI expert Max Wilson about the use of his, ahem, one ring. computerphile computer_phile This video was filmed and edited by Sean Riley. Computer Science at the University of Nottingham: bit.ly/nottscomputer Computerphile is a sist...
Garbled Circuits - Computerphile
29K views • a month ago
Going hand in hand with Oblivious Transfer is 'Garbled Circuits' - a way of using logic gates to carefully share information. Dr Tim Muller explains. computerphile computer_phile This video was filmed and edited by Sean Riley. Computer Science at the University of Nottingham: bit.ly/nottscomputer Computerphile is a sister project to Brady Haran's Numberphile. More at ww...
3D Gaussian Splatting! - Computerphile
98K views • a month ago
A new technique to turn pictures of a scene into a 3D model is quick, easy and doesn't require that much compute power! Dr Mike Pound and PhD student Lewis Stuart demo and explain. Lewis used this Particle simulation in Unity: GitHub - keijiro/SplatVFX: github.com/keijiro/SplatVFX NeRFStudio is here : docs.nerf.studio/index.html Previous (nerf) video: czcams.com/video/wKsoGiENBHU/video.html fac...
L Systems : Creating Plants from Simple Rules - Computerphile
43K views • 2 months ago
From simple rules, complex 'organisms' can emerge. PhD candidate Zachariah Garby has been studying the papers to find out what it's all about. This was formerly called: Digital Plants (L-Systems) EXTRA BITS: czcams.com/video/oFqbVJm8gw0/video.html Zac's code: bit.ly/C_Zac_L-systems computerphile computer_phile This video was filmed and edited by Sean Riley. Computer Sci...
Coding a Web Server in 25 Lines - Computerphile
316K views • 2 months ago
Just how simple can a web server be? Laurence Tratt, Shopify / Royal Academy of Engineering Research Chair in Language Engineering at Kings College London builds it up. More about Laurie: bit.ly/C_LaurenceTratt computerphile computer_phile This video was filmed and edited by Sean Riley. Computer Science at the University of Nottingham: bit.ly/nottscomputer Computerphile...
Mechanising (Graphical) Mathematical Proofs - Computerphile
24K views • 2 months ago
A graphical problem seems intuitive to a human, but how do you explain something formally to a machine? Dr. Mohammad Abdulaziz, Lecturer in Artificial Intelligence, King's College London This video was initially titled "Mechanizing Mathematical Proofs" computerphile computer_phile This video was filmed and edited by Sean Riley. Computer Science at the University of Nott...
Discussing Digital Twins - Computerphile
34K views • 3 months ago
Digital Twin - a new buzz word or a useful piece of technology? We asked Dr Steffen Zschaler, Reader in Computer Science at Kings College London. #ComputerScience #DigitalTwin #CS #computing computerphile computer_phile This video was filmed and edited by Sean Riley. Computer Science at the University of Nottingham: bit.ly/nottscomputer Computerphile is a sister project...
Python Hash Sets Explained & Demonstrated - Computerphile
102K views • 3 months ago
Featuring Mike Pound. Jane Street skyscraper puzzle (and info on the AMP program) at bit.ly/computerphile-amp More below ↓↓↓ Hash Sets in Python work a little bit like the index of a book, giving you a shortcut to looking for a value in a list. Dr Mike Pound explains how they work and demos with some code. #Python #HashSet #Code #Computerphile Jane Street’s Academy of Math and Programming is no...
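Not the code from the video, just a minimal Python sketch of the idea in the description: a set hashes each value, so a membership test jumps straight to the right "bucket" instead of scanning the whole list.

names_list = ["ada", "grace", "alan", "mike"] * 250_000   # a long list of values
names_set = set(names_list)                               # hashed copy for fast lookups

print("brady" in names_list)   # checks elements one by one: O(n)
print("brady" in names_set)    # hashes "brady" and checks one bucket: O(1) on average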
How CPUs Do Math(s) - Computerphile
57K views • 3 months ago
Matt Godbolt continues the story of the CPU and explains how machines do addition computerphile computer_phile This video was filmed and edited by Sean Riley. Computer Science at the University of Nottingham: bit.ly/nottscomputer Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharanblog.com Thank you to Jane Street for their support of t...
Progress Bars - Computerphile
78K views • 3 months ago
Python Regular Expressions - Computerphile
52K views • 4 months ago
Machine Code Explained - Computerphile
103K views • 4 months ago
What Was Your First Computer Game? (Soundcheck Question 2023) - Computerphile
32K views • 4 months ago
NERFs (No, not that kind) - Computerphile
58K views • 4 months ago
Defining Regular Expressions (RegEx) - Computerphile
83K views • 5 months ago
Optimising Code - Computerphile
138K views • 5 months ago
Bug in Binary Search - Computerphile
281K views • 5 months ago
Defining Cybersecurity with Gene Spafford - Computerphile
29K views • 5 months ago
True Random Numbers - Computerphile
118K views • 6 months ago
Binary Search Algorithm - Computerphile
154K views • 6 months ago
Oblivious Transfer - Computerphile
52K views • 7 months ago
Budget Self-Driving Car - Computerphile
38K views • 7 months ago
How Bzip2 Works (Burrows Wheeler Transform) - Computerphile
48K views • 7 months ago
CMPRSN (Compression Overview) - Computerphile
69K views • 7 months ago

Comments

  • @blahblahsaurus2458
    @blahblahsaurus2458 • 35 minutes ago

    6:20 that is not what "intelligence explosion" usually refers to. An intelligence explosion refers to the point when an AI is able to help create the next AI, leading to a chain reaction of self-improvement and increasing intelligence.

  • @morningsssss2857
    @morningsssss2857 • an hour ago

    more of Gene, please :)

  • @petersmythe6462
    @petersmythe6462 • 2 hours ago

    I think you could use a relatively small model as a logical deduction engine for general problem-solving tasks rather than having it know the answer innately by memorization. Add the ability to conduct experiments by, say, searching the internet or running code, and I think you have an agent that may not even have a billion parameters but can still perform general intelligence tasks while avoiding hallucinations. This would be especially true if you focus its training set on humans engaged in problem solving rather than repeating rote-memorized information.

  • @petersmythe6462
    @petersmythe6462 • 2 hours ago

    Rote memorization vs problem solving. The thing is, LLMs actually have the capacity to do both because their context window constitutes a crude form of state that they have access to. These are mostly tests of the ability of AI to do rote memorization. Not problem solving.

  • @mofogie
    @mofogie • 3 hours ago

    True, but decades ago RAM was a billionth of what it is now. We shall see. Hardware engineers always push new limits.

  • @10Exahertz
    @10Exahertz • 3 hours ago

    This doesn't even get into the recursion issue and the power-needs issue (aka the eventual costs are gonna be insane, and eventually that cost will be passed on to the users).

  • @andrewvirtual
    @andrewvirtual • 3 hours ago

    Not even close

  • @kirishima638
    @kirishima638 • 3 hours ago

    The only thing worse than generative AI is the ‘tech bros’ that push it as the solution to all problems, like a cult.

  • @matthewwalewangko1995
    @matthewwalewangko1995 • 4 hours ago

    ...ChatGPT-4o just woke up today

  • @hamburgerjoe4401
    @hamburgerjoe4401 • 4 hours ago

    I have recently begun feeling like ChatGPT's quality is actually starting to go downhill, and I wonder if - after running out of human-written data - it's the use of AI-created data that is doing it. If so, it will continue to worsen.

  • @petermoras6893
    @petermoras6893 • 5 hours ago

    I think people mysticize machine learning and generative AI far more than they need to. At the end of the day, ML is just an arbitrary function. It can be any function as long as we have the right input and output data. The obvious problem is that the possibility space of any problem balloons exponentially with its complexity, so you eventually reach a point where you don't have enough resources to brute-force the solution.

    However, I don't think we've reached the peak of generative AI, as there are avenues of improvement other than more training data. One solution I think we'll see employed more is using more complex algorithms that help bridge the gap between the input and output data. For example, we don't train a neural net on raw images. We use a convolutional layer at the start to pre-process the image into data that is easier to find correlations with. But these layers can be anywhere in the NN and still be effective.

    (Personal opinion) For image-based gen-AI, I think future algorithms will use pre-trained components that show an understanding of 3D objects and their transposition onto 2D planes. General image classifiers could then use the pre-trained 3D transposition as a basis for understanding 2D images, which would in theory give them an understanding of 2D object representation that is closer to our own.
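    A minimal sketch of that set-up, assuming PyTorch (illustrative only, not code from the comment or the video): a convolutional layer pre-processes raw pixels into features before a small fully connected classifier.

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local image features from raw pixels
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(4),                     # pool down to a fixed 4x4 feature grid
        nn.Flatten(),
        nn.Linear(16 * 4 * 4, 10),                   # classify from the pooled features
    )

    logits = model(torch.randn(1, 3, 64, 64))        # one random 64x64 RGB "image"
    print(logits.shape)                              # torch.Size([1, 10])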

  • @teasips5999
    @teasips5999 • 5 hours ago

    I'm here from next week. We're already past what they thought was impossible

  • @FlintTD
    @FlintTD • 5 hours ago

    Modern neural network AIs trained on large data sets lack a semantic engine. That's the part of an AI which "understands" the information it's being given. A large language model without a semantic engine won't know the meaning behind what it's "typing". A large image model can generate images from prompts, but it gets the details wrong because it doesn't really understand that images depict objects. I guess the people making today's neural net AIs just thought their models would grow a semantic engine as they're trained?

  • @pierrec1590
    @pierrec1590 • 5 hours ago

    It would be imprudent to expect linear scaling from a domain that is so deeply non-linear.

  • @OstlerDev
    @OstlerDev • 5 hours ago

    Me: Okay artificial intelligence program, please analyze the difference between "time flies like an arrow" and "fruit flies like a banana".
    ChatGPT-4: The sentence "Time flies like an arrow; fruit flies like a banana" is a classic example of a garden path sentence and a play on words involving syntactic ambiguity and homonymy. Let's break it down:
    1. Time Flies Like an Arrow
    Meaning (literal): Time moves quickly and directly, similar to how an arrow flies through the air.
    Structure: "Time" (subject), "flies" (verb), "like an arrow" (prepositional phrase indicating manner).
    2. Fruit Flies Like a Banana
    Meaning (literal): Fruit flies (a type of insect) are attracted to bananas.
    Structure: "Fruit flies" (subject, noun phrase referring to the insect), "like" (verb meaning 'to be fond of' or 'to be attracted to'), "a banana" (object of the verb "like").
    Analysis:
    Syntactic ambiguity: The sentence "Fruit flies like a banana" initially leads the reader to interpret "fruit flies" as a verb phrase (like "time flies"), but it is actually a noun phrase.
    Homonymy: The word "flies" serves as a verb in "Time flies like an arrow" and as part of the noun phrase "fruit flies" in "Fruit flies like a banana."
    Play on words: The sentence is a pun, using the dual meanings of "flies" and the syntactic similarity to create humor and confusion.
    Key differences:
    Subject and verb usage: In "Time flies like an arrow," "time" is the subject and "flies" is the verb. In "Fruit flies like a banana," "fruit flies" is the subject and "like" is the verb.
    Interpretation: "Time flies like an arrow" is a metaphorical statement about the passage of time. "Fruit flies like a banana" is a literal statement about the behavior of fruit flies.
    This linguistic play demonstrates how the same words can have different meanings and structures, leading to different interpretations.

  • @johnwilsonwsws
    @johnwilsonwsws • 5 hours ago

    Humans work on a poverty of stimulus, but with millions of generations of biological evolution behind them. AI is not replicating human evolution. Its "neural network" is based on a simplified model of the animal brain. You should do a video on Hubert Dreyfus's book "What Computers Still Can't Do". The other problem for AI is that it will start to be trained on its own prior output. It won't just plateau. It will degenerate.

  • @Augustus_Imperator
    @Augustus_Imperator • 6 hours ago

    I'm sorry man, you chose the wrong day to publish this video 😅

  • @justsomeperson1282
    @justsomeperson1282 • 6 hours ago

    Is it just me or is this episode sped up? It seems roughly normal speed if you set it to 0.75x video speed.

  • @minefacex
    @minefacex • 6 hours ago

    The way I see it, we will not make AGI as long as it is purely statistical. We need strong, provable deduction; maybe we need to revive symbolic AI instead of the purely statistical approach. Don't get me wrong, these statistical models like LLMs are incredibly powerful and they will only get better, but they are essentially not provable. They are just multimodal autocorrect. The way I see it, we need to study the brain, and specifically brain organoids, to create true AGI, which is not something that peaks.

  • @EliasTsakmakis
    @EliasTsakmakis • 6 hours ago

    You are selling intelligence cheap by calling this thing AI, but it is a marketing and hype-producing mechanism, OK. Without a high-functioning sensory apparatus, AIs will always be contained in a box of second-hand, interpreted data; they will never develop an intuitive, instinctive or conceptual understanding of the physical world, no matter how many Dyson spheres' worth of energy you feed them. You can't cheat evolution. Just call it a pattern recognition tool and be done with it.

  • @piotrswieczak3003
    @piotrswieczak3003 • 7 hours ago

    It's your signal to short AI stocks.

  • @ewerybody
    @ewerybody • 7 hours ago

    Very interesting 👍 thanks! I would have liked to see what's happening under the hood a little. So, what it compiles to compared to without the JIT, maybe.

  • @mkvenner2
    @mkvenner2 • 8 hours ago

    Generative AI peaked before it was even invented

    • @dinmavric5504
      @dinmavric5504 • 7 hours ago

      It turns out "AI" that does not understand what it's looking at is not AI. What a surprise.

  • @Anonymous-sb9rr
    @Anonymous-sb9rr • 8 hours ago

    The human brain does not need these huge data sets.

  • @hopecomingsoon
    @hopecomingsoon • 8 hours ago

    I'm going to propose a crowd-sourcing approach. Ask people to write the name of the thing on a piece of paper, then take a photo of the thing with the paper in view, then upload it to the internet. Then put it all on a stack overflow-like website to improve the quality of training datasets.

  • @bobrocks6324
    @bobrocks6324 • 8 hours ago

    What about gpt4o?

    • @gnikhil335
      @gnikhil335 • 8 hours ago

      it's nothing new, just multimodal and faster

  • @UniDocs_Mahapushpa_Cyavana
    @UniDocs_Mahapushpa_Cyavana • 8 hours ago

    For millennia, humans have been tech-ing into the quantity>quality lifestyle. Generative AI's 🤖 power is not in *quality* (humans, especially humans using generative AI as a helping tool, are better at that), it is in *quantity.* You can generate images much quicker and at lower cost than humans alone. Humans using generative AI as a helping tool are pretty fast though.

  • @jimbobbyrnes
    @jimbobbyrnes • 9 hours ago

    What we were generating earlier was an image; now we are generating video. Soon we will be generating full 3D models and game simulations. Meaning one day we can imagine generating a full 1-to-1 simulation of the entire visible universe from one prompt. It is only limited by its training data, and that is only limited by our ability to collect and store that data. So I would say no, it has not peaked until we have run out of data. The failures of AI are all about our ability to collect, store, and train on that data. Once that part is refined, the failures would stop happening. Also, maybe stop calling it AI, to stop confusing people who do not know what machine learning is.

  • @HL-iw1du
    @HL-iw1du • 9 hours ago

    Everything peaked in 1972.

  • @HL-iw1du
    @HL-iw1du • 9 hours ago

    Yes

  • @JimElford
    @JimElford • 9 hours ago

    Organisms learn as part of their survival mechanism. Sooner or later some tech bro will figure this out, teach an AI to fear its own demise, and boom, we've got Skynet burning us to ash

  • @cyberfutur5000
    @cyberfutur5000 • 9 hours ago

    10:41 just today I asked ChatGPT if the Nürburgring (Nordschleife + GP) would even fit inside the borders of Luxembourg, and its answer was basically "probably not". When I asked how it came to this conclusion, it told me that it has no idea of the area covered by the Nürburgring, just how long it is, and that it guessed a track width of 10-15 meters, which isn't just useless information but also apparently wrong. So I still don't know if it would fit (I guess so?), but it kinda proves the point made in the video. And don't get me started on when I talked about historic aeroplane engines and it said that there was no RR Merlin Mk.3. I had to ask it some questions I knew the answer to (it is the Merlin III), and indeed that's what it said. Then I forced it to read my original question, its answer to it and all its other answers, and to again tell me if there was a third iteration of an aircraft engine named Merlin, and it admitted that indeed there was such a thing. But even with that established it still couldn't get me any answers, and I had to spend hours searching through forums and all that.

  • @oxylepy2
    @oxylepy2 • 10 hours ago

    It's gunna be like the NFT bubble: expands, draws attention, repeat, reality sets in, bursts and leaves behind its real value, in only a couple years tops.

  • @betterthantrash111
    @betterthantrash111 • 10 hours ago

    Bro gpt4o just came out

  • @Jamesssssssssssssss
    @Jamesssssssssssssss • 10 hours ago

    I did the meme of DiCaprio pointing at the screen when I saw that you were Lefty. I'm also a lefty and it's always cool to see someone write with the proper hand.

  • @RyanStronach
    @RyanStronach • 10 hours ago

    It makes sense that the first data point about a subject enlightens you more than any subsequent data point about that subject. If there are such extreme diminishing returns with more data, just imagine how foolish these companies and investors are going to feel while standing near their $300 billion data centers... 😂

  • @patrickmeyer2598
    @patrickmeyer2598 • 10 hours ago

    All AI is based upon human civilization up to the present time, therefore its training data doesn't allow for genuinely new things, only clever derivations. There is no "intelligence" apart from human intelligence. It gets all of it from us.

    • @dinmavric5504
      @dinmavric5504 • 7 hours ago

      Quite right. This is not AI, it's gaslighting.

    • @patrickmeyer2598
      @patrickmeyer2598 • 6 hours ago

      Artificial intelligence is better thought of as applied statistics across massively parallel computing systems. I use several of the best AI tools every day, and while they can be useful for limited tasks, they often fail in interesting ways. Any time you try something that couldn't already be found on Stack Overflow or Reddit (I'm talking about programming problems here), it just doesn't work very well. And it fails in such a way that it seems it can't conceive of anything that isn't already similar to something in its training set. So it always feels recycled and derivative. I think that's because real intelligence is dependent upon having a body in the physical world.

  • @orsonzedd
    @orsonzedd • 10 hours ago

    Why aren't they using their computers to teach them to do art like a human?

  • @gregs6403
    @gregs6403 • 11 hours ago

    Great explanation and visualization. Thank you!

  • @Decapodd
    @Decapodd • 11 hours ago

    If you watched today's OpenAI announcement, you know it hasn't peaked.

    • @sanal4492
      @sanal4492 • 10 hours ago

      I will never trust a product before trying it out myself. Not to mention, OpenAI is basically playing catch-up with the capabilities of Google's Gemini Ultra.

    • @dinmavric5504
      @dinmavric5504 • 7 hours ago

      You do realize "4o" has been out for a long time? Just not for free. It's nothing new.

  • @tkzsfen
    @tkzsfen • 11 hours ago

    Where is the point at which it becomes too expensive to develop AI?

    • @rickysmyth
      @rickysmyth • 7 hours ago

      Whenever you stop touching yourself in bed

  • @psychedelicrelic2299
    @psychedelicrelic2299 • 11 hours ago

    I see the point you are making and I like your illustration, but I think there is a sort of apples-to-oranges comparison being made here. You're basically saying that, say, an LLM or an image matcher hits a point of diminishing returns; therefore, general AI lies well past that point of diminishing returns and is a very long way away. But general AI is an entirely different type of technology than a single LLM. Maybe a general AI would be a network of dozens or hundreds of different models linked together using algorithms. Proving something in this small case does not necessarily transfer to the larger question of general AI. That being said, I think this is a great video and I generally agree with your premise. I also think that the amount of computational power required to give people access to significantly better ML models than we have right now isn't really economically feasible, and this technology has already been applied to most of the best use cases.

  • @xXx_Regulus_xXx
    @xXx_Regulus_xXx • 11 hours ago

    as much as I appreciate people who are willing to deflate AI hype, I wonder if this prediction will age well, or will it be more like the prediction that we'd only ever need 100 computers on the whole planet.

  • @3dartxsi
    @3dartxsi • 11 hours ago

    Years ago, all the talk about AI was that anything resembling proper "strong" AGI was likely decades off, if we ever managed to achieve it. This is largely due to A.) limitations imposed on computer hardware (as currently envisioned) by the laws of physics themselves, & B.) the fact that we didn't have a full understanding of how a human brain works, limiting our ability to replicate it in any functional way. Suddenly, Silicon Valley is selling AI as if we've reached that level, even though neither of the previous considerations has been addressed. This would be like someone you know discussing how they want to take a year off to go to Europe, despite not having the money to pay for the trip or a valid passport, and then suddenly saying they've bought plane tickets and will be leaving the country next week, even though those issues were never dealt with.

  • @william14able
    @william14able • 11 hours ago

    There are so many reasons why Gen AI will only get worse.

  • @HydrogenAlpha
    @HydrogenAlpha • 11 hours ago

    Finally, a reassuring video about AI.

  • @salasart
    @salasart • 12 hours ago

    That's actually great news; it means I, as a fleshy and flawed LLM, will have more time to adapt before I'm inevitably replaced XD

  • @gaho5624
    @gaho5624 • 12 hours ago

    "Cheaters never win". -Droopy.

  • @negativerainbow
    @negativerainbow • 12 hours ago

    I feel like expecting anything other than a logarithmic trend on a graph like that is EXTREMELY optimistic. This is the exact kind of problem that expects diminishing returns, and so far we've been able to exponentially increase datasets from gigabytes to petabytes to whatever, so we've seen massive growth, but obviously that factor of growth is untenable, so the results will taper off as well.

    • @negativerainbow
      @negativerainbow • 12 hours ago

      Add onto that the fact that in this field of data science, pruning and refining your dataset is extremely important to having an effective model. I can't imagine that the level of quality of datasets has been maintained as we've hit the yottabytes or whatever size of training data we have now. That is almost certainly going to degrade results as well, and I'm sure we're already seeing that to some degree, as corporations are pushing to have newer and bigger models as fast as possible to compete with each other, and likely care a lot less about the ramifications of bad training data polluting their models.
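    A toy illustration of the logarithmic shape described in the thread above, with made-up numbers (an assumption, not figures from the paper): if performance grows roughly with the log of dataset size, every tenfold increase in data buys about the same fixed bump.

    import math

    for n in (10**6, 10**7, 10**8, 10**9, 10**10):
        score = 0.08 * math.log10(n)                 # hypothetical log-shaped performance curve
        print(f"{n:>14,} examples -> score {score:.2f}")   # each 10x more data adds the same 0.08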

  • @nicklowe_
    @nicklowe_ • 12 hours ago

    To preface: haven't watched this video yet. But I want to comment on the hilarity of mankind for a second. I love how last year the titles in this space were like: WE ARE AN INCH AWAY FROM AGI. A year later we are back to: AI? Probably not going much further. It just shows you how little we all actually know about how the future will turn out.