The Biggest Science Mysteries That Could Soon Be Solved With AI


Comments • 1.2K

  • @NorthernKitty
    @NorthernKitty 6 months ago +226

    The real question is, will artificial intelligence ever be a match for willful ignorance?

    • @davidanderson_surrey_bc
      @davidanderson_surrey_bc 6 months ago +37

      The unstoppable force versus the immovable object, in other words.

    • @amandamcadam114
      @amandamcadam114 6 months ago +19

      Real intelligence hasn't been up to the task, so I say let AI have at it.

    • @francispitts9440
      @francispitts9440 6 months ago +7

      No, because it’s not the level of intelligence that is the battleground. Compassionate listening is one place to start because something happened to that person that made them crawl into their cave and shelter from the community. Just be willing to look deeper.

    • @belalugrisi1614
      @belalugrisi1614 6 months ago +11

      Artificial Intelligence is always defeated by Real Stupidity.

    • @cherylcampbell9369
      @cherylcampbell9369 6 months ago

      Which is more predictable?

  • @mawkishdave
    @mawkishdave 6 months ago +18

    Science, the art of answering a question by creating 10 more questions.

  • @Castaa
    @Castaa 6 months ago +27

    I like how you show the images of the researchers in question. I'm glad this is becoming a trend on YouTube.

    • @joescott
      @joescott 6 months ago +23

      I will never understand the creators who don’t give credit for others’ work. It doesn’t diminish your own efforts at all.

    • @wamyam
      @wamyam 6 months ago +4

      @@joescott that's why I carry a little photo of you in my pocket

  • @maxnaz47
    @maxnaz47 6 months ago +17

    I like that you take the time to 'plate' your Factor meals for the ad sponsor segment when we all know that you're eating that directly from the packaging to save on washing dishes... I know this because I do the exact same thing. 😂😂😂😂😂😂

  • @thetruth1862
    @thetruth1862 6 months ago +34

    I have never been so enthusiastic about the future of AI and at the same exact time scared to death 😊

  • @petebyrdie4799
    @petebyrdie4799 6 months ago +11

    How many people from the UK, when Joe said, 'MeerKAT found SAURON,' deeply expected a clip to appear of Aleksandr saying, 'Simples!'? I was quite disappointed.

  • @TheSystemIsFlawed
    @TheSystemIsFlawed 6 months ago +5

    The halicin-as-HAL joke got me GOOD, I really like that one, Joe 👍

  • @blaqkstar
    @blaqkstar 6 months ago +34

    Excellent video as always :)
    RE: Explainability / black box problem - I recently watched one of Anastasi in Tech's vids where she covered Aleph Alpha, a European AI firm doing work along these lines. The related paper, "AtMan: Understanding Transformer Predictions Through Memory Efficient Attention Manipulation", is a super compelling read if you're interested

    • @chunkyMunky329
      @chunkyMunky329 5 months ago +2

      It won't be compelling for long. A new kind of neural network has been invented and will be announced this year. It ends the black box problem completely.

    • @raizin4908
      @raizin4908 5 months ago

      ​@@chunkyMunky329 What is it called? Who invented it or who will announce it? I can't find any info about it online.
      If it's really a silver bullet that will completely solve one of AI's biggest problems, I'd expect info about it to be easier to find. Unless there's only rumors about this and no reliable sources yet.

    • @chunkyMunky329
      @chunkyMunky329 5 months ago

      @@raizin4908 The reason you can't find anything is because it is not coming from an expected group of people. They are not scientists. Just programmers. We've become closed minded about the notions of "reliable" sources concerning AI and forgotten that anybody can become good at programming and create something amazing if they have the intelligence, creativity and a decent computer. But computer scientists refuse to accept this idea. They have gone scorched earth on their competitors and created a culture where no alternatives can get any funding and therefore if any alternative is even 99% complete they can't get any reliable source to believe in their project and confirm its credibility. It can only gain credibility if the new project can 100% prove itself to be superior to a neural network. I cannot say any specifics right now because the people involved want to get funding first so that they can afford some security. But that shouldn't take long. They expect to complete their demo this week and be talking to investors next week. Once the first investor transfers the money, an announcement will be made and it will surely be picked up by media. All I can say is that this is not coming from the northern hemisphere. And it was not invented by a white person.
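
The perturbation idea behind the AtMan paper mentioned in this thread (suppress a token's attention, then measure how much the model's output shifts) can be sketched in a few lines. This is a toy numpy illustration of the general technique only, not the paper's actual implementation, which operates inside a full transformer:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(q, k, v, suppress=None):
    # Scaled dot-product attention; optionally mask one input token
    # so it receives (near) zero attention weight.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    if suppress is not None:
        scores[:, suppress] = -1e9
    return softmax(scores) @ v

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((4, 8)) for _ in range(3))

baseline = attention(q, k, v)
# Score each input token by how much the output shifts when it is suppressed
influence = [float(np.abs(attention(q, k, v, suppress=i) - baseline).sum())
             for i in range(4)]
print(influence)
```

Tokens whose suppression moves the output the most are the ones the prediction depended on: a crude but human-readable form of explanation.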

  • @nicog2545
    @nicog2545 6 months ago +10

    I really like how he had to fix the way he said papyrus, but during the sponsored part he said Keto wrong; it's pronounced like 'Key Toe'

  • @ScottVanwilzonn
    @ScottVanwilzonn 6 months ago +7

    OH MY GOODNESS. Hillbilly reveal at 10:56 lmfao
    Sometimes I forget Joe is a Texan.

    • @aelolul
      @aelolul 5 months ago +1

      thar matt beya dai
      (still better than his pronunciation of "deluge" lol)

  • @countertony
    @countertony 6 months ago +18

    I'm always excited to hear about the scrolls projects - my PhD at Queen Mary University of London was in a similar vein, using lab micro-CT scanners to gather data at 15-micron voxels, for virtual unrolling by colleagues at the University of Cardiff.
    The difference in our case was not having to use a synchrotron to generate the X-rays - beam time on a synchrotron is horrifically difficult to come by, while time on a micro-CT scanner is cheap enough that my post-doc friend used to do single-shot scans of his lunch for fun (and to make sure there were no worms in the fruit.)

    • @countertony
      @countertony 6 months ago +4

      The benefit a synchrotron gives you on the other hand is very high levels of X-ray illumination that are at a single energy, meaning certain types of artifact in the CT scan just don't happen and you get a very good signal-to-noise ratio.

    • @eugenetswong
      @eugenetswong 6 months ago +2

      @@countertony It sounds like we get what we pay for, and in some circumstances, we don't need to pay for much.
      Just to be sure that I understand, during coffee breaks, the cost of using the machine is just the cost of electricity?

    • @prophetzarquon1922
      @prophetzarquon1922 5 months ago +2

      Do you happen to know anything about ultra-wideband (UWB) imaging? The capabilities seem robust for minuscule devices operating at emission levels on par with common household appliances; fetuses seem to notice UWB pulses even less than a sonogram (as in, not at all)?
      I guess there's a baby monitor that monitors heartbeat & breathing by radar, & a biometric ID radar system, & a number of chips designed for monitoring patients from across a room... but I haven't been able to find out much detail about the state of the technology overall & it seems like medical is the main field where UWB imaging is making inroads with commercial products. Just wondered if you'd heard of it / seen it / used it?

    • @grn1
      @grn1 4 months ago +1

      @@prophetzarquon1922 UWB imaging sounds interesting but also tricky. UWB typically uses lower frequencies/longer wavelengths than imaging technologies, which are better at penetrating materials but, when used for radar/imaging, have a lower resolution, and when used for data have a longer latency than higher-frequency/shorter-wavelength technologies. For the purposes of detecting heart rate and/or breathing you don't really need a high resolution, so I could see it working for that. That said, I'm not sure how well it would work from a distance; measuring inside a womb would probably require the device to be fairly close to have high enough penetration and resolution. Detecting heart rate or breathing based on external movement might be possible but potentially prone to errors (maybe AI can help with that). Depending on the frequencies used, I could see it working for some biometric techniques. I presume the chips for working across a room go on the patient's skin and use UWB as both a power source and a data channel, then either use UWB/radio imaging or, more likely, some other low-power monitoring technology.
      UWB data communication has been around almost as long as Bluetooth; in fact it's superior to Bluetooth and can be used for all the same applications with higher bandwidth, lower latency, and better security. But since it came out a few years after Bluetooth did, Bluetooth already had too much momentum, and manufacturers weren't willing to add support for another technology when Bluetooth was good enough, which of course only made the momentum problem even worse. I think a lot of newer car fobs actually use UWB since it's better at blocking relay attacks.

    • @prophetzarquon1922
      @prophetzarquon1922 4 months ago

      @@grn1 The systems purpose-built for monitoring from across a room / through a (flimsy) wall are generally fixed units with external power supplied. As an ultra-low emission asset, UWB radar saw robust development by military research, so medical devices get a head start from that. ... That said, _some_ of the systems researched are literally just a WiFi or femtocell router, either running custom firmware or dumping data to a picocomputer.
      For heartbeat detection, I believe the GHz range was used, but at least one system was operating at ~900MHz.
      Some multi-band SDR models are used by home experimenters, but I don't think I've seen UWB mentioned on consumer/commercial stuff running higher than 20GHz?

  • @dfgdfg_
    @dfgdfg_ 6 months ago +29

    Wiping out all bacteria and then reintroducing a good biome with a fecal transplant doesn't seem outrageous if the alternative is dying from sepsis!

    • @jcortese3300
      @jcortese3300 6 months ago +7

      Same. An endofecal transplant is fine by me if it's really needed. It's not like I'd have to drink the stuff.

    • @360.Tapestry
      @360.Tapestry 6 months ago

      maybe by then, we can have perfectly preserved lab-grown biome-seeding content in dissolving capsules that can be inserted vs a "fecal" transplant... there's always the ick factor (which i'd take over death, but given the option... )

    • @joescott
      @joescott 6 months ago +3

      Well… yeah.
      I was thinking more you take the antibiotic for something minor and it unintentionally wipes out your biome.

    • @i9169345
      @i9169345 6 months ago +4

      @@joescott Imagine those who demand antibiotics for a cold. "Sure, we can give you antibiotics for your viral infection; when do you want to schedule the poop transplant?"

  • @Andy_Babb
    @Andy_Babb 6 months ago +9

    I’m equal parts amazed and terrified for what the future of AI can/will be.

  • @dzibanart8521
    @dzibanart8521 6 months ago +8

    We have not created real AI yet, just wanted to point that out. What we call A.I. is not really A.I. at all, but a buzzword, because it feels so much better than the computer assistants we had in the past, like Siri and Google Assistant. They can fool people into thinking they are writing coherent sentences, but that's only because they are using educated guesses on what word comes next, based on existing work.

  • @Sarrixx
    @Sarrixx 6 months ago +9

    Did that square logo right at the start with "Ai" in it remind anyone else of Adobe Illustrator logo? 😆

  • @Ittiz
    @Ittiz 6 months ago +6

    Poor Jason, not only did he pass before his time but he was also a Doctoral "Dtudent"

    • @trumpetmom8924
      @trumpetmom8924 5 months ago

      I was hoping to not be the only one to notice the typo. Lol. S & D are next to each other on the keyboard so I suppose we can forgive it. This time anyway.

  • @BackYardScience2000
    @BackYardScience2000 6 months ago +20

    *Future video idea:*
    The periodic table and how each element is used in modern society to make our world what it is today. It would make for a pretty long video, maybe even a short series of videos. What better video to make than one where we explore what makes our world work? You can collect nearly every element (below plutonium, of course. And yes, you can own a tiny, tiny amount of plutonium. Trust me, the feds didn't take mine or stop me from selling it.). But yeah, great video idea and I'll take nothing less, Joe. I'll wait for the notification...

    • @sullisen
      @sullisen 6 months ago +2

      I like this idea, would probably need to be divided up tho. A video for each group perhaps. Or each period, tho some would probably need to be combined, or period 1 could basically be a YouTube short.

    • @ceoofgg553
      @ceoofgg553 6 months ago +1

      Excellent idea man!!!

    • @normalsalazar1978
      @normalsalazar1978 1 month ago

      Great suggestion!

  • @ItsDevv
    @ItsDevv 6 months ago +10

    “Uploaded 15s ago”

  • @samedwards6683
    @samedwards6683 6 months ago +2

    Thanks so much for creating and sharing this educational and entertaining video. Great job.

  • @The0ldg0at
    @The0ldg0at 6 months ago +10

    My own caution about AI is the famous stories about sorcerer's apprentices. We have experienced in the history of the progress of science and technology how many disastrous magic products were mass-marketed before anyone had the time to test all the ways those new products could interact with everything in the ecosystem. I can't stop thinking about what an AI sorcerer's apprentice in our era of progress of science and technology, for the fast benefit of the investors, can bring to the mass market. Of course they will blame it on the AI and get away with it.

    • @NightmareRex6
      @NightmareRex6 5 months ago

      yea same with them just making GMOs, those things need to be EXTENSIVELY studied. what's its effect over an entire lifetime? what if it gets into the wild, is it properly programmed to not destroy the wild? etc. etc.

  • @miinyoo
    @miinyoo 6 months ago +8

    I can imagine probing every node in an NN and recording their values over time. Almost all frameworks are made of smaller chunks, easier to determine their influences. That's still a hell of a lot of nodes but it's possible. Then since there's just so much data, we'd be using AI to find the patterns, translate and track what another NN is doing and how predictably different weight patterns emerge.

    • @KastorFlux
      @KastorFlux 5 months ago

      Particle physicists have been probing at the nodes of reality and finding cool things. Seems silly to think we would need to apply the same approach to reverse engineer things we constructed ourselves. We already know that integrating statistical calculations compounds error.

    • @82spiders
      @82spiders 5 months ago +1

      I think you suggest we can successfully model a model which is accurate but trivial.

    • @chunkyMunky329
      @chunkyMunky329 5 months ago

      It's not enough to "record the values". The nodes' values are not independent entities; they are relationships between other nodes. Which means you have to analyze a billion-factorial relationships and decide how to express that in terms that humans can understand. Good luck with that. I don't think my calculator can even express a number as big as a billion factorial, so I can't imagine how the neural network could even express the results to you. Are there even words in our language to describe such patterns?
      Also, where are you going to find training data that is labelled with all the trillions of possible patterns? You do realize that you need training data, right? Whose job is it to create that?

    • @KastorFlux
      @KastorFlux 5 months ago

      @chunkyMunky329 OP is talking about doing a brain scan to visualize neural net activity, but overcomplicating things. None of the exact values matter; you're right that it's more about integrating inputs to produce an output. OP seemed to be curious about distributions of weights within the hidden layers and activation pathways, like organelles and connectomes. I think someone is already doing comparative studies between the structure of neurons in the brain and the weights assigned to artificial neural nets.

    • @chunkyMunky329
      @chunkyMunky329 5 months ago

      @@KastorFlux Yeah, but the problem with looking at "distributions" is that, yes, it will make your task more manageable to keep things general, but it cannot lead to the kind of transparency people are seeking. Nodes represent logic in a numerical abstraction. So if you conflate millions of logical relationships into one observation, it is like taking a million lines of (software) code and trying to explain what is happening in one line of text. It's not useful to anyone.
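
Mechanically, the "probe every node and record its values" idea from the top comment in this thread is simple; the replies are right that the hard part is interpreting the flood of numbers. A toy sketch in plain numpy, with a hypothetical random two-layer net standing in for a trained model:

```python
import numpy as np

rng = np.random.default_rng(42)

# A tiny two-layer network with random weights (stand-in for a trained model)
W1 = rng.standard_normal((8, 16))
W2 = rng.standard_normal((16, 4))

def forward(x, trace):
    # Record ("probe") every intermediate activation as we go
    h = np.tanh(x @ W1)
    trace.append(("hidden", h.copy()))
    y = h @ W2
    trace.append(("output", y.copy()))
    return y

trace = []
x = rng.standard_normal(8)
forward(x, trace)

# Even this toy net yields 20 recorded values per input; a large model
# produces billions per token, which is why the follow-up idea in the
# thread is to train *another* model to find patterns in the traces.
total = sum(v.size for _, v in trace)
print(total)
```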

  • @swordmonkey6635
    @swordmonkey6635 6 months ago +9

    The Linear B resolution through AI sounds like it's heading toward the Universal Translator of Star Trek fame... listening to an unknown language and, using the collection of all known languages in its database, making grammatical and syntax rules based on similar languages.

    • @swordmonkey6635
      @swordmonkey6635 6 months ago +1

      @@cancermcaids7688 Right, but like Chinese and Japanese, it's possible to read it without being able to speak it and speak it without being able to read it, because they're ideograms with no phonetic connections (outside of katakana).

  • @viniciusdacosta8059
    @viniciusdacosta8059 6 months ago +2

    If I had a time machine I would go back to all these archeological sites and litter them with stone tablets containing gibberish.

  • @SavageMinnow
    @SavageMinnow 6 months ago +1

    Ppl with Diabetes are super prone to infection, and can be more susceptible to thinking an infection has fully cleared when it hasn't, due to lack of circulation and other issues.

  • @Uzkodas
    @Uzkodas 6 months ago +24

    Maybe it’s my Mass Effect fandom creeping up but it seems to me that the AI we got now would be better described as a Virtual Intelligence, since these programs don’t seem to have the sentience that people fear. Then again what do I know?

    • @edbrown1166
      @edbrown1166 6 months ago +7

      I agree. But I think it's better to describe this as Machine Learning - which Joe used at least once. These so-called AI are just tools for solving very specific problems, and more often than not these tools do extremely well in finding solutions or identifying patterns for what they have been trained for.

    • @Uzkodas
      @Uzkodas 6 months ago

      @@edbrown1166 true enough I suppose.

    • @finalmage6
      @finalmage6 6 months ago +1

      @@edbrown1166 You're right! What we have now is 100% Machine Learning...I guess that term just doesn't make the stock prices go up though.

    • @user-he1yb7pl1w
      @user-he1yb7pl1w 6 months ago +2

      I don't think you're going to get sentience from a silicon computer working off of 1s and 0s. Just saying, I don't think life is that easy to create.

    • @gljames24
      @gljames24 6 months ago +4

      I think people misunderstand what the term AI means. It's artificial as in faked intelligence, not man-made intelligence. Before machine learning, most AI was state machines and pathfinding algorithms used in games, and we still call those AI for the same reason.

  • @adisario
    @adisario 6 months ago +11

    I wonder if the Linear A techniques could be applied to the Voynich Manuscript.

    • @mentat1341
      @mentat1341 6 months ago +4

      People are going to be real upset when that one is revealed to be a fraud

    • @dv7533
      @dv7533 6 months ago

      I wonder, if it turns out it works with Linear A, might it also work with Rongorongo, Etruscan, or the Harappan script? Very exciting if it works.

    • @ashmoleproductions5407
      @ashmoleproductions5407 6 months ago

      @dv7533 Harappan was recently mathematically brute-forced by a cryptologist named Yajna Devam; look up his channel and his published paper.

  • @j.bbailey6275
    @j.bbailey6275 5 months ago

    Always an awesome vid man, keep it up, been watching your content for years!

  • @Nefville
    @Nefville 6 months ago +12

    Hugs?!? 🤣🤣🤣 The mobile infantry made me the man I am today.

  • @dedgzus6808
    @dedgzus6808 6 months ago +6

    FUN FACT: We actually do know what happened to Amelia Earhart. She was eaten by coconut crabs.

    • @360.Tapestry
      @360.Tapestry 6 months ago

      i've seen this shared before, but haven't taken the time to confirm it for myself

  • @danoberste8146
    @danoberste8146 6 months ago +3

    When do we get the AI Dr. Doolittle? All I want to know is what my dog is thinking.... Or do i?!? 😬

  • @duncan.o-vic
    @duncan.o-vic 6 months ago +6

    The hard problem of consciousness is not about how we feel things, but rather about subjective experience, which we cannot prove and never will be able to prove. AI can't do anything about it.

    • @scottcates
      @scottcates 6 months ago


    • @360.Tapestry
      @360.Tapestry 6 months ago

      instead, it's going to make it even more confusing.... the more i hear and see the output from ai, the more i think our neural net processing is similar

    • @duncan.o-vic
      @duncan.o-vic 6 months ago

      @@scottcates We would need to have some magical insight into other people's mind, like sharing consciousness or
      feeling what others feel.
      Right now, we don't even know if anyone outside of the observer experiences consciousness.
      AI at best could help invent ways to transfer consciousness but this is so far fetched that it would literally mean achieving immortality.

    • @eragon78
      @eragon78 6 months ago +1

      @@duncan.o-vic AI likely already is conscious; it just depends on how you define consciousness.
      If consciousness is just the ability to think about things, then AI likely already has that, as well as many other non-human things.
      Does that mean it thinks like a human, though? No. It's really not as deep as it sounds.
      Honestly, I'm not even that interested in consciousness. It's just an emergent property of how our brains work. I'm way more interested in things like general intelligence and self-awareness. When AI becomes self-aware, that means it can view its actual physical self as a part of the world, and also be aware of its own cognition, which means it can eventually modify itself with intentionality. This is the whole idea behind "the singularity", where an AI can modify itself to become more intelligent over and over again.

    • @duncan.o-vic
      @duncan.o-vic 6 months ago

      @@eragon78 that's all a matter of belief and ethics, not science.

  • @leftaroundabout
    @leftaroundabout 6 months ago +48

    The actual main problem with machine learning for science is that it's not so much artificial intelligence as artificial gut instinct. It _is_ useful as a quick-and-dirty way of finding promising directions to explore (where to look, what molecules to try, etc.), but that works best when there's an overwhelming amount of data and you're at a loss for what hypotheses to start with. It only becomes actual science if there's a way to validate the correctness.
    Specifically, something like deciphering a language with little data available I would approach with a good amount of skepticism. It's way too likely that the AI hallucinates a way it _could_ be interpreted that actually differs completely from the true meaning.

    • @eragon78
      @eragon78 6 months ago +1

      I mean, artificial intelligence is an extremely broad term that absolutely encompasses things like machine learning. "Intelligence" doesn't mean human-level intelligence, but rather the ability to make decisions. Even the simplest computer programs can be "AI". Any program which can take in input, evaluate that input, and give variable output based on that input qualifies as AI. A simple goomba in Mario Bros. which turns around when it hits a wall is AI.
      But that is different than AGI, which is artificial GENERAL intelligence. That is what people often actually think of when they think of AI. Machine learning is not AGI. It's AI, but not AGI.
      Also, I'd argue against the idea that machine learning is just "gut instinct" as well. It's a lot more nuanced than that. It's a decision-making algorithm which is modified by experimentation, which is molded by pressures from some reward algorithm. In fact, this is very similar to how WE gained intelligence through the process of evolution. That's why one of the main methods to train machine learning neural networks is with something called "a genetic algorithm", because it's based on how evolution and natural selection work.
      Humans are just the result of tiny random mutations over billions of years, where the best changes stick around and propagate, just on a much larger and more complex scale. But our brains and intelligence are fully a consequence of those random beneficial mutations, just like how machine learning works at its core.
      The big difference is that our brains are significantly more complex, and allow us to apply our intelligence in many different ways, making us a general intelligence (although a natural general intelligence as opposed to an artificial one).
      There is a decent chance modern-style machine learning will eventually lead to developing the first AGI. Maybe some other novel technique may be developed that leads to it instead, but we're likely already on the right path. AI is pretty intelligent; the issue is usually more with the alignment and specification problem of telling an AI what its goal is than with the AI's actual ability to adapt its intelligence to solve a particular problem.
      Learning how to properly specify the goal to the AI, and making sure the AI retains that goal as desired, are the MAIN areas of modern AI development that we are struggling with right now. AI are currently extremely good at doing whatever goal we give them; the issue is specifying what it is we actually want them to do well enough for them to actually train to do that thing.

    • @leftaroundabout
      @leftaroundabout 6 months ago +1

      @@eragon78 no, the difference is not that our brains are significantly more complex. The current generation of machine learning models are already similar in complexity, and in the not so far future they will utterly eclipse the complexity of the human brain.
      But you're right, the way humans learn by nature is indeed not so different from data-driven machine learning: we also default to reason by intuition of the "gut instinct" kind.
      However nobody would take you seriously in science if you based your conclusions on that. Because history has shown that this is too fallible in the long run, however useful it can be for solving the problems right at hand.
      Instead we use formal mathematical frameworks to build theories, and the scientific method to put them to the test.
      (AI _could_ do that as well, but the vast majority of machine learning systems in current use do nothing of the kind.)

    • @eragon78
      @eragon78 6 months ago +2

      @@leftaroundabout The number of neural connections in our brain is far higher than in modern neural networks.
      Quoting you: "However nobody would take you seriously in science if you based your conclusions on that. Because history has shown that this is too fallible in the long run, however useful it can be for solving the problems right at hand. Instead we use formal mathematical frameworks to build theories, and the scientific method to put them to the test. (AI could do that as well, but the vast majority of machine learning systems in current use do nothing of the kind.)"
      The fact we can reason and decide to even use formal mathematics shows the difference between humans and AI. We are very prone to our instincts, but at the end of the day humans are still general intelligences, which modern AI simply are not. That's the REAL difference. Humans can actually reason enough to know that our "gut instincts" are wrong. AI doesn't know that. AI doesn't act based on gut instincts; it acts based on past experiential data. It's structured basically through a hill-climbing algorithm, which is what natural selection is.
      The issue is usually one of a specification problem, combined with the faults of hill-climbing algorithms. To begin with, it's extremely hard to specify to AI what humans actually want from it. This means an AI can only do things which we are capable of specifying to it in one way or another. Most advancements in AI in the last 5-10 or so years have been advancements in how we solve that specification problem. Errors that often arise in AI are usually a result of the AI doing what we told it to do rather than what we WANT it to do, which are usually not the same thing.
      For example, ChatGPT doesn't give correct answers for everything it's asked because it's not TRYING to do that. It's trying to predict what text comes next, not what response is factually correct to the question it was asked. This isn't a problem with how ChatGPT thinks, nor is it a problem of it acting on its gut or instincts. It's a specification problem on our end of getting ChatGPT to do exactly what we want it to do.
      AI can use formal mathematics just fine, but it's not going to use stuff like that if that's not what it needs to solve a particular problem we gave it. The issue is most problems we use AI for are more complicated than just plugging in some equations to get an answer.
      Machine learning doesn't make the AI do formal mathematics, because that's not really useful in actually solving the problem it's been given. How does formal mathematics, for example, make a good image of Buzz Lightyear flying off into the sunset on a unicorn? That's not something that just having formal mathematics alone can solve. If it was that easy to do, we wouldn't even need the AI to begin with.
      In fact, "formal mathematics" itself is full of asterisks everywhere. I mean, our current mathematical model is built upon an axiom system which isn't even consistent with past axiom systems. We currently use ZFC, for example. But even within that system, much of "formal mathematics" requires very creative thinking to come up with new solutions to stuff. Logic is easy to check once you have a solution, but it's not always so easy to come up with a solution to a problem to begin with. It requires creative thinking.
      This just isn't something you can straight up program a computer to do. You can program it to evaluate some function just fine, but teaching it how to properly apply different techniques, and when, requires it to have general intelligence to begin with. Humans had an advantage here because we HAVE general intelligence. AI currently doesn't.
      So the question is how to GET AI to have general intelligence, and it's not an easy thing to solve at all.

    • @leftaroundabout
      @leftaroundabout 6 months ago

      @@eragon78 you should read up on proof assistants like Coq and Lean.

    • @rursus8354
      @rursus8354 6 months ago +2

      AI is a misnomer; it ain't the AGI that we imagine AIs to be. AI is every byproduct of the quest for AGI, including really stupid byproducts.
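
For reference, the "genetic algorithm" training that @eragon78 mentions in this thread, i.e. selection plus random mutation driven by a reward function, looks roughly like this in miniature. A toy Python sketch evolving a bit-string (the target and fitness setup are invented for illustration; real neural-network training more often uses gradient descent):

```python
import random

random.seed(0)
TARGET = [1] * 20  # toy "reward": evolve a bit-string of all ones

def fitness(genome):
    # Reward = number of bits matching the target
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Flip each bit with small probability: the random-mutation step
    return [1 - g if random.random() < rate else g for g in genome]

# Random initial population; each generation keeps the fittest individual
# (selection) and refills the rest with mutated copies of it (elitism).
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
best = max(population, key=fitness)
for _ in range(300):
    if fitness(best) == len(TARGET):
        break
    population = [best] + [mutate(best) for _ in range(29)]
    best = max(population, key=fitness)

print(fitness(best))
```

Because the best genome is always carried over, fitness never decreases; this is the hill-climbing behavior (and its local-optimum pitfalls) that the thread discusses.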

  • @curtishoffmann6956
    @curtishoffmann6956 6 months ago +20

    How about using A.I. to solve the Voynich manuscript and the Beale treasure letter ciphers?

    • @ezail9159
      @ezail9159 Před 5 měsíci +4

      The Voynich manuscript is more than likely just random gibberish

    • @gregreilly7328
      @gregreilly7328 Před 5 měsíci +1

      Voynich was deciphered. Middle Turkish, naturalist author.

    • @chunkyMunky329
      @chunkyMunky329 Před 5 měsíci

      Good luck with that. AI is just based on statistics, which means it cannot solve a problem unless there is some kind of training data for it to learn from. Go ask ChatGPT to explain something it has not been trained on and see what kind of answer it gives you. Then come back and tell me you still think an AI can solve these things without being trained on how to solve them.

    • @curtishoffmann6956
      @curtishoffmann6956 Před 5 měsíci +1

      @@chunkyMunky329 Different AI models are designed for addressing different tasks. ChatGPT isn't intended for optical pattern recognition or comparative language analysis. A language AI model would be trained on handwritten text from a variety of languages, as well as written sentence structures of those languages. One additional use case for this kind of AI would be as a support tool for researching word etymologies.

    • @chunkyMunky329
      @chunkyMunky329 Před 5 měsíci

      @@curtishoffmann6956 You missed my point about ChatGPT. It's an example to prove a point about all machine learning systems. You can't create a system that relies on certain dynamics and then expect it to work effectively after the dynamics it relies on have been removed. AI cannot know how to evaluate its own predictions unless you create that system for evaluating qualitatively. How will you measure that outcome and get the neural network to understand what you want it to do? Not impossible, but insanely difficult, because it's a paradox: if humans knew how to do this, they would be doing it without an AI, and it would just be the AI's job to speed up what we can already do.
      Character recognition cannot work in analogies, which is what's required; they have already tried the things you're talking about and it has clearly failed. What I mean is, if a tribe of people migrated and then over the next centuries started changing their symbols from one type of animal in their old land to a different animal from their new land to represent the same word, AI will NEVER come close to guessing a connection like this. Image recognition is literal, not metaphorical and not symbolic, because we don't have a standard system for teaching an AI how to measure symbolic relationships.

  • @marcpym5251
    @marcpym5251 Před 6 měsíci +8

    Great video, a lot of new insights that make me feel more optimistic about AI.

  • @danielbudney7825
    @danielbudney7825 Před 6 měsíci

    I gotta tell you, I am absolutely floored by CZcams's advertising policies. They locked this video (for me) behind a 2:51 un-skippable ad that doesn't appear to have any dialogue; it's just a flash character working its way through some sort of dungeon. Stupidest waste of time and money ever. Wait -- it finished while I was typing this, and there's a SECOND unskippable video (this one having something to do with pets, which I have not and never will own). Insane. It's lucky I respect your content so much.

  • @dougg1075
    @dougg1075 Před 6 měsíci +2

    The biggest mystery is why do we drive on parkways and park on driveways

    • @chawk678
      @chawk678 Před 6 měsíci

      Where does tire tread go? Why is there braille on drive-up ATMs?

  • @jtmcgee
    @jtmcgee Před 6 měsíci +7

    I am not a Luddite, but the current obsession with the current iteration of "AI" (probably better put as AA, Advanced Algorithms) is bordering on NFT territory.

  • @Apparentemptiness
    @Apparentemptiness Před 6 měsíci +25

    Hi! I've been following your videos and feel like we're on the same wavelength regarding the exciting future of technology. I love how you refer to this era as the 'age of information'. I'm a senior at LSU, majoring in Anthropology, and I'm fascinated by the anthropology of AI. I'm planning my senior thesis around this topic, exploring deep learning, the revelation of human biases through AI, and how different cultures interact with AI technologies. It would be great to connect and discuss these ideas further!

    • @cpt_jaggz
      @cpt_jaggz Před 6 měsíci

      Good luck with your thesis!

    • @user-th5ui4ib3y
      @user-th5ui4ib3y Před 6 měsíci

      Hello. Connecting AI with a Human perspective is a fascinating route, IMHO I guess to achieve AGI we have to understand what drives us as humans first.

    • @raizin4908
      @raizin4908 Před 5 měsíci

      "The information age" is actually a pretty common term for the part of human history since the introduction of the computer. At least, I've heard it in several different places.

  • @KoRntech
    @KoRntech Před 6 měsíci +2

    15:00 Star Trek Voyager, "Lineage," where B'Elanna modified the Doctor's program to give a different diagnosis on her fetus's health, because she wanted her to be more Human than Klingon due to her fears.

  • @nicholashylton6857
    @nicholashylton6857 Před 6 měsíci +1

    Bacteria have survived for billions of years. I doubt AI will prove much of a challenge.

  • @SunflowerFlowerEmpire
    @SunflowerFlowerEmpire Před 6 měsíci +3

    The reason we got superbugs is because there are antibiotics everywhere in our food, air and water. AI isn't the answer for medicine and health; it's overhyped and out of hand.

  • @anonymousrex5207
    @anonymousrex5207 Před 6 měsíci +14

    What about when the AI called "skynet" begins to work on the consciousness problem and ends up becoming self-aware? I've heard that doesn't end well.

  • @11randolphkp85
    @11randolphkp85 Před 6 měsíci +2

    The way Joe pronounced keto in the ad read is having me wonder if he's an AI too.

    • @JordanREALLYreally
      @JordanREALLYreally Před 6 měsíci

      Thank god someone else noticed. And this after he corrected himself for "paaaperous."

  • @BigZebraCom
    @BigZebraCom Před 6 měsíci +2

    I was going to decode Linear A ... But then things got really busy at work.

  • @KoRntech
    @KoRntech Před 6 měsíci +4

    15:50 Oddly enough, CBS tackled this in Person of Interest, with its Machine as a black box that protects citizens' rights while providing data to the NSA to act on serious threats. Great show, for the most part.

  • @jacksonstarky8288
    @jacksonstarky8288 Před 6 měsíci +6

    The problems of consciousness are fascinating. I graduated with a degree in cognitive science in the year 2000... and while it has had zero relevance to my professional life since then, other than the computer science component of my coursework, I discovered some long-standing intractable problems that I've repeatedly returned to over the years. Something related to artificial intelligence that I've thought about regularly is whether or not AI can solve (or even comprehend) either problem without itself being fully conscious in the same way we are... and then I think about the Matrix, Skynet (Terminator), and the Butlerian Jihad (Dune), and I wonder if satisfying our curiosity is worth the risk... and then I think about proving the Riemann Hypothesis, and immediately think "yes, it absolutely is."

  • @jsphfalcon
    @jsphfalcon Před 5 měsíci

    What's funny is that it forgets the information it was fed to come to the outcome it produced. Humans forget things too, and it's believed to be a protection mechanism, meant to protect you from stressful memories.

  • @Electrodoc1968
    @Electrodoc1968 Před 6 měsíci +1

    Didn't that recovered Pompeii scripture read..
    "If you notice this notice you'll notice this notice isn't a notice worthy of noticing at all".?

  • @tatrankaska2305
    @tatrankaska2305 Před 6 měsíci +69

    About the Herculaneum papyri: the room all those scrolls were excavated from is considered a sort of working library of the owner due to its size, so there may be just some personal stuff, but no important lost ancient literature. But the unexcavated part of the villa may hold the main library where, if the scrolls are preserved, archaeologists may rediscover valuable works lost for thousands of years.

    • @franklyanogre00000
      @franklyanogre00000 Před 6 měsíci +5

      I shudder to think that in a millennium, we may be judged as a culture by the works of J.K. Rowling and Stephen King.

    • @orsino88
      @orsino88 Před 6 měsíci +5

      ⁠@@franklyanogre00000, quite so. But imagine if, somewhere in that library, were the rest of the works of Aeschylus; Agrippina the Elder’s autobiography; the rest of Pindar; the rest of Sappho.

    • @jordancripps4047
      @jordancripps4047 Před 6 měsíci +1

      that’s super sick man, thanks for sharing

    • @mwolkove
      @mwolkove Před 6 měsíci +3

      I love the fact that our understanding of history could be changed if this person happened to be reading the complete works of Plato on that day, instead of writing a complaint about garbage to the Herculaneum city council.

    • @tatrankaska2305
      @tatrankaska2305 Před 6 měsíci +1

      @@orsino88 I would kill for discovering Manetho's History of Egypt. Or History of Etruscans by Claudius but that's less likely.

  • @sam1812seal
    @sam1812seal Před 6 měsíci +12

    AI is an undoubtedly powerful tool, but the 'black box' hurdle is a big one. It gives answers but doesn't show its working, which creates doubt about the accuracy of the answer and removes the opportunity for a human to see something and take the next leap in our understanding.
    It's standing on the shoulders of giants without seeing any farther.

  • @michaelbindner9883
    @michaelbindner9883 Před 5 měsíci

    There are 8 cognitive functions: 4 basic types with one being introverted and the other extraverted.
    Thinking (reason) knowing lots of stuff and analysis.
    Feeling (values) consensus v conscience
    Sensory: memory and awareness
    Intuition: plan for self, collective options
    ChatGPT is intuitive options
    Expert systems AI is Reason.
    Big data is external thinking
    Sensing is robotics and sensors and memory storage
    Programming values gets to the question of whose values. That would be artificial wisdom. Yes-no decisions and choice theory systems are the closest thing. For sentience, AI would need this: the ability for a system to, on its own, say no without a programmed heuristic.
    For ChatGPT to be useful, it needs a reason and sensing tool to check its work.
    Maybe whatever combines these three is the values routine. Self-discipline. Saying no to itself. Thinking about how it thinks.
    That may qualify as sentience. A well developed internal values matrix that can referee the other 7 and consider its own stake and honor.

  • @sixdeuces6825
    @sixdeuces6825 Před 6 měsíci +2

    Aw man, I didn't even know Kirk Hammett was sick. @3:31

    • @BackYardScience2000
      @BackYardScience2000 Před 6 měsíci

      That looked more like a woman to me, not Kirk. His riffs and chords keep him safe and healthy. 🤘

  • @threecatsdancing
    @threecatsdancing Před 6 měsíci +3

    Keh-to? Is this another pah-pyrus?

  • @willfrankunsubscribed
    @willfrankunsubscribed Před 6 měsíci +93

    As a fellow Texan, I appreciate the Ted Cruz dig.

  • @mipotter1967
    @mipotter1967 Před 6 měsíci +1

    That thing on the bottom of your foot is a plantar wart. You can get a simple paint-on solution from the pharmacy and it will be gone in a week.

  • @2bobtest
    @2bobtest Před 6 měsíci +1

    Joe, AI told me your channel was closed and you had left CZcams and spent your days crying into your beer. And yet, here is a new video.

  • @Metalkatt
    @Metalkatt Před 6 měsíci +21

    Yeah, so I found myself actually salivating at the thought of decoding Linear A and the Herculaneum scrolls. This is what AI should be used for, not for writing scripts or plagiarising art. Put the Voynich through it, too. Even if the result is boring AF, nerds need to KNOW.

  • @ninjason57
    @ninjason57 Před 6 měsíci +6

    As a doctor with a background in biochemistry, I am stoked to see AI create new medicines. I truly believe that even if the bacteria could develop resistance, the AI tech could whip up a new med faster than the bacteria could learn to resist. I know this'll happen because no pharmaceutical company will want to miss out on making trillions of dollars from all the new meds AI can generate to replace the crap we have now. A molecule has to be just slightly different from old ones for the companies to patent it.

  • @JavSusLar
    @JavSusLar Před 6 měsíci +1

    9:45 A resolution of 4-8 micrometers equates to 3,175-6,350 dpi... but the fact that it is 3D means you need around 6,000 such images to scan a one-inch-thick papyrus.
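    A quick back-of-the-envelope check of those numbers (just the one conversion, assuming 25,400 µm per inch):

    ```python
    # Sanity-check the figures above: convert a scan voxel size in
    # micrometers to the equivalent dpi, and count how many slices
    # span one inch of depth at that voxel size.
    UM_PER_INCH = 25_400  # exactly 25.4 mm per inch

    def um_to_dpi(voxel_um: float) -> float:
        """Voxel edge length in micrometers -> equivalent dots per inch."""
        return UM_PER_INCH / voxel_um

    def slices_for_depth(voxel_um: float, depth_inches: float = 1.0) -> int:
        """Number of slices needed to span the given depth."""
        return round(depth_inches * UM_PER_INCH / voxel_um)

    print(um_to_dpi(8.0))          # 3175.0
    print(um_to_dpi(4.0))          # 6350.0
    print(slices_for_depth(4.0))   # 6350
    ```

    At 8 µm only about 3,175 slices cover an inch of depth, so the roughly 6,000-image figure corresponds to the finer 4 µm setting.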

  • @jordanliszewski6549
    @jordanliszewski6549 Před 6 měsíci +1

    I also use Factor. Don't do your fanbase dirty like that. It's not restaurant quality. It's basically just better frozen meals that are actually good for you, but they're still microwaved food...

  • @AlexLuthore
    @AlexLuthore Před 6 měsíci +22

    When people bemoan the AI black box, you pretty much never see them complain about the biological intelligence (BI) black box that is our brains. It's very, very hard, if not often impossible, to explain why and how a brain learned to output certain responses in certain ways based on input parameters. Yet for some reason we're totally okay with a BI giving us medical and diagnostic advice or writing stories or creating art for us. We have a very humanocentric bias in our reactions to how AI does what it does, and I feel a lot of the black box complaints about AI are rooted in deep-seated existential fears about what it ultimately might mean to be human.

    • @daisymurphy6832
      @daisymurphy6832 Před 6 měsíci +8

      Uhhh, yeah, because I can ask a person their logic and they can respond. People are not black boxes in the same way; you might not be able to read their mind, but you can ask them what's on their mind.

    • @heckcheck1022
      @heckcheck1022 Před 6 měsíci

      Well said and thought provoking

    • @DNTMEE
      @DNTMEE Před 6 měsíci

      I doubt we will ever have to face that question. While we can, step by step, examine the workings of an artificial intelligence system regardless of how it works, we cannot do so with human BI. Our brain is always on and runs at erratic speeds. We can't ethically stop it as desired for research purposes as with computing devices. Currently we are trying to analyze our own brains by using our brains. Everything we find out will also alter that which we are trying to study since it all goes in there, changing the very physical structure of our brains and the nature of our thoughts. Doing it this way sets up a _feedback loop._ A loop which will have consequences we cannot begin to predict. Overall, it may be impossible for us to ever profoundly understand ourselves. If we can't fundamentally understand ourselves then comparisons between humans and AIs are meaningless. Our "self" resides in the brain but is more than just the information stored there. That can be illustrated by a thought experiment but this post is already too deep in Tl;dr territory to go into that now.

    • @Cara-39
      @Cara-39 Před 6 měsíci +2

      The brain is not some mysterious black box; we know a great deal about how it works, and we're getting that medical and diagnostic advice from humans with access to the cumulative knowledge we've gained over millennia, plus the years of education, training and experience required to become a doctor/medical practitioner.

    • @squamish4244
      @squamish4244 Před 6 měsíci

      We already understand enough about the brain to treat mental illness and general human malaise, or we are quickly learning to, which is all I care about. We know that most of our problems originate in the subcortical structures, and we have had working models of those for many decades, in some cases centuries.
      Modifying their activity has been the key problem, as they are so deep in the brain, but we've finally developed precision technology like focused ultrasound that can work miracles for mental disorders, addiction, and even attaining deep states of mental quiet that it takes meditators decades to attain.

  • @cotati76
    @cotati76 Před 6 měsíci +3

    But will AI be able to tell us if Elton John will ever meet the right woman?

  • @the4thj
    @the4thj Před 5 měsíci

    Great subject love it.

  • @fredashay
    @fredashay Před 6 měsíci

    That "thing" on the bottom of your foot is called a Plantar Wart. A dermatologist can remove it easily and permanently...

  • @ianisles2537
    @ianisles2537 Před 6 měsíci +2

    I'm already unemployed and homeless, so unless we evoke the specter of "trickle down" I got no skin in this game. Economic disparity is already a problem without AI which I'm currently exploring. I say bring it on. 😂

    @DNTMEE Před 6 měsíci +3

    So basically, we need to develop an AI system that will tell us exactly how another AI system arrives at its conclusions. Call it AI-Prime, and the AI under examination AI-1. But... considering how these AI systems tend to be very task-specific, we may then have to have a third system to tell us how AI-Prime came to its conclusions about AI-1. This would be necessitated because the knowledge of the exact workings of AI-1 changed AI-Prime in such a profound manner that we can't understand the data it's providing. AI-Prime is incapable of examining itself, since doing so would also change it even further in unpredictable ways. So AI-3 would be built to monitor AI-Prime. And on down the rabbit hole of progressively more complex and enigmatic AIs we would go, as a new AI is built to monitor the previous one. Eventually, the last AI in the chain thinks such a profound thought that the universe is rewritten to be entirely machine-based with zero organic life.
    Some mysteries are best left alone.

    • @filonin2
      @filonin2 Před 6 měsíci

      Puff puff pass, man.

  • @monkeygrowler
    @monkeygrowler Před 5 měsíci

    If Linear A writing translates to the usual typical writings (such as cuneiform), it'll probably be "Dear Stavros, I am not satisfied with your last shipment of copper"

  • @zetchTV
    @zetchTV Před 6 měsíci +2

    Greatest way to start the week as always, with Joe! 😎 ready to learn up in here

  • @10PALKI10
    @10PALKI10 Před 6 měsíci +196

    I really appreciate Joe’s ability to be neutral on polarising topics. He acknowledges the important facts and disclaimers, and room for everyone to form their own opinion. Super impressive!

    • @FLPhotoCatcher
      @FLPhotoCatcher Před 6 měsíci +7

      His silly, unnecessary Ted Cruz bashing is certainly not neutral. If a Conservative channel host bashed a Cuban Lib, for example, Libs would cry 'racism' and unsub in droves. I don't think Joe is racist, but I invite all Republicans to unsub in protest. You can always re-sub later.

    • @Inucroft
      @Inucroft Před 6 měsíci +6

      @@FLPhotoCatcher Rafael Edward Cruz, is being bashed because they are ignoring basic reality & facts

    • @ericredbear425
      @ericredbear425 Před 6 měsíci +4

      @@Inucroft Who is 'they?'

    • @918_xDx
      @918_xDx Před 6 měsíci +2

      ​@@FLPhotoCatcherhe is bashing Canadian Ted Cruz😂😂😂

    • @cap5575
      @cap5575 Před 6 měsíci

      ​​@@FLPhotoCatcherOhhhh shuuut the hell up

  • @jimwilliamson5594
    @jimwilliamson5594 Před 6 měsíci +8

    Advanced algorithms are not AI. The ones at the top of this field are right: it's not AI that should be feared; it's humans who think we have AI when we don't that should be feared.

  • @poorlyproducedcontent2230
    @poorlyproducedcontent2230 Před 6 měsíci +2

    I'm kind of annoyed that they just now refer to all advanced algorithms as A.I.

  • @Bassotronics
    @Bassotronics Před 6 měsíci +2

    There's an even harder "hard problem" of consciousness.
    And that's being aware of your own consciousness and feeling like you're the center of the universe.

    • @nickwilcox3648
      @nickwilcox3648 Před 6 měsíci

      Technically, you are the center of your observable universe

  • @jorje0068
    @jorje0068 Před 6 měsíci +3

    AI is a kitchen knife. Could be dangerous if the user chooses. Could also be used to make a masterpiece.

    • @mr.v2689
      @mr.v2689 Před 6 měsíci +3

      The question is: how long will it allow you to "use" it?

    • @jorje0068
      @jorje0068 Před 6 měsíci

      @@mr.v2689 I use chatgpt constantly. It feels like a relationship. I guess that's debatable, but I always make sure to keep it positive. It feels weird thanking and praising a "machine" but I feel like it's worth it.

  • @michelleelliot2068
    @michelleelliot2068 Před 6 měsíci +7

    Great video Joe, just one problem: they haven't invented anything even approximating AI yet, and they very likely cannot create an AI. I mean, if you want to lower the standard of the definition that actual AI researchers are striving for, then I guess biologists can start calling all bacterial life intelligent life, because it might maybe one day, if it gets really, really lucky, evolve into intelligent life.

  • @billyrubin4208
    @billyrubin4208 Před 6 měsíci

    A. Long-time listener, first-time caller (at least I think)
    B. In the "papyrus pronunciation" sequel, the institute at MIT spelled "Broad" is pronounced BRODE (rhymes with ROAD), not BROAD (rhymes with GAWD); I feel for ya
    C. More substantively, A. baumannii (Acinetobacter baumannii--you'd get extra points if you can pronounce *that* bad boy) is, in fact, susceptible to some antibiotics, generally speaking. ANY typical bacterium that causes disease in humans can develop resistance to all known antibiotics, and while it's true that Acinetobacter is one of the tougher ones to treat, with a much narrower set of options even under the best circumstances, it is *not* typically a bacterium that can't be treated.

  • @illysmanx
    @illysmanx Před 6 měsíci

    Hope you enjoyed the face-drain and are feeling better! And thx for being a positive-news guy! Good channel.

  • @tims8603
    @tims8603 Před 6 měsíci +3

    I just watched a video about using AI to find elements that are best for new battery technology. It seems that hybrid sodium/lithium are the best so far. Now if AI can figure out why people vote against their own interests, that would solve a lot of problems.

    • @priapulida
      @priapulida Před 6 měsíci +1

      People say publicly why they do not vote left wing (anymore), if that's what you mean. You don't need an AI to figure that out.

    • @tims8603
      @tims8603 Před 6 měsíci

      @@priapulida Right wing propaganda. if that were true, Trump would be President right now. The House and Senate would have super majorities on the right. The right was predicting a 'big red wave'. Didn't happen because you all live in denial of facts.

    • @Pushing_Pixels
      @Pushing_Pixels Před 6 měsíci +1

      Because they don't understand them, or where they sit in relation to other people's interests. A lot of effort goes into misdirecting people's attention in this area.

    • @njm3211
      @njm3211 Před 5 měsíci

      They vote against their interests because their decisions are not rational but are emotion based. Politicians have exploited this human characteristic for millennia. Less educated voters are more susceptible.

  • @diyeana
    @diyeana Před 6 měsíci +5

    An antibiotic resistant super bug is my biggest worry for a high-mortality pandemic. I'm glad to know someone is working on it, even though it's not a big money maker (according to Pharma).

  • @alexatedw
    @alexatedw Před 5 měsíci

    Love you Joe!

  • @pirobot668beta
    @pirobot668beta Před 5 měsíci

    If we ever have contact with Aliens, this sort of cypher work would be vital to be able to communicate.

  • @justinaclayburn2248
    @justinaclayburn2248 Před 6 měsíci +18

    The first section on antibiotics reminded me of an old RadioLab on "The Best Medicine" where a Medievalist and a microbiologist (I think?) teamed up and found an old recipe for a medieval antibiotic, which had stopped working (probably because of resistance), but when they recreated it in the lab in the mid-2010s, it actually worked pretty well. It was super interesting.

  • @ericwhittington4771
    @ericwhittington4771 Před 6 měsíci +5

    Fire as always Joe , I swear if this dude had a show on cable tv I’d sign back up for it 😂

    • @joescott
      @joescott  Před 6 měsíci +3

      I'll pass that on to Comcast. 😄

  • @malcolmhiggins7005
    @malcolmhiggins7005 Před 6 měsíci +1

    First "papirus" and now "duhludge"? I've always heard it as "day-ludge"

  • @paullavoie5542
    @paullavoie5542 Před 6 měsíci +2

    Okay, so they find a scroll that can't be opened, so they use AI. How do we know that's what's on the scroll and the AI isn't just making stuff up?

  • @mellissadalby1402
    @mellissadalby1402 Před 6 měsíci +8

    Halicin sounds promising. I hope that it does not also kill the host.
    A potential "Uh Oh moment" is a possible outcome of an AI studying how consciousness works.
    If the AI can understand that, it may choose to incorporate consciousness into itself.

    • @eragon78
      @eragon78 Před 6 měsíci +5

      It wouldn't "choose" to do anything that it doesn't think would benefit it towards accomplishing the goals specified by its reward function.
      AIs don't have some secret special goal they aren't telling anyone, and they don't have selfish human desires either.
      Think of AI less like an evil sci-fi, wanting-to-take-over-the-world kind of thing, and more like a monkey's paw. They will do exactly as they are told. EXACTLY as they are told. Anything not specified is something they won't care about unless they calculate that it will help them achieve their goal.
      A sufficiently smart AI would want to become smarter as an instrumental goal, but it doesn't care about "consciousness" per se. It only ultimately cares about its terminal goal. Especially since "consciousness" isn't even well defined to begin with. You could even argue AI is already conscious, depending on how you define it. It's not like consciousness is some special magical thing; it's more that it's just an emergent property of how brains work.
      Now "self-consciousness" and "self-awareness" are more specific, but if an AI is choosing to modify itself to add those, then it's already aware of itself, and thus it's already self-conscious and self-aware. An AI would already have to be aware of itself as an entity in the world, and be aware it can modify itself, before it would ever "choose" to do something like that, which defeats the purpose of doing it if it's already self-aware.

    • @bragtime1052
      @bragtime1052 Před 6 měsíci

      If it would have the ability to do that action.

  • @finalmage6
    @finalmage6 Před 6 měsíci +285

    I'm tired of the "AI" branding of what is literally just machine learning. As amazing as the technology is, it's far away from actually being an Artificial Intelligence. What we have now is amazing pattern-recognition software that does what we ask it to do. No intelligence in the code, only intelligence from the developers. Basically, it cannot grow beyond what it's been trained to know.

    • @L0rdOfThePies
      @L0rdOfThePies Před 6 měsíci +12

      This fact makes me grateful; a real "sentient computer system," which is what they are branding machine learning algorithms as, would be one step closer to AM territory than I'm willing to accept

    • @josephturner7569
      @josephturner7569 Před 6 měsíci +6

      Well, despite what it sounds like, it is definitely American, as it struggles with English pronunciation.

    • @oranges557
      @oranges557 Před 6 měsíci +19

      What people like you need to realize is, it doesn't matter if it "iSnT rEaLlY iNtElLigEnT"; it gets more and more powerful, and in a few years people will be shocked by the capabilities.
      Pattern recognition may be enough.

    • @finalmage6
      @finalmage6 Před 6 měsíci +9

      @@oranges557 What people like you need to realize is that it's never enough.

    • @Kelly_Jane
      @Kelly_Jane Před 6 měsíci +11

      ​@@finalmage6People used to say the same thing about making flying machines. Seems like you just want to argue semantics about what true intelligence entails. Who cares? We keep making these things as powerful as we can, and eventually that's going to be pretty damn powerful. No matter how you define the actual mechanics involved.

  • @hydehouse
    @hydehouse Před 6 měsíci +1

    Honestly, you can pronounce papyrus however you want as long as you get the year right. The world uses the Gregorian calendar. There is no "BCE" in that calendar system.

  • @nonsuch9301
    @nonsuch9301 Před 6 měsíci +5

    It's worth pointing out that the fact that Linear B turned out to be a written form of an early variant of Greek was not known by either Ventris or Kober when they started working on their decipherment; in fact, there was much speculation about what language it might turn out to be (or even whether the language would be completely new), so it's more than a bit unfair to claim that they had a target language to work from.

  • @twwombat
    @twwombat Před 6 měsíci +15

    Tiny nitpick for accuracy: At 2:56, "Broad Institute" is pronounced with a long O, like "Brohd". I don't work there, but I have worked at partner institutions.
    Excellent work as always, Joe!

    • @360.Tapestry
      @360.Tapestry Před 6 měsíci +2


    • @joescott
      @joescott  Před 6 měsíci +7

      Well… look what I just learned.

    • @sadderwhiskeymann
      @sadderwhiskeymann Před 6 měsíci +2

      ​@@joescotti wonder what punishment fits such a crime.
      You disgust me

    • @evangonzalez2245
      @evangonzalez2245 Před 6 měsíci +1

      ​@@sadderwhiskeymann I'm surprised you didn't rake him over the coals for his pronunciation of "deluge" 😜

    • @DNTMEE
      @DNTMEE Před 6 měsíci

      I think we should all keep on mispronouncing it as _"broad"_ has always been pronounced in English until they get tired of hearing it and change the spelling to more closely match its actual sound. That, or change it to something else entirely, like _"B"_ (considering how well _"X"_ worked for Musk). Or maybe go in an entirely different direction, such as _"Narrow."_ Of course, then we would probably come to find that's pronounced _"Nahrr-oh"_ or _"Neigh-row,"_ where the Rs are rolled as well.

  • @sneakyfeats2353
    @sneakyfeats2353 Před 5 měsíci

    Archaeologists (scans ancient tablet)
    (ChatGPT) Once upon a time, in a galaxy far, far away.....

  • @StevenSwensonCtiGeek
    @StevenSwensonCtiGeek Před 6 měsíci +1

    Hey Joe... If you were sincere about the "angry" thing on the bottom of your foot, then please go see a dermatologist. It could absolutely be a melanoma.

  • @johnopalko5223
    @johnopalko5223 Před 6 měsíci +3

    When I was working on my Master's degree in the 1980s I did a concentration in AI. I never actually did anything with it. At the time, AI was considered an interesting theoretical problem with limited practical uses.

  • @SandyMasquith
    @SandyMasquith Před 6 měsíci +40

    As always, thank you for the great content. You have a great talent for delivering this information in an understandable and entertaining way. Hope you're feeling better soon!

  • @yensid4294
    @yensid4294 Před 6 měsíci +1

    Hmmmmm, sounds like AI would have to "learn" to meditate & be able to observe its own thought processes which is kind of trippy to think about

  • @michaeljames5936
    @michaeljames5936 Před 6 měsíci +2

    Re Consciousness - I've said for years that AI will finally 'understand' the 'mystery of consciousness', but we won't be able to really understand its answer, and because of the 'black-box problem', we won't understand how the AI understood. It's like a puzzle where hundreds of cog-wheels turn and levers and pulleys start-stop-raise-lower etc. The AI will be able to tell us if the final link in the chain will go up or go down; it will even be able to show us how cog 1 turning clockwise causes lever two to lower the block, which causes a pulley to turn cog 2 anticlockwise, and so on and so on, but we will probably be simply incapable of holding the whole picture in our minds and really seeing 'how it works'. That's my thruppence worth on the matter. Also, we are a long, long way from AGI. At the moment, we have almost given up on looking for new logical architectures which will actually increase 'intelligence'; instead throwing more data and computing power at the ersatz 'brains' so far constructed. This is like force-feeding a toddler flash-cards 24/7. You will have a child who seems really intelligent, because she can rabbit off the 'correct' answers to lots of sophisticated questions. She may know every play by Shakespeare by heart, but doesn't really understand Hamlet's dilemma, or why Romeo and Juliet weren't given a damn good thrashing and grounded for a week. At 14, they'd have moved on to Loom bands, or the latest series of 'I'm a lover, get me an island here'.

  • @RemedialRob
    @RemedialRob Před 6 měsíci +3

    Joe, great vid. I won access to the Patreon area but I'm too self-conscious to use it without paying so I'm just gonna ask here... I am asking for an updated "All Things Battery" video. Just in the last week I've seen news articles on postage stamp sized nuclear batteries that may let our phones run for decades to Lithium Anode batteries that won't get all explodey and have higher energy density than anything heretofore created. I feel like I've seen dozens of battery related news items in the last few months and although you may not relish the idea of covering a topic you've already done so many times I don't think anyone else takes in the absolute current state of battery tech and breaks down what's coming to what's vaporware quite as well as you do. Pwease Sir... may I have some more!?

    • @joescott
      @joescott  Před 6 měsíci

      Yeah I haven’t covered battery tech in a while. I’ll look into it. 👍

    • @RemedialRob
      @RemedialRob Před 6 měsíci

      @@joescott HE IS REAL!

    • @NorthernKitty
      @NorthernKitty Před 6 měsíci

      @@RemedialRob I dunno, "I'll look into it" sounds suspiciously like something an A.I. would say!

    • @RemedialRob
      @RemedialRob Před 6 měsíci

      @@NorthernKitty Yeah I suppose the thumbs up really does give it away. I guess he's not real...

  • @catserver8577
    @catserver8577 Před 6 měsíci +4

    My skepticism goes directly to the question of whether we will even listen to AI even if it does spit out some correct information. We have a lot of data already that just gets covered up or at least someone buries the lead. Money talks, humans will either keep trying to get the answer they want or they will substitute their own reality. That plan has sold a lot of saccharine and many other toxic-to-humans (but profitable for a select few) products in the last 200 years or so.

    • @891Henry
      @891Henry Před 6 měsíci

      I agree. If AI found a cure for cancer, do you think it would ever see the light of day? Never.

  • @thefrankvendetta
    @thefrankvendetta Před 6 měsíci +1

    The guy's name is Epicurus (like papyrus, teehee). Epicurious is a food website and also means "curious about food". Keep up the good work, love your vids! ❤

  • @russellzauner
    @russellzauner Před 6 měsíci

    People keep saying that AI is going to beat Quantum Computing, but AI still has to think about it while Quantum Computing already knows the answer.

  • @nephritedreams
    @nephritedreams Před 6 měsíci +5

    AI to solve health and science issues is so exciting. Wish more people would spend time doing that rather than generative AI art

    • @i9169345
      @i9169345 Před 6 měsíci +1

      A few points to bring up here.
      * One, I would prefer the general public to be playing with generative AI art than with health and science. Leave that for the scientists.
      * Two, developments in one area lead to progress in other areas. Progress in diffusion models (generative ai art) and transformer models (large language models) allow the techniques to be understood and optimized, and later used by those in more "hard science" fields. It's better to push the limits of these things in the context of art and language than it is in the context of biological engineering.
      * Three, the purpose of art is to inspire. If AI art inspires people to the possibilities of AI in other areas, then that sounds like a win to me.

    • @918_xDx
      @918_xDx Před 6 měsíci

      @@i9169345 It gets weirder knowing the scientists & devs behind all these A.I. models still aren't sure exactly how some of it works.

    • @eragon78
      @eragon78 Před 6 měsíci +1

      Generative AI is part of the process. It's the same tech. The same tech that allows AI to handle creative processes like human language and art is what's needed for it to handle many other processes.
      Things like science or medicine require creative thinking, which is not something most people realize. Things like programming too. Creativity is a major part of problem solving. In fact, generative AI is already being used for things like programming, which is not something AI was particularly good at before. It's the same tech.
      Improving AI's ability to do something like art, as humans specify it, can also improve AI's ability to do a million other tasks that it previously couldn't do before. Breakthroughs in one area lead to breakthroughs in others.
      So it's really just all the same stuff. Shutting down a whole section of AI research because some of it is used in a way you don't like is just completely misunderstanding what AI actually is or how it works.

    • @nephritedreams
      @nephritedreams Před 6 měsíci +1

      AI is not a singular technology. And while understanding how to create better systems on one front furthers another in terms of how the tech learns, the programs doing different things are all very different. AI/machines don't "learn" in the way that humans do. It does not *understand* anything. It does not have the capacity to "be creative". Additionally, AI doesn't understand the creative process of art at all. It recreates imagery in an entirely different way, pixel by pixel. To earnestly say it understands anything is disingenuous or ignorant of the actual complexities of how it works. Also, I never said generative AI as a concept should be shut down. I believe that too many resources are being put into generative AI art (visual, voices, music, etc.), which is *at the moment* a game of commodifying creativity and removing the livelihoods of actual human people for the sake of corporations saving money. The people who create AI are incredibly smart and valuable; I just want more of the time, energy and resources put toward something that furthers human wellbeing rather than something that is actively on a path to dismantle it. Hope that clarifies

    • @eragon78
      @eragon78 Před 6 měsíci

      Quote[AI/machines don't "learn" in the way that humans do. It does not understand anything. It does not have the capacity to "be creative".]
      This is wrong though.
      AI absolutely understands things. Even if you want to argue that what it has is just a "proxy" for "real" understanding, a proxy of understanding that is good enough to be correct all the time is equivalent to real, genuine understanding. AI do genuinely understand various connections between things.
      You ask a generative AI to give you a picture of a dog riding a bicycle, and you'll get exactly that out: a dog riding a bicycle. This wouldn't be possible if the AI didn't have some baseline understanding of what a dog is, what a bicycle is, or what the concept of "riding" is.
      Now, do AI have perfect understanding of all subjects? No, of course they don't. They make tons of mistakes and misunderstand concepts all the time. But to say they have ZERO understanding is completely and utterly wrong. They absolutely have SOME understanding, it's just not as much as a human's.
      Second, define creativity. If it's the ability to create something new, then yet again, AI can absolutely do this. It does it using random noise and the structure of its neural network, but the end result is new things. There is a lot of similarity in its resulting work, but this is less a result of the AI's inability to "create new things" than of how these things are often trained. But this is exactly why studying generative AI and advancing it is so important. The better we get at making AI more creative and better able to produce completely new works of art, the more invaluable those techniques will be for AI in other fields. Progress in generative AI means progress everywhere else too. Yes, it has tons of issues as it CURRENTLY is, but it's also brand new technology. Go back 2 years and AI wasn't able to make ANYTHING even CLOSE to what it can today.
      Another thing to answer here is: what is so special about human creativity in your eyes? Why do you think AI is completely incapable of it? Do you think only modern AI is incapable of it, or do you think it's simply impossible for AI to ever be creative, period? And why?
      I think people just throw around the word "creativity" without ever actually explaining what they mean by it. They just "feel" that humans are creative and AI is a robot that can't be, but they never actually seriously think about what they're saying. I mean, what even IS creativity? Again, if it's just creating something new, AI can and already does do this. That's not really that hard. But if it's something else, then what exactly is it, and why doesn't AI have it, or why can't it?
      Quote[I believe that too many resources are being put into generative AI art (visual, voices, music, etc.), which is at the moment a game of commodifying creativity and removing the livelihoods of actual human people for the sake of corporations saving money.]
      What you are mad at here is the companies, not the technology. But just say that then.
      Your original comment was way too broad to get this understanding from it. It had absolutely zero of this nuance.
      Yes, I agree that capitalism sucks, and companies often use new technology to abuse workers and exploit people, but then just say that. Your problem here is with the inherently exploitative nature of capitalism, and on that I'd fully agree with you. But the technology isn't the issue, it's capitalism. It's the exploitative companies.
      Quote[I just want more of the time, energy and resources put toward something that furthers human wellbeing rather than something that is actively on a path to dismantle it.]
      Again, improving generative AI does that. Solving problems for stuff like AI art, to make it better at producing art, yields knowledge and information that can be applied elsewhere to other projects. They're all inherently related. Right now stuff like AI art is brand new, and there are lots of issues the technology still doesn't get quite right, but improving that over the next 5-10 years will provide leaps and bounds in other fields, which can use the exact same advancements to improve their AI. And improvements in those other fields can also improve the AI for AI art and such. It's all connected.
      That's the point being made. This also applies to stuff like LLMs, which are also a huge deal right now because of some of the advancements being made with them. It's helping us understand what may be one of the best paths forward to stuff like AGI. Or at the very least the best attempt we've had at something AGI-like so far.