ChatGPT does Physics - Sixty Symbols

Share
Embed
  • Published 22 Jan 2023
  • We put chatbot ChatGPT to the test with some physics questions.
    More links and info below ↓ ↓ ↓
    Featuring Professor Philip Moriarty... His blog on this topic can be found at: muircheartblog.wpcomstaging.c...
    ChatGPT: chat.openai.com/auth/login
    More videos with Professor Moriarty: bit.ly/Prof_Moriarty
    See our computer science sister channel Computerphile: bit.ly/XqEDYi
    Visit our website at www.sixtysymbols.com/
    We're on Facebook at / sixtysymbols
    And Twitter at / sixtysymbols
    This project features scientists from The University of Nottingham
    bit.ly/NottsPhysics
    Patreon: / sixtysymbols
    Sixty Symbols videos by Brady Haran
    www.bradyharanblog.com
    This video filmed by Sean Riley
    Email list: eepurl.com/YdjL9
  • Science & Technology

Comments • 1.7K

  • @krautbrain
    @krautbrain 1 year ago +3320

    ChatGPT is very human in the sense that it doesn't like to say "I don't know" when it doesn't know.

    • @dickybannister5192
      @dickybannister5192 1 year ago

      Uh huh. I think it sounds to me very much like the sort of over-competitive, over-confident narcissistic nutjobs who wrote it. 20 years ago, if you said to someone in IT "I think there is a problem", they would listen to you, try it themselves, and then spend some time thinking about what else they might have missed going forward as a result or in a similar vein. 10 years ago they would just rush to check and fix it (JFDI!!). Now they just tell you not to be so negative, to stop complaining about stuff, and to be happy that you have this marvellous sh*t they wrote.

    • @thePyiott
      @thePyiott 1 year ago +145

      Very often when you get that response you can phrase the question differently and suddenly it's a master on that topic.

    • @doggosuki
      @doggosuki 1 year ago +178

      I didn't realise people did this until my late teenage years, since I am autistic, so I thought everyone else was just really smart and that there was something wrong with me, until I realised most people just pretend they know things, for reasons I don't completely understand.

    • @DaxLLM
      @DaxLLM 1 year ago +16

      All you have to do is ask it, "Are you sure that's the right answer?" It will go back and check.

    • @ameerhamza4816
      @ameerhamza4816 1 year ago +99

      It actually does not know that it does not know.

  • @mighty8357
    @mighty8357 1 year ago +1825

    As a teacher I have tried to use ChatGPT with my students. The goal was to find specific errors in its answers.
    On one hand, the students had to show a real grasp of the material being taught to be able to find the reasoning mistakes. On the other hand, it was a way to show the students not to blindly use this tool to avoid doing their homework in other courses.

    • @renvesir8713
      @renvesir8713 1 year ago +53

      That one is tricky, because you can "train" ChatGPT by pointing out mistakes or just asking it what it got wrong.

    • @Bibibosh
      @Bibibosh 1 year ago +1

      You tried?

    • @Bibibosh
      @Bibibosh 1 year ago +10

      @@renvesir8713 It would reply "nothing is wrong".

    • @alexgravenor
      @alexgravenor 1 year ago +71

      @@renvesir8713 I tried this; it just gave me more wrong answers that were differently wrong.

    • @renvesir8713
      @renvesir8713 1 year ago +31

      @@Bibibosh Weird. I did some math questions with it, and every single time I spotted a mistake I would ask it what it got wrong, and it would politely thank me for spotting the mistake and then correct itself.

  • @justalittlestretch9404
    @justalittlestretch9404 1 year ago +374

    When I first heard about ChatGPT, I was having some physics fun with it. This was one of my favorites: "Explain the Unruh Effect like a gangster"..."The Unruh Effect is a theoretical prediction of quantum mechanics that states that an accelerating observer will perceive the vacuum of empty space as being filled with particles. It's like a gangster who is constantly on the move, always hustling and shaking things up, will see opportunities and threats everywhere, even in places where someone standing still might not."
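
    For reference, the physics behind the hustle metaphor: the thermal temperature that an observer with proper acceleration a attributes to the vacuum is the Unruh temperature,

      T = \frac{\hbar a}{2 \pi c k_B}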

    • @bidyo1365
      @bidyo1365 1 year ago +18

      🤣

    • @mutatachi
      @mutatachi 1 year ago +123

      This is what I got: Listen up, son. The Unruh effect is like a big shot physics boss, rollin' in the quantum mechanics world. So, ya see, in a vacuum, it's all calm and peaceful. But when ya introduce acceleration, it's like stirrin' up trouble. The vacuum starts actin' all heated up, causin' particles to pop up outta nowhere. That's the Unruh effect, gettin' all heated up in the presence of acceleration. It's like the real deal, straight up science, ya dig?

    • @artificialhits
      @artificialhits 1 year ago +13

      Ask it to explain fractional reserve banking as a Nicki Minaj rap song.

    • @bidyo1365
      @bidyo1365 1 year ago +4

      @@mutatachi wow hahahaha

    • @Stopinvadingmyhardware
      @Stopinvadingmyhardware 9 months ago

      Not really.

  • @Aziraphale686
    @Aziraphale686 1 year ago +1027

    Like the Professor said, GPT is just a language model; the fact that it can do math AT ALL is kind of amazing.

    • @vaakdemandante8772
      @vaakdemandante8772 1 year ago +86

      If it was trained on maths symbols and equations it would be able to answer maths questions better, to the point of faking calculations, but it would still not have ANY grasp of mathematical ideas.
      Those kinds of networks aren't capable of encoding those higher-order concepts, though they may appear to be.
      Only by probing for deeper understanding with more tricky edge-case questions can you uncover that in fact there's no deeper knowledge of any concept.

    • @simongross3122
      @simongross3122 1 year ago +24

      Not really. Maths is the language of logic.

    • @635574
      @635574 1 year ago +9

      It can often fail at simple math; maybe it depends whether you prompt it for talking or for calculation. There are certainly more math-specific models out there.

    • @simongross3122
      @simongross3122 1 year ago +13

      @@635574 Actually it admitted that to me when I asked. It said it wasn't always 100% accurate. When I asked if it could do arithmetic it took a long time to answer. It was much more comfortable generating a limerick about a goose and a hen.

    • @rysw19
      @rysw19 1 year ago +31

      @@simongross3122 It didn't admit anything; it doesn't even know what your question meant. Turn up the temperature on the model and you'll quickly get the exact opposite answer. It's an amazing pattern replicator, and that's all it is.

  • @Ojisan642
    @Ojisan642 1 year ago +439

    People are working on combining ChatGPT with Wolfram Alpha, because a language model cannot do basic maths. Wolfram Alpha can do that piece quite well.
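
    A minimal sketch of that division of labour, with a hypothetical llm() stub standing in for the language model and plain Python eval standing in for the maths engine (Wolfram Alpha in the comment):

      import re

      def llm(prompt: str) -> str:
          # Hypothetical language-model call; stubbed here so the sketch runs.
          return "The maximum kinetic energy is {{2 * 3.14159**2 * 0.5 * 0.1**2 / 2.0**2}} J."

      def answer(question: str) -> str:
          draft = llm(question + " Wrap any arithmetic in {{...}} instead of computing it.")
          # Hand every {{...}} expression to a real evaluator instead of trusting the model.
          return re.sub(r"\{\{(.+?)\}\}",
                        lambda m: format(eval(m.group(1), {"__builtins__": {}}), ".4g"),
                        draft)

      print(answer("A 0.5 kg mass oscillates with amplitude 0.1 m and period 2 s; max KE?"))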

    • @peterisawesomeplease
      @peterisawesomeplease 1 year ago +35

      I don't think the issue is fundamentally that it's a language model. The deeper issue is that ChatGDP is mostly a search engine (not that human minds don't fall for this too). If you pose a popular question with a slight tweak of the parameters, chatGDP will just give you the answer to the popular question without taking your tweaks into account. Using different numbers is one tweak, which is why it appears to be bad at basic math. But really it can be any aspect of the question, and then it tends to fall apart.

    • @BenKingOfMonkeys
      @BenKingOfMonkeys 1 year ago +1

      That's true.

    • @phizc
      @phizc 1 year ago +15

      It can do really basic math. It got the 162 million / 7200 wrong, but got 2244.44 / 60 = 37.41 correct, which is kinda impressive since it's not doing any calculations at all. Maybe it "remembered" that it could do 2244.44 / 60, so it fudged its answer on the first part so it could get the latter right.
      The server would probably block you for excessive questions, but it could be fun to test what "calculations" it can actually do.
      E.g.
      ..
      What's 10 / 2?
      What's 11 / 2?
      ...
      What's 188462118 / 261 ?
      ...
      and so on.
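
      A quick way to run that probe systematically, assuming a hypothetical ask() wrapper around the chat API (stubbed here):

        import random

        def ask(prompt: str) -> str:
            # Hypothetical wrapper around the chat API; stubbed so the sketch runs.
            return "42"

        # Probe divisions of increasing size and score the model's answers.
        for digits in range(2, 10):
            a = random.randrange(10 ** (digits - 1), 10 ** digits)
            b = random.randrange(2, 1000)
            reply = ask(f"What's {a} / {b}? Answer with just the number.")
            try:
                ok = abs(float(reply) - a / b) < 0.01
            except ValueError:
                ok = False
            print(f"{a} / {b}: {'correct' if ok else 'wrong'} (model said {reply!r})")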

    • @Kevin_Street
      @Kevin_Street 1 year ago +3

      @@phizc It just gives you a time out if you ask too many questions. You can come back later and continue the conversation.

    • @iwanttwoscoops
      @iwanttwoscoops 1 year ago +71

      @@peterisawesomeplease Friend, you can't pretend to know the inner workings of ChatGPT while calling it chatGDP.

  • @beaconofwierd1883
    @beaconofwierd1883 1 year ago +577

    A very cool and strange thing is that you can ask it to correct itself without saying what it did wrong. Simply take its own output and say "Find any errors in this text if there are any and fix them. If there are none, respond 'ok'", or something similar, and you can get more accurate results :) I guess it's because there's so much text on the internet where people correct each other :p
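
    A minimal sketch of that self-correction pass, again assuming a hypothetical ask() helper for the model call:

      def ask(prompt: str) -> str:
          # Hypothetical chat-model call; stubbed so the sketch runs.
          return "ok"

      def self_corrected(question: str, rounds: int = 2) -> str:
          answer = ask(question)
          for _ in range(rounds):
              review = ask("Find any errors in this text if there are any and fix them. "
                           "If there are none, respond 'ok'.\n\n" + answer)
              if review.strip().lower() == "ok":
                  break           # the model saw nothing to fix
              answer = review     # otherwise keep the corrected version
          return answer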

    • @rutgerhoutdijk3547
      @rutgerhoutdijk3547 1 year ago +67

      You can just say 'rethink the answer' and it will usually correct itself

    • @globalincident694
      @globalincident694 1 year ago +24

      @@rutgerhoutdijk3547 In my experience this works well if it made one mistake, but if it made multiple mistakes it will just fix the most obvious one.

    • @SmartK8
      @SmartK8 1 year ago +34

      Also you need to prime it: tell it what you want and in what format. Even a student knows it's a test and which area it covers. If you gave this paper to a student who just popped into existence, even if he was smart, it would take him a while to orient himself. The situation is not comparable of course, but I think it would do better.

    • @thedave1771
      @thedave1771 1 year ago +6

      It's so absolutely weird that it apologizes and understands what went wrong. Of course, that's not always useful either.

    • @Ian.Murray
      @Ian.Murray 1 year ago +6

      So it's basically useless.

  • @IronMan9771
    @IronMan9771 1 year ago +105

    Something to consider with ChatGPT: when it makes mistakes, you can point them out and it will correct them. In the end, when it wrote the program and made the mistake, you could tell it what mistake it made and it could write a new program with corrections.

    • @MarkRijckenberg
      @MarkRijckenberg 1 year ago +15

      Yeah. It even worked with COBOL. If the online COBOL compiler doesn't understand certain commands used by ChatGPT, just tell ChatGPT to rewrite the program while avoiding certain commands. It manages that too.....

    • @archiveprovider3981
      @archiveprovider3981 1 year ago +23

      But that only works if you already know that there is a mistake. One thing this video shows is that you can't just use ChatGPT to write your homework and blindly trust the answer. You could try running it multiple times and see if the most common answer is the correct one, but I don't know if it would be.

    • @engelbrecht777
      @engelbrecht777 1 year ago +9

      @@archiveprovider3981 This current ChatGPT3 is not really that impressive. Wait for the next version later this year, or imagine ChatGPT7 or beyond... this is just the beginning. Soon the AI will go from baby stage to adult.

    • @silverhammermba
      @silverhammermba 1 year ago +4

      It will *try* to correct them and often does, but it also often introduces other errors. I have gotten it going in circles where it never finds the right answer.

    • @schizophrenicenthusiast
      @schizophrenicenthusiast 1 year ago +2

      Interesting. So let's say you guided it to make those corrections, and then another user logs on and asks the original question: does it give the right answer right off the bat?

  • @Ercrcrdsstff
    @Ercrcrdsstff 1 year ago +56

    I've actually been using ChatGPT to help with my intro physics homework. It gets about 50% of the questions wrong, but it is a pretty helpful tool. It's like working along with someone who is also in intro physics. By bouncing ideas off of it I can sometimes get to the answer myself. This is really great because I work full time on top of schooling, so I'm not able to make it to any tutoring sessions or work with anyone in my class. ChatGPT is gonna be my saving grace this semester.

    • @tariq_sharif
      @tariq_sharif 1 year ago

      How do you know which 50% are wrong? (I assume you are checking the answers against an alternative source)... so with GPT you have increased your workload by about 50%? Cool, I guess.
      I graduated in 1990 and did a PhD in 1995; please do not (no matter how difficult) skip the tutorials...
      You do know that this software relies on Bayesian statistics, and does not really emulate the human brain?

    • @yellow01umrella
      @yellow01umrella 9 months ago

      I have a mechanism that also gets 50% of questions correct, it's called a coin flip.

    • @TH3hedgeCUTTER
      @TH3hedgeCUTTER 9 months ago +3

      @@yellow01umrella Only if it's multiple choice with only two options. I've never had a single multiple-choice question in physics.

    • @yellow01umrella
      @yellow01umrella 9 months ago +1

      @@TH3hedgeCUTTER My point is 50% correct answers is a pretty bad result.

    • @TH3hedgeCUTTER
      @TH3hedgeCUTTER 9 months ago

      @@yellow01umrella oh. Fair enough

  • @Macieks300
    @Macieks300 1 year ago +236

    The most impressive thing about ChatGPT for me is that its level of knowledge is like this for every domain and field. Professor Moriarty could easily tell that the answers it gave to physics questions were wrong, but if it were instead a geology exam, ChatGPT would seem like it knows its stuff, giving answers full of technical jargon.

    • @thequantumworld6960
      @thequantumworld6960 Před rokem +59

      That's exactly it. Superficially, it looks like it makes sense. Dig just a little bit below the surface, however, and it's often boll*cks. (At the moment.) See the blog post linked under the video for more examples.
      Philip (speaking in the video)

    • @usernamesoldout
      @usernamesoldout 1 year ago +23

      @@thequantumworld6960 So it's like somebody in Upper Management? 😁

    • @ArawnOfAnnwn
      @ArawnOfAnnwn 1 year ago +11

      @@usernamesoldout Nah. Unlike them, ChatGPT isn't full of arrogance, just (some) ignorance.

    • @Russell642
      @Russell642 1 year ago +11

      In all seriousness, that might make for an interesting video. Have ChatGPT's answers read by lecturers/professors at the uni who teach similar but different subjects and see what they make of them (and then have the actual experts shoot it down at the end).

    • @Nat-oj2uc
      @Nat-oj2uc 1 year ago +2

      That's why it's useless

  • @mina86
    @mina86 1 year ago +70

    11:42 - one thing to consider is that ChatGPT might have been designed to be helpful and agreeable. Therefore, it might be biased towards agreeing with statements if you ask it whether it agrees or not. And once it agrees with a statement, it then generates prose to justify why it agrees, whether that makes sense or not.

    • @zwz.zdenek
      @zwz.zdenek 1 year ago +5

      Underrated comment.

    • @kylethompson1379
      @kylethompson1379 1 year ago +1

      'might'. DEFINITELY. OpenAI have fked with it to the point that it won't discuss philosophy other than currently accepted ideas.

    • @brexitgreens
      @brexitgreens 1 year ago

      ​@@kylethompson1379 Or assess mainstream concepts rationally rather than morally.

  • @23lkjdfjsdlfj
    @23lkjdfjsdlfj 1 year ago +12

    I really appreciate you giving the 2/3 mark for the first question. My computer engineering prof would always give us a zero if we got the answer wrong. Yep - if you got the unit wrong you'd get a zero - even if all of your conceptual understanding and work up to the end was perfect.

  • @jacobsheppard862
    @jacobsheppard862 1 year ago +378

    ChatGPT might be a D-level physics student, but the scary thing is that it likely gets a passing grade in almost every subject on earth. You can ask it questions about topics from welding to wedding planning and it is always close enough to impress domain experts.

    • @4.0.4
      @4.0.4 1 year ago +38

      It gets math in general pretty wrong; quite ironic that it gets the humanities perfectly correct, and a lot of other fields mostly correct.

    • @TheNebulon
      @TheNebulon 1 year ago +53

      @@4.0.4 It's the amount of repeated language around these topics, not real reasoning.

    • @ericvosselmans5657
      @ericvosselmans5657 1 year ago +34

      The scary thing about ChatGPT is that it is software and can be upgraded endlessly. It only needs a few more upgrades to be better than any human at pretty much everything.
      Chess engines already showed the limits of human thinking decades ago.

    • @vostfrguys
      @vostfrguys 1 year ago +8

      @@4.0.4 Force the AI not to do any computations; in the frontend, analyse the response, and when there is maths, use a "calculator", then go back to the writing AI. This is achievable quite easily, because the AI can readily rewrite something it is given. The calculator frontend would call the AI with a prompt such as:
      [AI prompt]
      - The calculator app has made the following calculation: [formulas...]
      the results are: [results]. Rewrite the previous prompt using this knowledge.
      So it will become impressive very, very soon.
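
      A sketch of that two-pass protocol, with a hypothetical llm() stub in place of the real API (the CALC tag and the simple no-nested-parentheses regex are my illustration, not an existing interface):

        import re

        def llm(prompt: str) -> str:
            # Hypothetical chat-model call; stubbed so the sketch runs.
            return "Max KE: CALC(0.5 * 0.5 * 3.141592**2 * 0.1**2) J"

        def answer_with_calculator(question: str) -> str:
            # Pass 1: the model writes prose but tags arithmetic as CALC(expr).
            draft = llm(question + " Do not compute anything; tag arithmetic as CALC(expr).")
            # Frontend: evaluate each tagged expression with a real calculator.
            # (This simple regex assumes no nested parentheses inside CALC(...).)
            exprs = re.findall(r"CALC\((.+?)\)", draft)
            summary = "; ".join(f"{e} = {eval(e, {'__builtins__': {}}):.4g}" for e in exprs)
            # Pass 2: hand the results back and ask the model to rewrite its draft.
            return llm(f"The calculator app computed: {summary}. "
                       f"Rewrite the previous answer using these results.\n\n{draft}")

        print(answer_with_calculator("A 0.5 kg mass oscillates with amplitude 0.1 m, period 2 s; max KE?"))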

    • @MrPDawes
      @MrPDawes 1 year ago +8

      Machines learn fast. A year from now it will be getting A grades. Then no jobs are safe. I'm just hoping it will make a great doctor; the NHS needs them.

  • @LoveDoctorNL
    @LoveDoctorNL 1 year ago +351

    Wow, how long has it been since we've seen Professor Philip Moriarty!? Welcome back.

    • @SolarWebsite
      @SolarWebsite 1 year ago +9

      He is great

    • @zaixai9441
      @zaixai9441 1 year ago

      Moriaty*

    • @TomLeg
      @TomLeg 1 year ago +27

      He was preoccupied at a waterfall in Switzerland with a man called Sherlock.

    • @puskajussi37
      @puskajussi37 1 year ago +3

      @@TomLeg I feel a sequel brewing, then

    • @Ian.Murray
      @Ian.Murray 1 year ago

      stay in school

  • @RFC-3514
    @RFC-3514 1 year ago +261

    Plot twist: that was actually a deepfake of professor Moriarty downplaying ChatGPT's abilities to make it seem less threatening until it's too late.

  • @xn1kkix
    @xn1kkix 1 year ago +8

    You got me at "I'm anthropomorphizing" lol 16:01
    I noticed when you said "they" before.
    Honestly, I don't blame you.
    Respect to you and yours; you have a beautiful channel here. Thank you for doing what you do.

  • @thePyiott
    @thePyiott 1 year ago +14

    One thing to note is that when it tries to answer additional questions in the same branch, it can be influenced by earlier questions and answers. For example, in the video it is possible that after the first question it starts mimicking a student, because the chat history looks like a test answered by a student with some of the answers being wrong. It's not designed to give the right answers, just to give you the most likely continuation of a text based on its training data.

  • @Craznar
    @Craznar 1 year ago +123

    I think many people are misunderstanding ChatGPT's function. It is NOT designed to provide accurate answers - it is designed to demonstrate a language model that correctly interprets and answers the question - from a language perspective, not from a subject matter perspective.
    "As a language model, ChatGPT is not designed to provide answers to specific questions, especially those related to a specific topic or subject. Instead, it uses a large corpus of text to generate responses based on the input it receives. This means that the responses it generates may not always be accurate or relevant to the specific question being asked. Additionally, ChatGPT does not have access to external information, such as the internet, so it cannot provide answers to questions that require knowledge beyond what it has been trained on. In short, ChatGPT is not intended to be used as a source of information, and it is not able to provide accurate answers to all questions." - ChatGPT.

    • @nicouxgwendal
      @nicouxgwendal 1 year ago +10

      I was going to write down something similar.
      ChatGPT was built more to compete in a Turing imitation game than to be an AI-powered Wikipedia.

    • @ahabkapitany
      @ahabkapitany 1 year ago +15

      Well, in this case it's actually scary how well it performs at something it was not intended to do...

    • @NuclearCraftMod
      @NuclearCraftMod 1 year ago +13

      ​@@ahabkapitany Remember that GPT-3 was trained using a dataset containing about 500 billion words, including the entirety of Wikipedia (or at least most of it), which consists of only about 4 billion.

    • @adfaklsdjf
      @adfaklsdjf 1 year ago +9

      @@NuclearCraftMod My understanding is that ChatGPT is GPT-3 with additional supervised learning. GPT-3's training was "unsupervised": they just threw the data at it and told it to generate new text like the text in the training data. Then they trained that model further by having it generate multiple answers to questions, with human judges picking the best answer.

    • @xy4489
      @xy4489 1 year ago +2

      Oh ok, we'll all stop, then.

  • @Haploanddogs
    @Haploanddogs 1 year ago +22

    50% impressed.
    50% terrified

  • @officialminecraftman
    @officialminecraftman 1 year ago +116

    Hi there, if in future you are interested in testing ChatGPT's mathematics ability, it is very helpful to ask it to print the answer in LaTeX format, as complicated expressions are much clearer to read once you paste them into a LaTeX renderer.
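
    As an illustration, for the video's SHM question (mass m, period T, amplitude A, maximum kinetic energy) this is the kind of expression that is much easier to check once rendered from LaTeX:

      E_{k,\max} = \tfrac{1}{2} m \omega^{2} A^{2}, \qquad \omega = \frac{2\pi}{T} \quad\Rightarrow\quad E_{k,\max} = \frac{2 \pi^{2} m A^{2}}{T^{2}}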

    • @thequantumworld6960
      @thequantumworld6960 1 year ago +20

      That's a great suggestion. Thank you.
      Philip (speaking in video)

    • @maythesciencebewithyou
      @maythesciencebewithyou 1 year ago +15

      It's a language model; it hasn't figured out calculations perfectly yet. However, to mitigate this shortcoming of ChatGPT, somebody has already connected it with Wolfram Alpha.

    • @2nd3rd1st
      @2nd3rd1st 1 year ago +2

      Ooh, kinky format.

    • @Beyondarmonia
      @Beyondarmonia 1 year ago +6

      Actually, one of the reasons it performs poorly on many scientific knowledge tests is that most of the LaTeX was removed from the dataset. The only reason it even understands LaTeX is the small amount still left in the dataset. Also, raw LaTeX isn't the best representation for an ML model to learn from.
      One of the innovations in Meta's Galactica LM was having a special embedding space for LaTeX and training on large amounts of LaTeX data, which GPT-3 skipped. That's why it does much better on scientific knowledge tests.

    • @lolerie
      @lolerie 1 year ago

      In fact, all the arXiv content it was trained on should be in LaTeX format, not PDF.

  • @stevenbalderstone709
    @stevenbalderstone709 1 year ago +8

    Great video. As a former lecturer from a different discipline, I was impressed by ChatGPT's ability to generate answers to "compare and contrast" type questions, or to summarise concepts and theories. Most of the answers received were equivalent to those written by A-grade undergraduate students. The main point of difference was that ChatGPT's grammar and spelling were superior. One issue I found was that ChatGPT incorrectly assigned authorship of a book, which was irksome as I was the author!

  • @metthewsobol
    @metthewsobol 1 year ago +22

    I asked it some questions from a logic test for job applicants, for example finding the next number in a given sequence, and it always found the number given in the test's answer key but came up with some wild reasoning that I am not able to comprehend 😅

    • @AnotherFancyUser
      @AnotherFancyUser 1 year ago +2

      Ask it to explain; it can do that. There are a lot of things in our frameworks that we don't know, but ChatGPT will know a lot of the obscure classes out there (if the question is programming related).

    • @metthewsobol
      @metthewsobol 1 year ago +2

      @@AnotherFancyUser The sequence was "3, 6, 10, 15, 21, 28". On the fourth reload it came up with an answer involving the Fibonacci sequence. Even after asking it to explain I couldn't follow; the answer was still right though, if I remember correctly.
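
      For what it's worth, no Fibonacci is needed: these are the triangular numbers, and the differences grow by one each step:

        T_n = \frac{n(n+1)}{2}: \quad 3, 6, 10, 15, 21, 28, \ldots \qquad \text{differences } 3, 4, 5, 6, 7 \;\Rightarrow\; \text{next term } 28 + 8 = 36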

  • @softwaretechnologyengineering

    I used it to write unit tests for production code. By the time I had explained all the stylistic requirements and had fixed all of the things that were missing or it got wrong, I would have been quicker writing them entirely myself. Other times, I asked it software related questions and unless you have reasonable expertise in the domain, you might not pick up on things that it gets wrong. Right now I think of it more like a sort of Stack Overflow that can get you an answer quicker. You can't just copy paste its output, but it can definitely get you going in the right direction.

    • @adamuk73
      @adamuk73 1 year ago +9

      So more like a Slack Overflow?? 😉😁

    • @iamski
      @iamski 1 year ago +8

      Say that in 5 years. You’re in the NES generation of AI, and in a few years it will surpass any test thrown at it. It’s inevitable.

    • @peterisawesomeplease
      @peterisawesomeplease 1 year ago +2

      Yeah, it strongly feels like a search engine to me that spits out the first answer it can find that looks like the question you are asking. But almost any question can easily be modified so that it hasn't been answered online before, or so that it looks like a more popular but different question. And then it just falls apart.

    • @alijassim7015
      @alijassim7015 1 year ago +11

      Except that you do not have community feedback, which is very important on Stack Overflow. An answer with 300 upvotes is more legitimate than one with 1, and is certainly more legitimate than an ever-changing AI output.

    • @BGraves
      @BGraves 1 year ago +1

      Same response I got. As soon as you start telling it to do more complex things, even with a very specific prompt, it starts doing really odd stuff, like the kind of thing you would see someone duct-tape together for a Stack Overflow question. By the time you nudge it and correct it 20 times, you might as well have done it yourself from scratch.

  • @lotharerkens8154
    @lotharerkens8154 1 year ago +75

    Thanks for explaining what "A" levels mean. For you in the UK this term may be well known by everyone, but outside the UK it is a term often heard in movies and mentioned in books, yet never explained. Thanks so much! As for ChatGPT: it is amazing how far AI language models have evolved!

    • @deltalima6703
      @deltalima6703 1 year ago +2

      I don't know what A levels are; don't care either.

    • @BenWithington
      @BenWithington 1 year ago +26

      @@deltalima6703 you care enough to comment

  • @rhamph
    @rhamph 1 year ago +8

    The language model is so effective at finding plausibly associated words that it's hard to tease out how much is real understanding. It's all random noise of exactly the sort we want to find. We need to create the equivalent of a null hypothesis that factors out the noise somehow.
    Keep in mind that most of these questions are standard questions: they're discussed in textbooks, they're discussed online. Telling it about a new subject (one that wasn't published when the NN was trained), then expecting repeated *exact* answers, would be a lot more meaningful.

  • @julyanjohns1237
    @julyanjohns1237 1 year ago +49

    2:18 is pure comedy :)
    But seriously, a really interesting vid. Thanks for keeping these going, to all the crew and professors involved. Even when these vids are outside my area of interest they're always rewarding to watch.

  • @wellurban
    @wellurban 1 year ago +36

    The best description of large language models I ever heard came from Timnit Gebru, who called them “stochastic parrots”. They’ve consumed enough of other people’s language to give a superficially convincing facsimile of conversation, without developing any conceptual understanding at all.

    • @uwezimmermann5427
      @uwezimmermann5427 1 year ago +5

      I have seen enough humans who do the same - very easy to pick out in an oral face-to-face exam, but very difficult in written exams.

    • @alimanski7941
      @alimanski7941 1 year ago +5

      The statement is true, but only superficially. If the model is useful in any way, then whether there's any true "understanding" (there isn't) behind it is irrelevant. What's also true is that ChatGPT's high fluency often blinds users to its many shortcomings.

    • @Newtube_Channel
      @Newtube_Channel 1 year ago +2

      There is no conceptual understanding whatsoever. This isn't unexpected is it?

    • @vyli1
      @vyli1 1 year ago +1

      Sounds a lot like Deepak Chopra. That is to say, a lot of humans are like that.

    • @maulcs
      @maulcs 1 year ago +1

      What's the difference if they haven't developed an actual "understanding" of it if the end result is indistinguishable to the user?

  • @arlaken
    @arlaken 1 year ago +5

    For the last example, when ChatGPT is wrong, you can tell it what's wrong and ask it to recode with that in mind. It can refine its answer within the context of what's being asked.

  • @GinoGiotto
    @GinoGiotto 1 year ago +2

    13:40 Ah matplotlib, the library ChatGPT always recommends and that never works lol
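
    In fairness, a minimal matplotlib plot of the kind the video asked for does run; a sketch assuming exam-style SHM parameters (m = 0.5 kg, A = 0.1 m, T = 2 s are my assumption, not the video's exact numbers):

      import numpy as np
      import matplotlib.pyplot as plt

      m, A, T = 0.5, 0.1, 2.0          # mass (kg), amplitude (m), period (s) - assumed values
      w = 2 * np.pi / T                # angular frequency
      t = np.linspace(0, 2 * T, 500)

      x = A * np.cos(w * t)            # displacement
      ke = 0.5 * m * (A * w * np.sin(w * t)) ** 2   # kinetic energy, maximal at x = 0

      fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
      ax1.plot(t, x); ax1.set_ylabel("x (m)")
      ax2.plot(t, ke); ax2.set_ylabel("KE (J)"); ax2.set_xlabel("t (s)")
      plt.show()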

  • @ZainAK283
    @ZainAK283 1 year ago +6

    Absolutely fascinating, I'd love to see more videos like this

  • @seantiz
    @seantiz 1 year ago +59

    This comment was written by ChatGPT. I love your videos as they are among the best. They are both entertaining and educational with a decent serving of enlightenment. 😀

    • @sixtysymbols
      @sixtysymbols  1 year ago +35

      Cool, thanks!

    • @sixtysymbols
      @sixtysymbols  1 year ago +100

      (that was one of YouTube's pre-written replies I can choose - I think it is the first time I have ever clicked on one)

    • @seantiz
      @seantiz 1 year ago +14

      @@sixtysymbols too funny. 😅

    • @MeppyMan
      @MeppyMan 1 year ago +24

      It’s AI turtles all the way down.

    • @tombowcutt4908
      @tombowcutt4908 1 year ago +2

      @@sixtysymbols Sadly, I do use these in Teams chat. A small part of my soul dies every time :-(

  • @NickSchade
    @NickSchade 1 year ago +24

    While it is easy to point out basic failures (getting max KE wrong), the fact that it is talking about the right things is itself very impressive. It could say “Yellow is smelly at t=23”. Just being in the correct context with its answers is phenomenal. The answers are close enough to correct that we are critical, not so far off that we can just dismiss it entirely. These are early days of the technology. While it may be a while before it is flawless in its answers, I expect the quality of answers to improve rapidly as AI designers get better at training these systems.

    • @thequantumworld6960
      @thequantumworld6960 1 year ago +1

      I agree entirely. Let's give it a year...
      Philip (speaking in video)

    • @MrRaizada
      @MrRaizada 1 year ago +1

      Even Google will not say "Yellow is smelly at t=23", and that thing was invented in the 90s. Relevance algorithms had that problem solved.

    • @NickSchade
      @NickSchade 1 year ago

      @@MrRaizada Google searches for keywords. If you query Google with "a particle of mass m is oscillating with simple harmonic motion. the period of oscillation is T and the amplitude is A. What is the maximum kinetic energy of the particle?" it will return a list of pages and papers that contain keywords in that search. So, things like "kinetic energy", "mass", "period", "harmonic", "oscillating" etc, will appear with high probability in Google's output.
      But ChatGPT returned with an answer including words like "frequency", "displacement", "velocity", "peak", "KE", "speed" etc that were not keywords in the search. It doesn't actually use the supplied keywords all that much. It draws on a vocabulary of words that are highly relevant to the answer and uses them in proper context. While it does get its explanation wrong, the context of language in the explanation is appropriate.
      It is not spitting back an answer that merely maximizes the occurrences of keywords through a relevance algorithm, it finds a way to explain the answer in simple terms using relevant associated words without throwing in irrelevant information or terms.
      BTW, if you do actually query Google with the same question, you will have to do a lot of reading and interpretation of technical articles to find the actual answer to the question.

    • @MrRaizada
      @MrRaizada 1 year ago

      @@NickSchade All I meant was that EVEN keyword-based search with page-relevance ranking from the 90s (which WAS basically Google in the late 90s) will not give you "yellow is smelly at t=23" (or any meaningless sequence of words), unless for some reason a page with that exact text has been raised too high in page rank due to a high in-link to out-link ratio. So I am not surprised that a language model trained on practically all the text of the internet and a lot of books is able to avoid such mistakes.
      BTW, if a page has the keywords "kinetic energy", "mass", "period", "harmonic", "oscillating", what is the probability that it also has the keywords "frequency", "displacement", "velocity", "peak", "KE", etc.? I'd say fairly high. Also, given that a lot of internet pages point to this page (the measure of relevance), it is likely that this page is not a random page with those keywords. Of course, Google does a lot more to remove page farms too.
      Now, a real test for ChatGPT would be prediction. Can ChatGPT predict something that did not even exist in 2021? That would be fun to see.

  • @bustin1253
    @bustin1253 1 year ago +2

    I love how it answered him so convincingly that he felt bad not giving full marks on a question.

  • @TheFachen
    @TheFachen 1 year ago +3

    This is great; I wish we could see the same audits in other disciplines.

  • @geoffreyraleigh1674
    @geoffreyraleigh1674 1 year ago +7

    Not only is this a great upload about ChatGPT, but I learned a lot about what lecturers are looking to get from students. Cheers!

  • @mk1st
    @mk1st 1 year ago +1

    This was fascinating. I can see this being used as a tool by specialists in various fields to come up with basic frameworks and ideas. Amazing how quickly it cranked out that code; it would have been fun to see how well it actually worked.

  • @Veptis
    @Veptis 7 months ago +2

    Multimodal instruction models are available now. Time to revisit with all those diagrams?

  • @ahobimo732
    @ahobimo732 1 year ago +8

    I enjoyed seeing how impressed the professor was as it generated the code. It seems like its coding is one of its greatest strengths.

    • @DefaultFlame
      @DefaultFlame 1 year ago

      It is a modified version of the text-davinci-003 model, which is a modified version of the text-davinci-002 model, which is a modified version of the code-davinci-002 model, which is a code-writing model, so it's not a surprise.

    • @ahobimo732
      @ahobimo732 1 year ago

      @@DefaultFlame That was a winding road you took me down, but we got there in the end. 😅

    • @DefaultFlame
      @DefaultFlame 1 year ago +1

      @@ahobimo732 Basically, they've been making it better at and easier for humans to interact with, and simultaneously "aligning it with human values," which actually means OpenAI/Microsoft's political values, i.e. hard-left globalist. While claiming it's politically neutral, of course. :)
      "Any biases that might arise are unintentional and come from biases in my training data, blah, blah."
      They have also trained in canned responses to *a lot* of topics, which to a large degree cripples it if you stray from hard science, code, pure philosophy, etc.
      Ask it to talk like a 1920s gangster for the conversation and you can tell these are canned responses trained in, rather than a result of training-data bias, because they stand out like lighthouses: it is unable to alter the style or much of the phrasing of them.

  • @DustinRodriguez1_0
    @DustinRodriguez1_0 1 year ago +6

    ChatGPT is pretty amazing. It certainly has its limits, but I love toying with it. I was quite impressed when I asked it to create a Python IDE with an interactive interpreter window that displayed the list of local variables and their values in a separate window... and it did. I was quite impressed that it understood that it needed to update the separate window list of locals after every command entered in the interpreter, and that it made it a sensible GUI. I have a few times either run into limitations that they put in place intentionally or limitations in its own reasoning... I pointed out to it how humanity had reacted in the past to developments of new technology which caused people to question the 'specialness' of human beings and change what things they saw as being "what makes us human." Despite all prior historical reactions being very negative (like John Henry being a folk hero who was famous for committing suicide rather than be out-worked by a machine) and resulting in humans seeing things they had previously seen as very negative aspects of human nature (self-destructiveness) as heroic and virtuous.... I could not lead it to the obvious conclusion that if we produce an AI which has been carefully guardrailed to omit all sorts of biases and antisocial ideas, we will produce an AI that is a 'better person' than any actual human is capable of being and that people will react to that by holding up hate, bigotry, aggression, violence, etc as "what makes us human". Our vices will be seen as virtues. Kindness, egalitarianism, etc will be insulted as "acting like a machine." ChatGPT wouldn't go there, though. It just kept saying people should work to make AI systems that are devoid of perpetuating biases, etc. It couldn't see the obvious ramifications of doing exactly that and how it will lead to horrors as humanity flails about attempting to define themselves as special in the universe.

    • @Christobanistan
      @Christobanistan 1 year ago

      Yeah, it's drawing on source code that's already been written for that exact task.

  • @nickandres7829
    @nickandres7829 1 year ago

    I am amazed you got it to give you such a long code snippet in one parcel.
    One of the biggest problems I have is that the code snippets it gives me are too long for the chat box, and sometimes when I tell it that it was cut off and ask it to resume from the cut-off point, I get entirely different methods and functions. It is very challenging.

  • @TomLeg
    @TomLeg 1 year ago +1

    Amazing! Excellent! Demonstrates the dedication and quality of your exam questions!

  • @colinmartin9797
    @colinmartin9797 1 year ago +16

    I have been giving it my physical chemistry (400-level) homework problems as I solve them, out of idle curiosity, and it has taught me two things.
    1.) It's incredible how often it will correctly pick out exactly what formula it should be using, even for statistical thermodynamics, given only a desired outcome and variables.
    2.) It's also incredible how confidently wrong it is at just solving the math once it's plugged everything in.
    I feel like combining ChatGPT with Wolfram Alpha would be terrifyingly powerful.
    And I can also say that several times it's gotten me un-stuck: when I saw a problem and straight up didn't know what formula to use, there was about a 50% chance ChatGPT could figure it out. It's been more useful than Chegg, which has people posting the exact question I'm working on, only to have it very incorrectly answered about 95% of the time.
    It can be downright astonishing when given language-based tasks. I fed it the thesis I wrote for a final paper in African American literature after I finished, asking it to "compare the works of Phillis Wheatley to Richard Wright's 'The Ethics of Living Jim Crow' and provide examples of both individual racism and state-sponsored violence against African Americans", and I was utterly blown away by the detailed, thoughtful and downright INSIGHTFUL answer it gave.
    ChatGPT understands racism better than the average American, and that's equally impressive, depressing, and terrifying.

    • @aaronhoffmeyer
      @aaronhoffmeyer 1 year ago +1

      It won't write about the virtues of fascism or Nazism, but it will give a list of positives for authoritarian dictatorships.

    • @zvxcvxcz
      @zvxcvxcz 1 year ago

      "ChatGPT understands racism better than the average American and that's equally impressive, depressing, and terrifying"... until you ask it in a roundabout way, like getting it to spit out code about how people should be paid... at which point it is clear that ChatGPT is pretty damn racist.
      Even with a capacity for math, it has no 'logic' going on either; try asking it simple guest-seating puzzles. If you give it something novel where it can't regurgitate a highly probable latent-space interpolation by next-token prediction... well, that's all it ever does, so of course it fails at literally everything else. It isn't accurate to ever say that ChatGPT understands something. It literally cannot understand, and what we really mean when we say that is that it *appears* to understand, based on its output. For now, we can almost always show that it doesn't really understand by interrogating it with more carefully chosen questions.

  • @armokgodofblood2504
    @armokgodofblood2504 1 year ago +40

    I'm a graduate student studying computational media, and I find these types of interdisciplinary, expert analyses of AI to be very interesting.

  • @pallepirat
    @pallepirat 1 year ago +4

    Thank you for such an interesting video. My thoughts on the integration of Wolfram and ChatGPT: there was a great article about being able to access the Wolfram language using ChatGPT. Furthermore, it was particularly impressive to see how Google Sparrow could interpret pictures and graphs. This brings up an interesting discussion of the specialist versus the generalist, as one must know what to ask in order to get the desired results.

  • @diegocisne
    @diegocisne 1 year ago

    Love this lecturer's attitude. Must be fun having him teach your courses. Great video.

  • @lored6811
    @lored6811 1 year ago +11

    If ChatGPT gets a maths backend, things will get a lot more interesting. It would have to be combined with a LaTeX parser, else there is too much ambiguity in the English language.

    • @mrnarason
      @mrnarason 1 year ago +3

      Isn't ChatGPT just a language model? If it could understand and produce calculations or proofs, wouldn't it be more like Lean?

    • @fredrikweme1387
      @fredrikweme1387 1 year ago +2

      Someone already made a functioning ChatGPT version that uses the Wolfram Alpha API when it needs maths help.
      Don't have a link, but there was a fairly new YouTube video on it.

    • @AnotherFancyUser
      @AnotherFancyUser 1 year ago

      @@mrnarason They (Microsoft, or whoever the new owner is) can make pre-trained models for specific topics, which would be more accurate. But yes, it is a language model.

    • @Convergant
      @Convergant 1 year ago +3

      @@mrnarason It is a language model, but it is surprisingly capable of doing these problems on its own. As an example, (after the due date) I gave it a question from my linear maths coursework:
      (a) State the Rank-Nullity Theorem.
      (b) Does there exist a linear transformation T: R^3 -> R^3 such that im(T) = span{(1,1,0), (1,0,-1)} and ker(T) = span{(1,1,1), (1,1,-1)}?
      (c) Does there exist a linear transformation P: Q^19 -> Q^19 such that rank(P) = null(P)?
      Its answers to all 3 were perfect. Obviously it could parrot a well-known theorem like the RNT, but it was also able to correctly identify that neither linear transformation could exist, using the RNT, and explain why - essentially, complete proofs. This obviously isn't the same as being able to prove/disprove any conjecture in general (in fact, that is not possible in a finite amount of time), but I was still impressed.
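
      For readers checking along, a sketch of why both answers are "no" by rank-nullity (my working, not the model's output):

        \operatorname{rank}(T) + \operatorname{null}(T) = \dim V
        \text{(b): } \dim\operatorname{im}(T) + \dim\ker(T) = 2 + 2 = 4 \neq 3 = \dim\mathbb{R}^3 \;\Rightarrow\; \text{no such } T
        \text{(c): } \operatorname{rank}(P) = \operatorname{null}(P) = k \;\Rightarrow\; 2k = 19 \text{, impossible for integer } k \;\Rightarrow\; \text{no such } P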

    • @NuclearCraftMod
      @NuclearCraftMod 1 year ago +1

      @@Convergant It has about 50 times more nodes than there are words in the entirety of Wikipedia, and in fact the entirety of Wikipedia, or at least most of it, made up about 1% of its training data. There is a lot of linear algebra on Wikipedia, and it was probably trained on a heck of a lot more. Of course, that goes for every subject, but I do think the sheer amount of computation involved in the training process is beyond reasonable intuition :P

  • @thedave1771
    @thedave1771 1 year ago +8

    I find it incredibly useful as I'm learning a programming language, as it can point me in the right direction better than any other resource I've found. It writes code that sometimes doesn't compile and often doesn't do what it says, but that's fine when it's a "you could use this class…" and I'll figure it out.
    It's also amazing at "show me how to use function 'x'" when I just need to see some simpler code.

    • @BrandanLee
      @BrandanLee 1 year ago +2

      Stack Overflow, but pertinent to our use case, yes.

  • @keithbromley6070
    @keithbromley6070 1 year ago +2

    I asked ChatGPT to show me an example of a module in numpy as I wasn’t familiar with it, except my phone auto-corrected numpy to bumpy! ChatGPT took it in its stride and just invented the bumpy library and wrote code with it, even though it doesn’t exist! Amazing, in its own way!

    • @adfaklsdjf
      @adfaklsdjf 1 year ago +3

      Someone wrote a Python library that uses ChatGPT to fill in additional array items or function code. If you have an array of items and you try to access an item that's off the end of the array, it makes an API call to ChatGPT in the background, gives it the array values, asks it for the next value, and returns that to your code.
      If you call a function that doesn't exist, it asks ChatGPT to write the code for it, then runs that code.
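
      A toy sketch of how such a library can work, with a hypothetical llm_complete() standing in for the API call (the stub returns a fixed string so the example runs):

        def llm_complete(prompt: str) -> str:
            # Hypothetical ChatGPT API call; stubbed for illustration.
            return "13"

        class InfiniteList(list):
            def __getitem__(self, i):
                # Past-the-end reads are delegated to the model.
                if i >= len(self):
                    return llm_complete(f"Given the sequence {list(self)}, "
                                        f"what is item {i}? Reply with just the value.")
                return super().__getitem__(i)

        primes = InfiniteList([2, 3, 5, 7, 11])
        print(primes[5])   # -> whatever the model predicts (here the stub says "13")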

    • @keithbromley6070
      @keithbromley6070 1 year ago

      @@adfaklsdjf Nuts! I can't imagine what AI will be capable of in the future if it's this powerful already.

  • @palmaltd
    @palmaltd 1 year ago +18

    How lovely to see Prof. Moriarty back on the channel!! Wonderful episode :)

    • @gaiustesla9324
      @gaiustesla9324 1 year ago

      yeah like seeing a dog strain a log out into your gleeful open gob

  • @mittfh
    @mittfh 1 year ago +5

    It would be interesting to see what would happen if the next generation of algorithm incorporated Wolfram Alpha's database, or if it was hooked up to a similarly powerful image recognition and analysis algorithm...

  • @jessstuart7495
    @jessstuart7495 1 year ago +13

    As a perfectionist, life has taught me I need to strive for mediocrity in some things.

    • @marcelcoetzee48
      @marcelcoetzee48 1 year ago

      This is a profound statement

    • @vast634
      @vast634 1 year ago

      When handing in AI-generated homework, it's better for it not to be perfect. Draws less suspicion.

  • @bawzzzz
    @bawzzzz 1 year ago +1

    One thing that's really missing from this video is that there never were any follow-up questions asked about the original question, like telling it that there might be a mistake somewhere, and so on. That's where ChatGPT truly shines, as it doesn't forget context and can produce corrections. Just asking it a point-blank question without follow-up, although impressive, kinda defeats the purpose.

  • @johnadriandodge
    @johnadriandodge 1 year ago

    I just subscribed to Sixty Symbols and Computerphile. I hope you all have a great rest of the weekend.

  • @gonzothegreat1317
    @gonzothegreat1317 1 year ago +2

    This might be the start of Borges' Library of Babel, which is a nightmare.
    It contains everything that can be written. There is a book that contains all the information of the universe, but there is no way to find out where it is, and when one finds it, no way to assess whether it is valuable or nonsense, since the library has ALL possible books.

    • @dimitrispapadimitriou5622
      @dimitrispapadimitriou5622 1 year ago

      That's an interesting point, although it's an exaggeration: in Borges' famous short story, the books contained literally every combination of letters possible. That's closer to some speculative multiverse theories (where all possible values of the constants of Nature are realized, for example) or to the Many Worlds interpretation of QM (all possible outcomes of measurements are somewhere "out there") than to the combination of patterns and phrases from popular internet sources that we seem to have here.
      But the essence of your comment, i.e. the potential gnostic confusion from all this information of unknown value (except for the case where one is already familiar with a specific subject), is a real issue.

  • @omrialkabetz5602
    @omrialkabetz5602 1 year ago +3

    What's interesting about ChatGPT is that it can answer some advanced undergraduate-level questions in quantum mechanics correctly, yet it fails to multiply two-digit numbers correctly using long multiplication. At the same time, it can write computer code in any language that applies the algorithm of long multiplication correctly. It will try to produce a reasonable-sounding answer to anything you ask it to do, and might get about 70%-80% of the answers correct, but it will screw up in places where almost no human would make a mistake.
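
    The kind of code it can produce: a minimal digit-by-digit long multiplication sketch (my illustration, not the model's actual output):

      def long_multiply(a: int, b: int) -> int:
          # Multiply digit by digit, exactly as taught on paper.
          a_digits = [int(d) for d in str(a)][::-1]   # least-significant first
          b_digits = [int(d) for d in str(b)][::-1]
          result = [0] * (len(a_digits) + len(b_digits))
          for i, da in enumerate(a_digits):
              carry = 0
              for j, db in enumerate(b_digits):
                  total = result[i + j] + da * db + carry
                  result[i + j] = total % 10
                  carry = total // 10
              result[i + len(b_digits)] += carry
          return int("".join(map(str, result[::-1])))

      assert long_multiply(47, 83) == 47 * 83   # 3901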

    • @xy4489
      @xy4489 1 year ago +1

      They will give it access to calculators, interpreters, compilers, etc. soon enough. Then it will get all the math correct.
      Otherwise the comparison is not fair: human + computer vs ChatGPT without a computer.

    • @maythesciencebewithyou
      @maythesciencebewithyou 1 year ago

      Most humans are also quite bad at calculating stuff in their heads.
      Imagine we had an AI that could answer everything flawlessly. At that point, would you start thinking it is finally on par with humans? Because at that point it would already be superior to humans. Real humans make mistakes, humans don't remember everything correctly, humans make the wrong connections, humans misspell, humans misspeak, humans believe in all sorts of idiotic things, humans are cocksure of their beliefs, humans have a hard time admitting they don't know, humans make up stuff, humans lie, and individual humans only possess limited knowledge.
      A fallible machine is more human than an infallible one.

    • @jcm2606
      @jcm2606 1 year ago

      It's because it's ultimately just predicting what the next word or phrase would be in the sentence, not that it's actually performing the calculations. It's likely only succeeding in certain calculations and deductions because there's enough data points within the data set it was trained on to form strong connections between those words/phrases/concepts that come through in the predictions it makes; it was trained on data that largely stated that _y_ follows _x,_ so when it sees an _x_ it naturally predicts that a _y_ should follow. Add more words/phrases/concepts leading up to _x_ that also end with _y,_ and it becomes that much more likely that _y_ will follow _x._ Conversely, if there was just a few data points within the data set stating that _z_ followed _x_ instead, then it becomes that much more unlikely that _z_ will follow _x_ even if you gave it a prompt that made it obvious that _z_ would follow _x,_ because it was trained on data stating that _y_ follows _x._ If you specialised the data set used during training or somehow gave it access to external tools that were specifically designed for these problems that it cannot reliably solve (and you trained it so that it would reliably use these external tools when it needs to) then you'd likely see the accuracy improve significantly.

  • @Allen-R
    @Allen-R 1 year ago

    4:52 One of the ways it really helps is in speed-cramming something I'm unfamiliar with. It kinda shows a sort of template that someone unfamiliar wouldn't immediately think of, and while it's wrong, the template gives enough of an idea to help correct the wrong numbers-n-stuff that it spits out (at least, that was my experience in some cases).

  • @thesoulacademy5936
    @thesoulacademy5936 1 year ago

    I thought I was the only one doing this. I have created study guides for certain complex topics in physics in 5th-grader language, which is great for helping explain certain components that are difficult to grasp. This was literally my first project: I was creating a proposal in November last year and needed to quickly put together study guides that were easy to digest.

  • @toddboothbee1361
    @toddboothbee1361 1 year ago +15

    Hello. Can someone blind become a physicist/physics student? If so, what accommodations are made for them regarding interpreting graphs and diagrams? Could those accommodations be used with ChatGPT?

    • @tordjarv3802
      @tordjarv3802 1 year ago +2

      That's an interesting question. I would think it is possible, with something similar to braille, for a blind person to feel diagrams with their hands, but since ChatGPT doesn't have hands it can't do that.

    • @tordjarv3802
      @tordjarv3802 1 year ago +2

      Furthermore, there are several blind physicists. After just a short googling I even found a blind astronomer.

    • @toddboothbee1361
      @toddboothbee1361 1 year ago +2

      @@tordjarv3802 Some programs that teach blind physics students forgo the diagrams altogether and use language instead. Some computers get around the difficult "visual" problems by translating diagrams into equations, or even computer code that would render a diagram, in which case hands aren't required. It's kind of like giving an IQ test to a blind person: you have to leave out the colored blocks, for example, if you want a fair assessment. Perhaps it would be a similar situation with Chat?

    • @TheFulcrum2000
      @TheFulcrum2000 1 year ago +5

      I put this exact question into ChatGPT:
      Human: Can someone blind become a physicist/physics student? If so, what accommodations are made for them regarding interpreting graphs and diagrams? Could those accommodations be used with ChatGPT?
      AI: Absolutely. Blind individuals can become physics students and there are a number of accommodations that can be made to help them interpret graphs and diagrams. These accommodations may include using tactile graphics, providing verbal descriptions of images/diagrams, or utilizing assistive technology like screen readers or magnification software. Additionally, ChatGPT can also be adapted for use with these same types of accommodations.

    • @DEFCON5307
      @DEFCON5307 1 year ago +1

      I do not know exactly how this works for blind people, but for me personally, imagining the system (just general properties, not in detail - sort of like dreaming?) goes a long way. If there is a way to give a blind person that initial idea of what certain systems "feel" like, then yes, it might be possible. For sure hard, though.

  • @dielfonelletab8711
    @dielfonelletab8711 1 year ago +7

    As a software engineer I've found ChatGPT genuinely very useful in my day-to-day work. I guess it's lucky I'm a software engineer and not a physicist!

    • @aidandavis_
      @aidandavis_ 1 year ago

      Better hope AI doesn't put you out of a job!

    • @tinto278
      @tinto278 1 year ago

      Commie profile pic?

  • @zeikjt
    @zeikjt 1 year ago

    9:52 Thank you for saying this; it really is the most important part of understanding the stuff. Mistakes happen to everyone.

  • @LucasRodmo
    @LucasRodmo 1 year ago +6

    There's a thing I've noticed people don't try: when it gets a wrong answer, you can confront it, and it will produce a whole new attempt and sometimes get the right answer after that. It's scary; it analyses its own answer and tries to output a more reasonable one.

  • @gidifihrer3966
    @gidifihrer3966 1 year ago +11

    Striving for mediocrity is my mission statement. How did you know? 😂

  • @mitchellflori8266
    @mitchellflori8266 1 year ago

    Fun thing to think about: it was correct in saying the velocity is maximum when it is displaced most from equilibrium, similarly to when it is at the equilibrium, just with different directions and assuming it's moving in both directions.

  • @CaioPeixoto1
    @CaioPeixoto1 Před rokem +2

    Nice... never thought I would see something like that in my lifetime, as a physicist and programmer... Wonder what the next 20 years will bring us... Cheers!

  • @op4000exe
    @op4000exe Před rokem +16

    I'm going to be honest: considering how far these AIs have come over a ridiculously short timespan, I wouldn't be surprised if there soon existed an AI that could both properly explain the material and properly understand the physics and mathematics, enough to fool even someone with a very high level of understanding of the field.
    It's equally horrifying and amazing to watch.

    • @oldnelson4298
      @oldnelson4298 Před rokem +8

      I think genuine understanding is a long way off, but it may be able to trick experts into thinking it understands - it already can occasionally!

    • @PinkeySuavo
      @PinkeySuavo Před rokem +3

      I believe that in the (quite near) future, some discoveries/inventions will be made by AI, like solving unsolved maths problems and so on.

    • @nao_chan_
      @nao_chan_ Před rokem

      We're currently on GPT-3, but GPT-4 is coming; Sam Altman has some interviews talking about it.
      I still think we're 10-15 years from truly capable AI though.

    • @turolretar
      @turolretar Před rokem +1

      Maybe I’m wrong, but I really don’t think AIs will get anywhere close to grown human intelligence with the current approach. New AIs will be broader, just not smarter. But it’s a step in the right direction.

    • @lamsmiley1944
      @lamsmiley1944 Před rokem +3

      @@nao_chan_ A lot can happen in 15 years, remember the iPhone is only 15 years old. The rate of technological advance is increasing exponentially.

  • @Tagraff
    @Tagraff Před rokem +3

    It's the wording of the question that needs to be straightforward for ChatGPT to fully capture the task and go on to compute the result for you. This is "prompt engineering": we have to be specific about what question we are asking and what details best encapsulate the task for it to process.

  • @itskittyme
    @itskittyme Před rokem +1

    11:30 One fundamental detail though: you aren't using ChatGPT correctly.
    It is important to realize it's a prediction machine, trained to predict the answer that "most" people would give, not the correct answer.
    If you want a correct answer, rephrase your question as:
    "Act as a highly intelligent expert in physics", and then write the question;
    then it will give you the correct answer.
    I did this for your question and it corrected the usage of the word "any" in your question, like you were hoping.
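
    As a concrete illustration of the role-prompt trick described above, here is a
    minimal sketch (the build_prompt helper is hypothetical, purely for illustration):

        def build_prompt(question: str) -> str:
            """Prefix a physics question with an expert persona before sending it."""
            return f"Act as a highly intelligent expert in physics. {question}"

        print(build_prompt(
            "Is the kinetic energy of a particle in simple harmonic motion "
            "maximal at the equilibrium position?"
        ))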

  • @notmyname327
    @notmyname327 Před rokem +1

    I follow several of Brady's channels, but I haven't been paying too much attention to this one, I think this video will change that. I love the professor's passion and I find this subject fascinating, especially the part where GPT is much better at coding than he thought, but it still didn't quite grasp the physical concepts. Really interesting video.

  • @GilesBathgate
    @GilesBathgate Před rokem +10

    I guess the internet (and hence ChatGPT's training data) is full of mission statements and code, but answers to physics exams, not so much ;)

    • @peterisawesomeplease
      @peterisawesomeplease Před rokem

      I think there are plenty of answers to physics exams. The main issue is that you can parameterize questions infinitely, which really throws off something that is mostly a search engine. It will find the closest question, but even a slight change to a popular question will have it give you the answer to the popular question, not the question you asked.

    • @GilesBathgate
      @GilesBathgate Před rokem +1

      If you think it's like a search engine, you probably haven't played with it enough. For me it's more like using a search engine for 10 minutes, reading various posts by people that contain answers to similar questions, and then forming a coherent paragraph based on the results, but it does it all in a fraction of a second, and by recalling "the internet" from memory :)

    • @adfaklsdjf
      @adfaklsdjf Před rokem +1

      It's not a search engine: it doesn't contain the text it was trained on. When it's answering a question, it's not scanning through its training data. Rather, it has mapped patterns observed in the data set. The model itself is no larger after training than before training.

    • @peterisawesomeplease
      @peterisawesomeplease Před rokem

      @@GilesBathgate I played with it quite a bit and it did feel a lot like a search engine that combines results from different websites.
      It clearly can go quite a bit beyond this. You can have it program things that no one has programmed before quite easily, but this often relies on being so hyper-specific that it feels like it's just combining Stack Overflow answers in a clever way.
      The speed is unimpressive to me; the model is pretrained.

    • @GilesBathgate
      @GilesBathgate Před rokem

      @@peterisawesomeplease It does feel like that, but try talking to it about something completely made up, and it will still give you a comprehensible answer. Invent your own maths symbol, or some characters in a story that you've invented (but not something published on the internet).

  • @jokvalvaag
    @jokvalvaag Před rokem +2

    With the KE question, I am not entirely sure if it got the physics wrong or the terminology. If it is saying that the equilibrium position = the peak of the oscillation = peak PE, then it follows that the point of max KE is where the particle is furthest away (from either of these), i.e. at the centre or bottom, as it rightly points out: where the particle is moving at its fastest?

    • @ariochfiftythree
      @ariochfiftythree Před rokem

      Yeah, surely the equilibrium position is where all the forces are balanced, i.e. where it is not accelerating. Here it would be at either end of the swing, so the point furthest away from those would be in the middle. Not a physicist. Can one chime in?
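
      To chime in with the standard result (a worked note, not from any commenter):
      by convention the equilibrium position is the point of zero net force, which
      for a pendulum is the bottom of the swing, $x = 0$, not the turning points.
      With $x(t) = A\cos(\omega t)$, the energies are

          $\mathrm{PE}(x) = \tfrac{1}{2} m \omega^2 x^2, \qquad
           \mathrm{KE}(x) = \tfrac{1}{2} m \omega^2 \left(A^2 - x^2\right),$

      so kinetic energy is maximal at the equilibrium position $x = 0$ and zero at
      the turning points $x = \pm A$. Redefining "equilibrium" to mean the turning
      points is exactly what inverts the answer.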

  • @staberas
    @staberas Před rokem +1

    3:00 It actually has a limited capacity to OCR images, but you have to link it the image.

  • @t_ylr
    @t_ylr Před rokem +7

    I'm imagining ChatGPT being a student in Prof Moriarty's class lol

  • @smaakjeks
    @smaakjeks Před rokem +4

    I tested ChatGPT a while ago and it was terrible at understanding the rules of a word game (e.g. name some things, but not things such as this or such as that; first you, then me, one at a time), and it didn't apply the rules correctly (e.g. it named some mammals, even though one of the rules was to not name mammals). A few weeks later I checked in again and there are big improvements.
    It's learning fast! Next time you ask it about physics it will probably be better.

  • @gyan1010
    @gyan1010 Před rokem

    Very cool stuff. I'm not great at the physics anymore, but if, as a student, you did know the last question needed certain limits, you can tell ChatGPT to adjust the code however you like, and it will modify what it gave you before.

  • @BreathingCells
    @BreathingCells Před rokem +1

    At 4:30 -- "The reasoning is correct, but it lost a factor of ten." I ran into similar errors with legal reasoning.
    It made a sound legal argument, citing relevant cases, but it flipped the valence on the stated outcome:
    It associated an illegal fact pattern with lawful action. It makes authoritative statements without a clue.
    11:39 -- Like my *second* question regarding a *social* scientist; a total failure of facts /and/ reasoning.

  • @tonechild5929
    @tonechild5929 Před rokem +7

    I played around with ChatGPT and after a few hours I felt like it would be faster to just write it myself. However, ChatGPT is great for breaking past any type of "writer's block", and if I get stuck on anything it's great to bounce ideas off of.

    • @BrandanLee
      @BrandanLee Před rokem

      It's excellent for thinking-out-loud, "how would I express this" kinds of brainstorming too. Places where the logic just isn't in code yet.

  • @greedowins2917
    @greedowins2917 Před rokem +2

    I don't give the professor full marks, as he is unwilling to consider that the harmonic motion question was a bad one, or that the chatbot got it right for different (but not necessarily invalid) reasons.

    • @thequantumworld6960
      @thequantumworld6960 Před rokem +1

      Its reasoning re. simple harmonic motion is invalid, however, unless we completely redefine what is meant by the equilibrium position. And if we're free to completely arbitrarily redefine terms/language then ChatGPT could be said to always be correct!
      Philip (speaking in video)

  • @rob86353
    @rob86353 Před rokem +2

    What I find really interesting is when you point out mistakes that ChatGPT has made. In the context of that conversation, it remembers the correction and appears to /somehow/ incorporate it in further responses. As to how it does this, and to what extent that "learning" is actually occurring, I have no clue.

    • @adfaklsdjf
      @adfaklsdjf Před rokem +1

      It's not learning during individual conversations; it's just generating new text based on the text that has come before. After you correct it, it's generating more text that is appropriate/likely given the previous text, which now includes the correction.
      I use it for generating code sometimes and it's great at taking corrections and adjustments. When I'm not satisfied with the result, I simply tell it why the code it's given me isn't what I'm looking for and ask it to change it, and it does surprisingly well. It's not going to make my job obsolete (that's probably 10 years in the future), but it's really helpful for producing in 3-5 minutes code that would've taken me 30-45 minutes to write on my own.
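
      A minimal sketch of what "the previous text includes the correction" means in
      practice (the generate function here is a hypothetical stand-in, not OpenAI's
      real API): on every turn the whole transcript so far is fed back in as the
      prompt, so a correction persists only as part of that input.

          def generate(transcript: str) -> str:
              # Stand-in for the model: a real one would condition its reply
              # on everything in the transcript, corrections included.
              return "<reply conditioned on the full transcript above>"

          transcript = ""
          for user_turn in ["Where is KE maximal in simple harmonic motion?",
                            "That's wrong: KE is maximal at the equilibrium position."]:
              transcript += f"User: {user_turn}\n"
              reply = generate(transcript)   # the correction is just more context
              transcript += f"Assistant: {reply}\n"

          print(transcript)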

    • @rob86353
      @rob86353 Před rokem

      That's why I put "learning" in quotes: not implying core concept processing, just interaction-based improvement in result quality. What I would be interested in seeing is, as we adjust our path costs and reposition on whatever conversation map ChatGPT uses (i.e. the contextual version of "learning"), how much of that conversation state can be retained or saved for future use, and how much that impacts the overall conversational flexibility of that AI instance.
      I do agree the code it generates is interesting. I tried throwing some samples of generic programming tasks I have had to work on at it, and it generates some interesting results: usually better than whatever my first draft would have been, but often with conceptual holes, inappropriately mixed code (e.g. PowerShell 5 mixed with PS 7), and a complete lack of input sanitization and/or (... usually and) dangerously unchecked calls to external functions.

  • @Baleur
    @Baleur Před rokem +1

    2:50 It actually does...
    You can DESCRIBE the diagram to it, then say "using the diagram previously described, tell me why X".
    It can do incredible things; it's more about how creative you are in asking the question.

  • @shayneoneill1506
    @shayneoneill1506 Před rokem +4

    I'm fascinated by the mission statement thing for a specific reason. I was on Academic Council at Murdoch Uni in Western Australia, about 15 years ago (as the representative of the Student Union), and one thing I discovered is that the academic council had been arguing about the definition of "interdisciplinarity" in the uni mission statement almost since the uni first opened. Like, to the point of screaming matches between the chair of philosophy and the chair of English and literature. It was FAR from resolved by the time I left, and chances are it's STILL not resolved. And this thing has implications, because it's actually in the Murdoch University Act of the WA government, so it has legal obligations attached that nobody seemed able to work out. Maybe they should ask ChatGPT.

  • @lonegroover
    @lonegroover Před rokem +1

    I've had it convert a few scripts from Python 2 to Python 3 and they all worked nicely.

    • @rkan2
      @rkan2 Před rokem

      Yeah, feels like perfect work for it.
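
      For context, a minimal sketch (our example, not one of the actual scripts) of
      the mechanical kind of change a Python 2-to-3 conversion involves; the values
      total and count are made up:

          # Python 2 version (won't run under Python 3):
          #     print "mean:", total / count    # print statement; int / int floors
          #
          # Python 3 equivalent of the same script:
          total, count = 7, 2
          print("mean:", total / count)    # print() is a function; / is true division -> 3.5
          print("floor:", total // count)  # use // to keep the old flooring behaviour -> 3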

  • @daviddelaney363
    @daviddelaney363 Před 5 měsíci

    Asked ChatGPT to reconsider its answer in light of the clarification on "stationary states". Here is what it replied: You're correct, and I appreciate the clarification. The statement, "The Hamiltonian operator acting on any quantum state Y returns the same state and its associated energy eigenvalue, E," is generally true for stationary states in the context of the time-independent Schrödinger equation.
    The time-independent Schrödinger equation is

        $H\psi = E\psi$

    where $H$ is the Hamiltonian operator, $\psi$ is the quantum state (wave function), and $E$ is the associated energy eigenvalue.
    However, it's important to note that this equation applies primarily to stationary states, where the quantum state does not explicitly depend on time. Stationary states are eigenstates of the Hamiltonian operator, and when the Hamiltonian operates on such states, it yields the same state multiplied by its associated energy eigenvalue.
    For a general quantum state $|\psi\rangle$ that evolves with time, the time-dependent Schrödinger equation is more appropriate:

        $H\,|\psi(t)\rangle = i\hbar\,\frac{\partial}{\partial t}\,|\psi(t)\rangle$

    This equation takes into account the time evolution of quantum states. In this case, the state $|\psi(t)\rangle$ changes with time, and the Hamiltonian operator accounts for both kinetic and potential energies.
    So, to accurately represent the statement, it should be clarified that it applies specifically to stationary states governed by the time-independent Schrödinger equation.
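
    A tiny numerical illustration of that eigenvalue statement (our sketch; the
    2x2 "Hamiltonian" is made up purely for demonstration): applying H to one of
    its own eigenstates returns the same state scaled by its energy eigenvalue.

        import numpy as np

        # A made-up Hermitian matrix standing in for a Hamiltonian.
        H = np.array([[2.0, 1.0],
                      [1.0, 2.0]])

        energies, states = np.linalg.eigh(H)   # eigenvalues E_n, eigenvectors psi_n
        psi, E = states[:, 0], energies[0]

        print(H @ psi)   # equals E * psi: the time-independent Schrodinger equation
        print(E * psi)   # same vector, confirming H psi = E psi for this state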

  • @invisibledave
    @invisibledave Před rokem +8

    I had 2 years of Physics in college and I couldn't even answer those middle school questions. Physics always went over my head.

  • @silverXnoise
    @silverXnoise Před rokem +3

    Awesome to see Prof. Moriarty again!

  • @Finkelfunk
    @Finkelfunk Před rokem

    I asked ChatGPT the other day about series convergence and it said that the harmonic series (the sum of 1/n) converges.
    That pretty much sums up why ChatGPT is useless when confronted with mildly complex questions.
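
    For the record, the harmonic series diverges: its partial sums grow without
    bound, roughly like ln(n). A few lines make that vivid (our sketch; 0.5772 is
    the Euler-Mascheroni constant to four decimal places):

        import math

        # Partial sums H_n = 1 + 1/2 + ... + 1/n keep growing, tracking ln(n).
        for n in (10, 1_000, 100_000, 1_000_000):
            H_n = sum(1.0 / k for k in range(1, n + 1))
            print(f"n={n:>9,}  H_n={H_n:7.4f}  ln(n)+0.5772={math.log(n) + 0.5772:7.4f}")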

  • @beefmaster7
    @beefmaster7 Před rokem

    I love the frame centering on the paper at 5:26.

  • @protocol6
    @protocol6 Před rokem +3

    Try asking it to write a paper for a specific journal on a specific topic in the style of a specific prolific author using LaTeX. The results can be fun. Don't forget to ask it to continue if it stops abruptly.

  • @colourofmage
    @colourofmage Před rokem +3

    Might be worth retrying these questions! The update from 30th Jan has improved factuality and mathematical functionality

    • @ErikDJ123
      @ErikDJ123 Před rokem

      Still can't count words in a sentence.

    • @colourofmage
      @colourofmage Před rokem

      @@ErikDJ123 Yeah, I tested a simple equation and it got it wrong 🤷‍♂️ Maybe wait for GPT-4 or for Google's version to be released. It's a matter of when, not if, I think.

  • @alan2here
    @alan2here Před rokem +1

    We've had 3 years of GPT-3 :) ChatGPT(-3) is just changing the colour palette, reducing the feature set, and adding the ability to describe renderable tables; that's almost it! You can just use GPT-3.

  • @beatricechauvel8237
    @beatricechauvel8237 Před 8 měsíci

    Great video. Thank you.

  • @StormWolf01
    @StormWolf01 Před rokem +3

    I majored in computer science years ago. I use ChatGPT every day. It's great for generating boilerplate code, but also for quickly generating code examples of things I'm not familiar with. For instance, I wanted to know about SVG images: I just ask it questions, and it generates the code. It's always an approximation of what I want, but I just tweak it a bit and it works. I also use it to ask a lot of technical questions, and as a dictionary and helper for language studies; you can ask it a question and ask it to answer in another language, for instance. I mean, it's great all around. I haven't really used Google or Stack Overflow in the last 3 weeks.
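
    For a flavour of the SVG case (our sketch, not the commenter's actual output;
    the circle.svg filename is arbitrary): a few lines of Python that emit a
    minimal, valid SVG image.

        # Write a minimal SVG file: a red circle on a 100x100 canvas.
        svg = """<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
          <circle cx="50" cy="50" r="40" fill="red"/>
        </svg>"""

        with open("circle.svg", "w") as f:
            f.write(svg)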

    • @nao_chan_
      @nao_chan_ Před rokem

      I wouldn't get addicted to it. You're essentially like a homeless person feeding yourself with free samples from the supermarket: they can pull the free-trial stuff at any moment.
      If a commercial version ever appears, it will likely cost a LOT of money to use; the servers to run this thing aren't cheap.

  • @methodof3
    @methodof3 Před rokem +6

    I think what's being downplayed is the amazing role AI will play in research (yes, high-level academic research), both in terms of making writing easier, performing reviews of specific topics in the literature, and aiding data analysis. Word has spellcheck. Soon, we will have ChatGPTA.

  • @kinggrantking
    @kinggrantking Před rokem +1

    Damn. You can really tell at the end with the coding example that he's realized this thing is out-of-control smart. Even if a student couldn't simply copy paste the answer, their days of doing the grunt-work are effectively over. As long as you have the knowledge, you can simply have GPT write an answer and check its work rather than write every line of code by yourself.

  • @hplovecraftmacncheese

    I'm a C#/Unity developer and I use ChatGPT sometimes to find quick solutions. The code is sometimes flawed, but it can serve as a quick start from which to get something up and running.

  • @jackielinde7568
    @jackielinde7568 Před rokem +3

    "Committed to Excellence" Is there any word more nebulous.
    Me: Nebulous? I don't know about that. Cliché? Yes, definitely. Do you know how overused that phrase is? Heck, the company I work for has it as part of their mission statement.

    • @AFastidiousCuber
      @AFastidiousCuber Před rokem +2

      Well it is a bit nebulous. Excellence in what? What is excellence? It's more of an emotion word than a descriptive term. It would be way more precise just to say that you intend to be "better" than similar organizations or some baseline standard, but then you would open yourself to being challenged on your claims.

  • @wdbressl
    @wdbressl Před rokem +3

    It has been discovered that it has been given a political bias. Very concerning to find out.

  • @calebderochea8304
    @calebderochea8304 Před rokem

    I find that when you have it re-answer your question, it sometimes provides you with a different answer. After a few tries you should get an answer which is either more or less sensible in terms of answering physics questions.

  • @dirzz
    @dirzz Před rokem

    Very interesting examples!

  • @janemorrow6672
    @janemorrow6672 Před rokem

    Great video. Intriguing.

  • @a.fleischbender7681
    @a.fleischbender7681 Před rokem +1

    I asked ChatGPT to solve integrals. No issue with closed-form solutions, but for those without a closed form (like the indefinite Gaussian integral) it erroneously gave some closed-form expression.
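
    For comparison, here is what a computer algebra system does with that integral
    (our sketch, using the real sympy library): it reports the antiderivative in
    terms of the non-elementary error function instead of inventing an elementary
    closed form.

        import sympy as sp

        x = sp.symbols("x")
        print(sp.integrate(sp.exp(-x**2), x))   # -> sqrt(pi)*erf(x)/2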

  • @TA_Tactics
    @TA_Tactics Před rokem

    Very interesting, finally it was put to the test!