Torvalds Speaks: Impact of Artificial Intelligence on Programming

  • Added 16 Jan 2024
  • 🚀 Torvalds delves into the transformative influence of Artificial Intelligence on the world of coding.
    🚀 Key Topics:
    * Evolution of programming languages in the era of AI.
    * Enhancements in development workflows through machine learning.
    * Predictions for the future of software development with the integration of AI.
  • Science & Technology

Comments • 1.5K

  • @modernrice
    @modernrice 4 months ago +5716

    These are the true Linus tech tips

    • @ginogarcia8730
      @ginogarcia8730 4 months ago +29

      hahaha

    • @rooot_
      @rooot_ 4 months ago +22

      so true lmao

    • @denisblack9897
      @denisblack9897 4 months ago +124

      This!
      Hate that lame wannabe dude pretending to know stuff

    • @authentic_101
      @authentic_101 4 months ago +4

      😅

    • @viktorsincic8039
      @viktorsincic8039 4 months ago +146

      @@denisblack9897 don't hate anyone, man; the guy is responsible for countless kids getting into tech, and people tend to sort out the educational "bugs" on the way up :)

  • @alakani
    @alakani 4 months ago +1509

    Man Linus is always such a refreshing glimpse of sanity

    • @JosiahWarren
      @JosiahWarren 4 months ago +4

      His argumet was bugs are shallow .we have compliers for shallow bugs llm can gind not so shallow .he is not the brightest

    • @rickgray
      @rickgray 4 months ago +137

      ​@@JosiahWarren Try that again with proper grammar chief.

    • @Ryochan7
      @Ryochan7 4 months ago +4

      He let his own kernel and dev community get destroyed. Screw him. RIP Linux

    • @alakani
      @alakani 4 months ago +64

      @@Ryochan7 Fun fact, fMRI studies show trolling has the same neural activation patterns as psychopaths thinking about torturing puppies; it's very specific, right down to the part where they vacillate between thinking it's their universal right, and that they're helping someone somehow

    • @Phirebirdphoenix
      @Phirebirdphoenix 4 months ago +2

      ​@@alakani and some people who troll do not think about it at all. they're easier to deal with if we aren't ascribing beneficial qualities to them.

  • @the.elsewhere
    @the.elsewhere 4 months ago +1540

    "Sometimes you have to be a bit too optimistic to make a difference"

    • @bartonfarnsworth7690
      @bartonfarnsworth7690 4 months ago +15

      -Stockton Rush

    • @harmez7
      @harmez7 4 months ago

      it's actually originally from William Paul Young, The Shack @@bartonfarnsworth7690

    • @Martinit0
      @Martinit0 4 months ago +3

      Understatement of the day, LOL.

    • @MommysGoodPuppy
      @MommysGoodPuppy 4 months ago +8

      hell of a motivational quote

    • @harmez7
      @harmez7 4 months ago +4

      that is also what a scammer wants from you.
      don't put everything that looks fancy into your mind, kiddo.

  • @lexsongtw
    @lexsongtw 4 months ago +1838

    LLMs write way better commit messages than I do and I appreciate that.
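
    The sort of message meant here, sketched in the Conventional Commits style (the change it describes is invented for illustration):

        fix(parser): handle empty input without raising

        The tokenizer assumed at least one line and indexed past the end
        of an empty buffer. Return an empty token stream instead, and add
        a regression test for the empty-input case.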

    • @SaintNath
      @SaintNath 4 months ago +190

      And they actually comment their code 😂

    • @Sindoku
      @Sindoku 4 months ago +77

      @@SaintNath comments are usually bad though, but they're good if you're learning, I suppose; they can also be out of date and thus misleading

    • @steffanstelzer3071
      @steffanstelzer3071 4 months ago +273

      @@Sindoku I hope your comment gets out of date quickly, because it's already misleading

    • @NetherFX
      @NetherFX 4 months ago +135

      @@Sindoku While I get your point, comments are definitely a good thing.
      Yes code should be self-explanatory, and if it isn't you try your best to fix this. But there's definitely cases where it's best to add a short comment explaining why you've done something. It shouldn't describe *what* but *why*

    • @user-oj9iz4vb4q
      @user-oj9iz4vb4q 4 months ago +52

      @@NetherFX That's the point: a comment is worthless unless it touches on the why. A comment that just discusses the what is absolute garbage, because the code already documents the what.
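
      A minimal sketch of the distinction (the function and scenario are invented for illustration):

          def next_timeout(timeout: float) -> float:
              # "What" comment (redundant): double the timeout.
              # "Why" comment (useful): the upstream service rate-limits
              # bursts, so exponential backoff avoids hammering it while
              # it recovers.
              return timeout * 2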

  • @Hobbitstomper
    @Hobbitstomper 4 months ago +353

    Full interview video is called "Keynote: Linus Torvalds, Creator of Linux & Git, in Conversation with Dirk Hohndel" by the Linux Foundation channel.

    • @mercster
      @mercster 4 months ago +1

      Where was this talk held?

    • @DavidHnilica
      @DavidHnilica 3 months ago +12

      thanks "so" much! It's pretty appalling that these folks don't even cite the source

    • @kurshadqaya1684
      @kurshadqaya1684 3 months ago +1

      Thank you a ton!

    • @KaiCarver
      @KaiCarver 3 months ago

      Thank you czcams.com/video/OvuEYtkOH88/video.html

    • @captaincaption
      @captaincaption 2 months ago

      Thank you so much!

  • @porky1118
    @porky1118 4 months ago +559

    1:06 "Now we're moving on from C to Rust" This is much more interesting than the title. I always thought Torvalds viewed Rust as an experiment.

    • @feignit
      @feignit 4 months ago +82

      Rust just isn't his expertise. It's going in the kernel, he's just letting others oversee it.

    • @SecretAgentBartFargo
      @SecretAgentBartFargo 4 months ago +54

      @@feignit It's already been in the mainline kernel for a while. It's very stable and Rust just works really well now.

    • @yifeiren8004
      @yifeiren8004 3 months ago +6

      I actually think Go is better than Rust

    • @speedytruck
      @speedytruck 3 months ago +183

      @@yifeiren8004 You want a garbage collector running in the kernel?

    • @catmanmovie8759
      @catmanmovie8759 3 months ago +11

      @@SecretAgentBartFargo Rust isn't even close to stable.

  • @ficolas2
    @ficolas2 4 months ago +721

    I have had Copilot suggest an if statement fixing an edge case I hadn't contemplated enough times to see that it could really shine at fixing obvious bugs like that.

    • @doodlebroSH
      @doodlebroSH 4 months ago +111

      Skill issue

    • @antesajjas3371
      @antesajjas3371 4 months ago +287

      @@doodlebroSH if you always think of every edge case in all of the code you write, you are not programming that much

    • @ficolas2
      @ficolas2 4 months ago

      @@doodlebroSH I can tell you are new to programming and talking out of your ass just by that comment.

    • @markoates9057
      @markoates9057 4 months ago +23

      @@doodlebroSH :D yikes

    • @turolretar
      @turolretar 4 months ago +3

      @@antesajjas3371 I think you misspelled edge

  • @MethodOverRide
    @MethodOverRide 4 months ago +333

    I am a senior software engineer and I sometimes use ChatGPT at work to write PowerShell scripts. They usually provide a good enough start for me to modify to do what I want. That saves me time and lets me create more scripts to automate more. It's not my main programming task, but it definitely saves me time when I need to do it.

    • @falkensmaize
      @falkensmaize 4 months ago +44

      Same. ChatGPT is great for throwing together a quick shell or python script to do boring data tasks that would otherwise take much longer.
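
      The kind of throwaway script meant here, as a sketch (the file names are placeholders):

          import csv

          # Boring data task: drop duplicate rows from a CSV, keeping the
          # first occurrence of each row.
          seen = set()
          with open("input.csv", newline="") as src, \
               open("deduped.csv", "w", newline="") as dst:
              writer = csv.writer(dst)
              for row in csv.reader(src):
                  key = tuple(row)
                  if key not in seen:
                      seen.add(key)
                      writer.writerow(row)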

    • @alakani
      @alakani 4 months ago +18

      Yep, saves me so much time with data preprocessing, and adds nice little features that I wouldn't normally bother with for a 1 time use throwaway script

    • @jsrjsr
      @jsrjsr 4 months ago +10

      Quit your job.

    • @alakani
      @alakani 4 months ago +28

      @@jsrjsr And light a fart?

    • @jsrjsr
      @jsrjsr 4 months ago +2

      @@alakani he should do worse than that.

  • @mikicerise6250
    @mikicerise6250 4 months ago +583

    If you let the LLM author code without checking it, then inevitably you will just get broken code. If you don't use LLMs, you will take twice as long. If you use LLMs and review and verify what they say and propose, and use them, as Linus rightly suggests, as a code reviewer who will actually read your code and can guess at your intent, you get more reliable code much faster. At least that is the state of things as of today.

    • @keyser456
      @keyser456 4 months ago +33

      Perhaps anecdotal, but it (AI Assistant in my case; I'm using JB Rider, which I'm pretty sure is tied to ChatGPT) seems to get better with time. After finishing a method, I have another method already in mind. I move the cursor and put a blank line or two under the method I just created in prep for the new one. If I let it sit for a second or two before any keystrokes, often it will predict the method I'm about to create all on its own, without me even starting the signature. Yes, sometimes it gets it very wrong and I'll just hit escape to clear it, but sometimes it gets it right... and I mean really scary right. Like every line down to the keystroke, with naming that is spot on and consistent with the rest of the project. Yes, agreed, you still need to review the generated code, but I suspect that will only keep getting better with every iteration: rather than autocompleting methods, eventually entire files, then entire projects, then entire solutions. It's probably best for developers to learn to work with it in harmony as it evolves, or they will fall behind the peers who are embracing it. Scary and exciting times ahead.

    • @pvanukoff
      @pvanukoff 4 months ago +17

      @@keyser456 Same experience for me. It predicts what I was about to write next about 80% of the time, and when it gets it right, it's pretty much spot on. Insane progress just over the past year. Imagine where it will be in another year. Or five years. Coding is going to be a thing of the past, and it's going to happen very quickly.

    • @rayyanabdulwajid7681
      @rayyanabdulwajid7681 4 months ago +7

      If it is intelligent enough to write code, it will eventually become intelligent enough to debug complex code, as long as you tell it what issue arises

    • @CausallyExplained
      @CausallyExplained 4 months ago +11

      You are training the llm for the inevitable.

    • @derAtze
      @derAtze 4 months ago +2

      Oh man, now I really want to get into coding just to have that same transformative experience of a tool thinking ahead of you. I am a designer, and to be frank, the experience with AI in my field is much less exciting; it's just stock footage on steroids, and all the handiwork of editing and putting it together is sadly the same. But the models are evolving rapidly, and things like AI object selection and masking, vector generation in Adobe Illustrator, transformative AI (turning a summer valley into a snow valley, e.g.), and motion-graphics AI are on the horizon or already here. Indeed, what a time to be alive :D might get into coding soon tho

  • @PauloJorgeMonteiro
    @PauloJorgeMonteiro 4 months ago +375

    Linus..... My man!!!
    I would probably hate working with him, because I am not a very good software engineer and he would go nuts over my time-complexity solutions... but boy, has he inspired me.
    Thank you!

    • @MrFallout92
      @MrFallout92 4 months ago +25

      bro do you even O(n^2)?

    • @PauloJorgeMonteiro
      @PauloJorgeMonteiro 4 months ago +54

      @@MrFallout92 I wish!!!
      These days I have a deep love for factorials!

    • @TestTest12332
      @TestTest12332 4 months ago +40

      I don't think he would. His famous rants on LKML, before he changed his tone, were at people who SHOULD HAVE KNOWN BETTER. I don't remember him going nuts at newbies for being newbies. He did go nuts at experts who tried to submit sub-par/lazy/incomplete/etc. work and should have known it was sub-par and needed fixing but didn't bother. He was quite accurate and fair in that.

    • @Saitanen
      @Saitanen 4 months ago +4

      @@TestTest12332 Has this ever happened? Do you have any specific examples?

    • @uis246
      @uis246 4 months ago +8

      @@Saitanen That time an fd-based syscall returned a file-not-found error code. Linus went nuts.

  • @alcedob.5850
    @alcedob.5850 4 months ago +279

    Wow, finally someone who acknowledges the options LLMs give without overhyping them or calling them an existential threat

    • @darklittlepeople
      @darklittlepeople 4 months ago +5

      yes, i find him very refreshing indeed

    • @MikehMike01
      @MikehMike01 4 months ago

      LLMs are total crap, there’s no reason to be optimistic

    • @deeplife9654
      @deeplife9654 4 months ago +28

      Yes. Because he is not a marketing guy or the CEO of a company.

    • @genekisayan6564
      @genekisayan6564 4 months ago +1

      Man, they can't even count additions. Of course they are not a threat. At least not yet

    • @curious_banda
      @curious_banda 4 months ago

      ​@@genekisayan6564 never used gpt4 and other later models?

  • @Kaelygon
    @Kaelygon 4 months ago +697

    While AI lowers the bar to start programming, I'm afraid it also makes it easier to program bad code. But as with any other tool, more power brings more responsibility, and manual review should remain just as important.

    • @footballuniverse6522
      @footballuniverse6522 4 months ago +56

      As a cloud engineer I gotta say ChatGPT with GPT-4 really turbocharges me for most tasks; my productivity shot up 100-200% and I'm not kidding. You gotta know how to make it work for you, and it's amazing :)

    • @alexhguerra
      @alexhguerra 4 months ago +15

      There will be more than one AI, one for each task: to create code and to validate code. Make no mistake, AGI is the final target, but the intermediate ones are good enough to speed up the whole effort

    • @musiqtee
      @musiqtee 4 months ago +86

      Ok, speed, efficiency, productivity… All true, but to what effect? Isn’t it so that every time we’ve had a serious paradigm shift, we thought we could “save time”.
      Sadly, since corporations are not ‘human’, we’ve ended up working *more* not less, raising the almighty GDP - having less free time and not making significantly more money.
      Unless… you own shares, IP, patents and other *derivatives* of AI as capital.
      AI is a tool. A sharp knife is also one. This “debate” should ask “who is holding the tool, and for what purpose?”. That question reveals very different answers to a corporation, a government, a community or a single person.
      It’s not what AI is or can do. It’s more about what we are, and what we do with AI… 👍

    • @westongpt
      @westongpt 4 months ago +15

      Couldn't the same be said of Stack Overflow? I am not disagreeing with you, just adding an example to show it's not a new phenomenon.

    • @pledger6197
      @pledger6197 4 months ago +17

      It reminds me of a talk on some podcast, before LLMs, where the speaker said they had tried to use AI as an assistant for medical reports and faced the following problem:
      sometimes people see that the AI gets the right answers, and then, when they disagree with it, they still choose the AI's conclusion, because "the system can't be wrong".
      So to fight this, they programmed the system to sometimes give wrong results and ask the person to agree or disagree, to force people to choose the right answer and not just agree with whatever the system says.
      And this is what I believe is the weak point of LLMs.
      While they're helpful in some scenarios, in others they can give SO deceiving answers, which look exactly how they should, but are in fact about something that doesn't even exist.
      E.g. I asked about the best way to get an achievement in a game, and it came up with things that really exist in the game and sound like they should be related to the achievement, but in fact are not.
      Or my friend tried to google Windows error codes, and it came up with problems and their descriptions that don't really exist either.

  • @vlasquez53
    @vlasquez53 4 months ago +143

    Linus sounds so calm and relaxed until you see his comments on others' PRs

    • @thewhitefalcon8539
      @thewhitefalcon8539 3 months ago +17

      That was a terrible PR though

    • @Alguem387
      @Alguem387 2 months ago +2

      I think he does it for fun tbh

    • @gruberu
      @gruberu 2 months ago +10

      let whoever amongst us hasn't had a bad day because of a bad PR cast the first stone

    • @MechMK1
      @MechMK1 2 months ago +2

      You gotta let off steam somehow

    • @__Henry__
      @__Henry__ 1 month ago +1

      Yeah :/

  • @ZeroPlayerGame
    @ZeroPlayerGame 4 months ago +122

    Man, Linus looks a noticeably older, wiser man than in his older talks. More respect for the guy.

    • @RyanMartinRAM
      @RyanMartinRAM 4 months ago +10

      Great people often age like wine.

    • @ZeroPlayerGame
      @ZeroPlayerGame 4 months ago +17

      @@RyanMartinRAM I have another adage: with age comes wisdom, but sometimes age comes alone. Not this time, though!

    • @DielsonSales
      @DielsonSales 8 days ago

      I think age makes anyone more humble, but sometimes less open minded. It’s good to see Linus recognize that LLMs have their uses, while some projects like Gentoo have stood completely against LLMs. Nothing is black and white, and when the hype is over, I think LLMs will still be used as assistants to pay attention to small stuff we sometimes neglect.

  • @Pantong
    @Pantong 4 months ago +163

    It's another tool, like static and dynamic analysis. No programmer will follow these tools blindly, but they can use them to make suggestions or improve a feature. There have been times I've been stuck on picking a good data structure, and GPT has given more insightful ideas or edge cases I was not considering. That's its most useful role right now: a rubber duck.

    • @AM-yk5yd
      @AM-yk5yd 4 months ago

      >No programmer will follow these tools blindly
      My sweet summer child. The curl maintainers already have to deal with "security reports" because some [REDACTED]s used Bard to find "vulnerabilities" to get a bug bounty. Wait for the next jam in the style of "submit N PRs and you get our merch", and instead of PRs that fix a typo you'll get even worse: code that doesn't compile.

    • @conrad42
      @conrad42 4 months ago +16

      I agree that it can help in these scenarios. People should be made aware of this, as the current discussion is way over the top and scares people into fearing for their jobs (and therefore their mental health). Another thing: as sustainability was a topic, I'm not sure the energy consumed by this technology justifies these trivial tasks. Talking with a colleague seems more energy efficient.

    • @LordChen
      @LordChen 4 months ago

      aha. until it writes a Go GTK phone app (Linux phone) zero to hero with no code review and only UI design discussions.
      6 months ago. just GPT-4.
      programming is dying and you people are dreaming.
      in 2023 there were 30% fewer new hires across all programming languages.
      for 2024, out of 950 tech companies, over 40% plan layoffs due to AI.
      a bit tired to link the source

    • @larryjonn9451
      @larryjonn9451 4 months ago +23

      You underestimate the stupidity of people

    • @Gokuguy1243
      @Gokuguy1243 4 months ago +15

      Absolutely. I'm convinced the other commenters claiming LLMs will make programming obsolete in 3 years or whatever are either not programmers or bad programmers lol

  • @illyam689
    @illyam689 4 months ago +434

    I think that Linus, in 2024, should run his own podcast

    • @TalsBadKidney
      @TalsBadKidney 4 months ago +40

      and his first guest should be joe rogan

    • @SergioGomez-qe3kn
      @SergioGomez-qe3kn 4 months ago +70

      @@TalsBadKidney
      Linus: - "What language do you think should be taught first at elementary school, Joe?"
      Joe: - "Jujitsu"

    • @turolretar
      @turolretar 4 months ago +4

      @@TalsBadKidney this is such a great idea

    • @ton4eg1
      @ton4eg1 4 months ago +1

      And do stand-up.

    • @madisonhanberry6019
      @madisonhanberry6019 4 months ago +4

      He's such a great speaker, but I doubt he would have much time between managing Linux, family life, and whatever else

  • @duffy666
    @duffy666 4 months ago +139

    "we are all autocorrects on steroids to some degree" - agree 100%

    • @alang.2054
      @alang.2054 4 months ago +8

      Could you elaborate on why you agree? Your comment adds no value right now

    • @RFC3514
      @RFC3514 4 months ago +12

      I think he really meant to say "autocomplete", because it basically takes your prompt and looks for the answer most likely to follow it, based on material it has read.
      Which _is_ indeed kind of how humans work... if you remove creativity and the ability to _interact_ with the world, and only allow them to read books and answer written questions.
      And by "creativity" I'm including the ability to spot gaps in our own knowledge and do experiments to acquire _new_ information that wasn't part of our training.

    • @sbqp3
      @sbqp3 4 months ago +19

      The thing people with the interviewer's mindset miss is what it takes to predict correctly. The language model has to have an implicit understanding of the data in order to predict. ChatGPT uses a large language model to produce text, but you could just as well use one to produce something else, like actions in a robot. Which is kind of what humans do: they see and hear things, and act accordingly. People who dismiss the brilliance of large language models on the basis that they're "just predicting text" are really missing the point.

    • @RFC3514
      @RFC3514 4 months ago +1

      @@sbqp3 - No, you couldn't really use it to "produce actions in a robot", because what makes ChatGPT (and LLMs in general) reasonably competent is the huge amount of material it was trained on, and there isn't anywhere near the same amount of material (certainly not in a standardised, easily digestible form) of robot control files and outcomes.
      The recent "leap" in generative AI came from the volume of training data (and ability to process it), not from any revolutionary new algorithms. Just more memory + more CPU power + easy access to documents on the internet = more connections & better weigh(t)ing = better output.
      And in any application where you just don't have that volume of easily accessible, easily processable data, LLMs are going to give you poor results.
      We're still waiting for remotely competent self-driving vehicles, and there are billions of hours of dashcam footage and hundreds of companies investing millions in it. Now imagine trying to use a similar machine-learning model to train a mobile industrial robot, that has to deal with things like "finger" pressure, spatial clearance, humans moving around it, etc.. Explicitly coded logic (possibly aided by some generic AI for object recognition, etc. - which is already used) is still going to be the norm for the foreseeable future.

    • @duffy666
      @duffy666 4 months ago +4

      @@alang.2054 I like his comment because most of the thinking humans do is in fact System 1 thinking, which is reflex-like and on a similar level to what LLMs do.

  • @elliott8596
    @elliott8596 4 months ago +205

    Linus has really mellowed out as he has gotten older.

    • @duffy666
      @duffy666 4 months ago +41

      In a good way.

    • @Munchkin303
      @Munchkin303 4 months ago +58

      He became hopeful and humble

    • @mikicerise6250
      @mikicerise6250 4 months ago +27

      The therapy worked. 😉

    • @Rajmanov
      @Rajmanov 4 months ago

      no therapy at all, just wisdom @@mikicerise6250

    • @darxoonwasser
      @darxoonwasser 4 months ago +22

      @@Munchkin303 Linus "Hopeful and Humble" Torvalds

  • @heshercharacter5555
    @heshercharacter5555 3 months ago +4

    I find LLMs extremely useful for generating small code snippets very quickly, for example advanced regular expressions. They've saved me tons of hours.
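
    For instance, the sort of pattern an LLM is handy for (the pattern and sample string are illustrative):

        import re

        # Parse key=value pairs, allowing quoted values with spaces.
        PAIR = re.compile(r'(\w+)="([^"]*)"|(\w+)=(\S+)')

        for m in PAIR.finditer('name="Linus Torvalds" lang=C'):
            key = m.group(1) or m.group(3)
            value = m.group(2) if m.group(1) else m.group(4)
            print(key, value)  # name Linus Torvalds / lang C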

  • @vaibhawc
    @vaibhawc 4 months ago +33

    Always love to hear Sir Linus Hopeful Humble Torvalds

    • @latt.qcd9221
      @latt.qcd9221 4 months ago +1

      Sir Linus Hopeful *_And_* Humble Torvalds

  • @ginebro1930
    @ginebro1930 4 months ago +55

    Smart answer from Linus.

  • @ChrisM541
    @ChrisM541 4 months ago +69

    For experienced programmers, most of the mistakes they make can be categorised as 'stupid', i.e. a simple oversight, where the fix is equally stupidly trivial. It's exactly the same with building a PC: you might have done it 'millions' of times, but forgetting something stupid in the build is always stupidly easy to do, and though you might not do it often, you will inevitably still do it at some point. Unfortunately, the fixes seem to take forever to find.

    • @Jonas-Seiler
      @Jonas-Seiler 4 months ago +15

      That's the only good take on AI in the video, and maybe the only truly helpful thing AI might ever be used for: finding the obvious mistakes humans make because they're thinking about more important shit.

    • @autohmae
      @autohmae 4 months ago +5

      That's the problem with computers: you need to do it all 100% correctly or it won't work.

    • @hallrules
      @hallrules 4 months ago +5

      @@autohmae That also doubles as the good thing about computers, because it will never do something that you didn't tell it to do

    • @chunkyMunky329
      @chunkyMunky329 4 months ago +4

      I disagree with this. Simple bugs are easier to find, so we find more of them. The other bugs are more complex, which makes them harder to find, so we find fewer of them. For example, not realising that the HTTP protocol has certain ramifications that become a serious problem when you structure your web app a certain way.

    • @ChrisM541
      @ChrisM541 4 months ago

      @@chunkyMunky329 It's definitely true that there are always exceptions, though I'd politely suggest "not realising" is primarily a result of inexperience.
      A badly written and/or badly translated URS can lead to significant issues when the inevitable subsequent change requests flood in, especially if there's poor documentation in the code.
      Any organisation is only as good as its QA. We see this more and more in the games industry, where we increasingly, and deliberately, offload that testing onto the end consumer.
      Simple bugs should be easy to find, you'd think, but they're also very, very easy to hide, unfortunately.

  • @vishusingh008
    @vishusingh008 3 months ago +1

    In such a short video, one can easily witness the brilliance of the man!!!

  • @DAG_42
    @DAG_42 3 months ago +20

    I'm glad he corrected the host. We are indeed all basically autocorrect to the extent LLMs are. LLMs are also creative and clever, at times. I get the feeling the host hasn't used them much, or perhaps at all

    • @kralg
      @kralg 3 months ago +12

      It _seems_ to be creative and it _seems_ to be clever especially to those who are not. The host was fully correct stating that it has nothing to do with "intelligence", it only _seems_ to be intelligent.

    • @doomsdayrule
      @doomsdayrule 3 months ago +3

      @@kralg If we made a future LLM that is indistinguishable from a human being, that answers questions correctly, that can solve novel problems, that "seems" creative... what is it that distinguishes our intelligence than the model's?
      It's just picking one token before the next, but isn't that what I'm also doing while writing this comment? In my view, there can certainly be intelligence involved in those simple choices.

    • @kralg
      @kralg 3 months ago +1

      @@doomsdayrule Intelligence is much more than just writing text. Our decisions are based not only on lexical facts, but on our personal experiences, personal interests, emotions, etc. I cannot and am not going to go much deeper into that, but it must be way more complex than a simple algorithm based on a bunch of data.
      I am saying nothing less than that you will never ever be able to make a future LLM that is indistinguishable from a human being. Of course, when you are presented with just a text written by "somebody" you may not be able to figure it out, but if you start living with a person controlled by an LLM, you will notice much sooner than later. It is all because the bunch of data these LLMs are using is missing one important thing: personality. And that word is highly related to intelligence.

    • @KoflerDavid
      @KoflerDavid 3 months ago +4

      @@doomsdayrule As I am writing this comment, I'm not starting with a random word like "As" and then try to figure out what to write next. (Actually, the first draft started with "When")
      I have a thought in mind, and then somehow pick a sentence pattern suitable for expressing it. Then I read over (usually while still typing) and revise. At some point, my desire to fiddle with the comment is defeated by the need to do something else with my day, and I submit the reply. And then I notice obvious possibilities for improvements and edit what I just submitted.

    • @kralg
      @kralg 2 months ago

      @@MarcusHilarius One aspect of this is that we are living in an overhyped world. Just in recent years we have heard so many promises like the ones you made; just think about the promises made by Elon Musk and other questionable people. The marketing around these technologies is way "in front" of reality. If there is just a theoretical possibility of something, the marketing jumps on it; they create thousands of believers with the obvious aim of gathering support for further development. I think it is just smart to be cautious.
      The other aspect is that many believers do not know the real details of the technologies they believe in. The examples you mentioned are not in the future; to some extent they are available now. We call it automation, and it does not require AI at all. Instead it relies on sensor technology and simple logic. Put an AI sticker on it and sell a lot more.
      Sure, machine learning will be a great tool in the future, but not much more. We are in the phase of admiration now, but soon we will face its challenges and disadvantages, and we will just live with them as we did with many other technologies from the past.

  • @joemiller8409
    @joemiller8409 3 months ago +53

    the deafening silence when that phone alarm dared to go off mid-Torvalds-dialogue 😆

  • @nathanmccarthy6209
    @nathanmccarthy6209 4 months ago +8

    There is absolutely no doubt in my mind that things like co-pilot are already part of pull requests that have been merged into the Linux kernel.

  • @datboi449
    @datboi449 3 months ago

    I have used LLMs to help me learn React when I was only familiar with Angular at the time. I knew Angular jargon and could prompt for a React version of my Angular thought process, then take the response and pinpoint the features to research further.

  • @shroomer3867
    @shroomer3867 4 months ago +127

    At 1:10 you can see Linus locating the Apple user and considering killing him on the spot, but he decides against it and continues his thought

  • @bergonius
    @bergonius 4 months ago +10

    "You have to kinda be a bit too optimistic at times to make a difference" -This is profound

  • @nox5282
    @nox5282 4 months ago +6

    I use AI as a learning tool: if I get stuck, I bounce ideas off it as I would with a person, then use that as a basis to keep going. I discover things I didn't consider and continue reading other sources. Right now AI is not good at teaching you, but it's great for getting directions to explore, or names of things and concepts to look up.
    That being said, the next generation will be unable to form thoughts without AI. How many people still know how to do long division by hand?

  • @themartdog
    @themartdog 3 months ago +2

    OMG, Linus is so smart. I love that he called out the (common) misconceptions of the interviewer: if LLMs are "autocomplete on steroids", then what does that make humans? If LLMs hallucinate bugs, then what do humans hallucinate? A very realistic and non-prideful way of thinking. Many people seem to forget how fallible humans are when discussing AI; Linus does not!

  • @draoi99
    @draoi99 3 months ago +2

    Linus is always chill about new things.

  • @mdimransarkar1103
    @mdimransarkar1103 4 months ago +3

    could be a great tool for static analysis.

    • @chunkyMunky329
      @chunkyMunky329 4 months ago +3

      If it was great at static analysis then people would probably already be using it for static analysis

  • @aniellodimeglio8369
    @aniellodimeglio8369 4 months ago +4

    LLMs are certainly useful and can very much assist in many areas. The future really is open-source models which are explainable and share their training data.

  • @TjPhysicist
    @TjPhysicist 2 months ago +1

    I love this little short. I think what both of them said is true. An LLM is definitely "autocorrect on steroids", as it were. But honestly, a lot of programming, and really a lot of jobs in general, don't require a higher level of intelligence; as Linus said, we are all autocorrect on steroids to some degree, because for most of what we do, that's all you need. The problem is knowing the limitations of such a tool and not attempting to subvert human creativity with it.

  • @LiebeGruesse
    @LiebeGruesse 4 months ago +1

    3:02 So true. And so rarely heard. 🙏

  • @br3nto
    @br3nto 4 months ago +15

    LLMs are interesting. They can be super helpful for writing out a ton of code from a short description, allowing you to formulate an idea really quickly, but often the finer details are wrong. That is, using an LLM to write unique code is problematic. You may want the basic structure of idiomatic code, but then introduce subtle differences. When doing this, the LLM seems to struggle, often suggesting methods that don't exist, or used to exist, or mixing methodologies from multiple versions of the library in use. E.g. trying to use WebApplicationFactory in C#, but introducing some new reusable interfaces to configure the services and WebApplication that can be overridden in tests: it couldn't find or suggest a solution. It's a reminder that it can only write code it's seen before. It can't write something new. At least not yet.

    • @elle305
      @elle305 4 months ago +9

      you'll spend more time making sure it didn't add confident errors than it would take to write the code in the first place. a complete gimmick, only attractive to weak programmers

    • @br3nto
      @br3nto 4 months ago +3

      @@elle305 I don't think that's accurate. Sure, you need the expertise to spot errors. Sure, you need the expertise to know what to ask for. But I don't agree with the idea that you'll take more time with LLMs than without. It's boosted my productivity significantly. It's boosted my ability to try new ideas quickly and iterate quickly. It's boosted my ability to debug problems in existing code. It's been incredibly useful. It's a sounding board. It's like doing pair programming, but you get instant code. I want more of it, not less.

    • @elle305
      @elle305 4 months ago +3

      @@br3nto I have no way to validate your personal experience because I have no idea of your background. But I'm a full-time developer and have been for decades, and I'm telling you that reviewing LLM output is harder and more error-prone than programming. There are no shortcuts in this discipline, and people who look for them tend to fail.

    • @Jonas-Seiler
      @Jonas-Seiler 4 months ago

      @@elle305 it's no different for any other discipline. but sometimes doing it the hard way (fucking around trying to make the AI output work somehow) is more efficient than doing it the right way, especially for one-off things, like trying to cobble together an assignment. and unfortunately, more often than not, weak programmers (writers, artists, ...) are perfectly sufficient for the purposes of most companies.

    • @elle305
      @elle305 4 months ago

      @@Jonas-Seiler i disagree

  • @sidharthv
    @sidharthv 4 months ago +19

    I learned Python on my own from YouTube and online tutorials. Recently I started learning Go the same way, but this time also with the help of Bard. The learning experience has been nothing short of incredible.

    • @Spacemonkeymojo
      @Spacemonkeymojo 4 months ago +3

      You should pat yourself on the back for not asking ChatGPT to write code for you.

    • @incremental_failure
      @incremental_failure 4 months ago

      @@Spacemonkeymojo Only my YouTube comments are written by ChatGPT, not my code.

    • @etziowingeler3173
      @etziowingeler3173 4 months ago

      Bard and code, only for simple stuff

  • @srinivaschillara4023
    @srinivaschillara4023 22 days ago

    so nice, and also the quality of comments on this video... there is hope for humanity.

  • @pullingweeds
    @pullingweeds 3 months ago

    Great to hear Linus comment on the opening statement made by the interviewer. I think he may have expected Linus to agree with him.

  • @AlbertCloete
    @AlbertCloete 4 months ago +93

    Those subtle bugs are what LLMs produce copious amounts of, and they take very long to debug. To the degree where you probably would have been better off just writing the code by hand yourself.
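
    A constructed example of the flavour of bug meant: code that looks idiomatic and passes a quick glance, but is wrong at the boundary.

        def chunk(xs, size):
            # Correct: the range step keeps the final, partial chunk.
            return [xs[i:i + size] for i in range(0, len(xs), size)]

        def chunk_buggy(xs, size):
            # Plausible-looking slip: len(xs) // size silently drops the
            # final chunk whenever len(xs) is not a multiple of size.
            return [xs[i * size:(i + 1) * size] for i in range(len(xs) // size)]

        assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
        assert chunk_buggy([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4]]  # 5 lost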

    • @xSyn08
      @xSyn08 4 months ago +17

      @@user-qd4xs8zb8s What, like a "Prompt Engineer"? It's ridiculous that this became a thing given how LLMs work.
      It's all about intuition that most people can figure out if they spend a day messing around with it.

    • @joshmogil8562
      @joshmogil8562 4 months ago +4

      Honestly this has not been my experience using GPT4

    • @tbunreall
      @tbunreall 4 months ago +2

      Disagree. Humans constantly create bugs when coding, even subtle ones, even the best of the best. LLMs are amazing. I realized my Python code needed to be multithreaded; I fed it my code, and it multithreaded everything. They are incredible, and this is just the beginning. Five years will blow people's minds completely. People who don't see how amazing LLMs are just aren't that bright, in my opinion.
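
      The kind of rewrite described, as a sketch for I/O-bound work (the URLs are placeholders):

          from concurrent.futures import ThreadPoolExecutor
          import urllib.request

          URLS = ["https://example.com/a", "https://example.com/b"]

          def fetch(url):
              with urllib.request.urlopen(url) as resp:
                  return resp.read()

          # Serial version: results = [fetch(u) for u in URLS]
          # Threaded version, as an LLM might propose it:
          with ThreadPoolExecutor(max_workers=8) as pool:
              results = list(pool.map(fetch, URLS))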

    • @asterinycht5438
      @asterinycht5438 4 months ago +2

      that's why you must give the LLM pseudocode as input, to control the output and be more precise about what you want.

    • @gabrielkdc17
      @gabrielkdc17 4 months ago +7

      It's amusing how we, as programmers, often tell users that if they input poor quality data into the system, they should expect poor quality results. In this case, the fault lies with the user, not the system. However, now we find ourselves complaining about a system when we input low-quality data and receive unsatisfactory results. This time, though, we blame the system instead of ourselves

  • @lmamakos
    @lmamakos 3 months ago +14

    Is cut-and-paste from StackOverflow that far from asking the LLM for the answer?

    • @derekhettinger451
      @derekhettinger451 3 months ago +15

      I've never been insulted by GPT

    • @David-gu8hv
      @David-gu8hv 3 months ago +1

      @@derekhettinger451 Ha Ha!!!!!

    • @VoyivodaFTW1
      @VoyivodaFTW1 2 months ago

      Lmao. Well, a senior dev is likely on the other end of a stack overflow answer, so basically yea

    • @pauldraper1736
      @pauldraper1736 1 month ago

      @@VoyivodaFTW1 optimistic I see

  • @mingzhu8093
    @mingzhu8093 4 months ago +2

    Program-generated code goes back decades: if you ever use any ORM, almost all of them generate tables and SQL from classes, and vice versa. But I don't think anybody just takes it as-is without reviewing.
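
    A sketch of that long-standing pattern, using SQLAlchemy as one example (the model is invented):

        from sqlalchemy import Column, Integer, String, create_engine
        from sqlalchemy.orm import declarative_base

        Base = declarative_base()

        class User(Base):
            __tablename__ = "users"
            id = Column(Integer, primary_key=True)
            name = Column(String(80))

        # The ORM generates the DDL; nobody writes this SQL by hand.
        engine = create_engine("sqlite://")
        Base.metadata.create_all(engine)  # emits CREATE TABLE users (...)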

    • @caLLLendar
      @caLLLendar 3 months ago

      Reviewing can be automated.

  • @pablorodriguez6318
    @pablorodriguez6318 2 months ago

    LLMs are not simply predicting the next word; that is really an understatement. I could see this for RNNs, but not in the world of transformers with attention mechanisms.

  • @alextrebek5237
    @alextrebek5237 4 months ago +61

    (Average typing speed × number of working days a year) / 6 words per line of code ≈ 1M LOC/year. But we don't write that much. Why? Most coding is just sitting and thinking, then writing little.
    LLMs are great for getting started with a new language or library, or for writing repetitive data structures or algorithms, but bad for production or logic (design patterns such as the Strategy pattern), because they don't logically understand the problem domain, which, as the napkin math just showed, is the largest part of coding and the part assistants aren't improving.
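
    Spelling out that napkin math (every input is an assumption):

        wpm = 40               # average typing speed, words per minute
        hours_per_day = 8
        work_days = 250        # working days per year
        words_per_loc = 6      # the figure used above

        words_per_year = wpm * 60 * hours_per_day * work_days
        print(words_per_year / words_per_loc)  # 800000.0, i.e. order ~1M LOC/year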

    • @antman7673
      @antman7673 4 months ago +2

      I wouldn't even agree.
      Imagine yourself just getting the job to code project X.
      In that case, you can rely on a very limited amount of information.
      Within the right scope, there are very few ways in which LLMs fail.

    • @coryc9040
      @coryc9040 4 months ago +1

      Maybe if many programmers sit down and explain their thought process on multiple different problems, it can learn to abstract the problem-solving method programmers use. While "autocorrect on steroids" might be technically accurate for what it's doing, the models it builds to predict the next token are extremely sophisticated and, for all we know, may have some similarity to our logical understanding of problem domains. Also, LLMs are still in their infancy; there are probably controls or additional complexity that could be added to address current shortcomings. I'm skeptical of some of the AI hype, and I'm equally skeptical of the naysayers. I tend to think the naysayers are wrong based on what LLMs have already accomplished. Plenty of people just 2-3 years ago would've said some of the things they are doing now are impossible.

    • @SimGunther
      @SimGunther 4 months ago +5

      Read the original documentation and if there's something you don't understand, Google it and be social. Only let the LLM regurgitate that part of the docs in terms you understand as a last resort.
      I'm surprised at the creativity LLMs have in their own context, but don't replace reading the docs and writing code with LLMs. You must understand why the algo/struct is important and what problems each algorithm solves.
      If you think LLMs replace experience, you're surely mistaken and you'll be trapped in learned helplessness for eternity.

    • @mobbs8229
      @mobbs8229 4 months ago +4

      I literally asked ChatGPT today to explain the MVCC pattern (which I could've sworn is called the MVVC pattern, but it corrected me), and its explanation got worse with every attempt after I told it it was not doing a good job.

    • @RobFisherUK
      @RobFisherUK 4 months ago +3

      @@SimGunther reading the docs only works if you know what you're looking for. LLMs are great at understanding your badly written question.
      I once proposed a solution to a problem I had to ChatGPT and it said: that sounds similar to the technique in statistics called bootstrapping. It opened up a whole new box of tricks previously unknown to me.
      I could have spent months cultivating social relationships with statisticians, but it would have been a lot more work and I'm not sure they'd have had the patience.

  • @7rich79
    @7rich79 4 months ago +20

    Personally I think that while it will be extremely useful, there will also be this belief over time that the "computer is always right". In this sense we will surely end up with a scandal like Horizon in the future, but this time it will be much harder to prove that there was a fault in the system.

    • @arentyr
      @arentyr 4 months ago

      Precisely this. With Horizon it took years of them being incredulous that there were any bugs at all, that it must be perfect and that instead thousands of postmasters were simply thieves. Eventually the bugs/errors became so glaring (and finally maybe someone competent actually looked at the code) that it was then known that the software was in fact broken. What then followed were many many more years of cover ups and lies, with people mainly concerned with protecting their own status/reputation/business revenue rather than do what was right and just.
      Given all this, the AI scenario is going to be far worse: the AI system that “hallucinates” faulty code will also “hallucinate” spurious but very plausible explanations.
      99.99% won’t have the requisite technical knowledge to determine that it is in fact wrong. The 0.01% won’t be believed or listened to.
      The terrifying prospect of AI is in fact very mundane (not Terminator nonsense): its ability to be completely wrong or fabricate entirely incorrect information, and then proceed to explain/defend it with seemingly absolute authority and clarity.
      It is only a matter of time before people naturally entrust them far too much, under the illusion that they are never incorrect, in the same way that one assumes something must be correct if 99/100 people believe it to be so. Probability/mathematics is a good example of where 99/100 might think something is correct, but in fact they’re all wrong - sometimes facts can be deeply counterintuitive, and go against our natural intelligence heuristics.

    • @mattmaas5790
      @mattmaas5790 2 months ago

      Maybe. But it depends on what we allow AI to be in charge of. Remember, if we vote out the GOP we can pass laws again to do things for the benefit of the people, including AI regulations if needed.

  • @fafutuka
    @fafutuka 4 months ago +1

    The fact that you can talk to him about code reviews is just humbling; the man hasn't changed at all

  • @calmhorizons
    @calmhorizons 4 months ago +3

    There is a fundamental philosophical difference between the type of wrong humans do and the type AI does (in its present form). I think programmers are in danger of seriously devaluing the relative difference between incidental errors and constitutive errors; that is, humans are wrong accidentally, while LLMs are wrong by design. And while we know we can train people to reduce the former, it remains to be seen whether the latter is inherent in the implementation realities of LLMs, i.e. relying on statistical inference as a substitute for reason.

    • @caLLLendar
      @caLLLendar 3 months ago

      You got stuck in your own word salad. Start over; Think like a programmer. Break the problem down. How would you go about proving the LLM's code is correct using today's technology?

    • @calmhorizons
      @calmhorizons 3 months ago +1

      ​@@caLLLendar
      First, I don't appreciate your tone. I know this is YouTube and standards of discourse here are notoriously low, but there is no need to be rude.
      I wasn't making a point about engineering.
      The issue is not the code; code can of course be unit tested etc. for validity.
      The issue is that the method of producing the code is fundamentally statistical, and not arrived at through any form of reason. This means there is a ceiling of trust that we must impose if we are to avoid the obvious pitfalls of such an approach.
      As a result of the inherent nature of ML, it will inevitably perpetuate coding flaws/issues in the training data - and you, as the developer, if you do not privilege your own problem-solving skills, are increasingly relegated to the role of code babysitter. This is not something to be treated casually.
      Early research is now starting to validate this concern: visualstudiomagazine.com/Articles/2024/01/25/copilot-research.aspx
      These models have their undeniable uses, but I find it depressing how many developers are rushing to proclaim their own obsolescence in the face of a provably flawed (though powerful) tool.

    • @caLLLendar
      @caLLLendar 3 months ago

      @@calmhorizons Have one developer draft pseudocode that is transformed into whatever scripting language is preferred, and then use a boatload of QA tools. The output from the QA tools prompts the LLM. Look at Python Wolverine to see automated debugging. Google the long list of free open-source QA tools that can be wrapped around the LLMs. The LLMs can take care of most of the code (like writing unit tests, type hinting, documentation, etc.).
      The first thing you'd have to do is get some hands-on experience writing pseudocode in a style that LLMs and non-programmers can understand.
      From there, you will get better at it and ultimately SEE it with your own eyes. I admit that there are times I have to delete a conversation (because the LLM seems to become stubborn). However, that too can be automated.
      The result?
      19 out of 20 developers fired. LOL I definitely wouldn't hire a developer who couldn't come up with a solution for the problems you posed (even if the LLM and tools are doing most of the work).
      Some devs pose the problem and cannot solve it. Other devs think that the LLM should be able to do everything (i.e. "Write me a software program that will make me a million dollars next week").
      Both perceptions are provably wrong. As programmers it is our job to break the problem down and solve it.
      Finally, there are ALREADY companies doing this work (and they are very easy to find).

    • @vibovitold
      @vibovitold 7 days ago

      @@calmhorizons exactly. Agreed, and very well put. Respect for taking time to reply to a rather shallow and asinine comment.
      "As a result of the inherent nature of ML, it will inevitably perpetuate coding flaws/issues in the training data "
      I would add that this will likely be exacerbated once more and more AI-generated code makes its way into the training datasets (and good luck filtering it out).
      We already know that it has a very deteriorating effect on the quality (already proven for the case of image generation), because all flaws inherent to the method get amplified as a result.

  • @WokerThanThou
    @WokerThanThou 4 months ago +4

    Man .... I really wanted to see what would happen if that phone rang again.

  • @LuicMarin
    @LuicMarin 2 months ago

    It is already helping review code; just look at Million Lint. It's not all AI, but it has aspects where it uses LLMs to help you find performance issues in React code. A similar thing could be applied to code reviews in general.

  • @Willow1w
    @Willow1w 4 months ago +137

    AI is helpful with beginner programming tasks. It's fantastic for converting textual data between formats. But as soon as you ask for help with more advanced subjects, for example writing a KMDF driver or a bottom-up parser, it will spit out complete garbage. Training the model on text scraped from the internet will only take you so far.

    • @jumpstar9000
      @jumpstar9000 4 months ago +13

      I'm pretty sure it can sketch out both, and then you can use the model to drill down and fill in the pieces. At least, that is how I use it. It does pretty well. I currently have to keep an eye on it, but it isn't stupid and is quite capable of writing novel code (with some prompting), or converting algorithms to AVX2 or writing CUDA or...
      The value seems to be in the eye of the beholder. If you approach it with skepticism and cynicism and refuse to put some effort in, well, you get what you deserve, imho.

    • @thegoldenatlas753
      @thegoldenatlas753 4 months ago +5

      Part of the issue is quantity. There are far fewer resources on the lower-level concepts, and that lack of resources hampers any chance of improving quality.
      AI in programming is essentially a programmer that's only ever done tutorials, and you'll be hard-pressed to find enough tutorials for something low-level like a driver compared to something like a website. So of course an AI will spit gibberish for a driver.
      Personally I've used AI mostly for quickly finding out whether a thing already exists for what I'm doing; e.g. if you didn't know the map function existed, asking the AI how you could combine two sets of values makes it tell you about map.

    • @Jcossette1
      @Jcossette1 4 months ago +19

      You have to cut the tasks into smaller individual prompts. You can't ask it to code an OS in one prompt.

    • @ukpropertycommunity
      @ukpropertycommunity 4 months ago +3

      @@Jcossette1 It's a form of supervised learning, so you need enough knowledge to specify the expected behaviour, i.e. you could already write it yourself, such that it can just do autocorrect on steroids. As for long stretches of code, it might not hit the 128K context-window limit directly, but it will hit sparse self-attention issues that drop random lines of code well before that!

    • @IrregularPineapples
      @IrregularPineapples 4 months ago +5

      you say that like you're an expert -- AI LLMs like ChatGPT have only been around for like 6-12 months

  • @wabbajocky8235
    @wabbajocky8235 4 months ago +5

    Linus with the hot takes. Love to see it.

  • @denisblack9897
    @denisblack9897 4 months ago

    This made my day, thanks!

  • @johncompassion9054
    @johncompassion9054 12 days ago

    This is why Linus is Linus. Just look at his intelligence, attitude to life and optimism. No negativity, rivalry or hate. My respect.

  • @user-rh2xc4eq7d
    @user-rh2xc4eq7d 4 months ago +27

    A responsible programmer might use AI to generate code, but they would never submit it without understanding it and testing it first.

    • @traveller23e
      @traveller23e 4 months ago +13

      Although by the time you read and fully understand the code, you may as well have written it.

    • @user-rh2xc4eq7d
      @user-rh2xc4eq7d 4 months ago +2

      @@traveller23e if the code fails for some reason, I'll be glad I took the time to understand it.

    • @knufyeinundzwanzig2004
      @knufyeinundzwanzig2004 4 months ago +4

      @@traveller23e actually true. if you understand every aspect of the code, why wouldn't you just have written it yourself? at some point, when using llms, these people will get used to the answers being mostly correct, so they'll stop checking. productivity 200% blah blah, yeah sure dude. man, llms will ruin modern software even more; today's releases are already full of bugs

    • @MrHaggyy
      @MrHaggyy 4 months ago

      @@traveller23e Well, the same goes for the compiler: if you "fully understood" the code, there would never be a warning or error. Most tools like GitHub Copilot require you to write anyway, but they give you the option of writing a few dozen characters with a single keystroke. That is pretty nice if most of your work is assembling different algorithms or data structures, not creating new ones.

    • @Mpanagiotopoulos
      @Mpanagiotopoulos 4 months ago

      I submit code I don't understand all the time; I simply ask the LLM in English to explain it to me. I have written a whole app in JavaScript without having learned JS in my entire life.

  • @Standbackforscience
    @Standbackforscience 4 months ago +9

    There's a world of difference between using AI to find bugs in your code and using AI to generate novel code from a prompt. Linus is talking about the former; AI bros mean the latter.

  • @timothybruce9366
    @timothybruce9366 2 months ago

    My last company started using AI over a year ago. We write the docblock and the AI writes the function, and it's largely correct. This is production code in smartphones and home appliances worldwide.

  • @user-qz6em2ss4n
    @user-qz6em2ss4n 3 months ago

    You're right. But we're also hearing some negative stories in terms of teamwork. For example, there are situations where a junior developer sits and waits on AI code that keeps giving different answers instead of writing the code themselves, or where it takes more time to analyze why the code was written the way it was than writing it would have. But it still helps for gaining insight or a new approach, even if it arrives at a completely different answer.

    • @tapetwo7115
      @tapetwo7115 2 months ago

      That junior coder needs more GitHubs so we can bring them on as a lead dev to work with AI. The middle management and entry level is over in the future.

  • @samson_77
    @samson_77 4 months ago +44

    Good interview, but I disagree with the introduction, where it is said that LLMs are "auto-correction on steroids". Yes, LLMs do next-token prediction, but that's just one part. The engine of an LLM is a giant neural network that has learned a (more or less sophisticated) model of the world. During inference, input information is matched against that learned world model, and based on those correlations new output information is created, which leads, in an iterative process, to a series of next tokens. The magic happens when the input is matched against the learned world model and yields new output information.
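
    Schematically, the iteration being described, with greedy decoding ("model" is a stand-in for the network, returning one score per vocabulary entry):

        def generate(model, tokens, max_new=32):
            for _ in range(max_new):
                logits = model(tokens)  # one forward pass over the context
                # Pick the highest-scoring token and append it; this loop
                # is the "iterative process" that yields the next tokens.
                best = max(range(len(logits)), key=logits.__getitem__)
                tokens.append(best)
            return tokens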

    • @thedave0004
      @thedave0004 Před 4 měsíci +19

      Agreed! This is the type of thing people say somewhat arrogantly when they've only had a limited play with modern LLMs. My mind was blown when I wrote a parser of what I would call medium complexity in Python for a particular proprietary protocol. It worked great, but it was taking 45 minutes to process a day's worth of data, and I was using it every day to hunt down a weird edge case that only happened every few days. So out of interest I copied and pasted the entire thing into GPT-4 and said "This is too slow, please re-write it in C and make it faster", and it did. Multiple files, including headers, all perfect. It compiled the first time, and did in about 30 seconds (I forget how long exactly, but that ballpark) what my hand-written Python program was doing in 45 minutes. I don't think I've EVER written even a simple program that compiled the first time, let alone something medium-complicated.
      To call this autocomplete doesn't give it the respect it deserves. GPT-4 did in a few seconds what would have taken me a couple of days (if I even managed it at all; I'm not an expert in C by a long stretch).

    • @davidparker5530
      @davidparker5530 Před 4 měsíci +9

      I agree, the reductionist argument trivializes the power of LLMs. We could say the same thing about humans: we "just predict the next word in a series of sentences". That doesn't capture the power and magic of human ingenuity.

    • @thegoncaloalves
      @thegoncaloalves Před 4 měsíci +4

      Even Linus says that. Some of the things that LLMs produce are almost black magic.

    • @mitchhudson3972
      @mitchhudson3972 Před 4 měsíci +8

      So... Autocorrect

    • @mitchhudson3972
      @mitchhudson3972 Před 4 měsíci +5

      @@davidparker5530 Humans don't just predict the next word, though; LLMs do. Neural networks don't think; all they do is guess based on some inputs. Humans think about problems and work through them. LLMs, by nature, don't think about anything beyond what they've seen before.

  • @memaimu
    @memaimu Před 4 měsíci +5

    "Linus Benedict Torvalds is a Finnish-American software engineer who is the creator and lead developer of the Linux kernel, used by Operating Systems such as Chrome OS, Android, and GNU/Linux distributions such as Debian and Arch. He also created the distributed version control system Git."

  • @laughingvampire7555
    @laughingvampire7555 Před 3 měsíci

    I'm more interested in code synthesizers, which is something the PLT folks are working on: using a sophisticated type system and theorem prover to generate code that fits the given criteria.
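
    In a similar spirit, here is a toy sketch of search-based synthesis. This is not the PLT tooling referred to above, which works from types and proofs; this brute-forces compositions of primitives until one fits the given input/output examples.

    ```python
    # Toy example-driven synthesizer: search compositions of primitives.
    from itertools import product

    primitives = {
        "inc": lambda x: x + 1,
        "double": lambda x: x * 2,
        "neg": lambda x: -x,
    }

    def synthesize(examples, max_depth=3):
        """Return primitive names whose composition fits all examples, or None."""
        for depth in range(1, max_depth + 1):
            for combo in product(primitives, repeat=depth):
                def run(x, combo=combo):
                    for name in combo:      # apply primitives left to right
                        x = primitives[name](x)
                    return x
                if all(run(i) == o for i, o in examples):
                    return list(combo)
        return None

    print(synthesize([(1, 4), (3, 8)]))  # ['inc', 'double']
    ```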

  • @caesare1968
    @caesare1968 Před 3 měsíci

    How nice, leaving the advertisement until after the program. Applause.

  • @kibiz0r
    @kibiz0r Před 4 měsíci +24

    As a central figure in the FOSS movement, I'm surprised he doesn't have any scathing remarks about OpenAI and Microsoft hijacking the entire body of open source work to wrap it in an opaque for-profit subscription service.

    • @nothingtoseehere93
      @nothingtoseehere93 Před 2 měsíci

      He has to be careful now that the SJWs neutered him and sent him to tolerance camp. Thank the people who wrote absolute garbage like the contributor covenant code of conduct

    • @haroldcruz8550
      @haroldcruz8550 Před měsícem

      Then you're not in the loop. Linus was never the central figure of the FOSS movement. While his contribution to the Linux Kernel is appreciated he's not really considered one of the leaders when it comes to the FOSS movement.

    • @jasperdevries1726
      @jasperdevries1726 Před 12 dny

      @@haroldcruz8550 Well said. I'd expect stronger opinions from Richard Stallman for instance.

  • @avananana
    @avananana Před 4 měsíci +27

    I personally believe, much like many others, that AI/ML will only speed up the rate at which bad programmers become even worse programmers. Part of the art of writing software is writing it efficiently, and you can't do that if you always use tools to solve your problems for you. You need to experience the failures and downsides in order to fully understand how things work. There is a line where it turns from an efficient tool into a tool used to avoid actually thinking about solutions. I fully believe there is a place for AI/ML in making software, but if people blindly use it to write software for them, it'll just lead to hard-to-find bugs and code that nobody understands, because nobody actually wrote it.

    • @cookie_space
      @cookie_space Před 4 měsíci +7

      You don't always have to reinvent the wheel when it comes to learning how to code.
      Everyone starts by copying code from Stack Overflow, and many still do that for novel concepts they want to understand.
      It can be pretty helpful to ask AI for specific things instead of spending hours searching for something fitting...
      Sure, if you stop at just copying, you don't learn anything.

    • @conchitacaparroz
      @conchitacaparroz Před 4 měsíci

      @@cookie_space But I think that's the thing: the risk of "just copying" will be higher, because all the AI tools and AI features in our IDEs will make it a lot easier and more likely that the code is simply handed to you ready-made.

    • @Markus-iq4sm
      @Markus-iq4sm Před 4 měsíci +1

      @@cookie_space Everyone? Man, don't throw everyone into the same bucket. Are you the guy who can't even write a bubble sort from memory and needs to google every single solution? Well, that is sad.
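
      For reference, the bubble sort in question, as a minimal from-memory sketch:

      ```python
      # Bubble sort: repeatedly swap adjacent out-of-order pairs until sorted.
      def bubble_sort(xs: list) -> list:
          xs = list(xs)  # work on a copy, don't mutate the caller's list
          for n in range(len(xs) - 1, 0, -1):
              for i in range(n):
                  if xs[i] > xs[i + 1]:
                      xs[i], xs[i + 1] = xs[i + 1], xs[i]
          return xs

      print(bubble_sort([5, 2, 4, 1]))  # [1, 2, 4, 5]
      ```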

    • @cookie_space
      @cookie_space Před 4 měsíci +3

      @@Markus-iq4sm I wasn't aware that your highness was born with the knowledge of every programming language and concept imprinted in your brain already. It might be hard to fathom for you, but some of us actually have to learn programming at some point

    • @Markus-iq4sm
      @Markus-iq4sm Před 4 měsíci +1

      @@cookie_space You learn nothing by copy-pasting; if anything, it will make you worse, especially as a beginner.

  • @raielschwartz6837
    @raielschwartz6837 Před 4 měsíci +2

    It's truly fascinating to hear Torvalds' insightful perspective on how Artificial Intelligence is molding the programming landscape. This video does a commendable job of breaking down complex concepts into understandable dialogue for the viewers. AI's potential in automating tasks and improving efficiency is a game-changer, and it's exciting to see what the future holds in this sphere. Thank you for sharing such an enlightening discussion. Looking forward to more content like this.

  • @Nerdtronic
    @Nerdtronic Před 2 měsíci +2

    I use GPT-4 to help me program all the time now. It's much more impressive than what you would expect a "guessing what the next word should be" language model to be able to do. I tell it what I'm trying to do and it helps me do it, even to the point of writing full PHP scripts. It found a stupid bug I had been dealing with for a while: an extra 0 that turned 10 minutes' worth of milliseconds into 100 minutes. Pretty easy to do. I'm not a PHP programmer or a database programmer, but it wrote a simple server side for a hobby project for me. I can say "now I want to add this feature" and it'll write the new version of the code. It's pretty amazing.
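
    The arithmetic of that bug in a few lines (the constants are illustrative, not taken from the actual project):

    ```python
    # One extra zero in a milliseconds constant turns 10 minutes into 100.
    TEN_MINUTES_MS = 10 * 60 * 1000     # 600_000 ms -- correct
    BUGGY_MS = 6_000_000                # one zero too many
    print(BUGGY_MS / 60_000)            # 100.0 minutes instead of 10
    ```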

  • @sfacets
    @sfacets Před 4 měsíci +20

    If programmers aren't debugging their own work, they will gradually lose the ability to do so. Just like when a child learns to multiply with a calculator and not in their mind: they lose the ability to multiply and become reliant on the machine.
    Programmers learn as they program. It is mind-expanding work. Look at Torvalds and you see a person who is highly intelligent, because he has put the work in over many years.
    We can become more efficient programmers using AI tools, but it will come at a cost.
    "Everywhere we remain unfree and chained to technology, whether we passionately affirm or deny it. But we are delivered over to it in the worst possible way when we regard it as something neutral; for this conception of it, to which today we particularly like to do homage, makes us utterly blind to the essence of technology." - Martin Heidegger
    When a programmer, for example, is asked to check a solution given by AI and lacks the competency to do so (because, like the child, they never learned the process), then this is a dangerous position we as humans are placing ourselves in: caged in inscrutable logic that will nonetheless come to govern our lives.

  • @CausallyExplained
    @CausallyExplained Před 4 měsíci +8

    Linus is definitely not a sheep; you can tell just how different he is from the general crowd.

    • @chunkyMunky329
      @chunkyMunky329 Před 4 měsíci +2

      He is different, but something I've noticed is that smart people are great at understanding things the rest of us struggle with, yet can be kind of dumb when it comes to simple common sense. For him not to see the downside of an AI writing bad code for you is just kind of silly. It should be obvious that a more reliable tool is better than a less reliable one.

    • @justsomerandomnesss604
      @justsomerandomnesss604 Před 4 měsíci +3

      @@chunkyMunky329 There is no "more reliable tool", though.
      It's about the tools in your toolbox in general.
      Just because your hammer is really good at hammering in a nail, you're not gonna use it to saw a plank.
      Same with programming: you use the tools that get the job done.

    • @pauldraper1736
      @pauldraper1736 Před měsícem

      @@chunkyMunky329 You have an implicit assumption that people are more reliable tools than LLMs. I think that is up for debate.

    • @chunkyMunky329
      @chunkyMunky329 Před měsícem

      @@pauldraper1736 "people" is a vague term. Also, I never said that it was a battle between manual effort vs LLMs. It should be a battle between an S-Tier human invention such as a compiler vs an LLM. Great human-built software will cause chat GPT to want to delete itself

    • @pauldraper1736
      @pauldraper1736 Před měsícem

      @@chunkyMunky329 A linter is only one possible use of AI.

  • @roylxp
    @roylxp Před 3 měsíci +2

    No one is commenting on the moderator? He is doing a great job driving the conversation.

  • @frantisek_heca
    @frantisek_heca Před 4 měsíci +2

    Where is the full talk, please?

    • @Hobbitstomper
      @Hobbitstomper Před 4 měsíci +2

      Full interview video is called "Keynote: Linus Torvalds, Creator of Linux & Git, in Conversation with Dirk Hohndel" by the Linux Foundation channel.

  • @EllaJameson
    @EllaJameson Před 3 měsíci +119

    As someone with a degree in Machine Learning, hearing him call LLMs "autocorrect on steroids" gave me catharsis. The way people talk and think about the field of AI is totally absurd and grounded only in sci-fi. I want to vomit every time someone tells me to "just use AI to write the code for that" or similar.
    AI, as it exists now, is the perfect tool to aid humans (think pair programming, code auto-completion for stuff like simple loops, rough prototypes that can inspire new ideas, etc.). Don't let it trick you into thinking it can do anyone's job, though. It's just a digital sycophant; never forget that.

    • @vuralmecbur9958
      @vuralmecbur9958 Před 2 měsíci +12

      Do you have any valid arguments that make you think that it cannot do anyone's job or is it just your emotions?

    • @legendarymortarplayer9453
      @legendarymortarplayer9453 Před 2 měsíci

      @@vuralmecbur9958 If your job relies on not thinking and copy-pasting code, then yes, it can replace you. But if you understand code and can modify it properly to your needs and specifications, it cannot replace you. I work on AI as well.

    • @user-zf4nq1dy2n
      @user-zf4nq1dy2n Před 2 měsíci

      @@vuralmecbur9958 It's not about AI not being an "autocorrect on steroids". It's that there are a lot of jobs out there that could be done by autocorrect on steroids.

    • @DDracee
      @DDracee Před 2 měsíci

      @@vuralmecbur9958 Do you have any valid arguments as to why people will get laid off instead of companies scaling up their projects? A 200-300% increase in productivity simply means a 200-300% increase in future project sizes. The field you're working in is already dying anyway if scaling up isn't possible, and you're barking up the wrong tree.
      Where I'm working, we're constantly turning down projects because there's too much to do and no skilled labour to hire (avionics/defense).

    • @jeromemoutou9744
      @jeromemoutou9744 Před měsícem

      @@vuralmecbur9958 Go prompt it to make you a simple application and you'll see it's not taking anyone's job anytime soon.
      If anything, it's an amazing learning tool. You can study code, and anything you don't understand it will explain in depth. You don't quite grasp a concept? Prompt it to explain further.

  • @TheClonerx
    @TheClonerx Před 4 měsíci +3

    I'm still very worried about the copyright implications, and about the hidden immoral practices behind classifying training data.

    • @rithikgandhi3685
      @rithikgandhi3685 Před 4 měsíci +1

      Yes, no one is talking about the effect LLMs will have on creativity.

    • @fabsi154
      @fabsi154 Před 4 měsíci

      @@rithikgandhi3685 Yeah

  • @tecTitus
    @tecTitus Před 2 měsíci

    If you don't mind being a prompt engineer and a code reviewer for LLMs, they are great.

  • @nettebulut
    @nettebulut Před 4 měsíci +1

    03:34 "hopeful and humble that's my middle name" and laughing.. :)

  • @flokar6197
    @flokar6197 Před 4 měsíci +15

    I have never programmed before in my life, and with GPT-4 I have written several little programs in Python, from code that helps me rename large numbers of files to more advanced stuff. LLMs give me the opportunity to play around. The only thing I need to learn is how to prompt better.
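
    A minimal sketch of the kind of bulk-rename script described here (the folder, file pattern, and naming scheme are hypothetical):

    ```python
    # Rename every .jpg in a folder to a numbered scheme.
    from pathlib import Path

    folder = Path("./photos")
    for i, path in enumerate(sorted(folder.glob("*.jpg")), start=1):
        path.rename(folder / f"vacation_{i:04d}.jpg")  # vacation_0001.jpg, ...
    ```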

    • @kevinmcq7968
      @kevinmcq7968 Před 4 měsíci +2

      You're a programmer in my eyes!

    • @twigsagan3857
      @twigsagan3857 Před 4 měsíci +4

      "Only thing I need to learn is how to prompt better."
      This is exactly the problem. Especially when you scale. You can't prompt to make a change to an already complex system. It then becomes easier to just code or refactor yourself.

    • @chunkyMunky329
      @chunkyMunky329 Před 4 měsíci

      The fact that anybody needs to "prompt better" suggests that LLMs are not very good yet.

    • @flokar6197
      @flokar6197 Před 4 měsíci

      @@twigsagan3857 The only problem is when the code exceeds the token limit. Otherwise I can still have the LLM correct my code. It takes a while to get there, but it works. And no, I am not at all a programmer xD

    • @flokar6197
      @flokar6197 Před 4 měsíci +1

      @@chunkyMunky329 Huh? LLMs predict the most likely answer, so the way you describe the task is the most important thing in dealing with them.

  • @tiagocerqueira9459
    @tiagocerqueira9459 Před 4 měsíci +6

    Humans also "hallucinate" when we write bugs and often LLM can catch them. And we will still be here to catch LLM hallucinations.

    • @vibovitold
      @vibovitold Před 7 dny

      OK, but the goal is to create a tool that's superior to humans, not one that just replicates humans, still making errors but faster. :)
      "Here's a new thing I invented - it's called a calculator, and you can use it to add and multiply big numbers. The only snag is that some of the results will be incorrect."
      "That's fine - when humans count by hand, some of the results are incorrect, too." ;)
      Well, of course, but that's not quite the point...
      And there's still a difference, because AIs have an inherent 100% "sense of confidence" when they hallucinate.
      It's not a bug, it's a feature.
      As someone put it, they are by definition "dream machines".
      They work by creating plausible-looking solutions, based on the assumption that a plausible-looking solution is likely to be correct. (And more often than not, it is... until it's not.)
      We do make mistakes: we make off-by-one errors when iterating over an array, we may confuse dates when discussing historical events, and if I ask you about a novel, you may confuse it with something else you actually read.
      But the difference is that an AI doesn't "run out of steam" when it hallucinates.
      You can ask ChatGPT about a non-existent novel.
      It will make it up on the fly. You can ask it about specific scenes and characters, and it will invent them all.
      That's not exactly how a human mind - fallible as it is - works.
      Besides:
      "And we will still be here to catch LLM hallucinations."
      I wouldn't be so optimistic about it.
      Reviewing and debugging code is notoriously harder than writing it, and AI hallucinations may not be easy for humans to detect, because we're trained to spot errors made by other humans, who think kind of like we do. The AIs are not, and won't be, "thinking" like we do.
      Once AI is tasked with writing large and complex systems, at a much faster pace than we write them, it may become very hard to supervise it effectively.

  • @programmingwithyunusemrevu7222
    @programmingwithyunusemrevu7222 Před 4 měsíci +1

    For those commenting that there won't be coding in a couple of years, I'd like to remind you of scientific calculators and the software built around them. We didn't stop doing math by hand; we just made some tasks faster and more accurate. You will always need to learn the "boring" parts even if there is a "calculator". Your brain needs the boring stuff to produce more complex results.

  • @mikey1836
    @mikey1836 Před 8 dny

    Amazing that Linus accepts AI. Some techies are disparaging of AI. A truly smart person looks at the pros and cons, rather than just being dogmatically for or against.

  • @shobanchiddarth_old
    @shobanchiddarth_old Před 4 měsíci +3

    Link to original?

    • @Hobbitstomper
      @Hobbitstomper Před 4 měsíci +1

      Full interview video is called "Keynote: Linus Torvalds, Creator of Linux & Git, in Conversation with Dirk Hohndel" by the Linux Foundation channel.

  • @DemPilafian
    @DemPilafian Před 4 měsíci +4

    Auto-correct can cause bugs like tricking developers into importing unintended packages. I've seen production code that should fail miserably, but pure happenstance results in the code miraculously not blowing up. AI is a powerful tool, but it will amp up these problems.

    • @caLLLendar
      @caLLLendar Před 3 měsíci

      No. Thinking like a programmer, are you able to come up with a solution?

  • @germanrinaldi7830
    @germanrinaldi7830 Před 4 měsíci

    Where can I find the full interview?

    • @Hobbitstomper
      @Hobbitstomper Před 4 měsíci +1

      Full interview video is called "Keynote: Linus Torvalds, Creator of Linux & Git, in Conversation with Dirk Hohndel" by the Linux Foundation channel.

  • @BCOOOL100
    @BCOOOL100 Před 3 měsíci

    Link to the original?

  • @MrVampify
    @MrVampify Před 4 měsíci +24

    I think LLM technology will make bad programmers faster at being bad programmers, and hopefully it will also push them to become better programmers faster.
    LLMs, I think, will make good programmers more efficient at writing the good code they would probably write anyway.

    • @melvin6228
      @melvin6228 Před 4 měsíci +7

      LLMs solve the problem of having to remember how to write things. You still have to be able to read the code and have good judgement about where it is subpar.

    • @skyleite
      @skyleite Před 4 měsíci +7

      @@melvin6228 This is nonsense. How can you audit code that you yourself don't remember how to write?

    • @yjlom
      @yjlom Před 4 měsíci +2

      @@skyleite is that function you use twice a year called "empty_foo_bar" or "clear_foo_bar"? Or maybe "foo_bar_clear"? Those kinds of questions are very important and annoying to answer when writing, useless when reading.

    • @unkarsthug4429
      @unkarsthug4429 Před 4 měsíci +5

      ​@@yjlom Or even just something as simple like the question of how you get the length of an array in the particular language you are using. After using enough languages, they kind of all blend together, and I can't remember if this one is x.length, x.length(), size(x), or len instead of length somewhere. I'm used to flipping between a lot of languages quickly, and it's really easy to forget the specifics of a particular one sometimes, even if I understand the flow I would like the program to follow. Essentially, having an AI that can act as a sort of active documentation can really help.

    • @RobFisherUK
      @RobFisherUK Před 4 měsíci

      I was using ChatGPT to help me write code just today. I'm making a Python module in Rust and I'm new to Rust.
      I wanted to improve my error handling. I asked how to do something and ChatGPT explained that I could put Results in my iterator and just collect at the end to get a vector if all the results are ok or an error if there was a problem. I didn't understand how that worked and asked a bunch of follow-up questions about various edge cases. ChatGPT explained it all.
      Several things happened at once: I got an immediate, working solution to my specific problem. I didn't have to look up the functions and other names. And I got tutored in a new technique that I'll remember next time I have a similar situation.
      And it's not just the output. It's that your badly explained question, where you don't know the correct terminology, gets turned into a useful answer.
      On a separate occasion I learned about the statistical technique of bootstrapping by coming up with a similar idea myself and asking ChatGPT for prior art. I wouldn't have been able to search for it without already knowing the term.
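
      The technique described is Rust's collect-into-Result idiom, i.e. `iter.collect::<Result<Vec<_>, _>>()`, which yields all values or stops at the first error. A rough Python analogue, purely for illustration since Python has no Result type:

      ```python
      # Keep every Ok value, or stop with the first Err -- illustrative only.
      def collect_results(results):
          """('ok', [values...]) if all are ok, else the first ('err', reason)."""
          values = []
          for tag, payload in results:
              if tag == "err":
                  return ("err", payload)   # first error short-circuits
              values.append(payload)
          return ("ok", values)

      parsed = [("ok", 1), ("ok", 2), ("err", "bad line 3"), ("ok", 4)]
      print(collect_results(parsed))        # ('err', 'bad line 3')
      ```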

  • @hyphenpointhyphen
    @hyphenpointhyphen Před 4 měsíci +10

    I think some humans would be glad if they still had the time to hallucinate, dream or imagine things from time to time.

    • @asainpopiu6033
      @asainpopiu6033 Před 4 měsíci +1

      good point xD

    • @verdiss7487
      @verdiss7487 Před 4 měsíci

      I think most project leads would not be glad if one of their devs submitted a PR for code they hallucinated

    • @hyphenpointhyphen
      @hyphenpointhyphen Před 4 měsíci

      @@verdiss7487 Not what I am talking about.

    • @pueraeternus.
      @pueraeternus. Před 4 měsíci

      late stage ca-

    • @asainpopiu6033
      @asainpopiu6033 Před 4 měsíci

      @@pueraeternus. cannibalism?

  • @gleitonfranco1260
    @gleitonfranco1260 Před 2 měsíci

    There are already lint tools for several languages that kind of do this work.

  • @terry-
    @terry- Před 2 měsíci

    Great!

  • @lindhe
    @lindhe Před 4 měsíci +3

    "Hopeful and humble" sounds like a good name for a Linux release. Just saying…

  • @roaringdragon2628
    @roaringdragon2628 Před 4 měsíci +9

    I find that, in their current state, these models tend to create more work for me deleting and fixing bad code and poor comments than the work they save. It's usually faster for me to write something and prune it than to prune the AI's code. This may be partially because it's easier for me to understand and prune my own code than to do the same with the generated stuff, but there is usually a lot less pruning to do without AI.

    • @voltydequa845
      @voltydequa845 Před 4 měsíci

      No, it's not only you. Your comment was like a breath of fresh air in the middle of all this pseudo-cognitive farting about so-called AI. Those who say otherwise are just posers, actors, mystifying parrots repeating the instilled marketing hype.

  • @piotrek7633
    @piotrek7633 Před 2 měsíci +2

    You people don't understand: the question was never whether AI would replace programmers; it was always whether AI will reduce job positions by a critical amount, so that it becomes hard to get hired.

  • @supernewuser
    @supernewuser Před 4 měsíci +1

    Actually a surprising take from him, but it's an accurate one.

  • @Kersich86
    @Kersich86 Před 4 měsíci +4

    My main fear is that this is something we will start relying on too much, especially when people are starting out. Even autocompletion can become a crutch, so much so that a developer becomes useless without it. Imagine that, but for thinking about code. We are looking at a future where all software will be as bad as modern web development.

    • @kevinmcq7968
      @kevinmcq7968 Před 4 měsíci

      technology as an idea is reliable - a hammer will always be a hard thing + leverage. We have relied on technology since the dawn of mankind, so I'm not sure what you're saying here.

    • @knufyeinundzwanzig2004
      @knufyeinundzwanzig2004 Před 4 měsíci

      @@kevinmcq7968 LLMs are reliable? How so? Can you name a technology that we have relied on in the past that is as random as LLMs? I am genuinely curious.

    • @diadetediotedio6918
      @diadetediotedio6918 Před 3 měsíci

      @@kevinmcq7968
      I think you are just intentionally misunderstanding what he is saying. He is not saying tools are not useful; he is saying that if a tool starts to replace the use of your own mind, it can make you so dependent that it harms your own reasoning skills (and we have some evidence this is happening; that's why some schools are going back to handwriting, for example. Miguel Nicolelis also has some takes on this matter).

  • @nissimtrifonov5314
    @nissimtrifonov5314 Před 4 měsíci +5

    Somehow, Palpatine returned 😯😯😯

  • @cesarlapa
    @cesarlapa Před 4 měsíci +1

    That Canadian guy was lucky enough to be given the name of a true tech genius

  • @oneforallah
    @oneforallah Před 9 dny

    Linus Torvalds came across as the BIGGER MAN for me: he appreciated the sophistication of how LLMs work (a field I work in, btw), saying they are not simply rudimentary autocorrect on steroids, and that we all do the same thing in a much, much more complex way, which is 100% true. Lastly, his appreciation of practical use cases is based.

  • @EdwardBlair
    @EdwardBlair Před 4 měsíci +2

    “Autocorrect on steroids” is what people who are experts in their own field of engineering say when they aren't SMEs in ML. Human intelligence is just “autocorrect on steroids”: we predict what we believe is the most logical next step, just in a much more efficient manner than our current silicon hardware can execute.

  • @LarsLarsen77
    @LarsLarsen77 Před 4 měsíci +4

    This host underestimates how hard a task autocorrect is. You have to understand human sentiment to predict the next word, which is really hard.

  • @HonoredMule
    @HonoredMule Před 4 měsíci +8

    This is the first time I've seen a public figure push back on the human-centric narrative that LLMs are insufficient because (insert description of LLMs with the false implicit assumption that it marks a distinction from human intelligence). He's also one of the last people in tech I'd expect to find checking human-exceptionalism bias, but that's where assumptions get you.
    Then again, his role as kernel code gatekeeper probably gives him pretty unique insight into the limits of _other_ humans' intelligence, if not also his own. 😉
    Anyway, I hope to see more people calling out this bias, or fewer people relying on it in their arguments. If accepted, it tends to render any following discussion moot.

    • @Jonas-Seiler
      @Jonas-Seiler Před 4 měsíci

      You shouldn't conclude that LLMs aren't dumb as fuck just because they happen to be smarter than you.

  • @TehIdiotOne
    @TehIdiotOne Před 3 měsíci

    I'm actually surprised that he seems quite open to it, but his points do make a lot of sense.

  • @chrisakaschulbus4903
    @chrisakaschulbus4903 Před 3 měsíci

    Just pasting some Java code I wrote and asking a stupid question like "why does it overflow?" has saved me many headaches.
    Googling these problems can lead to a lot of unrelated info, and then it's just other people pasting their own code and asking questions.

  • @marc_frank
    @marc_frank Před 4 měsíci +9

    I'm really good at bugs