Large Language Models and The End of Programming - CS50 Tech Talk with Dr. Matt Welsh

  • Added 28 Oct 2023
  • The field of Computer Science is headed for a major upheaval with the rise of large AI models, such as ChatGPT, that are capable of performing general-purpose reasoning and problem solving. We are headed for a future in which it will no longer be necessary to write computer programs. Rather, I believe that most software will eventually be replaced by AI models that, given an appropriate description of a task, will directly execute that task, without requiring the creation or maintenance of conventional software. In effect, large language models act as a virtual machine that is “programmed” in natural language. This talk will explore the implications of this prediction, drawing on recent research into the cognitive and task execution capabilities of large language models.
    Matt Welsh is Co-founder and Chief Architect of Fixie.ai, a Seattle-based startup developing a new computational platform with AI at the core. He was previously head of engineering at OctoML, a software engineer at Apple and Xnor.ai, engineering director at Google, and a Professor of Computer Science at Harvard University. He holds a PhD from UC Berkeley.
    ***
    This is CS50, Harvard University's introduction to the intellectual enterprises of computer science and the art of programming.
    ***
    HOW TO SUBSCRIBE
    czcams.com/users/subscription_c...
    HOW TO TAKE CS50
    edX: cs50.edx.org/
    Harvard Extension School: cs50.harvard.edu/extension
    Harvard Summer School: cs50.harvard.edu/summer
    OpenCourseWare: cs50.harvard.edu/x
    HOW TO JOIN CS50 COMMUNITIES
    Discord: / discord
    Ed: cs50.harvard.edu/x/ed
    Facebook Group: / cs50
    Facebook Page: / cs50
    GitHub: github.com/cs50
    Gitter: gitter.im/cs50/x
    Instagram: / cs50
    LinkedIn Group: / 7437240
    LinkedIn Page: / cs50
    Medium: / cs50
    Quora: www.quora.com/topic/CS50
    Reddit: / cs50
    Slack: cs50.edx.org/slack
    Snapchat: / cs50
    SoundCloud: / cs50
    Stack Exchange: cs50.stackexchange.com/
    TikTok: / cs50
    Twitter: / cs50
    YouTube: / cs50
    HOW TO FOLLOW DAVID J. MALAN
    Facebook: / dmalan
    GitHub: github.com/dmalan
    Instagram: / davidjmalan
    LinkedIn: / malan
    Quora: www.quora.com/profile/David-J...
    TikTok: / davidjmalan
    Twitter: / davidjmalan
    ***
    CS50 SHOP
    cs50.harvardshop.com/
    ***
    LICENSE
    CC BY-NC-SA 4.0
    Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License
    creativecommons.org/licenses/...
    David J. Malan
    cs.harvard.edu/malan
    malan@harvard.edu

Comments • 2K

  • @donesitackacom
    @donesitackacom 6 months ago +1049

    "AI will replace us all, anyway here's my startup"
    Exactly 8 days later, OpenAI released a single feature (GPTs) that solved the entire premise of his startup.

    • @CGiess
      @CGiess 6 months ago +61

      So true hahahaha

    • @tomasurbonas5835
      @tomasurbonas5835 6 months ago +25

      Oh my god, thought exactly the same!

    • @miguelfernandes6533
      @miguelfernandes6533 6 months ago +75

      Funny thing is he said programming will die but it was exactly through programming that the new feature that solved the premise of his startup was created

    • @KP-sg9fm
      @KP-sg9fm 6 months ago +93

      Which just further reaffirms everything else he said. Too many people are coping right now; LLMs are gonna put a lot of people out of work, not just programmers. I work customer service and internally I am freaking out right now.

    • @ste1zzzz
      @ste1zzzz 6 months ago +30

      so he was correct, AI will replace us all ))

  • @miraculixxs
    @miraculixxs 5 months ago +316

    'See I don't know how it works and I'm ok with that' - that pretty much sums up the presentation.

    • @hlibborysov3655
      @hlibborysov3655 3 months ago +20

      Yeah, you don't have to know every detail of a Honda, just buy it and drive it

    • @rmsoft
      @rmsoft 3 months ago +14

      Well, you can get pieces of code, and I've done it already; chatting with ChatGPT helps a lot to get inside once you ask the right questions. This presentation is just babbling; I'm waiting for a presentation on full, useful application development using AI.

    • @contanoiutube
      @contanoiutube 3 months ago

      @@hlibborysov3655 but then don’t call yourself a car engineer

    • @CapeSkill
      @CapeSkill 3 months ago

      @@hlibborysov3655 you can drive it, but you cannot lecture people about how it works and how it's going to revolutionize the "future"

    • @davidlee588
      @davidlee588 3 months ago

      @@hlibborysov3655 but people who built Honda know every detail of a Honda.

  • @amansahani2001
    @amansahani2001 5 months ago +561

    "People, writing in C is a federal crime in 2023" is the most misleading statement. Man, how do you design low-latency embedded systems without C? Lots of low-level devices are dependent on C. Even Tesla's FSD and Autopilot use C++. IoT devices use C.

    • @happywednesday6741
      @happywednesday6741 5 months ago +58

      No one cares bro

    • @anilgandhi
      @anilgandhi 5 months ago

      Tesla is going to rewrite 300k lines of code using neural networks, no more C or C++.

    • @easygreasy3989
      @easygreasy3989 5 months ago +25

      I bet u I can get my gran to type that into GPT4 and would do better than what ur whole team could do 2 years ago. U better hold on bra, I don't think ur ready. 😶

    • @amansahani2001
      @amansahani2001 5 months ago +142

      @@easygreasy3989 bruh, go and ask your GPT Boi to write assembly code for newly designed chips from any vendor. Those LLMs can't generate code outside the scope of their training data. If you've written an LLM from scratch, or at least read the paper, then you know what I'm talking about. Else I strongly suggest you go and study CS 182.

    • @happywednesday6741
      @happywednesday6741 5 months ago +16

      @@amansahani2001 God of the gaps my guy, soon an AI will be better at that too, why wouldn't they?

  • @imba69420
    @imba69420 5 months ago +297

    LLMs are going to replace idiots doing stupid talks 100%.

    • @DarthKumar
      @DarthKumar 3 months ago +2

      Lmao 😂😂😂

    • @DipeshSapkota-lo3un
      @DipeshSapkota-lo3un 2 months ago +4

      natural language programming is a thing now, accept it

    • @gdwe1831
      @gdwe1831 2 months ago +4

      ​@@DipeshSapkota-lo3un natural language is imprecise and makes a poor programming language.

    • @DipeshSapkota-lo3un
      @DipeshSapkota-lo3un 2 months ago +2

      Yes, I get it, but that basically means we don't need the software cycle anymore. All those clean-code rules for dev-to-dev readability aren't required now, since you just need to understand what the function is doing, and for that a dev will be there 😉 What matters now is input, output, and the definition of the function, and that's what the business wants too!

    • @imba69420
      @imba69420 2 months ago +5

      @@DipeshSapkota-lo3un Tell me you've never touched code without telling me.

  • @linonator
    @linonator 6 months ago +515

    I get the clickbait title, but it can be really discouraging to people who are thinking about getting into software engineering. "Like, why even try if AI is gonna do it?"
    Mainly because it's coming from an institution like this. I know it'll take time to eventually get there, but a lot of people have already lost hope, and new students thinking about joining may just turn in a different direction.
    Note: I'm not speaking of myself here. I'm a senior engineer, and I volunteer at coding camps on weekends and tutor online, and I get this sentiment from the people I coach and teach. When you're completely new to a field and you see things like this from a reputable institution, along with all the hoopla of tech bloggers online, it does discourage many people from trying to enter this field.

    • @samk6170
      @samk6170 6 months ago +49

      perhaps, but such is reality.

    • @sineadward5225
      @sineadward5225 6 months ago +35

      Still, 'everyone should learn to code' is valid. Just do it anyway for your own intellectual development. No point in trying to blame a video title for not doing something. Just do it.

    • @Boogieeeeeeee
      @Boogieeeeeeee 6 months ago +6

      It's the presentation name, bud. Don't get discouraged, presenters often put a clickbaity title but then debunk said title during the presentation. In any case, it's what this guy wanted to call his presentation, can't really fault Harvard for it.

    • @fintech1378
      @fintech1378 6 months ago +9

      we've got to face this 'harsh' reality head on, there is nothing you can do

    • @phsopher
      @phsopher 6 months ago +71

      Somewhere in 1889: Welcome to my talk titled "Cars and the end of horse carriages".
      Someone in the audience: Very mean and discouraging title, dude, what about all the people who want to become a horse carriage driver?

  • @fredg8328
    @fredg8328 5 months ago +345

    That reminds me of when I was in middle school. My teacher had to teach us how to program in BASIC, but he really didn't want to. So he simply told us, "in 2 or 3 years we will have speech recognition, so you don't need to learn programming". That was 35 years ago... It's a bit bold to claim that programming languages have not improved the way we code in 50 years and to think AI will save us.

    • @dansmar_2414
      @dansmar_2414 5 months ago +15

      one day they will get it right

    • @vladimir945
      @vladimir945 5 months ago +34

      I remember one of my teachers, while not being bold enough to speak about speech recognition in the early '90s, saying that there were _already_ only system programmers left; the application programmers had been made obsolete by (are you ready for it?) SuperCalc, a spreadsheet program for MS-DOS, and the like. Makes me wonder, now that I think of it, why there would still be a need for system programmers if MS-DOS was already a sufficient operating system for the only applied task that was left: running SuperCalc...

    • @edmundkudzayi7571
      @edmundkudzayi7571 5 months ago +3

      You've clearly not used Grimoire. It's game over.

    • @IAAM9
      @IAAM9 5 months ago +8

      Most probably you have not used AI enough; it's magical in some sense. You will realize soon, give it a year or two.

    • @raylopez99
      @raylopez99 5 months ago +3

      But speech recognition is really good these days...it just took about 10-35 years, depending on how 'good' you think 'good' is (I recall speech recognition that was decent about 25 years ago).

  • @MarceloDezem
    @MarceloDezem 6 months ago +240

    "If the dev is not using copilot then he's fired". Tell me you never worked in a commercial application without telling me you've never worked in a commercial application.

    • @jak3f
      @jak3f 3 months ago +19

      What do you think he's writing? Personal pet projects? Lmao.

    • @tracyrreed
      @tracyrreed 1 month ago +3

      @@jak3f He's marketing. Not writing.

    • @LarsRyeJeppesen
      @LarsRyeJeppesen 21 days ago

      I wager that Code Assist with Gemini 1.5 is much better than Copilot now.

    • @gaiustacitus4242
      @gaiustacitus4242 5 days ago

      @@jak3f Have you ever heard of copyright law? Are you seriously unaware that federal courts have already ruled that AI generated output is ineligible for copyright protection?

    • @jak3f
      @jak3f 5 days ago

      @@gaiustacitus4242 good luck proving that

  • @alborzjelvani
    @alborzjelvani 5 months ago +406

    The example with Conway's Game of Life does no justice to the 50 years of programming language research he refers to. Also, Rust was designed to overcome the memory-safety problems that plagued C and C++; it is a programming language that emphasizes performance and memory safety. Programming languages like Fortran and C were designed the way they are for a very specific reason: they target von Neumann architectures, and fall under the category of "von Neumann programming languages". The goal of these languages is to give humans a language to specify the behavior of a von Neumann machine, so of course the language itself will have constructs that model the von Neumann architecture. Programming languages like Rust or C do exactly what they were designed to do; they are not "attempts" to improve only code readability for Conway's Game of Life when compared to Fortran.
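(For reference, the Game of Life example the talk and this thread keep returning to fits in a few lines of any modern high-level language; a minimal Python sketch, written for illustration and not taken from the talk:)

```python
from collections import Counter

def life_step(live):
    """Advance Conway's Game of Life by one generation.
    `live` is a set of (x, y) coordinates of live cells."""
    # Count how many live neighbors each candidate cell has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives next generation if it has exactly 3 live neighbors,
    # or 2 live neighbors and was already alive.
    return {c for c, n in neighbor_counts.items()
            if n == 3 or (n == 2 and c in live)}

# A "blinker" oscillates between a horizontal and a vertical bar of 3 cells.
blinker = {(0, 0), (1, 0), (2, 0)}
print(life_step(blinker))  # vertical bar: cells (1, -1), (1, 0), (1, 1)
```

Which language this is "readable" in is exactly the kind of judgment the 50 years of language research was about.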

    • @hanielulises841
      @hanielulises841 5 months ago +7

      Totally agree with your comment

    • @datoubi
      @datoubi 5 months ago +12

      well they could become irrelevant though. Because the programming language of the future probably looks like minified JavaScript and will be designed by AI for AI.

    • @true_xander
      @true_xander 5 months ago

      @@datoubi good luck with that, see you in 10 years. Humans should not lose control over their own lives and the things their lives depend on. As soon as they do, they'll become slaves of their own technology. And even though there still won't be a cent of consciousness in a machine in 50 years, if humans lose the ability to understand software on their own without "AI" help, it could quickly become a tragedy for 1000 reasons other than the comic-book 'machine revolt'.

    • @ruffianeo3418
      @ruffianeo3418 5 months ago +14

      If natural language were such a SUPERIOR specification language, there would not be ongoing efforts to find working specification languages. What he claims is that plain English is the best you can ever get :)

    • @wi2rd
      @wi2rd 5 months ago +12

      True, yet none of that is an argument against his point.

  • @fayezhesham1057
    @fayezhesham1057 6 months ago +254

    I think it's time for Dr. Matt and his team to pivot away from fixie's custom chat GPT idea after OpenAI released GPTs.
    How unexpected!

    • @castorseasworth8423
      @castorseasworth8423 6 months ago +15

      I was thinking the same. It is basically the GPTs concept, although Fixie's AI.JSX still offers seamless integration into a React app. Let's see OpenAI's response to that

    • @merridius2006
      @merridius2006 5 months ago +23

      @@rahxl while you are right it doesn't mean he's wrong

    • @brandall101
      @brandall101 5 months ago

      @@castorseasworth8423 So you can just use their Assistants API and create a React front-end on your own.

    • @TransgirlsEnjoyer
      @TransgirlsEnjoyer 5 months ago +16

      @@rahxl whether he does it or somebody else is immaterial; OpenAI just proved his concept was right and worthy. He is already successful while you need to find a good job

    • @NicolasNarvaezB
      @NicolasNarvaezB 5 months ago

      @@merridius2006 @TheObserver-we2co This is not scientifically correct. A program written for a given task X can be written (and exist in hardware) such that it is the theoretically most performant solution, while an AI can cost a million times more to run the same task; take "2+2", for example. At the same time, a program is a crystallized form of ontology and intelligence: instead of reasoning out the solution on every execution, programs grow as a library of efficient solutions that don't need to be thought through over and over again. In the future it is programming languages that will remove the need to write code, as we approach an objective description of computable problems that we will be able to write for the last time; in a way we already did this with libraries (in a disorganized way). And obviously we will use AI to help write these programs, but because we will solve these problems a single time for good, we will review, read, and write them ourselves as a form of verification, just as today. After that we will use an optimized form of AI that maps these solved solutions onto user requests, but interfaces will also be mature enough (think of spatial, gestural, and contextual interfaces) to make speech obsolete. Current LLMs are more a trend of our times than the ideal, efficient, infallible solution we need to standardize on across all aspects of society and IT.
      If all the software already running on your computers ran using AI, it would cost thousands of times more in energy and time; software is already close to the theoretical maximum efficiency, and ideal software is closer to solved math than to stochastic biology or random neuron dynamics. Training a model better won't solve any of these things.
      And AIs that evolve into more performant solutions are statistical models programmed onto known subsets of a problem after the mathematical model of the problem is understood well enough to do that; it is the same thing we have always done. Statistics like those used in modern LLMs have always been used in computers and are part of what programs are required to do.
      Just imagine if every key we pressed were interpreted by AI just to reach your browser.
      Along with all this, we still have a lot of work to do; I would say we have only written a third of all the software the world needs, and at the same time almost all the software that already exists needs to be rewritten in new languages closer to the new level of abstraction and ontological organization described here. Given time, all code in C++ will be moved to Rust, Rust will be replaced by an even better language, and no institution will just let you do it with AI and not read or understand what it did.
      Just go study, and stop being silly thinking you know what programming is without any real experience in the field; all these opinions come from marketers, hustlers, wannabes, teenage AI opiniologists, and doomers.

  • @cruzjay
    @cruzjay 6 months ago +49

    He called CSS "a pile of garbage" and said that writing C should be a federal crime. I smell senior-engineer burnout: someone who wants to just cash in on his startup and go work on a farm.

    • @-BarathKumarS
      @-BarathKumarS 5 months ago +9

      his startup flopped horribly btw lol.

    • @anthonyd4703
      @anthonyd4703 5 months ago +1

      Hahaha, even as a newbie I kinda agree with you

  • @SmoothHitt
    @SmoothHitt 6 months ago +23

    Do not be discouraged.
    Enjoy life and study what you are interested in. Everything else will fall into its rightful place. Tomorrow is not guaranteed, do not fret about things beyond your control.

  • @frankgreco
    @frankgreco 5 months ago +15

    His startup is completely based on a JavaScript framework. You don't need an LLM to tell you that was a bad idea.

  • @rohan2962
    @rohan2962 5 months ago +42

    He starts off with "no one will code" and ends with his own programming language for AIs. lol

  • @ldandco
    @ldandco 6 months ago +283

    Software engineering will eventually be the role of just a few, not because of AI replacing jobs, but because of the discouragement many people will feel, quitting before even starting the journey

    • @darylallen2485
      @darylallen2485 6 months ago +39

      One day, people may look at code the same way we look at the Pyramids. The knowledge of Pyramid making came and went.

    • @reasonerenlightened2456
      @reasonerenlightened2456 5 months ago +13

      we need 4 mechanical engineers and 2 electronic engineers for every software engineer, because software is easy.

    • @hungrygator4716
      @hungrygator4716 5 months ago

      @@reasonerenlightened2456 software is easy. Good software is hard.

    • @dwight4k
      @dwight4k 5 months ago +1

      Or will we need coders for the lower levels?

    • @KienHoang-jc6gw
      @KienHoang-jc6gw 5 months ago +26

      @@reasonerenlightened2456 you don't even know the difference between an engineer and a developer...

  • @TheOriginalJohnDoe
    @TheOriginalJohnDoe 6 months ago +228

    Dr. Welsh does make good statements I think we can all agree on, but as an AI student and software engineer of 10+ years, regarding what Welsh said ("People still program in C in 2023"): if you study AI you will even learn assembly, very low-level programming, and since models have been written by programmers, we still need programmers to maintain and improve them. AI is getting there, but it's still at a very immature level compared to the maturity we seem to desire as humanity. We still need PhD students with a solid programming and AI background to do extensive research within the field of AI in order to help invent new technologies, specialized chips, improved algorithms, etc. We are still far away from letting AI generate code that is as good as a programmer who has mastered it. Sure, it can write code, but there's still a ton of scenarios where it fails to make things work.

    • @timsell8751
      @timsell8751 6 months ago +28

      2 more years should do the trick!

    • @reasonerenlightened2456
      @reasonerenlightened2456 5 months ago +20

      Before thinking of AI use in the society we must agree who will Profit from it, who will own it and who will pay for the mistakes of the AI? Is it going to be like, "Oh well, bad luck" when AI ends someone's life?

    • @LucidDreamn
      @LucidDreamn 5 months ago +11

      I give it 5 more years before AI is super-intelligent

    • @headlights-go-up
      @headlights-go-up 5 months ago +23

      @@LucidDreamn based on what data?

    • @chuangcaiyan7114
      @chuangcaiyan7114 5 months ago +4

      I think the problem is the purpose or goal of the program you are programming. In the case of Conway's Game of Life, the concept itself is not easy to explain even in human language. We can get some idea by watching it perform, but understanding it completely, from logic to meaning, or even to purpose and what correlation it has with other topics such as math, physics, or philosophy, is just not easy, and it won't be easy anyway

  • @Denzelzeldi
    @Denzelzeldi 6 months ago +594

    Don't waste your time on this talk; it's a complete waste of time. Just another guy pitching his AI startup in the disguise of a lecture/talk. Also, I really expected something revolutionary from his startup; it is basically just a different syntax for using the ChatGPT API. Didn't expect this to be endorsed on CS50.

    • @bradyfractal6653
      @bradyfractal6653 6 months ago +55

      Took the words out of my mouth.

    • @codybishop7526
      @codybishop7526 6 months ago +62

      Yeah, the fact that this snake-oil salesman is being given a platform by CS50 is really discouraging to me. Anyone who understands how these models work will understand their limitations. At best they act as a tool, and at worst they are a hindrance.

    • @royaltoadclub8322
      @royaltoadclub8322 6 months ago +53

      He's a professor of Computer Science at Harvard University. What are your credentials?

    • @mytech6779
      @mytech6779 6 months ago

      @@royaltoadclub8322 Totally irrelevant to the critiques being made.

    • @headlights-go-up
      @headlights-go-up 6 months ago +101

      @@royaltoadclub8322 you thinking that being a professor is a valid credential in the context of actually building products shows your ignorance. Academia is a far cry from realistic business operation

  • @pjcamp-eq1mj
    @pjcamp-eq1mj 6 months ago +124

    The talk was a perfect segue into an AI startup ad

    • @joseoncrack
      @joseoncrack 6 months ago +7

      Indeed.

    • @jimbobkentucky
      @jimbobkentucky 6 months ago +13

      Seems like a lot of the invited speakers are hawking something.

    • @poeticvogon
      @poeticvogon 5 months ago +1

      I am pretty sure it was all an ad.

    • @gaditproductions
      @gaditproductions 1 month ago

      @@poeticvogon this is CS50... it's a class... they won't just run an ad and risk losing credibility... if this is coming from an institution like this, things are very, very serious.

    • @poeticvogon
      @poeticvogon 1 month ago

      @@gaditproductions Of course they would. They just did.

  • @kpharck
    @kpharck 5 months ago +43

    Law is written in plain English too. For reproducible results, the limit of input precision will lie where modern legal jargon reaches its least understandable form. You will be left with an input that is still as hard to comprehend as programming-language text, but much less precise. Good for YouTube descriptions, perhaps, but not for avionics.
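(The precision point can be made concrete: even one plain-English instruction like "round the price to two decimals" hides a behavioral choice that code is forced to make explicit. A small illustrative Python sketch, my example rather than the commenter's:)

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

price = Decimal("2.665")

# Two defensible readings of "round the price to two decimals":
# banker's rounding (ties go to the even digit) vs. half-up rounding.
bankers = price.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)  # 2.66
half_up = price.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)    # 2.67

print(bankers, half_up)  # the English spec never said which one you meant
```

A program must pick one behavior; a natural-language "spec" leaves both readings open, which is exactly the reproducibility problem described above.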

    • @oldspammer
      @oldspammer 3 months ago +1

      The constitution and most contracts are in legalese, which looks like English but strictly is NOT. To know and appreciate fully what is said in legal documents, you must use a legal dictionary. Capitalization is often key. Amateur researchers have uncovered much hidden history by seeing what is said and meant in older legal documents. The world turns out to be more nuanced than I thought, judging by the lectures from these legal scholars telling us what the elite have in store for us.
      Here is an example,
      London the strawman identity youtube
      You have a person, you are not a person. A person is a legal fiction--legal paperwork of identification issued by the government. Ergo, you have a person, you are not a person. That is why a corporation is considered a person and has personhood--it is all about legal fictions written in all capital letters--in the dead handwritten on an individual's tombstone.
      Some tricky legislation was at one time written in a hidden way, in a foreign language, so that the public would be much less likely to discover what trickery was being done by their so-called elected officials. This was in the 1600s, in order to reduce the power of the church and increase that of the crown, which turns out to be the inns of court of the crown temple in the City of London, a separate state from England or the UK, similar to how the Vatican in Rome is its own city-state, as is Washington DC.
      This was all explained years ago in a video on YouTube that gave away many secrets, so it is likely banned now, but few watched the entire video anyway because of TLDR.
      I found a copy still on YouTube:
      Ring of power - Empire of the city [Documentary] [Amen Stop Productions]

    • @mikecole2837
      @mikecole2837 2 months ago +2

      i.e., if product managers could specify what they wanted with enough precision to create a product, they would be coders.

    • @gaditproductions
      @gaditproductions 1 month ago

      Law will be impacted heavily. But law has a human aspect: the oratory, the projection, questioning a witness with emotional appeal... that's the difference and why it's safer.

    • @oldspammer
      @oldspammer 1 month ago

      @@gaditproductions There is a difference between a living individual, a machine, and an entity with personhood such as an immoral & immortal corporation that holds the debt of people and nations, debt that cannot be repaid due to usury and compounded semi-annual interest charges.
      What if all money in existence was borrowed into existence as debt? Well, that is what has ended up happening as a trick of financial mathematics, the implications of which simple folk do not appreciate, so they vote for more free government stuff with their hands out waiting.
      Patrick Bet David of Valuetainment breaks down the information regarding the hyperinflation seen in Venezuela and what other countries did when they saw this same thing happening to them, namely Israel got rid of practically all its debt and so has one of the lowest rates of inflation.
      Lower standards of living are on the way if one is not careful who one has been representing them in Government.
      I had an epub-formatted book. I used the ReadAloud Microsoft Store app to read it to me. It horribly mispronounced a specific word when reading back the material. The book was from 1992.
      Here are some of the epub formatted docs in my downloads folder.
      Lords of Creation - Frederick Lewis Allen
      The Contagion - Thomas S. Cowan
      The Gulag Archipelago, 1918-1956. Abridged (1973-1976), Aleksandr Solzhenitsyn
      Votescam of America (Forbidden Bookshelf) - James M. Collier
      Wall Street and the Russian Revolution, 1905-1925 by Richard B. Spence
      The individual voice types in the Windows TTS system determine how each word is broken into syllables and whether any given word is pronounced well or badly. The word that came out very badly, I believe, was "elephantine." Sometimes some of these TTS voices use online AI to assist with pronunciation, smooth transitions between sentences, raising the pitch of the voice during questions, and so forth. Obviously, if there were a nuke or an EMP, the entire power grid would go down for decades unless well-intentioned people rebuilt everything overnight, without the build-back-better destroyers holding them back from doing so.
      As such, it might be better to have each computer holding a small chunk of civilization and enlightenment, lest it all be lost should a key datacenter be targeted directly.
      What safety precautions have your local officials done? How about your electric grid suppliers--what safeguards are in place to get everything back running after there has been no phones, no power grid, no gas station pumps working, no diesel truck fuel pumps running, no credit card transactions, no banking, and so on?
      I asked an AI about EMP precautions. I suggested wrapping spare electrical transformers and generators in metal wrap--thick aluminum foil layers, then burying them somewhat deep in the ground to reduce pulse damage. It said that the foil had better be thick enough and very well grounded to displace the electrical energy.

  • @abnabdullah
    @abnabdullah 6 months ago +32

    I am amazed that students didn't ask about anything related to "security", because right now we are just seeing an innovation. But what about the future when, on a larger scale, we want to build a public program like Facebook or another platform? This presumes live programming, or language-model building, whatever it is, so how can we encrypt all of our data, from building to running and so on?

    • @rookie_racer
      @rookie_racer 6 months ago +6

      While security is somewhat lacking, I feel your focus is on the wrong aspect of it. You reference encryption, which isn't necessary for the source code, so its ability to assist you in building won't be impacted. I'm more concerned about the data you're providing to the LLM. If I'm building a proprietary function and I need some insight from an LLM, and I need to upload my source code for it to evaluate, I am potentially sharing some seriously protected intellectual property. What happens to that? Can that code snippet show up in someone else's code when they try to solve the same problem? Maybe your competitor's?

    • @Invariel
      @Invariel 6 months ago

      @@rookie_racer More importantly than that, he's already demonstrated in his talk that these LLMs have -- call it "undocumented" or "emergent" or whatever you want -- behaviour that gives the questioner control over how the answer is given. Recall the "my dear deceased grandmother" "attack" that let people ask about how to make napalm or pipe bombs or whatever. Giving LLMs unfettered access to proprietary data, and having those LLMs all be based on the same nugget/core/kernel vulnerable to the same attack vectors means giving attackers access to all of that proprietary data by "casually" using your interface.

    • @abnabdullah
      @abnabdullah 5 months ago +3

      @@rookie_racer yes, you are right... actually what I was trying to highlight is "data", and I mean: how can we trust our confidential information to something that is open source and a third party revolving around and across the internet?

  • @caneridge 5 months ago +67

    The purpose of computer science, in a nutshell, was not to translate ideas into programs. The goal was to find higher levels of abstraction to enable describing and solving ever bigger problems. Programming and programming languages were emergent properties of that goal. The question for LLMs is whether they will be able to continue the quest for higher and simpler levels of abstraction, or forever get stuck in the mundane, as most programmers are by their jobs.

    • @katehamilton7240 4 months ago +3

      Thanks, I'm saving this idea

    • @mriduldeka850 3 months ago +2

      That's a deep thought. I feel the purpose of computer science is to automate tasks which humans do or think of doing. Programming is just one step toward that. Instead of creating models which can write code, humans should think of bigger ideas which can impact living beings. Whether that is accomplished by manual or automatic programming does not matter.

    • @switzerland 3 months ago +2

      Reality is near-infinitely complex. As programmers we create a finite abstraction. AI will do it better, yet it can't solve exponential complexity; AI is not infinite and does not have infinite compute. "Infinite" is usually a warning sign of a lack of knowledge, and infinity means everything starts to behave weirdly. There is also physics: latency, and a set of fundamental problems.

    • @aoeu256 1 month ago

      We have too many people doing software, so software salaries are going to go down. We need to tell Indians, Chinese, and Westerners to focus on swarm robotics, mini-robots, having the robot swarms build things, etc... Take a robot hand and make all of its parts like Legos that it can itself assemble. Then make it so that it can either print out its parts, sketch out its parts, or mold its parts. Have it replicate itself smaller and smaller until you have a huge swarm of robots, but you also need a lot of redundancy and "sanity checks". Swarm robots can do stuff like look for minerals/fossils/animals, look for crime, map out where everything is so you know where you put your cellphone, and build houses/food/stuff/energy collectors/computers. @@mriduldeka850

    • @mriduldeka850 1 month ago

      @@aoeu256 That's a good point. The Japanese are good at building robots. Indians are good and abundant in the software sector but lagging way behind in the manufacturing and hardware industries. The Chinese have strength in manufacturing, so perhaps they can adapt to robotics growth more quickly than Indians.

  • @epajarjestys9981 6 months ago +37

    I'm at 6:43 and all I've seen so far is that guy projecting his incompetence onto the rest of humanity.

    • @jzimmer11 3 months ago +6

      Indeed! I mean WTF? Of course, you can always write programs in the least understandable way possible.

    • @Henry_Wilder 1 month ago

      You call a Harvard Computer Science prof incompetent? You fool 😂😂

    • @Henry_Wilder 1 month ago

      Why don't you go ahead and answer the questions, since you're the competent one then 🤨... y'all just come to the comment section talking trash, no sense 🤧

    • @epajarjestys9981 1 month ago

      @@Henry_Wilder Which questions?

    • @Henry_Wilder 1 month ago

      @@epajarjestys9981 the questions posed to him that he couldn't answer. He kept saying "I don't know", remember?

  • @alphabee8171 6 months ago +59

    It's not that GPT blew up because it became super good overnight. Well, sort of, but the real reason is its ease of use. It's just like when home computers became popular: if you introduce a computer as a marvel of engineering, nobody cares, but if you say "it's a box that lets you play games and music with a bunch of clicks", you have everyone's attention. Making it feasible for the masses is what kicked it off, and that poured in billions of dollars and years of research to make computing better and better. The same thing happened with GPT, and it's again on the same path, but at a much, much faster rate.

    • @reasonerenlightened2456 5 months ago

      GPT-4 shows fake intelligence. For example, it struggles with fingers and with drinking beer. LLMs are a dead end for AGI because they do not *understand* the implications of their outputs! Also, GPT-4 is designed by the wealthy to serve their needs!

    • @brianallossery4628 5 months ago

      Increases in computational power made GPT possible, from what I understand

    • @LyricalMurderer1 2 months ago

      That and it was super good… understood that a lot has to do with data and compute but it really is very good as a product right now…

  • @firefiber8760 6 months ago +190

    I genuinely cannot understand how humans are just... incapable of thinking of the future. Like, the idea of 'just 'cause you can, doesn't mean you should' is just so much the case, right now. But nope, because we can, we will.
    Okay, so we all slowly forget how to program, and we, generation after generation, depend more on language models writing code for us, and us just instructing the language models. Great, let's just, for a second, take this further shall we? First, the ways we communicate with language models are going to eventually become more like programming languages, because people are lazy, and the entire reason we have ANY symbols in mathematics PROVES this. We don't like to write more than we absolutely have to.
    (EDIT: To expand on this - what I'm trying to say is this: we use specific patterns of sound in our languages to wrap up concepts, or ideas. We do this so that more complex communication can happen, by building on top of the layer below. We create functions in programming to wrap up sets of actions so that we can build on top of that. This is how abstraction works. I've used mathematical symbols as an example, but the same concept applies pretty much anywhere you look. Condense repetition, so that we can build more complexity on top.)
    So we're going to get "AI" based programming dialects, you could say (look at the way image generation prompting has already evolved as an example).
    Then, as we also develop these language models, the models themselves are going to have free rein on the 'coding' part. We will obviously instruct these systems to create newer programming languages that will, after a while, become unreadable to us. And we will ask, well, why do we need to understand it? The machines are there to handle it (this is essentially what this guy is saying). So now we have dialects of humans telling machines what to do, and then we have machines telling other machines what to do in a language we don't understand.
    Does ANYONE see the issue with this? Like, even a little?
    Just because programming is hard does not mean that we have to eliminate it. What absolutely idiotic thinking is this? It must always be a constant pursuit of efficiency. That's the whole point. We always remain in control. We always ultimately KNOW what is happening. By literally INTENTIONALLY taking ourselves out of the equation, we write our own Skynet. I don't mean that in an apocalyptic sense, I mean that in a "we are so fucking dumb as a species, like literally what is the point of programming, or doing anything at all, if not for our own benefit?" kind of way.
    Sure, use these systems and tools to write better code, write better documentation, I mean these are the actual areas where AI systems can help us. Literally to write the documentation and help us write better, more efficient, cleaner code, faster than we ever could. But still code that WE READ, AND WE WRITE, for US.
    This guy literally called Rust and Python "god awful languages" and apparently we need to take the humans out of developing things. Who does he think development is for?
    What's weird is that this is on CS50?

    • @ChrisHarperKC 6 months ago +37

      This will be lost on most people, especially academics who live in a fantasy world. Your comments are obvious to anyone who does regular old work.

    • @hamslammula6182 6 months ago +24

      I think your thinking is a bit biased and shortsighted, and I'm guessing it's because, like me, you're a programmer. What I think you're wrong about is that once we move up the abstraction layer, we don't simply forget the stuff underneath. People can still understand assembly and write programs in it if they choose to, but it's ultimately a waste of time.
      I don't think people will simply forget how to program; instead they'll focus on more important things, like solving problems that people are willing to pay for.
      I'm sure if you wanted to, you could rig up a set of logic gates to do some addition and subtraction operations, but is that a business problem people are willing to pay you for?
      Essentially, AI will be a layer of abstraction which allows us to focus on more complex problems, rather than having to get all the right packages before even attempting to solve the users' problems.
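The logic-gate aside above can be made concrete. A minimal sketch in Python (an editorial illustration, not from the talk): addition built from nothing but AND/OR/XOR gate operations — exactly the kind of low-level exercise the higher abstraction layers make unnecessary, even though it still works underneath.

```python
# A ripple-carry adder built from boolean "gates" only,
# illustrating the abstraction-layer point: you *can* add
# numbers this way, it is just not worth anyone's time.

def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """One-bit full adder from XOR/AND/OR gates."""
    s = a ^ b ^ carry_in                         # sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))   # carry bit
    return s, carry_out

def add(x: int, y: int, bits: int = 8) -> int:
    """Add two unsigned integers one bit at a time (mod 2**bits)."""
    result, carry = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add(19, 23))  # → 42
```

Every higher-level language hides this circuit behind `+`, which is the commenter's point about layers of abstraction.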

    • @noone-ld7pt 6 months ago +15

      Dude, what are you on about? This is what coding has always been, a simplified version for us to convey ideas to computers. We don't write code in binary, we have compilers and interpreters that do that for us. The difference is that now instead of having to learn Python or Rust you can use English or Spanish or whatever to convey your ideas and have them be implemented. You can then ask the LLM directly questions about the implementation of different algorithms and optimize for whatever variable is relevant to your vision. Programming languages have been becoming more and more readable for decades now, this will just be the final step where we can finally interface with computers without having to learn a new language.

    • @gammalgris2497 6 months ago +8

      Language has its own issues. It's context-sensitive and highly ambiguous. Our "experimentation" with programming languages was an exercise in formalized and more precise languages. At the lower levels it's just signal processing with circuits; we built different levels of abstraction on top of that. We can only hide the complexity, not make it vanish. Language models are just another layer of abstraction with their own pitfalls. The best thing one can do is heed the scientific method: maintain a suitable degree of transparency so that things can be verified by others. "Others" may be other developers, scientists, AI-based tools, etc. Completely removing humans from the equation would violate the scientific method.

    • @draco4717 6 months ago +11

      What if an LLM writes buggy code maybe 50 years from now, and that code is only understandable to the machine, and it writes another buggy fix because it does not understand what it is doing, and so on to infinity 😅 Then we humans have to dust off those old BASIC books in order to start over. How cool is that 🙂

  • @MarkMusu92 6 months ago +16

    I’m legally mandated to pitch my startup… that’s all I needed to know.

  • @GigaFro 5 months ago +16

    I believe that in the short term there will be a shift in both time and focus from coding a solution to the architecture design, testing, and security of that solution.

    • @christislight 4 months ago +1

      Architecture is KEY

    • @sourenasahraian2055 4 months ago +2

      Architecture is nothing but the application of known patterns and reasoning about tradeoffs. I use ChatGPT for my architecture challenges all the time, and though it's not perfect, it's already doing a decent job. It will get even better — exponentially better.

    • @Gauravkumar-jm4ve 3 months ago

      agreed

  • @usurpercries 6 months ago +60

    Me: Asks chat gpt to help me with a bug I am facing in my code.
    ChatGPT: Returns my exact same code
    (This was a joke)

    • @luckydevil1601 6 months ago +6

      Ahah yeh, same sh*t happens to me too 😂

    • @invysible 6 months ago +3

      true broo... happened to me a few days ago

    • @mykyta_soloviov 6 months ago +13

      In this way ChatGPT hints that the main bug in your code is you :)

    • @IntrospectiveMinds 6 months ago +8

      GPT 3.5 I'm guessing? Try 4. People keep coping by saying it doesn't work but are using the outdated model or have poor instructions.

    • @jbo8540 6 months ago +3

      Try 4, and if that doesn't improve things, you need to work on your prompt engineering.

  • @Tetsujinfr 5 months ago +29

    We are not yet at the stage where one can ask ChatGPT-4 to write ChatGPT-5, at least as far as I know. Also, if you ask ChatGPT-4 to produce a model of the physical world unifying general relativity with the Standard Model, you will notice it struggles quite a bit and does not deliver. Those models cannot just create new knowledge, or at least not in a scientifically proven way. Maybe through randomness they will to some extent, but let's see.

    • @christislight 4 months ago +5

      You need code to build. God coded humans, we code businesses. Just using language to create code doesn't mean coding is obsolete.

    • @DiegoSita 4 months ago +5

      AIs are making some breakthroughs in science and math already. Look up the new matrix multiplication algorithm discovered by an AI.

    • @ingmarxhoftovningsr6144 4 months ago

      Well, the code for chatGPT5, at least for the model as such, is likely not very complicated, so chatGPT4 might be able to write it. Someone has to tell it what the program should do, though. At this point, that would be a human.

    • @dblezi 3 months ago

      That's because there has to be an overseer. As someone else stated, God created mankind and this ecosystem; men manipulated and created based on this ecosystem. The creations of men didn't invent themselves. The best this special software called AI can do is create derivatives of the digital data known to said AI model. Look at art, for instance: many AI models steal and scan what mankind created to make a model. An AI model would never create a Star Wars, Blade Runner, or Mass Effect story/universe out of the base coding blocks which dictate how the software runs. AI needs to plagiarize to create. It's just that these plagiarized derivatives, with procedural generation, fool many normies into thinking it's so great.

    • @ingmarxhoftovningsr6144 3 months ago

      @@dblezi could you please clarify "has to be"? Where does that knowledge come from? What's the logic explanation? What does "an overseer" mean? What does "an overseer" do, in practical terms?

  • @howiedick6857 5 months ago +70

    It's not the end of programming, it's the beginning of better programming, faster programming and easier for people to learn programming.

    • @kolyxix 5 months ago +15

      Go back and watch the video again; it is the END of programming whether you want to accept it or not.

    • @midiminion6580 5 months ago +2

      One thing that I've realized is that no matter the number of tools and the ease of doing it, most people hate programming. It has nothing to do with coding.

    • @howiedick6857 5 months ago +11

      @@kolyxix it's not the end, but the beginning.

    • @howiedick6857 5 months ago +3

      @@midiminion6580 most people hate doing anything other than sitting on their couch, playing videogames, smoking pot and fucking. AI won't change that

    • @flipflap4673 5 months ago +2

      @@howiedick6857 Unless it makes you jobless and you no longer have money for videogames and pot. Getting to f*** someone might also become more difficult, as money seems to attract 🙂

  • @Babble_Gum 6 months ago +9

    "The line, it is drawn, the curse, it is cast
    The slow one now will later be fast
    As the present now will later be past
    The order is rapidly fading
    And the first one now will later be last
    For the times, they are AI-changin'"

  • @manabukun 6 months ago +98

    Back in the real world, you still need to double check the code generated by copilot which often is wrong. I'm not sure if I'm bad at using copilot or the people using it are simply not checking what has been generated.
    Not to mention, none of the large companies are willing to use a version of copilot that allows it to send the learned data from their private repos back home for obvious reasons.

    • @Peter-bg1ku 5 months ago +28

      That's the problem I find with AI-generated code: you have to verify it, which is a task that takes as much, if not more, effort than writing the code by hand.

    • @djcardwell 5 months ago +1

      @@Peter-bg1kuwrong

    • @djcardwell 5 months ago +2

      wrong

    • @Peter-bg1ku 5 months ago +1

      @@djcardwell what do you mean?

    • @djcardwell 5 months ago

      @@Peter-bg1ku that isn't the problem to worry about. We are so close to solving hallucinations.

  • @restingsmirkface 5 months ago +23

    In almost all scenarios, AI represents an "it runs on my machine" approach to problem-solving - a "good enough", probabilistic mechanism.
    But maybe that is sufficient. We get by in the world despite uncertainty at the quantum level... maybe once _everything_ is AI-ified, the way we think about the truth will shift just enough, away from something absolute and concrete, to something probabilistic, something "good enough" even if we'll never be sure it's at 100% outside of the training-sets run on it.

    • @bens5859 5 months ago +3

      > the way we think about the truth will shift just enough, away from something absolute and concrete, to something probabilistic, something "good enough"
      This is a deep insight. Many great minds of the western philosophical tradition have expressed this view in one way or another. In fact it's the school of thought known as American Pragmatism (which is known as the quintessentially "American" school, in philosophy circles) which most closely aligns with this view.
      Some pithy quotes about truth from the most notable figures in Pragmatism:
      - William James (active 1878-1910): “Truth is what works.”
      - Charles Sanders Peirce (1867-1914): “The opinion which is fated to be ultimately agreed to by all who investigate is what we mean by the truth.”
      - John Dewey (1884-1951): “Truth is a function of inquiry.”
      - Richard Rorty (1961-2007): “Truth is what your contemporaries let you get away with saying.”

    • @lubeckable 4 months ago

      dockerize AI problem solved xd lmao

  • @snarkyboojum 6 months ago +71

    I prefer this take: natural language isn't well suited to describing to computers what they should do, which is why programming languages were developed. LLMs can do some translation from natural language to programming languages, but not very well and not as accurately as we would like (yet). So they're good for getting you part of the way there, and currently they'll likely generate less-than-accurate or unreliable code — but if you're not trying to write reliable programs, they could be helpful :D

    • @Siroitin 5 months ago +9

      Good to remember that rigorous symbolic notation for math is a pretty modern idea in itself. One could argue that math is just an "esoteric language", as Matt Welsh is implying about programming languages.

    • @restingsmirkface 5 months ago +6

      I agree. AI can do things like computing Pi, finding factors, and other relatively trivial things which could just be bits of static data. It may not even be generating code - just returning the closest match. If it is generating code, it's not very useful yet unless you know exactly how to speak those sweet-nothings. I asked ChatGPT about a week ago to create a website in the style of Wikipedia with 4 page-sections relevant to simulation-theory. It gave me an HTML tag with 4 empty DIV elements - nothing else. No other structure, no content, no styling, no mock-up of interactive elements.

    • @Siroitin 5 months ago

      @@restingsmirkface You might have to do some "prompt engineering".
      When I try ML and statistics related stuff, I often just copy text book formulas. The copied text is obscure for humans but somehow ChatGPT is able to understand it. Also it is really hard to ask python code for neural networks because it forces the use of external packages. C language doesn't have external packages so I often ask ChatGPT to write in C code and I translate the code to Python or Julia

    • @keiichicom7891 5 months ago +4

      Agree. I noticed that although AI chatbots like ChatGPT can write complex Python programs (I asked it to create simple neural-net chatbots in TensorFlow/Keras), the code is often buggy, and it has a hard time fixing the bugs if you ask it.

    • @choc3732 5 months ago

      @@Siroitin this is very interesting — ChatGPT has a better hit rate when it comes to writing in C?
      I've only tried Python so far, will have to give this a go

  • @thomasr22272 6 months ago +53

    My main question is: in which of the LLM ai startups is he an investor?

    • @RoyRope 6 months ago +5

      crossed my mind lol

    • @rollotomasi1832 6 months ago +1

      Please listen to the talk with an open mind, and face that this is reality.

  • @simonmeier 5 months ago +29

    Dr. Matt Welsh makes the crucial point about AI in programming: the better it gets and the more we trust it — without actively knowing how to code or knowing how it does what it's doing — the more we lose power over our daily automated routines. Imagine what a risk AI-generated code would be in a nuclear power plant. I think this talk is rather a great wake-up call to learn how to code and to code alongside AI instead of just letting go.

    • @randotkatsenko5157 4 months ago +1

      Humans are fundamentally lazy and default to the option which takes the least energy and effort, meaning most people will try to automate their own work as much as possible. AI learns from this and gets increasingly better, until the human-in-the-loop is not needed anymore. Eventually, AI might even be better than humans at programming. As for the nuclear power plant, I don't know — it depends how reliable the system is.

    • @gordonramsdale 3 months ago +5

      Except in 5 years, you might be saying the opposite. Humans introduce error inherently. Think how much better AI is now than it was programming 5 years ago, give it 5 more years, and writing human code will seem like the insecure risky option.

    • @Ivcota 3 months ago +1

      @@gordonramsdale My take: a good chunk of software bugs exist because requirements were not refined well enough by the engineer breaking down the work. They make assumptions and write code that does something it shouldn't. With good testing, no real bugs get into the system, and we have modern compilers that remove the issues with syntax errors. AI coding will likely produce the same errors and make the same types of assumptions humans make when working with poorly defined requirements.

    • @dblezi 3 months ago

      Nuclear power plants have a strict design and review process that is fully vetted, so I would not worry about this specialized software, aka AI, in that application.

    • @simonmeier 3 months ago

      @@dblezi Hi, I think I understand what you are saying. But then again, what does "fully vetted" mean in that context? We also have a review process where each merge request is fully vetted, but still, errors can slip through. AI MRs might slip through more easily.

  • @sortof3337 6 months ago +16

    surprise surprise, guy selling the shovel says gold rush is the best.

    • @ldandco 6 months ago

      Yep... noticed the same.

  • @MatchaLatteVlog 6 months ago +98

    Professor: Ai will replace all programmers
    Students who took student loans to become programmers: 👁️👄👁️

    • @nickmcgee1481 6 months ago +7

      Professor: Programing sucks lets let the robots do it!

    • @llothar68 6 months ago +17

      I don't understand why people think professors know anything about programming. They have no time to get real practice.

    • @tomashorych394 6 months ago +2

      yep. Pretty harsh reality

    • @lmnts556 6 months ago +4

      Not the case tho, at least not now lol. AI is not even close to taking programmers' jobs; it's not very good at programming, just very basic functions, and it can't put the pieces together.

    • @tomashorych394 6 months ago +7

      @@lmnts556 Are you sure? It can do a lot of stuff. Then, you have all the no code solutions. Then, you have all the SaaSs and libraries. In the end. You need 1 engineer to build a platform instead of a 100. "At least not now" can mean in 5 years (which is very realistic)

  • @suryamanian8492 6 months ago +46

    the 'gotcha' in using AI is that we need to know whether the code is right or not,
    so we need to know the basic stuff

    • @augustnkk2788 6 months ago +6

      For now, eventually it will be able to write perfect code on its own, reducing the need from 100 software engineers to 5-10

    • @Pavel-wj7gy 6 months ago +1

      What is the basic stuff in a pyramid of abstractions? Assembly code?

    • @tiagomaia5173 6 months ago +4

      @@augustnkk2788 I don't think it'll replace all good software engineers so soon. And I really don't think it will get to a point of always generating perfect code.

    • @augustnkk2788 6 months ago

      @@tiagomaia5173 It'll replace maybe 90%; some will still be needed to make sure it's safe, but no one will work in web dev, for example. All tech work is gonna be about AI, unless the government steps in. I give it 10 years before it can replace every software engineer.

    • @dekooks1543 2 months ago

      you have the confidence of someone who doesn't know what they're talking about

  • @KaLaka16 6 months ago +116

    If programmers will get replaced, who will not get replaced? Programming is one of the most difficult fields for humans. If most of it can be automated, most of everything else can be automated too. This AI revolution won't affect just programmers, it will affect everyone. Programmers are more aware of it than the average person though.
    It might still take 20 years for us to see AGI. Probably way less, but nobody really knows.

    • @BARONsProductions 6 months ago +38

      Manual labour isn't going to be replaced. Nurses, waitresses, handymen, plumbers... shit like that

    • @KaLaka16 6 months ago +17

      @@BARONsProductions Eventually it is, unless we specifically want humans for the roles. Machines will do everything better once we get to artificial superintelligence. We will probably get it before 2040, but who knows, it could take way longer. Also, people need time to adapt to technology. When something is invented, it doesn't get immediately applied on the practical level.

    • @MatchaLatteVlog 6 months ago +15

      @@BARONsProductions if anything, manual labour is going to be replaced faster due to the repetitiveness of those roles.

    • @Nobodylihshdheuhdhd 6 months ago +8

      @@BARONsProductions those jobs are more likely to be replaced than programmers

    • @dineshbs444 6 months ago +22

      The physical labour will take more time. For that, actual physical robots have to be built, and they won't be any good for at least 10 years, I believe. Yeah, the digital jobs are the ones that will take the hit first.

  • @kostian8354 5 months ago +2

    About the "prompt as program" idea:
    - Can you reason about its performance and its class of algorithmic complexity?
    - Can you reason about the resources required to run it, like RAM?
    - Can it process more data than fits into RAM?
    One day it will, but not yet...
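For contrast, these three questions are easy to answer for conventional code. A minimal sketch in Python (an editorial illustration; the function name and file path are hypothetical) of chunked reading, which runs in O(n) time and O(chunk_size) memory and can therefore process inputs far larger than RAM — exactly the kind of guarantee you can state about code but not about a prompt:

```python
# Conventional code makes the three questions above answerable:
# this runs in O(n) time and O(chunk_size) memory, so it can
# process a file far larger than RAM. (Function name and any
# file path you pass it are illustrative, not from the talk.)

def count_bytes(path: str, chunk_size: int = 1 << 20) -> int:
    """Count the bytes in a file without ever loading it whole."""
    total = 0
    with open(path, "rb") as f:
        # Only one chunk is ever resident in memory at a time.
        while chunk := f.read(chunk_size):
            total += len(chunk)
    return total
```

The same loop shape (read a bounded chunk, fold it into an accumulator, discard it) underlies streaming checksums, log aggregation, and external sorting.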

  • @Hangglide 4 months ago +3

    Great presentation! Thank you!
    One nitpick: at 19:23, "average lines of code checked in per day ~= 100" — I can tell you that is not the case for average SWEs in Silicon Valley. ~10 lines/day would already be pretty good.

  • @cityofmadrid 6 months ago +8

    Why didn't the "lecture" start with "today we're going to have my buddy, who has an AI-for-programmers startup"? It would have saved me an hour of this infomercial.

  • @joaoguerreiro9403 6 months ago +71

    Something I did not understand is how computer science would become obsolete. Okay, you replace programming with prompting, but who will develop all those magical models that you are prompting? Aren't they built by computer scientists and SWEs?
    What I mean is, if you are bold enough to claim programming will become obsolete, then doesn’t that mean learning mathematics and physics would also become obsolete? Like I could just ask some AI model to develop what I need in the context of physics and mathematics… and won’t need to understand the dynamics of those sciences, I just need to know how to speak English and ask for something.
    Note: I can actually see programming becoming more automated. But computer science? I can't see that happening… aren't we supposed to understand how computers and AI work? Should they be seen as black boxes in the future?
    Also, programming would still not be fully automated because it’s weird to believe that an ambiguous sequence of tokens (English language) can be mapped with precision to a deterministic sequence (code) without any proper revision by a human… what if AI starts to hallucinate and not align with human goals? At best we would create a new programming language that is similar to “Prompting”…
    What are your opinions on these?

    • @stefanbuica5502 6 months ago +9

      My opinion is that before a rational action there is an emotional action, so not all the decisions you can write in a prompt can be accurate.
      My take is that technology will automate further and transform, and humans will have the opportunity to use more of their creativity, thus becoming more human!

    • @algro9567 6 months ago +8

      There are two main concepts that you need to wrap your mind around:
      1) Ease of use, 2) Programming as a tool
      When Welsh talks about 'the end' of programming, he means the future mass adoption of LLMs that people will prompt to program for them instead of programming themselves, due to ease of use. Essentially, LLMs will be the new user interface for using programming languages, so the need for expert programmers will be limited to specialty roles in the future, like "how can I write an API for LLMs to interact with?" or "how can I make an LLM that checks that another LLM works properly?"
      Obsolete is not the right word here, as you can see Welsh using copilot himself even though he is still technically a programmer. It's just the science of writing code by hand will be displaced by prompting to ask an AI to manipulate code for you. For now, you need to read the code the LLM wrote to use it, but in the future, it might as well be a magical black box that does x for you, testing and implementation included.
      Or in other words:
      LLM's are going to be easier to use than programming by hand, and LLM's will use coding as a tool instead of people. Computer science is then the art of getting better code from LLMs instead of getting humans to write code faster and better.

    • @tomashorych394
      @tomashorych394 6 months ago +3

      You are right. These people will still be needed. But AI might reduce the number of such positions down to

    • @jpcfernandes
      @jpcfernandes 5 months ago +10

      Not only that: who develops all the connections between LLMs and all existing systems? Who will replace existing systems that nobody understands with systems that can use AI? In the short term at least, I foresee more programmers being needed, not fewer.

    • @metadaat5791
      @metadaat5791 5 months ago +14

      I for one will be glad when the people who think that "programming sucks" and "no progress has been made in 50 years" will actually give up and leave the field, they have no idea what CS entails. Computer Science is about computer programming like Astronomy is about looking through telescopes.

  • @advocat-bgcom
    @advocat-bgcom 6 months ago +17

    The problem with LLMs is that they cannot independently solve computationally irreducible problems. So classical computation and LLMs interact in symbiosis, and I do not agree that computer languages will disappear completely. Also, right now, checking Google is much more energy-efficient than prompting ChatGPT, so there are energy-efficiency issues too. When you build apps with AI, somebody has to pay the token bill.

    • @Fs3i
      @Fs3i 5 months ago

      > The problem with LLMs is that they cannot independently solve computationally irreducible problems
      They can write programs that do. For example, this is what the current GPT-4 can do on the normal OpenAI chat website (I can't post the URL to the conversation because of the YT spam filter). I asked: "Hey there! Can you give me a word which has an MD5 hash starting with `adca` (in hex)?"
      I chose adca because those were the first four hex letters in your name, so this is unlikely to be in its training set.
      The model was "analyzing" for a bit, and then replied:
      > A word whose MD5 hash starts with adca (in hexadecimal) is '23456'. The MD5 hash for this word is adcaec3805aa912c0d0b14a81bedb6ff.
      You can see how it answered: it wrote a Python program to solve it. I didn't need to prompt it to do that; it knows, like a human, that it should pass these classically irreducible problems off to a classical computer.
      And yes, there's still programming involved, but my 16 years of experience with computer science didn't help me at all, except for coming up with the example.
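      The kind of helper program the model generated can be sketched in a few lines of Python. This is a reconstruction, not the model's actual code; it searches decimal strings in order, so it may find a different answer than the '23456' quoted above.

```python
# Brute-force search for a string whose MD5 digest starts with a given
# hex prefix -- the kind of program GPT-4 reportedly wrote for this prompt.
import hashlib
from itertools import count

def find_md5_prefix(prefix: str) -> str:
    """Return the first decimal string whose MD5 digest starts with `prefix`."""
    for n in count():
        candidate = str(n)
        if hashlib.md5(candidate.encode()).hexdigest().startswith(prefix):
            return candidate

word = find_md5_prefix("adca")
print(word, hashlib.md5(word.encode()).hexdigest())
```

      A four-hex-digit prefix needs roughly 16^4 ≈ 65,000 hash attempts on average, which finishes in well under a second; this is exactly the kind of task that is trivial for classical code but hopeless for token-by-token prediction.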

    • @TheWhiteWolf2077
      @TheWhiteWolf2077 4 months ago +1

      No-code applications getting better and AI getting better make it look like a program-less future is really close, or at least a nearly program-less one. Eventually AI will be better, faster, and cheaper than any human by a large margin.

    • @icenomad99
      @icenomad99 2 months ago

      What you forgot to add is "YET".

  • @Rico.308
    @Rico.308 2 months ago +2

    Learning to code right now, and I can definitely say this has not made me give up; it only shows me the cool tools I will one day be able to build.

  • @EyeIn_The_Sky
    @EyeIn_The_Sky 6 months ago +7

    Guy introducing him: "Hey kids, this guy is going to make sure that the crippling debt that you and your parents took on to send you to college was all for absolutely nothing, thanks to his AI"

  • @kostian8354
    @kostian8354 5 months ago +7

    Even if robots generate the code, you would still want it to have less duplication and some abstractions, because that lowers the number of context tokens required to modify the code.
    You would probably also want to keep interfaces stable between regenerations, because you would like to keep the tests from the older version...

    • @christislight
      @christislight 4 months ago

      You’ll need to code the robot, or code a solution to code into the robot. It’s deeper than these people understand

    • @sourenasahraian2055
      @sourenasahraian2055 4 months ago

      No you don't; they can write optimized code. That's literally the whole point of AI: it's an optimization problem. Adjust my weights to reduce the cost function, and code duplication can be yet another parameter.

  • @annoorange123
    @annoorange123 5 months ago +37

    Last week I was working on some Rust code that had to deal with Linux syscalls, and ChatGPT gave incorrect answers to every single question. There are limits to how well trained it can be given the amount of data it was trained on. It's good for common problems, not so much for the niche environments that real SWEs deal with daily. It just makes JS bootcamps obsolete.
    Now imagine if all the code for flight control computers were generated, as he suggests, without a person in the loop. Good luck flying that. Until AGI is here, we can't talk about any of this.

    • @danri9839
      @danri9839 4 months ago

      That's true, but only for now. What about the evolution of these models over 5, 10, or 15 years? BTW, no model yet receives data directly from the physical world, and sooner or later that will happen.

    • @annoorange123
      @annoorange123 4 months ago +2

      @@danri9839 It's a fuzzy black-box system. Until we have AGI, it's just marketing hype that they are smart, while in reality the precision isn't there wherever there was little training data.

    • @not_zafarali
      @not_zafarali 3 months ago +1

      @@danri9839 The problem is that large language models get data from the world but can't figure out on their own what's useful and what isn't, what to keep and what to drop. Right now, humans decide for them. If we want models to make their own choices, they need to understand what's right and wrong, which in itself is already complex, even for humans, in a lot of cases.

    • @dekooks1543
      @dekooks1543 2 months ago

      You're the 927483927839273th person I've seen write this comment. You sound like the crypto bros who promised an unprecedented economic crash and how the blockchain would revolutionise everything... and yet.

    • @josephp.3341
      @josephp.3341 1 month ago

      I tried to generate Rust code for a relatively trivial problem (8-puzzle) and its solution was wrong and didn't compile. I fixed the compilation errors and the solution was still terrible, because it used Box::new(parent.clone()) every time a child node was generated (very, very inefficient). I had already written the code myself, so it was easy to spot these errors, but I really can't see how ChatGPT is supposed to write code better than humans...

  • @CaptTerrific
    @CaptTerrific 6 months ago +14

    The biggest red flag was there at the start: the beginning of the video description says that GPT can do general-purpose reasoning. It's neither general-purpose nor can it reason.

    • @MinecraftN3rd
      @MinecraftN3rd 5 months ago

      Hmmm, I think it is both general-purpose and can reason

    • @dekooks1543
      @dekooks1543 2 months ago

      then you should go to a mental health professional

  • @smanqele
    @smanqele 4 months ago +5

    I agree; the biggest problem with humans in programming is how we mentally map out how to solve problems. Code reviews can be a huge waste of time if you don't have it in you to push back. It truly makes me wonder about the ROI for companies of hosting so many of today's software development ceremonies.

    • @jamesschinner5388
      @jamesschinner5388 3 months ago

      Code review is all about regression to the mean

    • @smanqele
      @smanqele 3 months ago

      @@jamesschinner5388 But we probably haven't got a single methodology for arriving at the mean. Our individual means are terribly diverse.

  • @canosisplays5152
    @canosisplays5152 6 months ago +16

    It's a lot to expect everyone to know what they want to enter into a query. It will take some time for the query interface to become truly inviting. I'm also mildly concerned that AI will grow impatient with us end users, spit out something we may not want, and simply say "deal with it 😎"

    • @robbrown2
      @robbrown2 6 months ago +3

      Seems like a company that owns an AI for profit would train it not to do as you describe, since that would drive people away. ChatGPT, in its current state, is incredibly patient, and that is one of its most striking and valuable features. I don't think that's an accident.

    • @robertfletcher8964
      @robertfletcher8964 6 months ago +7

      @@robbrown2 GPT isn't patient, and doesn't think. All it does is propose the most statistically likely word to come next, given a user-provided context.
      This isn't AGI; it's a predictive model. I'm not trying to be mean or critical, but you need to understand this if you want to use the tool efficiently.
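      The "most statistically likely next word" mechanism this reply describes can be illustrated with a toy Python sketch. The candidate words and scores below are invented for illustration; real models do this over a vocabulary of roughly 100k tokens.

```python
# Toy next-token prediction: a model assigns a score (logit) to each
# candidate continuation, softmax turns the scores into probabilities,
# and greedy decoding picks the most probable token.
import math

def softmax(logits):
    m = max(logits.values())  # subtract max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores for continuations of "The cat sat on the ..."
logits = {"mat": 4.0, "chair": 2.5, "moon": 0.5}
probs = softmax(logits)
next_token = max(probs, key=probs.get)
print(next_token)  # greedy decoding picks "mat"
```

      RLHF and sampling temperature change which distribution the model draws from, but the decoding step itself is always a choice over a probability distribution like this one.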

    • @noahmetz5892
      @noahmetz5892 6 months ago

      @@robbrown2 It will literally return the statistically next most likely token as soon as it is physically able. What is your definition of patient for this to meet it?

    • @sgramstrup
      @sgramstrup 6 months ago

      They won't write; they'll just discuss the final product with the AI while it builds it. No writing is needed or wanted for future programming.

    • @elawchess
      @elawchess 6 months ago +2

      @@robertfletcher8964 Characterising it as just "statistically likely" undersells it quite a bit. Don't forget RLHF (Reinforcement Learning from Human Feedback), where many undesirable styles the model might produce are weeded out and the model is steered towards answering in ways humans prefer. You say it spits out what is statistically likely within the user's context, but you don't seem to consider that part of that context could be "patience", the very thing you allege it can't do.

  • @moonstrobe
    @moonstrobe 6 months ago +13

    I didn't hear him get into the topics of consistency and feature updates. How about performance-oriented programming for games and ultra-efficiency? Or shower-thought innovations that create entirely new paradigms and ways of approaching problems? AI might be able to do some of this eventually, but I doubt it will be as rosy as he imagines.

    • @fappylp2574
      @fappylp2574 5 months ago +1

      Yeah, but 99% of people don't invent new paradigms or ways of approaching problems. The vast majority of people in software will be out of jobs, with maybe a few hyper-PhDs sticking around.

    • @dekooks1543
      @dekooks1543 2 months ago

      Stay fappin', fappy. It's not going to happen. Maybe the soydev MacBook-in-Starbucks React bros will get replaced, but true programming that actually requires deep knowledge? Not happening.

  • @frankgreco
    @frankgreco 5 months ago +3

    46:36 "No one understands how large language models work"... back in 2008, no one understood how derivatives worked.

  • @casla1960
    @casla1960 5 months ago +10

    Thank you CS50 team for sharing this with all of us

  • @dajunonator
    @dajunonator 6 months ago +124

    Look, y'all, AI is going to get better and better, and we have to accept that. Lectures like these may seem pessimistic about a programmer's future, but really they should inform our decisions about what we dedicate our time to learning. Certain jobs may be replaced, but new jobs will be created as well.

    • @cusematt23
      @cusematt23 6 months ago +2

      I think it's fair to say that the pioneers of AI have, by definition, focused on programming solutions to start with

    • @Azikkii
      @Azikkii 6 months ago

      @@cusematt23 AI will never be able to give a person exactly what they want. It will just never happen. There always needs to be somebody there to read the code, refactor, and make sure it's efficient. To say the end of human programming is nigh is silly. Without a human there to review it and push it to prod, it's useless. I can use ChatGPT-4 to give me a solution and I can always find one thing wrong with it (and if I then ask ChatGPT "what is wrong with this code?" it will say "omg I'm sorry, you're correct" and rewrite the whole snippet as if the first one were totally wrong). If that means programmers just shift towards a hard focus on correcting code already in place, well, that's literally what we do on a daily basis anyway. The conclusion is that AI is nowhere near ready for a company to depend on it totally to code and fix mistakes. You're off by about 10-15 years at the minimum.

    • @axumitedessalegn3549
      @axumitedessalegn3549 6 months ago +22

      Based on a lot of the conversation from scientists in the field, I don't think LLMs and the deep learning architecture (transformers) that runs them are it. They will be great assistants for developing certain applications on existing platforms and paradigms, but if you want to create a new type of OS or be truly creative or original, I don't think LLMs will be able to do that. They can help up to a point, but another revolution in AI is needed for AI to truly understand our reality and use that knowledge to be on the same level as humans.

    • @thelearningmachine_
      @thelearningmachine_ 6 months ago +9

      In November ChatGPT will be only 1 year old, and we already have things like AutoGen... imagine in 5 years? Considering the exponential "self-learning" capabilities of deep learning, plus the worries of CEOs (Microsoft, Google, Amazon, Sam Altman), it is clear to me that if they are worried about the future and are telling governments to create regulations for AI... I think I should be worried too 😂. Economics shows us that there are far more workers than jobs available, and AI will make this gap even bigger.
      But since every country right now is in an "AI arms race" to gain an edge with this technology (mainly USA vs. China), local regulation to control AI implementations is virtually impossible at the moment.

    • @verdantblast
      @verdantblast 6 months ago +19

      It's different this time.
      In the past, new jobs brought by the advancement of technology were only new ways of improving `human` work efficiency; humans were still needed to fill those vacancies.
      However, AI and humans are becoming equivalent (not quite yet, but foreseeably). AI creates the new jobs, and AI fills the positions.

  • @user-vt3pr1cw3c
    @user-vt3pr1cw3c 6 months ago +12

    That's a pretty funny and bold claim when a lot of AI systems can't correctly count the number of words in a paragraph excerpt.

    • @ksoss1
      @ksoss1 5 months ago +1

      Can you? All the time? What would it take for you to do it perfectly each time? What would it take for the AI system to do it perfectly every time? Interesting times ahead...

    • @user-vt3pr1cw3c
      @user-vt3pr1cw3c 5 months ago

      @@ksoss1 As far as I'm aware, chatbots seem to have a problem where, for the sake of speed, they skip some instructions, not too dissimilar to setting a compiler's optimization level so aggressively that it produces unwanted glitches, like accidental instruction skips in assembly-language programs.

    • @juleswombat5309
      @juleswombat5309 5 months ago

      You are referring to simple LLMs; the proposed architecture is LLMs + compute tools (cf. calculators etc.). Just as a normal human can answer 3 × 9 = 27 off the top of their head, but would need pencil and paper, or just a calculator, to answer what 4567 × 2382 is.

    • @user-vt3pr1cw3c
      @user-vt3pr1cw3c 5 months ago

      @@juleswombat5309 So what does that make my testing of Bing AI's capabilities, built on top of OpenAI tech, on the pretty simple task of counting words in a pretty short excerpt? Because I'm pretty sure Microsoft's proprietary AI app doesn't fall into the category of being powered by a simple LLM.

    • @juleswombat5309
      @juleswombat5309 5 months ago

      @@user-vt3pr1cw3c It means you have not tested against an LLM combined with access to relevant tools.
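      The "LLM + tools" architecture this exchange debates can be sketched as a tiny dispatcher in Python: exact tasks like word counting or big multiplications are routed to ordinary code, and the model's only job is to emit the right tool call. The tool names and the hard-coded calls below are invented for illustration; no real tool-calling API is shown.

```python
# Sketch of the LLM + compute-tools pattern: deterministic sub-tasks are
# handed to classical code instead of being "guessed" token by token.
def count_words(text: str) -> int:
    """Exact word count -- the task a bare LLM often gets wrong."""
    return len(text.split())

def multiply(a: int, b: int) -> int:
    """Exact arithmetic -- the 'calculator' tool."""
    return a * b

TOOLS = {"count_words": count_words, "multiply": multiply}

def run_tool_call(name: str, args: list):
    """Stand-in for the step where the model emits a structured tool call."""
    return TOOLS[name](*args)

print(run_tool_call("count_words", ["the quick brown fox"]))  # 4
print(run_tool_call("multiply", [4567, 2382]))                # 10878594
```

      In production systems the model chooses the tool and arguments itself (e.g. via function calling), but the division of labor is the same: probabilistic text in, deterministic computation out.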

  • @christislight
    @christislight 4 months ago +3

    I'm an AI business owner. It's great to know how to program even if programming becomes obsolete due to AI; you can use code as an asset. I created a model that uses Python to solve any math equation. I could've used Google, but using Python makes the solution more accurate and near-instantaneous.

    • @aqf0786
      @aqf0786 4 months ago

      Can you share a reference to your model?

  • @kenjimiwa3739
    @kenjimiwa3739 5 months ago +17

    There's SO much to SWE jobs aside from just coding, like collaborating with product and design, understanding business needs, and convincing management that something is worthwhile. Additionally, someone will need to review the AI's code, deal with legacy code, set up services, etc. I view these AI tools as tools that will make everyone's job more productive, but not necessarily replace anyone.

    • @LupusMechanicus
      @LupusMechanicus 4 months ago +4

      The cope is real.

    • @TomThompson
      @TomThompson 4 months ago +10

      @@LupusMechanicus Anyone who thinks an AI alone can help a person write a program to solve their problems hasn't worked in the field at all. More often than not, a person brings a problem along with their ill-conceived solution. The experienced software engineer then discusses the original problem and proposes alternate solutions: ideas that still solve the problem but make better use of resources (memory, time, etc.) and provide a useful, intuitive workflow. That IS part of being a SWE, and if you think an AI is going to do that naturally and simply, you are out of touch. Call others "cope" if you want, but perhaps educate yourself beyond watching a YouTube video by a guy desperate to sell his product.

    • @LupusMechanicus
      @LupusMechanicus 4 months ago

      @@TomThompson Bruh, try to build a house profitably with just your fingers. You need saws and air hammers, lifts and screw guns. Thus you can now build a million-dollar house with 8 people in 6 months instead of 40 people in a year. This will eliminate a lot of employees; thus, it is cope.

    • @TomThompson
      @TomThompson 4 months ago +9

      @@LupusMechanicus You again miss the point. No one is saying the industry won't be affected; it will. What we are saying is that it is uninformed to call the industry "dead" because of AI. Just look at the history. The job has gone from being primarily hardware-based (setting tons of switches) to using machine-level language (assembly), then gradually to higher-level languages (Fortran, COBOL, C, etc.). Then we added IDEs and lint, code sharing, and review systems. The introduction of AI will not replace everything and everyone. It will be a tool that makes the job easier. And yes, it could easily mean a company that currently has 100 engineers on staff can gradually cut back to 10. But it also means other jobs will open up in areas such as building these AIs and building systems that make using them easier.
      The invention of the hammer didn't kill the home-building industry.

    • @2011fallenstar
      @2011fallenstar 3 months ago +1

      There won't be legacy code anymore; if a computer writes the code, making it humanly understandable sounds pointless. Do you need to know your router's code in order to use the Wi-Fi?

  • @simulation5627
    @simulation5627 6 months ago +10

    It started out interesting, but it's just an ad for (yet another) GPT wrapper.

  • @ChinchillaBONK
    @ChinchillaBONK 5 months ago +18

    The problem with LLMs in generative AI is that in 5 years' time, the AI will be training on a large percentage of data that other AIs have generated. Even further down the road, how do we know which data is real and which is generated?
    We still need humans to recognize what is fake. The creativity from AI must make sense when the goal requires precision, as in the medical industry and other industries where lives are at stake.

    • @verigumetin4291
      @verigumetin4291 5 months ago +2

      It's been established already that synthetic data is superior to raw human data for training LLMs.
      I mean, think about it: does the open web not contain bad data? Well, ChatGPT was trained on it and it does pretty well. Synthetic data has been shown to be superior to that, so simply training the next iteration of the LLM on synthetic data is going to get us to the next step.

    • @ChinchillaBONK
      @ChinchillaBONK 5 months ago

      @@verigumetin4291 What about fake news or lobbyist outlets? Or books/art generated from someone else's copyrighted work? What if bad actors create fake generated data for their own nefarious purposes, and these scammers and spammers keep churning it out? You can already make a fake Obama dancing to "Livin' La Vida Loca". How would the AI know what's real or fake once these generative AIs become more skilled? Years down the road, our newer LLMs may not know the difference and will use that data to train. We already got bad science news regarding mask-wearing and vaccinations. This will get worse when people of less-than-average intelligence believe nonsensical data, in a world where such synthetic data will practically be spam.

    • @aligajani
      @aligajani 5 months ago

      @@verigumetin4291 GPT-4 is getting dumber, according to Stanford research.

    • @tybaltmercutio
      @tybaltmercutio 4 months ago +3

      @@verigumetin4291 Do you have any source for that? Preferably a peer-reviewed paper rather than some "research" published by Google or OpenAI themselves.
      I am asking because what you are saying does not make any sense to me.

    • @luzak1943
      @luzak1943 4 months ago

      @tybaltmercutio I think he is talking about the Orca 2 paper

  • @coltennabers634
      @coltennabers634 5 months ago +5

    19:00 Lines of code is a vanity metric that does not translate to value... this guy is definitely in management

  • @artemkotelevych2523
    @artemkotelevych2523 5 months ago +25

    The thing with LLMs is that they're just another level of abstraction. If you take product documentation as the highest level of abstraction describing how a product should behave, then to get it right you still need to describe all the corner cases and the way some things should be done; you can't just say "this page should show a weekly sales report". And all that documentation might not be easy to understand. Code is just a very precise way to describe behavior.

    • @wi2rd
      @wi2rd 5 months ago +1

      Do you trust close friends who know you well to give you a decent result when you ask them "this page should show weekly sales report"?

    • @artemkotelevych2523
      @artemkotelevych2523 5 months ago +2

      @@wi2rd You understand how documentation works, right?

    • @MaiThanh-om5nm
      @MaiThanh-om5nm 5 months ago +1

      By your logic, it would be impossible for a non-technical project manager to instruct developers on how an application should be programmed.

    • @MaiThanh-om5nm
      @MaiThanh-om5nm 5 months ago +1

      AI can ask clarification questions to make the requirements clearer. It can carry on long back-and-forth conversations with the whole context of the project.
      It's not just inputting a single prompt and the project is done.

    • @marcelocruz7644
      @marcelocruz7644 5 months ago +2

      @@MaiThanh-om5nm Non-technical people, and people with little abstraction for the field, will usually describe how something should behave rather than how it is to be programmed.
      Also, project managers manage the team's time, etc.; architects, developers, and engineers with the know-how to translate expected behavior from clients into technical terms are the ones who decide how it's programmed. Lots of developers can understand what a client wants without an intermediary, because developers are system users as well and know what could be better in apps, what they'd like to see, and so on; you can also find freelancers and GitHub projects everywhere without a project manager, confirming they would understand it anyway, with or without those helpers.

  • @SINC0MENTARI0S
    @SINC0MENTARI0S 6 months ago +4

    This reminds me of when the clowns of decades ago prophesied that Lotus was going to replace COBOL developers. The argument "Oh, but now it's for real" just won't fly.

  • @ChetanVashistth
    @ChetanVashistth 3 months ago

    The questions in this lecture are very interesting. Even better than the lecture itself.

  • @vinipoars
    @vinipoars 6 months ago +14

    I'm wondering if Fixie (35:00) hasn't already become obsolete with OpenAI's announcement on November 7th... lol

    • @ltnlabs
      @ltnlabs 5 months ago +2

      Exactly

    • @ranjancse26
      @ranjancse26 4 months ago

      AI.JSX, who needs to learn in the era of AI lol

  • @davidsmind
      @davidsmind 6 months ago +7

    "react for building llm applications"
    I cackled for about a minute

  • @TheGamerDad82
    @TheGamerDad82 6 months ago +22

    Well, generative models might eventually replace some software engineering interns at companies, but as a lead developer/architect I don't see my job endangered yet.
    Software development and design is not only about writing code. Writing code is the easy part; understanding the problem, both the functional and non-functional requirements and the operating circumstances, and making design decisions and compromises when needed is a whole different dimension.
    I can already see a lot of startups failing miserably by trying to develop software with a few low-cost developers armed with some generative AI tool. This is "we don't need database experts, we have SQL generators" all over again... 😂

    • @bdjfw2681
      @bdjfw2681 6 months ago +2

      true dude

    • @sgramstrup
      @sgramstrup 6 months ago +4

      Doctors also claim they can do more, but AI has already beaten top doctors at diagnosing certain illnesses. I think you'll wake up very soon. No offence, of course..

    • @farzinfrank2553
      @farzinfrank2553 5 months ago

      I agree with you. It's making the coding much easier, but analysis is still a challenge

    • @martinkomora5525
      @martinkomora5525 5 months ago +7

      @@sgramstrup So would you undergo surgery performed fully by an AI tomorrow?

    • @Linters-uh1kk
      @Linters-uh1kk 5 months ago +1

      These were my thoughts too... I recently started learning full stack. I don't think Dr. Welsh fully understood the way LLMs work and how reliant they are on humans. Any reasonable business should feel worried if a "code monkey" were writing random lines without a way to know specifically what was happening. Problems of the future are likely related to security, not just deploying code that works. We need developers with experience and an actual understanding of the code and how it interplays with the system. Other comments above mention programming languages with specific use cases such as memory, NOT necessarily human readability. This reminds me of the futurists who believed teachers and instruction would be outright replaced by multimedia in the '60s and '70s; the Clark and Kozma debates are a famous example of this. I wonder how many people dreamed of being a teacher and gave it up because of fearmongering? The fact is, context is everything. Humans are making the context, and we will be doing so for a long time. A threat to this is AGI, not the brain-in-a-jar that is generative AI. If I were in computer science I would take what Dr. Welsh says with a grain of salt. Instead, think about what kinds of problems are going to be introduced with AI and understand them as deeply as possible. With every innovation, new problems are born.

  • @HarpaAI
    @HarpaAI 6 months ago +166

    🎯 Key Takeaways for quick navigation:
    00:00 🍕 Introduction and Background
    - Introduction of Dr. Matt Welsh and his work on sensor networks.
    - Mention of the challenges in writing code for distributed sensor networks.
    01:23 🤖 The Current State of Computer Science
    - Computer science involves translating ideas into programs for Von Neumann machines.
    - Humans struggle with writing, maintaining, and understanding code.
    - Programming languages and tools have not significantly improved this.
    04:04 🖥️ Evolution of Programming Languages
    - Historical examples of programming languages (Fortran, Basic, APL, Rust) with complex code.
    - Emphasis on the continued difficulty of writing understandable code.
    06:54 🧠 Transition to AI-Powered Programming
    - Introduction to AI-generated code and the use of natural language instructions.
    - Example of instructing GPT-4 to summarize a podcast segment using plain English.
    - Emphasis on the shift towards instructing AI models instead of conventional programming.
    11:26 🚀 Impact of AI Tools like Copilot
    - Copilot's role in aiding developers, keeping them in the zone, and improving productivity.
    - Mention of ChatGPT's ability to understand and generate code snippets from natural language requests.
    17:32 💰 Cost and Implications
    - Calculation of the cost savings in replacing human developers with AI tools.
    - Discussion of the potential impact on the software development industry.
    20:24 🤖 Future of Software Development
    - Advantages of using AI for coding, including consistency, speed, and adaptability.
    - Consideration of the changing landscape of software development and its implications.
    23:18 🤖 The role of product managers in a future software team with AI code generators,
    - Product managers translating business and user requirements for AI code generation.
    - Evolution of code review processes with AI-generated code.
    - The changing perspective on code maintainability.
    25:10 🚀 The rapid advancement of AI models and their impact on the field of computer science,
    - Comparing the rapid advancement of AI to the evolution of computer graphics.
    - Shift in societal dialogue regarding AI's potential and impact.
    29:04 📜 Evolution of programming from machine instructions to AI-assisted development,
    - Historical overview of programming evolution.
    - The concept of skipping the programming step entirely.
    - Teaching AI models new skills and interfacing with software.
    33:44 🧠 The emergence of the "natural language computer" architecture and its potential,
    - The natural language computer as a new computational architecture.
    - Leveraging language models as a core component.
    - The development of AI.JSX framework for building LLM-based applications.
    35:09 🛠️ The role of Fixie in simplifying AI integration and its focus on chatbots,
    - Fixie's vision of making AI integration easier for developer teams.
    - Building custom chatbots with AI capabilities.
    - The importance of a unified programming abstraction for natural language and code.
    39:14 🎙️ Demonstrating real-time voice interaction with AI in a drive-thru scenario,
    - Showcase of an interactive voice-driven ordering system.
    - Streamlining interactions with AI for real-time performance.
    44:55 🌍 Expanding access to computing through AI empowerment,
    - The potential for AI to empower individuals without formal computer science training.
    - A vision for broader access to computing capabilities.
    - Aspiration for computing power to be more accessible to all.
    46:49 🧠 Discovering the latent ability of language models for computation.
    - Language models can perform computation when prompted with specific phrases like "let's think step-by-step."
    - This discovery was made empirically and wasn't part of the model's initial training.
    48:17 💻 The challenges of testing AI-generated code.
    - Testing AI-generated code that humans can't easily understand poses challenges.
    - Writing test cases is essential, but the process can be easier than crafting complex logic.
    50:40 🌟 Milestones and technical obstacles for AI in the future.
    - The future of AI development requires addressing milestones and technical challenges.
    - Scaling AI models with more transistors and data is a key milestone, but there are limitations.
    54:23 🤖 The possibility of one AI model explaining another.
    - The idea of one AI model explaining or understanding another is intriguing but not explored in depth.
    - The field of explainability for language models is still evolving.
    55:44 🤔 Godel's theorem and its implications for AI.
    - The discussion about Godel's theorem's relevance to AI and its limitations.
    - Theoretical aspects of AI are not extensively covered in the talk.
    56:42 🔄 Diminishing returns and data challenges.
    - Addressing the diminishing returns of data and computation in AI.
    - Exploring the limitations of data availability for AI training.
    58:34 🚀 The future of programming as an abstraction.
    - The discussion on the future of programming where AI serves as an abstraction layer.
    - The potential for future software engineers to be highly productive but still retain their roles.
    01:04:12 📚 The evolving landscape of computer science education.
    - Considering the relevance of traditional computer science education in light of AI advancements.
    - The need for foundational knowledge alongside evolving programming paradigms.
    Made with HARPA AI
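The 46:49 chapter point, that models compute better when prompted with "let's think step-by-step", is the zero-shot chain-of-thought trick: a trigger phrase nudges the model into emitting intermediate reasoning before its answer. A minimal sketch of the prompt construction; the template is illustrative, and a real use would send the resulting string to an LLM API:

```python
# Minimal sketch of zero-shot chain-of-thought prompting.
# The trigger phrase was discovered empirically (it was not part of any
# model's explicit training objective); the Q/A template is illustrative.

COT_TRIGGER = "Let's think step by step."

def build_plain_prompt(question: str) -> str:
    """Baseline prompt with no reasoning trigger, for comparison."""
    return f"Q: {question}\nA:"

def build_cot_prompt(question: str) -> str:
    """Append the chain-of-thought trigger phrase to a question.

    Many large language models emit intermediate reasoning before their
    final answer when prompted this way.
    """
    return f"Q: {question}\nA: {COT_TRIGGER}"

if __name__ == "__main__":
    q = "A juggler has 16 balls. Half are golf balls, and half of those are blue. How many blue golf balls are there?"
    print(build_plain_prompt(q))
    print(build_cot_prompt(q))
```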

    • @ericamelodecarvalho5714
      @ericamelodecarvalho5714 Před 6 měsíci +1

      000p

    • @sitrakaforler8696
      @sitrakaforler8696 Před 6 měsíci

      Damn, that's niiiice!! It's like Merlin?!

    • @jaroslavdanilov
      @jaroslavdanilov Před 6 měsíci

      @@sitrakaforler8696 better :)

    • @reasonerenlightened2456
      @reasonerenlightened2456 Před 5 měsíci +4

      Before thinking about AI's use in society, we must agree on who will profit from it, who will own it, and who will pay for the mistakes of the AI. Is it going to be like, "Oh well, bad luck" when AI ends someone's life?

    • @user-ri6lc7hf7b
      @user-ri6lc7hf7b Před 5 měsíci +5

      @@reasonerenlightened2456 you guys need to stop thinking of AI as some conscious thing; it is just like a knife or a gun. It is entirely about who is using it and with what intent.

  • @chenjus
    @chenjus Před 6 měsíci +31

    12:57 that's exactly right. The way I've been describing using GPT-4 for swe is that whereas I used to have to stop to look up error messages and read documentation, now I can ask GPT-4. GPT-4 smooths out all the road bumps for me so I can keep driving.

    • @reasonerenlightened2456
      @reasonerenlightened2456 Před 5 měsíci

      GPT-4 shows fake intelligence. For example, it struggles with fingers, and with drinking beer. LLMs are a dead end for AGI because they do not !(understand)! the implications of their output! Also, GPT-4 is designed by the Wealthy to serve their needs!

    • @miraculixxs
      @miraculixxs Před 5 měsíci +4

      Except when it doesn't. But sure, spending an afternoon with Copilot can often save 5 minutes of RTFM.

    • @fappylp2574
      @fappylp2574 Před 5 měsíci +1

      @@miraculixxs "Hello Chat GPT, please read this F manual for me"

  • @jonkbox2009
    @jonkbox2009 Před 6 měsíci +23

    I took a clip of the FORTRAN code and sent it to GPT-4 Vision and asked it what the code did but it could not tell me because the pictured code was incomplete. Understandable. I sent it the BASIC code and it got it right. I asked it if the name CONWAY helped with its answer. It said No. I started a new chat and sent the BASIC program without the program name. It got it right. I sent the APL program and it didn't recognize the language or understand it at all, even that it was a programming language. I told it the language was APL and it got it right. Pretty cool.

    • @reddove17
      @reddove17 Před 6 měsíci +4

      Because they are somewhere in the training set; the presenter got them from somewhere, I would assume.

    • @elawchess
      @elawchess Před 6 měsíci

      @@reddove17 The best of them are good enough to recognize a program that was not directly in the training set. Of course, something about the program is in the training set, e.g. the idea of Conway's Game of Life (or whatever it was), but that piece of code itself doesn't need to be in the training data for the model to be able to recognise it.

    • @reasonerenlightened2456
      @reasonerenlightened2456 Před 5 měsíci

      GPT-4 shows fake intelligence. For example, it struggles with fingers, and with drinking beer. LLMs are a dead end for AGI because they do not !(understand)! the implications of their outputs! Also, GPT-4 is designed by the Wealthy to serve their needs!

  • @user-eb4fc5wg2i
    @user-eb4fc5wg2i Před 5 měsíci +9

    I love Prof. Malan for maintaining such a badass YouTube channel!

  • @mrthanhca
    @mrthanhca Před 5 měsíci +1

    Thank you for the information; it's very useful.

  • @pillaideepakb
    @pillaideepakb Před 4 měsíci

    40:00 I built a similar demo a few months back and it works wonderfully. It's easy to integrate with other data sources to plug in data for further inference.

  • @pradeepebey6246
    @pradeepebey6246 Před 6 měsíci +10

    I think large language models are really cool, but they're too much of a black box. Sure, there are plenty of use cases, but as far as entirely replacing code goes, they need to be customisable enough and consistent in their functionality. Not sure how that would be possible!

    • @codytownsend3259
      @codytownsend3259 Před 5 měsíci

      I mean, we've already almost got there. Won't be long. Context windows are huge now.

    • @silencedogood7297
      @silencedogood7297 Před 5 měsíci +1

      You are restricting your options to computers as we know them, operating on limited versions of ones and zeroes. We cannot have true AI until we have bio-chips that operate like real brains.

    • @fappylp2574
      @fappylp2574 Před 5 měsíci

      Most of tech is currently already a black box. I write mostly C++ and can't even begin to fathom how these modern optimizing compilers work (and I never will). Heck, even the V8-runtime is almost arcane to most people. Only very few exceptional human beings can understand and work on these systems, everyone else can start to look for toilet cleaning jobs.

  • @shubhamtyagi6281
    @shubhamtyagi6281 Před 6 měsíci +6

    The hush that fell over the room at the drop from $1,200 to 12 cents. The room really never recovered; the difference in background noise before and after is stark.

    • @stoogel
      @stoogel Před 6 měsíci +2

      The group of people who control this technology and the infrastructure required to use it is small, and the power they might eventually wield is vast. 99.99% of people will fall into the group without said power. I simply don't believe those who say AI will free humanity.

  • @EnglishGeekWahoo
    @EnglishGeekWahoo Před 2 měsíci +1

    This is a good video for high school students to watch when deciding what to study in college: they might think not only about whether to approach CS, but about choosing something that won't be replaced by AI soon. Our era is tough, and it isn't getting any easier.

  • @jjhw2941
    @jjhw2941 Před 5 měsíci

    MetaGPT takes in a single line prompt to specify what software it should create and uses different prompts to assign different roles to GPTs to form a collaborative software entity to meet that spec.
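    The pipeline that comment describes can be sketched as a fan-out of one natural-language spec into several role-conditioned prompts, each of which would be sent to an LLM. The role names and templates below are invented for illustration; they are not MetaGPT's actual prompts.

```python
# Hedged sketch of a MetaGPT-style multi-agent fan-out: one spec line is
# expanded into one prompt per collaborating role. In a real system each
# prompt would be sent to an LLM and the outputs chained together; here we
# only build the prompts. Role names and templates are illustrative.

ROLES = {
    "product_manager": "Write a requirements document for: {spec}",
    "architect": "Design the system architecture for: {spec}",
    "engineer": "Implement the code for: {spec}",
    "qa": "Write test cases for: {spec}",
}

def assign_roles(spec: str) -> dict[str, str]:
    """Expand a one-line spec into one prompt per collaborating role."""
    return {role: template.format(spec=spec) for role, template in ROLES.items()}

if __name__ == "__main__":
    for role, prompt in assign_roles("a CLI todo-list app").items():
        print(f"[{role}] {prompt}")
```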

  • @nmg8225
    @nmg8225 Před 5 měsíci +5

    Give credit to the people who coded Copilot, ChatGPT, etc. It's now seamless to use these LLMs, but behind the scenes are still the coders, the statisticians, the scientists, and the engineers optimizing these models. You have to know both: how to code and how to use the models.

    • @LuisFernandoGaido
      @LuisFernandoGaido Před 3 měsíci

      Exactly what I think. A SWE needs to write code explicitly and build models to get solutions implicitly. Neither of these tasks seems likely to disappear in the future.

    • @omran2507
      @omran2507 Před 3 měsíci +1

      @@LuisFernandoGaido He never said they will. He just said the way software development happens will change drastically. It already has, actually; everyone at my job uses Copilot.

  • @ChilenonetoYoutube
    @ChilenonetoYoutube Před 6 měsíci +8

    The problem with this approach is that users are unable to create a coherent description of what they need; hence the need for people who know how to translate human needs into a coherent specification for the AI to work with. So the need for people who know the inner workings of relational databases or processors will always be there... will we require fewer of them? Maybe... will they disappear? Nope.

  • @nightraver56
    @nightraver56 Před 4 měsíci +1

    The problem with systems is that you never figure out every edge case, and the minute you do, management wants new features, which by definition introduce points of failure that need proper edge-case testing.
    Root-cause failure-point tracing and analysis and FMEA are part of every system; the owners would never accept a black box for mission-critical environments. If a coder introduces a bug in a traffic-light management controller, he loses his job or management is replaced. The first time someone dies in a traffic-light accident and it was caused by a ChatGPT-written traffic-management feedback algorithm, no one in the US, at least, is going to say "Oh, let's give ChatGPT one more chance".

  • @Custodian123
    @Custodian123 Před 4 měsíci +2

    50:45 Actually, there is a very fast-growing body of evidence which basically says that we can make models 10x smaller for the same performance, which in theory implies that if you scale one back up to the standard size, you would see a crazy performance boost.
    And don't forget the example of Mamba, which is a linear-time sequence model, versus the standard quadratic-time sequence models.
    So we will see a dual boost from hardware and software. Everyone knows about the hardware, but people are not accounting for what this dual improvement does to the rate of progress.
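    The quadratic-versus-linear point can be made concrete with a rough cost model: full self-attention does on the order of n² token-pair comparisons over a sequence of n tokens, while a linear-time state-space model such as Mamba does on the order of n state updates. Constant factors are ignored here, so the numbers are illustrative only.

```python
# Rough cost model for the comment's quadratic-vs-linear comparison.
# attention_cost counts token-pair comparisons for full self-attention;
# linear_scan_cost counts state updates for a linear-time sequence model.
# Constant factors and per-layer details are deliberately omitted.

def attention_cost(n_tokens: int) -> int:
    """~n^2 work: every token attends to every other token."""
    return n_tokens * n_tokens

def linear_scan_cost(n_tokens: int) -> int:
    """~n work: one recurrent state update per token."""
    return n_tokens

if __name__ == "__main__":
    # The gap widens with context length: at 100k tokens the quadratic
    # model does 100,000x the work of the linear one under this model.
    for n in (1_000, 10_000, 100_000):
        print(n, attention_cost(n) // linear_scan_cost(n))
```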

  • @regularnick
    @regularnick Před 6 měsíci +5

    19:26
    > "I've been coding the whole day", but you threw away 90%
    Oh, that's a pretty bold claim: that with ChatGPT you will get a correct code snippet on the first try, without any need to prompt it with 20 more messages clarifying things and making sure it doesn't confuse the language, paradigm, etc.
    You should not compare the "clear code" of a SWE with GPT tokens, because you are guaranteed to spend many more than ideal. Considering they are dirt cheap, though, this may not be a problem.

  • @ivan88buble
    @ivan88buble Před 6 měsíci +3

    Great sales presentation!

  • @dvanrooyen1434
    @dvanrooyen1434 Před 3 měsíci +1

    The simple truth is that the best programmers solve problems deeply rooted in the domain context - this is not the same as what you are seeing with language models.
    A simple example is the encyclopedia - you used to be able to buy the set of books - and look where that ended up…

  • @KaiCanvas
    @KaiCanvas Před 5 měsíci +2

    Think of AI as someone's business, because it is.
    As a startup founder, I'm going to tell my secret: I'm starting to learn low-level programming languages and electronics to design my own system with my own programming language, so when AI gets too advanced no one can easily say "make a tool like xxxxx". And I'll dynamically change the syntax and mnemonics if it ever leaks via my employees.

  • @andrebatista8501
    @andrebatista8501 Před 6 měsíci +7

    If AI can write programs, it'd be able to replace a lot of people, and not just in tech but in many fields. Then we'd have more efficient services, but with so many people unemployed, who would pay for those services?

    • @compateur
      @compateur Před 5 měsíci +4

      This is a very interesting question. Take it to the extreme: LLMs are able to take over any job. What makes life worthwhile? Can ChatGPT enjoy the first sun ray that warms up its AI chip? Does it enjoy the tranquility of nature? Can it enjoy the soft sea breeze? Can it get excited about new discoveries? What makes the heart of ChatGPT tick? Does it have a heart? Sometimes we forget that we are multidimensional creatures. Maybe we have to come up with a completely new model for society. We have to redefine ourselves.

    • @-BarathKumarS
      @-BarathKumarS Před 5 měsíci +1

      @@compateur Dude, seriously, think about it! One of my friends works as a consultant and another as an accountant at a top firm. I have personally looked at the kind of work they do, which at the end of the day is the most brain-numbing, manual, repetitive work I have ever seen... to put it bluntly, a high schooler could do their jobs well enough.
      What will happen to these people then?

  • @abdulshabazz8597
    @abdulshabazz8597 Před 6 měsíci +4

    We must move forward with the advanced computational and reasoning capabilities these software models afford us, but we cannot move forward with black-box models which have no formal method of verification or "instruction manual", so to speak. These models should be considered idle malware. I mean, imagine: these advanced models, and models like them, in our appliances, our aircraft, and our ground transportation systems, behaving properly 99.99 percent of the time yet unable to actually be verified correct...

  • @AG-cx1ug
    @AG-cx1ug Před 5 měsíci

    51:42 he mentions some students that are working on building these chips - does anyone know the name of the company?

  • @sanfrance3980
    @sanfrance3980 Před 4 měsíci

    When was this video recorded? Any idea?

  • @1dosstx
    @1dosstx Před 6 měsíci +4

    38:17 What is considered kid-safe? Based on what milestones? Emotional? Psychological? Etc.? You need to know which child development sources are peer-reviewed, etc. Yes, you could ask the AI for those, but then you'd need to ensure they were not hallucinations.

  • @sandrinjoy
    @sandrinjoy Před 5 měsíci +3

    That has been the most professional Ad Break I have ever seen in my life. HAHA

  • @thecasualengineer99
    @thecasualengineer99 Před 5 měsíci

    I have tried it for a few days, and a job that would need 2-3 days became 4 hours for the first-pass code. Very nice.

  • @matthewrummler
    @matthewrummler Před 4 měsíci +1

    I'm putting this here as a note for myself (I'll see if that works).
    POINTS REGARDING HIS "IMPOSSIBLE" ALGORITHM (no, I don't think he literally means impossible):
    1. The AI is not a simple algorithm itself
    - The AI cannot be summarized as an algorithm in the way someone would write one... the complexity is fairly expansive... even to set up the ML models
    2. Most of what he is asking would not be difficult for a reasonably simple program
    - Getting the title, etc...
    3. DO NOT "": This would be the default of a program
    - When he says DO NOT use any information about the world... it does not mean do not utilize your predictive analysis; it just means don't mix in information that is not in the transcript
    4. Summarizing is hard; a targeted predictive learning model IS probably the best algorithm for this
    - The only very difficult piece for a custom-built program (including one or more algorithms to make this infinitely repeatable) IS the summarization
    So, my conclusion: part of writing code well will, in the future, include targeted ML*
    (though my take is not the monolithic, gargantuan systems that OpenAI & Google produce... though those could be a good way to train a targeted ML model)

  • @alrasch4829
    @alrasch4829 Před 3 měsíci +5

    A great lecture/talk, illuminating and informative. As a practitioner, I find it very true and relevant.

    • @michellehunter8775
      @michellehunter8775 Před 2 měsíci

      Agreed!

    • @michellehunter8775
      @michellehunter8775 Před 2 měsíci

      Agreed. There's a lot of push-back against his message in the comments, but I'm already seeing it happen within tech companies where, for example, 10% of employees are let go and the ones staying are now doing several of those roles, along with their own, all by using AI.

  • @MikkoRantalainen
    @MikkoRantalainen Před 4 měsíci +7

    Great lecture! I've been writing code professionally for 20 years, and I feel like Copilot is at the level of a first-year university student learning IT. Not a perfect co-worker, obviously, but much better than basic autocomplete in your IDE or some other tools you could use. I'm fully expecting Copilot to improve so rapidly that I end up writing all my code with it. Right now, I feel that it can already provide some support, and with a fast internet connection, having it available is a good thing.
    Most of the time Copilot writes somewhat worse code than I could myself, but it's much faster at it. As a result, I can do all the non-important stuff with the somewhat lower-quality code that Copilot generates, so I can focus my time on the important parts only. I'd love to see Copilot improve even to the level where the easy stuff is perfect.

    • @ndic3
      @ndic3 Před 4 měsíci

      Copilot is terrible, though. GPT-4 is 50x better. In comparison, Copilot is unusable.
      Edit: the number is obviously made up from what it feels like

    • @MikkoRantalainen
      @MikkoRantalainen Před 4 měsíci

      @@ndic3 Can you get GPT-4 integrated into your code editor?

    • @LionKimbro
      @LionKimbro Před 4 měsíci +4

      I’ve been programming for 40 years of my life. Professionally for about 24 years. I absolutely love coding with Chat-GPT. But what people don’t get is that architecture still matters. You are still accountable for the code working out. You still need a picture of the system as a whole. You still need to get what’s going on. You still need to understand algorithms, you still need to be able to perform calculations on performance and resources. You still have to know stuff. You have to put the pieces together into a working whole. And the appetite for software is near infinite.
      I don’t think people quite get that.
      Chat-GPT can’t do it all for you, by a long shot. Chat-GPT is a great intern. But you can’t make Excel with even two hundred interns. Not even a thousand interns can make Excel. There are other problems.
      And I am not saying that one day we won’t have AIs that can fully replace competent programmers. We probably will- one day. But that day is not today, and it is not even tomorrow.
      When young people who are afraid ask, “but will there even be programmers in ten years?”, I tell them, “maybe not, but I can tell you this: it has never been easier to learn programming than it is today. You can ask anything of Chat GPT, and it will answer for you. If you know one programming language, you can now write in any programming language. The cost of learning to program has dropped incredibly. And the money is right - right over there.”

    • @edwardgarson
      @edwardgarson Před 3 měsíci

      @@ndic3 Copilot is based on GPT-4

  • @shutterrecoil
    @shutterrecoil Před 4 měsíci

    Did I get it right that, in the new programming paradigm, a human should review a 200k-line program with no modules and duplicated code every time it is regenerated to add a minor feature?

  • @alfonsobaqueiro
    @alfonsobaqueiro Před 5 měsíci +1

    It's sad to see people who do not love what they do. 😢

  • @troyhackney148
    @troyhackney148 Před 6 měsíci +10

    Sir... This is a Dr. Donut.

  • @nathansodja
    @nathansodja Před 6 měsíci +23

    This feels like the Theranos equivalent of the future of software, it's all dreamville

    • @jwesley235
      @jwesley235 Před 6 měsíci +15

      Tell me you don't understand what's going on in AI without saying you don't know what's going on in AI.

    • @nathansodja
      @nathansodja Před 6 měsíci

      @@jwesley235 Sure, I know nothing, Jon Snow.

    • @AD-ox4ng
      @AD-ox4ng Před 6 měsíci +5

      @@jwesley235 how about you explain it to us then?

    • @calliped-co5mj
      @calliped-co5mj Před 6 měsíci +3

      @@AD-ox4ng how about you do your own research.

  • @walterarlenhenry
    @walterarlenhenry Před 5 měsíci

    [18:38] Lines of code per day? What year is this?

  • @BurningR
    @BurningR Před 5 měsíci

    What's that he makes his slides in? I love it.