Is AI Actually Useful?

  • Published 23. 02. 2024
  • Get Magical AI for free and save 7 hours every week: getmagical.com/patrick
    A new Harvard Business School study analyzed the impact of giving AI tools to white-collar workers at Boston Consulting Group.
    In the study, management consultants who were told to use ChatGPT when carrying out a set of consulting tasks were far more productive than their colleagues who were not given access to the tool. Not only did AI-assisted consultants carry out tasks 25 per cent faster and complete 12 per cent more tasks overall, but their work was also assessed to be 40 per cent higher in quality than that of their unassisted peers.
    In today's video we look at the pros and cons of using AI at work.
    Harvard Paper: www.hbs.edu/ris/Publication%2...
    Nicholas Carlini Blog: nicholas.carlini.com/writing/...
    Nicholas Carlini Quiz: nicholas.carlini.com/writing/...
    Effects of AI on Employment Paper: papers.ssrn.com/sol3/papers.c...
    Patrick's Books:
    Statistics For The Trading Floor: amzn.to/3eerLA0
    Derivatives For The Trading Floor: amzn.to/3cjsyPF
    Corporate Finance: amzn.to/3fn3rvC
    Ways To Support The Channel
    Patreon: / patrickboyleonfinance
    Buy Me a Coffee: www.buymeacoffee.com/patrickb...
    Visit our website: www.onfinance.org
    Follow Patrick on Twitter Here: / patrickeboyle
    Patrick Boyle On Finance Podcast:
    Spotify: open.spotify.com/show/7uhrWlD...
    Apple: podcasts.apple.com/us/podcast...
    Google Podcasts: tinyurl.com/62862nve
    Join this channel to support making this content:
    / @pboyle

Comments • 1.7K

  • @PBoyle
    @PBoyle  Před 3 měsíci +63

    Get Magical AI for free and save 7 hours every week: getmagical.com/patrick

    • @Fx_-
      @Fx_- Před 3 měsíci +4

      You are looking at LLMs commercialized.
      Look at transformers repurposed. For example, instead of guessing the next words we want, like an LLM does, some have been tested that guess the next evolutions in chemical compounds; the same idea can be applied to DNA, etc.
      Also look at large action models. I'm working to put together a universal UI and chat interface with an LLM and domain-specific vision models, as well as backend chatflow-like control over navigation.
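
      As a side note on the "large action model" idea above, the sketch below shows the general shape of letting a model drive navigation: its reply is parsed as a structured action and dispatched to a handler. The action schema, the handlers, and the canned reply are hypothetical illustration, not any particular product's API.

```python
# Toy sketch of an "action model" backend: the LLM is asked to reply with a
# structured action, which is parsed and dispatched to a navigation handler.
# The schema, handlers, and canned reply below are all invented for illustration.
import json

def handle_navigate(args):
    print(f"navigating to {args['url']}")

def handle_click(args):
    print(f"clicking element {args['selector']}")

HANDLERS = {"navigate": handle_navigate, "click": handle_click}

def dispatch(model_reply: str) -> None:
    """Parse the model's JSON action and run the matching handler."""
    action = json.loads(model_reply)
    handler = HANDLERS.get(action.get("name"))
    if handler is None:
        raise ValueError(f"unknown action: {action}")
    handler(action.get("args", {}))

# In a real system this string would come from the LLM; here it is canned.
dispatch('{"name": "navigate", "args": {"url": "/invoices/overdue"}}')
```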

    • @DJVARAO
      @DJVARAO Před 3 měsíci

      Dear Patrick,
      Thank you for your usual high-level take on complex subjects. I use AI on a daily basis for writing emails since it does a terrific job with grammar. I also use a new AI tool for finding quick referenced basic info on new subjects (Perplexity). I am very familiar with machine learning since 2000, when my professional career started. So, the best way to help frame the scope and capabilities of any ML model is understanding that they are great interpolators but very bad extrapolators. As expected, you don't expect a cat's face from an AI trained with human faces. Language models are trickier because people think they are more capable, but you clearly pointed out some of their limitations. It can write a poem in the style of Whitman (because, for example, ChatGPT used his works and other works from critics and writers), but it dramatically fails at writing a simple short story from a Latin American Nobel Laureate because it lacks that information. But since it has to give an answer, it hallucinates with conviction.
      Our company leverages ML models to develop the next generation of drug discovery using new technologies, including AI. Thanks to this approach, we can perform precision physics for proteins 10,000 times faster than the best conventional molecular modeling competitor in the market.
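
      The "great interpolator, bad extrapolator" point above can be seen with a tiny curve-fitting toy, using an ordinary polynomial fit as a stand-in for a learned model. This is a generic ML illustration, not anything from the study or the commenter's company.

```python
# Tiny illustration of "interpolates well, extrapolates badly": fit a cubic to
# a sine wave sampled on [0, 2*pi], then evaluate inside and outside that
# range. Predictions inside stay close to the truth; far outside they diverge.
import numpy as np

x_train = np.linspace(0, 2 * np.pi, 50)
y_train = np.sin(x_train)
coeffs = np.polyfit(x_train, y_train, deg=3)   # simple stand-in for "a model"

for x in (np.pi, 3.0, 4 * np.pi):              # two in-range points, one far outside
    pred = np.polyval(coeffs, x)
    print(f"x={x:6.2f}  prediction={pred:8.2f}  truth={np.sin(x):5.2f}")
```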

    • @alexkirrmann8534
      @alexkirrmann8534 Před 3 měsíci

      I hate that everyone thinks AI is a thing; it's NOT AI. It's just a program that does a specific task. I don't see the point of any of this. Generative? What makes it generative? Can I cut out a lot of its code and it will regenerate? I mean, even the term AI is misleading. I don't see the difference between AI and, say, any computer. Morons, all of you. Buzzword. I would short all of these companies, because eventually we are going to realize it's all nonsense. It's like BITCOIN: the ignorance of everyone is being used to steal money from morons with a product that already exists. This is a scam, and the simple fact no one is calling them out is almost identical to cryptocurrency and the other tech scams out there. Anyone with a little understanding of computers and coding would be able to tell you this is nothing.

    • @latergator915
      @latergator915 Před 3 měsíci +5

      But is AI actually useful?

    • @Fulminin
      @Fulminin Před 3 měsíci +7

      I actually had to click the link cause I couldn't tell if it was a joke or not

  • @antoinepageau8336
    @antoinepageau8336 Před 3 měsíci +1173

    You can tell this channel is 100% powered by AI, the SIM presenter never blinks.

    • @davidc1878
      @davidc1878 Před 3 měsíci +93

      LOL Can't be Google's AI though as the presenter isn't not white.

    • @howdj
      @howdj Před 3 měsíci +55

      If this channel was AI generated then we would get a lot fewer updates on rap news.

    • @2rx_bni
      @2rx_bni Před 3 měsíci +8

      @davidc1878 I screamed 😂

    • @k54dhKJFGiht
      @k54dhKJFGiht Před 3 měsíci +33

      He outsourced that to Blinkest! Bah Dum Chee!

    • @karmaandkerosene2885
      @karmaandkerosene2885 Před 3 měsíci +26

      I thought Patrick's right side was paralyzed for almost 2 years. Seriously. I never saw him move it on camera.

  • @germansnowman
    @germansnowman Před 3 měsíci +1386

    My favourite quote regarding Large Language Models: “The I in LLM stands for Intelligence.”

    • @yds6268
      @yds6268 Před 3 měsíci +26

      Lmao

    • @nekogami87
      @nekogami87 Před 3 měsíci +29

      gonna steal that one :D

    • @jimbojimbo6873
      @jimbojimbo6873 Před 3 měsíci +72

      They just spit out a shit ton of content based on predicted behaviour. There is no 'thinking' or intelligence involved per se. It's just throwing a billion things at a wall: the more something sticks, the more it will spit it out.

    • @yds6268
      @yds6268 Před 3 měsíci +32

      @jimbojimbo6873 Yeah, if you study a little bit of linear algebra and network theory, those LLMs will be completely demystified.

    • @emmanuelbeaucage4461
      @emmanuelbeaucage4461 Před 3 měsíci +3

      spit my five alive laughing!

  • @luckylanno
    @luckylanno Před 3 měsíci +913

    My experience matches the study. I am a senior software engineer, so I write a lot of software and a lot of documentation about that software. Usually the AI-generated code only represents the common use case, which is little better than what I could get out of the typical documentation for an open-source project, for example. It's a little bit easier to use the AI as a search engine, but if the problem goes outside the boundaries of the most common use case even a little bit, I'm on my own. Basically, it quickly writes code that I would have just copied and pasted from an example anyway, but I still have to do the hard parts myself.
    It's a little better for generating documentation, but typically I have to do so many edits to fix errors or get the focus or tone right that the time savings shrink dramatically.
    I'm a little worried that AI is generally only going to give a best-case 50% productivity boost to most, while the market seems to be assuming a productivity revolution... I'm worried for my 401k, I mean.

    • @stevebezfamilnii2069
      @stevebezfamilnii2069 Před 3 měsíci +114

      The biggest problem is that AI is prone to bullshitting, and if you don't review its work the number of errors snowballs. I tried doing some engineering tests using it and received roughly 60% correct answers, which is far from spectacular.
      The only real use I see is that AI can generate a lot of unique text that nobody will read, which is actually quite a chunk of work for many people.

    • @jimbocho660
      @jimbocho660 Před 3 měsíci +6

      @stevebezfamilnii2069 Did you use a fine-tuned LLM or a RAG LLM? I was led to believe RAG LLMs did very well on Q&A tasks.
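
      For readers unfamiliar with the term: retrieval-augmented generation (RAG) just means fetching relevant documents and pasting them into the prompt before the question is sent to the model. A minimal sketch, assuming a stand-in bag-of-words retriever and a hypothetical `ask_llm` call:

```python
# Minimal RAG sketch using only the standard library: documents are scored
# with bag-of-words cosine similarity and the top matches are pasted into the
# prompt. The final LLM call is left as a placeholder for whatever API you use.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query: str, docs: list, k: int = 2) -> list:
    qv = vectorize(query)
    return sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list) -> str:
    context = "\n\n".join(retrieve(query, docs))
    return ("Answer the question using only the context below. "
            "If the context is insufficient, say so.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")

docs = [
    "The pressure relief valve must be tested every 12 months.",
    "Gross margin is revenue minus cost of goods sold, divided by revenue.",
]
print(build_prompt("How often is the relief valve tested?", docs))
# answer = ask_llm(build_prompt(...))  # hypothetical chat-completion call
```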

    • @Maxkraft19
      @Maxkraft19 Před 3 měsíci +50

      This has been my experience as well. The tools work great with someone who knows what they're doing. But if you blindly follow the AI, you might delete part of an SQL database.

    • @tohafi
      @tohafi Před 3 měsíci +31

      Yeah, it feels like a big bubble to me. Might blow up the tech sector (again 😮‍💨)...

    • @michaelredl7860
      @michaelredl7860 Před 3 měsíci +27

      Which AI do you use? I am a developer as well and got a lot of great results using GPT-4, whereas GPT-3 did not perform that great. I always describe what I want in detail, and oftentimes GPT-4 returns fully functioning code, which got me a little worried about my job.

  • @jalliartturi
    @jalliartturi Před 3 měsíci +289

    I play with AI in SEO. What's interesting is that, due to the error rate and generic content, it's actually quicker to write by hand than to have the AI do it and then fix all the mistakes it makes.

    • @RogueReplicant
      @RogueReplicant Před 3 měsíci +20

      Ikr, and then A.I. will regurgitate your PAINSTAKING ORIGINAL RESEARCH and pass it off to some dufus as "A.I.-generated", lol

    • @panamahub
      @panamahub Před 3 měsíci +1

      same here

    • @magfal
      @magfal Před 3 měsíci +19

      This applies to coding too.
      Unless what you write is of generic quality and you're reinventing a wheel for the 6000th time.

    • @personzorz
      @personzorz Před 3 měsíci +9

      As someone involved in SEO you are part of the problem and destroying the internet. Quit your job. You provide negative net value.

    • @Toberumono
      @Toberumono Před 3 měsíci +7

      @magfal I actually just tried asking an AI to do my most recent assignment (I gave it the details that I had at the time I got the assignment).
      It tried to teach me how to add an event listener in plain JavaScript, which, admittedly, is a massive improvement over my last experience. That time, I asked it a basic PHP question and got a response back for Python (and no, the code wasn't related to my question either).
      (Point being, I'm glad to know I'm not alone.)

  • @Zachary-Daiquiri
    @Zachary-Daiquiri Před 3 měsíci +281

    TL;DR: AI helps with things it's good at and hurts with things it's bad at. The problem is that it isn't really clear what AI is good or bad at.

    • @coonhound_pharoah
      @coonhound_pharoah Před 3 měsíci +11

      It's good at creative writing for things like descriptions of architecture or scenes, and for writing cheesy character speeches for use in my D&D sessions. The art generators are great for making maps and character portraits. Just don't expect the LLM to do the heavy lifting of designing a campaign or anything.

    • @tomlxyz
      @tomlxyz Před 3 měsíci +20

      It's currently also only good in combination with a user who's also good in that area

    • @andybaldman
      @andybaldman Před 3 měsíci +12

      So, just like humans then.

    • @Teting7484f
      @Teting7484f Před 3 měsíci +9

      No, it will sometimes get things correct when the training set matches the input; when it doesn't, it will likely be incorrect.
      It cannot fact-check itself. Ask me a question on geology and I'll say I don't know, or let me look at a book or Google.

    • @papalegba6796
      @papalegba6796 Před 3 měsíci +25

      It's really, REALLY good at lying.

  • @robincray116
    @robincray116 Před 3 měsíci +16

    I asked ChatGPT some basic engineering questions, and I can safely say that ChatGPT is a very knowledgeable first-year engineering student, at best.
    The problem, I think, is that the bulk of engineering knowledge is still found in esoteric textbooks, in engineering standards behind paywalls, and in word of mouth between engineers. It also doesn't help that engineering documentation is often a company secret, for obvious reasons.

    • @traumateaminternational4732
      @traumateaminternational4732 Před 18 dny

      I am an accounting major, and I can confirm the same in our field. I'm only a junior, but I could tell that more recent versions of ChatGPT were misinterpreting concepts like gross margin. Not at all surprising that the same is true in engineering.
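
      For context, the gross margin concept mentioned above is simple enough to check by hand, which is partly what makes a model misinterpreting it telling. A tiny worked example with invented figures:

```python
# Gross margin = (revenue - cost of goods sold) / revenue.
# The figures below are made up purely to illustrate the calculation.
revenue = 500_000.0
cogs = 320_000.0
gross_profit = revenue - cogs                 # 180,000
gross_margin = gross_profit / revenue         # 0.36, i.e. 36%
print(f"gross profit = {gross_profit:,.0f}, gross margin = {gross_margin:.0%}")
```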

  • @KathyClysm
    @KathyClysm Před 3 měsíci +308

    I work in software marketing, and quite frankly, as excited as everyone was about the new developments in the beginning, we've essentially stopped using any "AI" or LLMs. For coding, all an LLM can give you is basically the most common code you'd find in any library anyway, and if I ask it to code something longer or more complex, LLMs tend to cause more problems than they solve: because they don't actually "understand", they have no concept of contingency or continuity, so for example they switch how they refer to a specific variable mid-code. Ultimately, for the 30 minutes it saves me over coding from scratch or just looking it up in our documentation, I spend 50 minutes bugfixing the LLM code.
    Same with user documentation: the LLM texts have no concept of terminological consistency, so they keep adding synonyms for terms that have a fixed definition in our company terminology, etc.
    And for the marketing part of it, you'd think LLMs are useful for generating those generic fluff texts you need just to fill a website, but because the output of an LLM is, by definition, the most common sequence of sentences and paragraphs from the data set the LLM was trained on, you end up with marketing fluff that is so incredibly boring, bland and lacking in any uniqueness that it's not even useful as fluff text. The only use we've found for it so far is automated e-mail replies, which we previously handled via a run-of-the-mill productivity tool.

    • @KindredPlagiarist
      @KindredPlagiarist Před 3 měsíci +49

      It's funny. I'm a novelist and when AI got big certain writers I know were all about using it for characterization and plot. It turned out that AI can't really CONVEY character or at least not less hamfistedly than a teenager writing fanfic. Similarly it can plot a book but the plot is always derivative. Essentially it's a machine that churns out bad writing and if you try to edit what it writes, you quickly realize that it's easier just to write it yourself. Some use cases like condensing paragraphs can be helpful but that's about it. And you can always condense a paragraph yourself if you're past your first couple years of writing. My friend who writes SEO optimized copy for large companies, though, uses it all the time.

    • @MrkBO8
      @MrkBO8 Před 3 měsíci +20

      An LLM is really a limited language model: AI does not understand the why of things, nor does it have any concept of the physical, real world. AI would not understand that because it's raining some behaviours ought to change; it can say something is wet but does not understand the concept of a road becoming slippery because of rain. People understand that traction is lower in the rain and that speed needs to be reduced in corners; it would also not understand that visibility is reduced, or how a child in the back seat can be a distraction. This is because it relies on words to learn; it cannot experience the "I am about to die" feeling a person would get approaching a sharp turn in the wet on a cliff top.

    • @KathyClysm
      @KathyClysm Před 3 měsíci +29

      ​@@KindredPlagiarist it's pretty much just... fancy paragraph-long predictive text based on the most common words/ideas/phrases in a given context, so you will always end up with something that has already been done so often that even the LLM has realised it's a common trope. If you want your marketing to be successful and stand out from the crowd, it's just not good enough. Plus in our experience, anyone who actually reads the fluff text can tell pretty quickly if it's AI-generated and usually has a negative reaction to that - our feedback has shown customers are almost offended at the thought that they weren't considered "important enough for a human to sit down and write something creative". So it's just not worth it.

    • @yuglesstube
      @yuglesstube Před 3 měsíci +6

      It's improving rapidly.

    • @tundeuk
      @tundeuk Před 3 měsíci

      @MrkBO8 Tesla FSD

  • @devonglide1830
    @devonglide1830 Před 3 měsíci +126

    I'm not knocking AI, I use it quite a bit. But my general feeling (based on how I use it) is that all it's doing is quickly web-scraping the top 10 Google results and summarizing them for me. Like I said, in my field that is very, very handy and saves me time because I don't need to skim pages and blogs to find answers. On the other hand, it's never offered an answer or solution that would make me think it's done anything remotely original.

    • @TheReferrer72
      @TheReferrer72 Před 3 měsíci +2

      Few things humans do are original, so you are not saying much.

    • @MisterFoxton
      @MisterFoxton Před 3 měsíci +9

      "Few" is infinitely better than "zero".

    • @carlpanzram7081
      @carlpanzram7081 Před 3 měsíci +3

      Not yet.
      The growth in ability of ai we have witnessed in the last 3 years has me convinced it's going to replace you within a decade tops.
      It's never going to be worse, it's forever going to increase in power. Eventually it's going to be far more intelligent than any human, and by that point, your position will be impossible to defend.
      We will be a bunch of comparatively stupid apes, led by super intelligent AI, it's basically inevitable.

    • @johnsmith1474
      @johnsmith1474 Před měsícem +2

      Worse, you gain nothing. You do not become smarter/more skilled via the very instructional task of research.

    • @devonglide1830
      @devonglide1830 Před měsícem

      @carlpanzram7081 For me, that would be great. I'll be retired by that time, and if AI is able to solve all my health ailments, provide me with eldercare, shore up the environmental problems we're facing, free up leisure time for the world, and produce all our goods - great!
      Unfortunately, I don't see it happening. The current AI is nothing special in the grand scheme of things. Sure, it's revolutionary in a similar way to what Google was decades ago, but when you look at the world post-Google versus pre-Google, realistically the overall well-being and health of the world hasn't really advanced much (some might argue it's even regressed in many domains).
      AI isn't AI (currently); it's a handy tool just as a spell checker is. However, even current spell checking and auto-complete still leave much to be desired, so the idea that AI is going to replace humans in the next 10 years is pretty fanciful thinking, in my opinion.

  • @caty863
    @caty863 Před 3 měsíci +377

    I recently wasted two days trying to diagnose and fix a bug in my script thanks to over-reliance on AI. In the end, I gave up and decided to consult the API's documentation and scroll through user forums. I immediately got all the answers I needed. I will never think of these AI tools the same way again.

    • @petersuvara
      @petersuvara Před 3 měsíci +7

      I did videos on my TechSuvara channel which explained the exact same situation! It’s a real concern.

    • @DEBO5
      @DEBO5 Před 3 měsíci +12

      There's a learning curve. Always cross-reference with the docs. I don't really ask it to produce code from scratch; rather, I ask it to give me guidance on where to start and background explanations. It's a pretty decent refactoring tool as well. It can catch some poor programming practices that you wouldn't have caught.

    • @mandisaw
      @mandisaw Před 3 měsíci +21

      Thing is, it's worse than a doc-search tool, because it hallucinates. Even with a user forum, you'll get some illuminating back-and-forth assessing possible variants & gotchas. So many bugs don't bite you until you're already in production 😢

    • @ImmaterialDigression
      @ImmaterialDigression Před 3 měsíci +9

      Using AI for anything interfacing with APIs is really tricky because they more often than not don't know what the current API state is. If you want a bash script it's going to be great, if you want to use an API that has had a major version change in the last year it's going to be fairly useless.

    • @M1and5M
      @M1and5M Před 3 měsíci +1

      If there is documentation, why did you not upload the documentation to GPT and then prompt it?

  • @LockFarm
    @LockFarm Před 3 měsíci +497

    Hard not to notice that the definition of a high end consultancy job requiring top students from elite universities is "Come up with an idea for a drink", and "Come up with an idea for a shoe". Yet the people who have the actual technical knowledge to make the drink, or build the shoe don't get a look in. If we compared the respective salaries, I'm willing to bet that the Apprentice extras will be earning double or more that of the people who actually do the work. So when we hear that these corporate experts might be put out of a job by AI... my sympathy is strangely absent.

    • @yds6268
      @yds6268 Před 3 měsíci +74

      Thank you for pointing that out. The "idea guys" seem to be valued more than the actual engineers who are going to design that stuff. Which often is impossible to manage, considering the MBA's lack of technical knowledge

    • @arpadkovacs2116
      @arpadkovacs2116 Před 3 měsíci +52

      Tech companies have the same issue. As MBAs took over from scientists and engineers, they all seemed to decline eventually.

    • @tomlxyz
      @tomlxyz Před 3 měsíci +18

      The thing is that a good drink isn't necessarily a drink that makes you a lot of money. Nowadays a lot of demand is created by everything but the product itself (lifestyle etc)

    • @cameronhoglan
      @cameronhoglan Před 3 měsíci +9

      It's not what you know it's who you know.... Business has always been like that.

    • @LockFarm
      @LockFarm Před 3 měsíci

      @tomlxyz For sure, so if the drinks makers can do "everything but the product itself" with AI, why bother employing expensive consultants?

  • @wbmc1
    @wbmc1 Před 3 měsíci +68

    As a scientist, generative AI is (at the moment) very limited in its usefulness. Because it doesn't really 'understand' novel situations, it isn't helpful at planning experiments or studies. The most useful area is in summarizing reports or helping with writing. But even there you have to be careful that the AI isn't missing the major thrust of papers or publications (as it can often fixate on certain things, or misinterpret them).
    Non-generative machine learning has been a tool used for years, though. We use it pretty routinely to help correct for errors in sequencing, for instance, and for assessing the accuracy of variant calls in genetics. I'm of the same belief that, while a useful tool, it is one of a dozen tools in a worker's toolbox -- it doesn't replace the worker.
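
    As a toy sketch of the kind of non-generative machine learning mentioned above, here is a plain classifier framed as scoring variant calls from quality features. The features, values, and labels are invented for illustration; real sequencing pipelines use far richer inputs and dedicated tools.

```python
# Toy sketch of non-generative ML for quality control: a logistic-regression
# classifier that scores variant calls as likely true/false from two invented
# features (read depth and mean mapping quality). Purely illustrative.
from sklearn.linear_model import LogisticRegression

# [read_depth, mean_mapping_quality] for a handful of made-up calls,
# labelled 1 = confirmed variant, 0 = artifact.
X = [[60, 58], [45, 60], [8, 20], [5, 35], [70, 59], [10, 15], [55, 50], [7, 22]]
y = [1, 1, 0, 0, 1, 0, 1, 0]

clf = LogisticRegression().fit(X, y)
for call in [[50, 55], [6, 18]]:
    p = clf.predict_proba([call])[0][1]
    print(f"depth={call[0]:>2}, mapq={call[1]:>2} -> P(true variant) = {p:.2f}")
```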

    • @epicfiend1999
      @epicfiend1999 Před 3 měsíci +7

      The problem is that companies will rush to replace workers with AI to be 'more efficient' and then either
      A. never realize that they made their services worse, or
      B. after realizing, hope that enough of the industry does the same that they can get away with making their services worse.
      Either this will lead to a readjustment period in the market, or everything will just get worse. Either way, it will never be how it was; as AI tools get better, we will never return to pre-AI, even with large-scale industry rollback after mass adoption.

    • @mandisaw
      @mandisaw Před 3 měsíci +7

      Journals are likely also struggling to contain the influx of mis-cited and poorly-written submissions. There was already an issue with fake journals and subpar papers being accepted, this is just going to make things a lot worse for academic research.

    • @coryc9040
      @coryc9040 Před 3 měsíci

      @mandisaw It depends. There was tons of junk research out there before AI, because the desirable metric was quantity over quality for publication. I could imagine scientific organizations building AI only on high-quality publications and creating models that could do most of the heavy lifting for peer review.

    • @joejones9520
      @joejones9520 Před 3 měsíci +2

      There is no conceivable job or task that AI can't eventually do better than a human; i.e., all new jobs created by AI will be able to be done better by AI than by a human. This tech revolution is profoundly different from all others before it; in fact, there is no comparison.

    • @wandilekhumalo7062
      @wandilekhumalo7062 Před 3 měsíci +1

      @joejones9520 Hi friend, as someone who works in the field of AI I admire your enthusiasm, but I must caution it at the same time: what you describe is in the realm of AGI, something we are currently very, very far from. My hope for the current versions of LLMs is that they show us how our economic systems are outdated, but in terms of solving our biggest problems, such as climate change, curing cancer and renewable energy, we need vastly different approaches. A paradigm shift in the way we build AI, perhaps? The current consensus is more power, but I'm not convinced by this approach...

  • @ricks5756
    @ricks5756 Před 3 měsíci +226

    Just a side note: commercially available freelance art projects are starting to become harder to find.
    Illustrators, concept artists, and background artists are losing a lot of paying work in my experience.

    • @lovisericachii4503
      @lovisericachii4503 Před 3 měsíci +23

      Well... with how things are with hollywood... Those mofos deserved to be replaced by AI

    • @yuglesstube
      @yuglesstube Před 3 měsíci +11

      Look at Sora. It's a video AI. Quite scary. A studio expansion was cancelled when the owner saw Sora.

    • @Studeb
      @Studeb Před 3 měsíci +100

      @lovisericachii4503 Well, well, well, what have we here: an anti-woke person being overjoyed at the lost jobs in the creative industry. We'll see how long it is before you too lose your job, because nobody is safe here.

    • @HanSolo__
      @HanSolo__ Před 3 měsíci +37

      There is also a visible overflow of, and fatigue with, this AI gunk on lots of internet platforms. It's disgusting to see everything around become more and more "meh", to put it mildly.
      YouTube already saw it coming, as they target and strike channels made entirely with AI models. Only footage from a camera operator is left as the 'live' thing.

    • @Meitti
      @Meitti Před 3 měsíci +9

      It's a bit of a tradeoff. Short freelance gigs illustrating or graphic-designing simple ads are gone, but a clever artist can also use AI to speed up some of the processes and create ads faster.

  • @lelik0911
    @lelik0911 Před 3 měsíci +121

    I appreciate the boldness of consulting firms offering predictions on the future of a nascent technology, as though they have any more insight than we do.

    • @M-dv1yj
      @M-dv1yj Před 3 měsíci

      Why?

    • @lelik0911
      @lelik0911 Před 3 měsíci

      McKinsey et al. have a long history of unsuccessful navel-gazing about the future; it's not uncommon for these predictions to be out by a few orders of magnitude. Search for McKinsey's market sizing of mobile phones. Or IBM and cloud tech. Or the early bullishness on the internet.
      The future is highly uncertain and the path of technology is unclear. Even where the benefits are clear, the Gartner Hype Cycle advises caution in predicting the pace of adoption or the ultimate end state of its utility. We're dealing with an unbounded problem.
      Maybe generative AI can replace 70% of tasks done by knowledge workers. Maybe it won't. Maybe it'll lead to significant productivity improvements. Maybe it won't. Maybe it'll displace labour. Maybe it'll increase the demand for labour. In future, all these questions will be resolved, and hindsight bias will lead us to believe that a certain narrative was neatly laid out. Until then, McKinsey et al. are in the same place the rest of us are: trying to forecast the direction of the wind from the flap of a butterfly's wings.

    • @take2762
      @take2762 Před 3 měsíci

      @M-dv1yj I'm pretty sure OP is being sarcastic.

    • @gamewarrior010
      @gamewarrior010 Před 3 měsíci +28

      @M-dv1yj In the 1980s, McKinsey thought the total addressable market for mobile phones was around 900,000 globally. They really don't have that much of a deeper understanding than the average viewer of this channel.

    • @TheManinBlack9054
      @TheManinBlack9054 Před 3 měsíci +10

      @gamewarrior010 No, they do; just because they made a mistake doesn't mean that some random ignorant bloke has the same level of expertise as them.

  • @SH-ly1uy
    @SH-ly1uy Před 3 měsíci +97

    The first serious video I've seen on the topic. So much better than all these sales bros going "AI is going to change the world within the next 2 years. Hire me and I'll tell you how."

    • @TheManinBlack9054
      @TheManinBlack9054 Před 3 měsíci +4

      But these tech bros are right, AI is incredibly powerful and is going to become only more powerful

    • @bugostare
      @bugostare Před 3 měsíci +12

      ​@@TheManinBlack9054No what you're thinking of is the power CONSUMPTION of so-called "AI", and that is something to worry about, but you got it backwards

    • @100c0c
      @100c0c Před 3 měsíci

      @bugostare It will for industrial jobs. It just needs to be as good as the average worker, or much cheaper and only a bit more error-prone. China is already automating their rail/track construction with AI.

    • @bugostare
      @bugostare Před 3 měsíci

      @@100c0c China? The same country where new bridges and skyscrapers crumble, and tunnels flood with thousands of people in them?
      The CCP is totally corrupt, incompetent and pretty much everything they say is a lie...
      Regardless, "AI" is a scam as well as it simply doesn't exist, and machine learning is absolutely not intelligent in any way, it is effectively a data analyst bot.
      Whether companies or governments choose to use it to replace jobs has nothing to do with how good it is at anything, just shows management and government incompetence.

    • @Fuckthis0341
      @Fuckthis0341 Před 3 měsíci +6

      Just like the last big thing, algorithms, everyone wants to hype something big and vague but usually can't name specifics. And if they do, experts in those fields quickly see how it's not going to replace people. Algorithmic automation caused tons of losses in real estate and insurance. In my organization they hired back all the people replaced by algorithms because the losses were unsustainable.

  • @jasonosunkoya
    @jasonosunkoya Před 3 měsíci +44

    Software engineer here... using LLMs to write code is like having a junior that you constantly have to go back to and tell, "No, that's not the solution." It's good at LeetCode, though, because there are so many already-done solutions to it on GitHub that have been used to train the model. Where it completely sucks is on specialised enterprise code. So my job feels safe for a good while. Until an LLM actually learns logic and reasoning, I'm not worried at all.

    • @Raletia
      @Raletia Před 3 měsíci +8

      An LLM is never going to use logic and reason; all it does is predict which token comes next in a string. That's literally it. Sure, that's really simplifying, but in the end the fact remains: an LLM doesn't "understand" anything at all, it has no concept of anything, and it's not using logic to solve any problems; it's simply predicting what comes next in a string. We'll need MUCH more sophisticated tools to actually do logic and reasoning. Also, there's the computing power problem: our best machines would struggle to simulate more neurons than an insect, or maybe at best a very small animal. That's a whole other problem to solve.
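
      To make the "predict what comes next" point concrete, here is a bare-bones autoregressive loop using word-bigram counts instead of a neural network. It captures none of an LLM's scale, training, or tokenization, only the generate-append-repeat shape of the procedure.

```python
# Bare-bones illustration of autoregressive generation: pick the most likely
# next word given the previous one, using counts from a tiny corpus. Real LLMs
# use transformers over subword tokens and billions of parameters, but the
# generation loop has this same "predict next, append, repeat" shape.
from collections import Counter, defaultdict

corpus = "the model predicts the next word and the next word follows the model".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

word = "the"
output = [word]
for _ in range(6):
    if not bigrams[word]:
        break
    word = bigrams[word].most_common(1)[0][0]   # greedy "decoding"
    output.append(word)
print(" ".join(output))
```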

    • @dontbeafool
      @dontbeafool Před 3 měsíci +2

      As a Python amateur, ChatGPT allowed me to create code that generated millions in revenue for the firm. I could not do that before. I can prototype things before hiring devs to implement them. I can automate things that would've taken me months.

    • @jasonosunkoya
      @jasonosunkoya Před 3 měsíci +4

      @dontbeafool A dev could also have written that Python code easily enough, then.

    • @dontbeafool
      @dontbeafool Před 3 měsíci +1

      @jasonosunkoya Indeed. But our devs are busy enough building complex systems. Why waste time and money having them build small things?

    • @carlpanzram7081
      @carlpanzram7081 Před 3 měsíci +2

      @Raletia Define "real understanding".
      If AI plays better chess than the best human chess player, beating them 100% of the time, what does that mean to you?
      AI can teach the best chess players new and novel chess strategies. How is that not a clear demonstration of understanding and reasoning?
      How is Sora not demonstrating that AI at least partly understands the visual aspects of the world, and can therefore estimate a whole bunch of physics?

  • @WorldinRooView
    @WorldinRooView Před 3 měsíci +116

    The skill moat you mention at the end is my gravest concern. I've been at my job for 13 years, and it's the expertise I gained over those years that makes me valuable to my employer.
    Now, with tasks being outsourced, either overseas via remote work or through AI to do the small and annoying things, you can't learn how the system works by pushing through the annoying things yourself. That is how humans learn efficiency, and perhaps new methods not thought of by the prior generation.
    Over the past few years, I feel like my workplace is falling backwards more than moving forwards. I can't fully work with the people I'm supposed to delegate to due to the time zone difference. So if they don't get to an urgent task, I have to do it.
    Lately I'm feeling this "AI" thing is literally a salesperson selling a bag of beans, hoping for some deus ex machina to save us from our grudging tasks, and to sell the customers 'a solution'. But in the end the "AI" is merely office workers analyzing data under grueling deadlines, not unlike the Wizard of Oz just being a man behind a curtain.
    The humans will do the work, but the machine will get the credit.

    • @epicfiend1999
      @epicfiend1999 Před 3 měsíci +2

      Well said.

    • @robertruffo2134
      @robertruffo2134 Před 3 měsíci +1

      @@epicfiend1999 Very well said

    • @j3i2i2yl7
      @j3i2i2yl7 Před 3 měsíci +9

      It seems to me that some upper management is inclined to think of employees 3 or more levels down from them as interchangeable, and that type of manager will be inclined to be very enthusiastic about adopting AI.

    • @KevinJDildonik
      @KevinJDildonik Před 3 měsíci +8

      100%. I've had employers in the banking industry talk about replacing everyone with AI. Remind me again of the legality of pasting people's private banking information into a random web form. Oh yeah, it's a felony. Small detail.

    • @TheManinBlack9054
      @TheManinBlack9054 Před 3 měsíci +1

      With all due respect, you do not understand how powerful and intelligent AI is going to get. It's not a hammer; it's a woodworker with a hammer.

  • @Tudor_Rusan
    @Tudor_Rusan Před 3 měsíci +75

    I'm a medical translator, and because I'm a fast typist I prefer translating from scratch to post-editing machine translations.
    Sometimes they are frighteningly smart, but it's a bit like the world's smartest two-year-old. You can't rely on it, especially for sensitive documents where you need humans in the loop.

    • @joejones9520
      @joejones9520 Před 3 měsíci +4

      it will rapidly improve...your comment may seem hopelessly dated even within a year.

    • @Tudor_Rusan
      @Tudor_Rusan Před 3 měsíci +21

      @joejones9520 I'll take my chances. They're useful tools, but should never be left unsupervised with sensitive information.
      A bit like self-driving vehicles: road cars are still a no-no, but aircraft still have pilots despite their tasks being mostly automated. Then you have farm vehicles in low-risk areas that can be automated.

    • @Srednicki123
      @Srednicki123 Před 3 měsíci +15

      @@joejones9520 how do you know? maybe your blind optimism will seem hopelessly naive in one year.

    • @fnorgen
      @fnorgen Před 3 měsíci

      ​@@joejones9520 See, the problem is that although it is certain that they will improve, it's hard to tell how quickly they'll improve in specific ways. For example, we can't make a proper robotic lawyer until we find a way to get a drastically lower hallucination rate, which might require a drastically different training strategy or network structure. It doesn't look like scale alone can solve every problem.
      There's also the possibility that AI capability might stagnate somewhat for years with only modest practical improvements, before some new technique is discovered which suddenly makes them drastically better in the span of a few months. One problem these days is for example that it's really hard to get high quality training data in the quantities that these models need to learn properly.
      Hell, in some ways more modern AI are arguably inferior to more primitive ones. I've found myself preferring to work with outdated Stable Diffusion 1.5 based models rather than the more modern SDXL. The old models make more blunders obviously, and aren't as good at following specific instructions, but I find they're way better at spitting out a wide range of possible outputs for any given input. They're also better at combining seemingly mismatched image elements in interesting ways. I just find them way more creative generally, adding all kinds of fun details unprompted, for better and worse. The new ones tend to just go with the most generic options unless prompted otherwise, requiring much more specific prompts to yield interesting results. At least this was the case the last time I played around with image generation.
      Basically, it's really hard to predict a timeline for when AI will get certain capabilities. Research tends to get spectacularly stuck on tasks that were expected to be easy, while seemingly impossible tasks suddenly become very possible. That's how we've suddenly ended up in a world with quite a lot of AI-generated artwork, but few robo-taxis.

    • @joejones9520
      @joejones9520 Před 3 měsíci

      @Srednicki123 "may" means I don't know, idjit

  • @kulls13
    @kulls13 Před 3 měsíci +132

    I work in a manufacturing shop and I've used AI to quickly create code to complete certain tasks. We don't have any developers on site obviously and some of our coding needs are fairly simple. AI has allowed me to create simple programs to complete a repetitive task without needing a programmer.
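
    For illustration, the kind of small repetitive-task script described above might look like the following; the file name and column names are hypothetical, not taken from the commenter's shop.

```python
# Example of small shop-floor automation: total up scrap counts per machine
# from a daily CSV export. The file name and columns ("machine",
# "scrap_count") are invented for this sketch.
import csv
from collections import defaultdict

totals = defaultdict(int)
with open("daily_production.csv", newline="") as f:
    for row in csv.DictReader(f):
        totals[row["machine"]] += int(row["scrap_count"])

for machine, scrap in sorted(totals.items()):
    print(f"{machine}: {scrap} scrapped parts")
```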

    • @mikeynth7919
      @mikeynth7919 Před 3 měsíci +6

      I was wondering a bit about that. The AI seemed to be good at grabbing things that are predictable such as a chess game like Go, or writing basic code for things that have been done before, but moving out into something less cut-and-dried it again fails. With the consultant study, I kind of thought that each consultant is an individual with preferences based on individual education and experiences. Sure, the AI can help clear up basic stuff (what assistants and aides are for) but coming to a conclusion when different consultants may come to different ones all honestly? Yeah - no.

    • @KevinJDildonik
      @KevinJDildonik Před 3 měsíci

      If you're too lazy to hire a college intern. And nobody notices you pasting sensitive information into a third party web site. Then yeah AI is for you. Also, enjoy in a month when your whole system goes down due to a crypto scam. Because Jesus Christ your security is garbage.

    • @TheManinBlack9054
      @TheManinBlack9054 Před 3 měsíci +14

      @mikeynth7919 "things that are predictable such as a chess game like Go": I don't think you understand how Go is played. It's absolutely not like chess; it's far more complex and far less predictable. That's like comparing tic-tac-toe with chess.

    • @ivok9846
      @ivok9846 Před 3 měsíci +1

      @@TheManinBlack9054 both chess and go are useless to humans. along with machines that play it. and humans that made those one purpose machines. and along with humans that play it, if that's only thing they do

    • @carlpanzram7081
      @carlpanzram7081 Před 3 měsíci +6

      @ivok9846 Absolutely brainless take. Not only do you not understand the nature of games, you also miss the point of the value of cognitive work.
      The same capacity that allows us to play chess enables us to plan any task or abstract process in the future.
      If AI can beat any human in chess and Go, how long will it take until it can beat any human at any task?

  • @alexanderclaylavin
    @alexanderclaylavin Před 3 měsíci +7

    I asked the Microsoft AI search bar that I found one day on my desktop a very arcane question. It got the question 90% correct, and its incorrect answer would fool someone who did not know otherwise.

  • @Ringofire280
    @Ringofire280 Před 3 měsíci +42

    How do I square the results of this study with the fact that consulting firms don't actually offer real value to firms contracting them regardless of AI usage?

    • @jimbojimbo6873
      @jimbojimbo6873 Před 3 měsíci +14

      Consultancies are just massive marketing machines that latch onto the latest trend to sell a bit of work on stuff they have no expertise or product in.
      I’m a consultant

    • @philallen7626
      @philallen7626 Před 3 měsíci +16

      As far as I can tell, consultants are just arse covering for management. Management can take credit if things go well, but if they go badly, just blame the consultants.

    • @phonyalias7574
      @phonyalias7574 Před 3 měsíci +8

      @@jimbojimbo6873 Not so sure about that. The main value is it gives management cover, because it's this outside "independent" agency that agrees with implementing something, and tries to do it. If it works out, management gets credit, and if it doesn't the consultancy takes the blame.

    • @doncapo732
      @doncapo732 Před 3 měsíci +4

      Deloitte at our company...🤣

    • @amicaaranearum
      @amicaaranearum Před 3 měsíci

      Management consulting is mostly a BS job performed by 20-somethings with little actual business experience, so it doesn’t surprise me that AI was helpful to them.

  • @guyswartwood3924
    @guyswartwood3924 Před 3 měsíci +19

    As a software engineer, I use Copilot to assist in making software. I do find it helpful, but as it stands, I cannot trust it to write good software. I generally find its answers wrong about 35% of the time when asking more complex questions, which is when I am asking it in the first place. A feature I do really like about Copilot is that it sees the other code files I am looking at and offers helpful suggestions for the next line I am writing. Right now I don't feel like my job is threatened by AI, but who knows about the future...

    • @dstick14
      @dstick14 Před 3 měsíci +1

      I was recently working with a code base where my task involved translating some C++ enums to Java enums. I thought Copilot would be able to do this easily, since those enums were quite lengthy. Oh, how wrong I was.....

    • @wandilekhumalo7062
      @wandilekhumalo7062 Před 3 měsíci +1

      How do you deal with the security concerns? Surely giving a large conglomerate access to your codebase is dangerous, right?

    • @Zoltan1251
      @Zoltan1251 Před 3 měsíci +2

      I am in finance, so accounting basically. A normal person on the street would expect accountants to be replaced first. Now, look at that: actual artists and even software engineers are able to use AI, while it cannot even do simple accounting tasks for us. What a world we live in.

    • @Saliferous
      @Saliferous Před 2 měsíci

      @wandilekhumalo7062 That's my concern. Everyone is jumping in, but these companies have shown that they don't believe copyright or safety or privacy is a thing. If you create something with AI, what's the assurance that they aren't using your code to train their models and basically stealing your trade secrets? What use are these tools if you can't copyright anything they make? And everyone is able to copy your results.

  • @mwwhited
    @mwwhited Před 3 měsíci +18

    Part of my role is to examine technology to make sure my fellow developers and our clients are well informed and using the right tools. So far my personal experiments with AI show similar results to these studies. The models are okay at easy, highly repetitive and duplicative work but not very good at highly skilled/technical work. They are good at making things up or doing stuff that has been done hundreds of times in their training data, but they struggle with creative work, and it's nearly impossible to prevent the "hallucinations" where the models fabricate something when they don't actually know the answer.

    • @Insideoutcest
      @Insideoutcest Před 3 měsíci

      Easily the worst part of it is the error checking. Because this is just a sophisticated grammar tree, there are no higher frames of reference to understand the completed product as it was manifested a priori. It is what I would call a "win-more" tool: a tool that works only so far as you could supplant it entirely. That is not helpful, and it wastes my time if anything. I can parse information more intelligently than ChatGPT, and cutting through the minutiae actually gets easier, not harder, as you become an actualized troubleshooter/creator.

  • @poornoodle9851
    @poornoodle9851 Před 3 měsíci +13

    AI is like a very efficient but very unreliable employee. Can be very fast on simple tasks but may cause more problems for everyone else when they make mistakes on complex things.

    • @sesam2k998
      @sesam2k998 Před 3 měsíci +2

      It will keep making the mistakes over and over. A person probably won't.

  • @Lithilic
    @Lithilic Před 3 měsíci +8

    I have started to use an AI-assisted search tool for research sourcing in my work. It is useful for finding answers to questions that are difficult to locate by only querying search terms in a database; however, I've found that you need to be familiar enough with the subject matter you are researching to properly screen its responses, which can be wrong or stated with more certainty than is warranted.

  • @tragicslip
    @tragicslip Před 3 měsíci +25

    I asked Copilot about an obscure novel and its characters. It made up a story and character details using the title and names provided, while correctly identifying the author of the real novel.

  • @yokothespacewhale
    @yokothespacewhale Před 3 měsíci +33

    OK, I'll bite.
    Speaking strictly about the work setting, I have tried to use the assistant in Microsoft's Databricks as a replacement for googling obscure functions and methods, etc. It will often give me code that doesn't do what I want while clearly understanding what I want it to do (from viewing my own code alone), and it will actually give me functions that will not work in Databricks, even after I send it the error message saying as much as the input.
    In short, at the moment at least, it would have been an awesome tool for junior-level me a few years ago, as a smarter search algorithm. But even then it's still very much the "I'm Feeling Lucky" Google button.

    • @mandisaw
      @mandisaw Před 3 měsíci +4

      Tried using the pre-Copilot tool in Visual Studio to generate doc headers for some 3rd-party libraries. It misinterpreted what the classes did, and in some methods mixed in parameters that didn't exist. Also couldn't handle/understand overrides - no consistency. It's more trouble than just reading/writing docs myself.

    • @aravindpallippara1577
      @aravindpallippara1577 Před 3 měsíci +1

      @mandisaw Yep, mixing in arguments for functions/methods where they don't exist, or removing them when they do exist, has been a common issue with GitHub Copilot in my experience.
      It occasionally just writes out Turkish phrases for me as well, out of the blue.

    • @mandisaw
      @mandisaw Před 3 měsíci +1

      @aravindpallippara1577 Maybe they should sell it as an "immersive" language tutor 😄

  • @IllIl
    @IllIl Před 3 měsíci +14

    Absolutely fascinating video! Thanks, Patrick. A lot of what you mentioned resonates with what I intuitively gleaned from having used LLMs personally and at work. "The jagged frontier" is such an excellent way of talking about LLM capabilities. And it's only through trial and error that one gets to suss out where those frontiers lie. Less experienced workers may get the biggest boost, but also have the greatest risk of blindly using incorrect outputs.

  • @curie3938
    @curie3938 Před 3 měsíci +13

    I think I recently spoke to an AI generated customer service rep from India, it perfectly replicated the same confusing, unintelligible live persons I have spoken to in the past, heavy accent and all.

    • @RogueReplicant
      @RogueReplicant Před 3 měsíci +2

      Ikr, but the Indians are claiming to be "at the forefront of A.I.", lol

    • @N9O
      @N9O Před 5 dny

      Regarding customer support, I find it incredibly annoying that nowadays you can't get actual humans for support that easily anymore. First you have to talk to some AI chatbot trying to link you to FAQs that you have probably already read. Only after you tell it three times that your question wasn't answered does it link you to the actual support hotline, and then you have to start the conversation from scratch.

  • @Keiranful
    @Keiranful Před 3 měsíci +7

    In business development I use gpt to get me started on writing texts that will then be heavily edited, or as a research tool to point me in the direction of the information I seek.

  • @XYZ-ft4hw
    @XYZ-ft4hw Před 3 měsíci +10

    Excellent overview.
    I love GPT for writing emails. It saves maybe a few hours a week.
    Beyond that... it's easier to google or look up source material than to double-check whether the output is accurate.
    The confusion is the subtlety of how the models work vs what people imagine they are doing. Stephen Wolfram has the most intuitive technical explanation I have seen, on his blog.

    • @ivok9846
      @ivok9846 Před 3 měsíci

      date/subject line of that blog?

  • @easygreasy3989
    @easygreasy3989 Před 3 měsíci

    Such a good set, set up and delivery. Thanks for the value ❤

  • @9NZ4
    @9NZ4 Před 3 měsíci +7

    It's unclear how exactly performance was measured. What does a 17% boost mean? Did they complete tasks faster? If it's about quality, then how is that measured?

    • @Docs4
      @Docs4 Před měsícem

      I just got access to Copilot two months ago, and I am testing it now for use cases. Well, let me tell ya, I haven't found one. I already automated my boring tasks with VBA and Power Apps. All my e-mails are already automatic in nature. So it does not help me at all; I am at peak efficiency already. OK, well, I did find one use case: making e-mails to higher management more 'ass-licker type', ya know what I mean. But that doesn't boost productivity; it just makes the tone of rough e-mails 'nicer' for narcissist management types.

  • @stribika0
    @stribika0 Před 3 měsíci +32

    It's so good at adding the expected bullshit to my emails. It can come up with clearly horrible options so that management feels like they had a choice, it can pretend the good option was their idea, it can completely automate bikeshedding, etc. It's awesome.

  • @MrPDLopez
    @MrPDLopez Před 3 měsíci +5

    Thank you Patrick! I am happy to say I recognized some of my own ideas about AI as you spelled them out for all of us. I cannot use AI in my workplace because of safety and confidentiality policies, maybe when an internal knowledge base (off-bounds to everyone else) can be coupled with an LLM I may get the opportunity to use it for work. Otherwise it has been a fun ride when I use AI at home to learn prompt engineering

  • @timjenkins7075
    @timjenkins7075 Před 3 měsíci

    I’ve been waiting all week for another video. Thanks!

  • @robertclark3258
    @robertclark3258 Před 28 dny

    Thank you for your, as always, outstanding commentary!

  • @Kyrieru
    @Kyrieru Před 3 měsíci +14

    I'm an indie game dev, and it's currently not possible to replace most tasks with AI (sounds, art, animation, coding, design). The results are too random and lack coherence across works, and any desire for specificity in style or execution makes AI worthless. The prompts for art are too broad to describe low-level but important things.
    I'm hoping that AI gets some bigger development in terms of tools which help artists; for example, using AI to create digital brushes which perfectly mimic paint rather than mimicking "artwork". Mimicking artwork is not useful, but mimicking paint is.

    • @mandisaw
      @mandisaw Před 3 měsíci +6

      Adobe has had the tools/scripting API to do dynamic brushes (and a lot more) for ages. Similarly a lot of the AI use-cases I've heard in game-dev circles are often things that Unity (and presumably Unreal) already can do, or that are still better done by humans.
      I think folks - indies & large companies alike - are looking for that "make it cheap+fast+good" solution, and there's just no such thing.

  • @murfelpurf5556
    @murfelpurf5556 Před 3 měsíci +8

    My concern is about the long-term skillset of human teams: skills will atrophy over time because people stop using them. AI may truly reduce long-term productivity in exchange for short-term gains.

  • @janzalud216
    @janzalud216 Před 3 měsíci +2

    this is so insanely interesting! Thank you Patrick!!

  • @besnico
    @besnico Před 3 měsíci

    I have watched countless videos of yours, being initially driven to your channel by a video from the Plain Bagel guy. While I love all your finance/money/business content, it continues to blow my mind how your ability to research topics allows you to go well beyond the norm of financial analysis - you are giving me (and I'm sure many others watching these) a lot of value, whilst being entertaining. Please keep doing what you're doing!!

  • @jerryburg6564
    @jerryburg6564 Před 3 měsíci +3

    I tried to get ChatGPT to write brief narratives for inspection reports. The target audience was insurance underwriters. The AI could write a narrative when provided information from the inspector, but it always sounded like a real estate pitch. I could never get the result expressed properly and finally gave up. It was faster to write it myself because I didn’t need to rewrite the resulting text.

  • @Quacken705
    @Quacken705 Před 3 měsíci +17

    I've tried a few of them. The marginal positive use they have is outweighed by the serious downsides that are immediately apparent. Just like the lawyer who saved time by having it write his brief and then had his tail handed to him because the LLM invented legal precedent whole-cloth, it's not as much of a blessing as the collective wisdom would have you believe.

    • @Custodian123
      @Custodian123 Před 3 měsíci

      So nothing new then? Nothing more than natural selection, fools will use the tools poorly and the rest will benefit.
      The stove is hot.

    • @mandisaw
      @mandisaw Před 3 měsíci

      More than one lawyer! We've had two Federal cases in NYS and one Canadian case, that I'm aware of. "Saving" 2hrs gets you 2yrs' suspension 😮

  • @geospatialindex
    @geospatialindex Před 3 měsíci +1

    thoughtful and useful. thank you for the caution patrick you are a gem

  • @tosvarsan5727
    @tosvarsan5727 Před 3 měsíci

    This was an really interesting episode, bravo!

  • @greenockscatman
    @greenockscatman Před 3 měsíci +3

    I saw someone on YouTube do a tutorial about how to use AI to help with share price analysis. He never fed it any recent price data, however, so his AI helper hallucinated a bunch of nonsense every time. Like using a Magic 8-Ball for your trading insight.

  • @aL3891_
    @aL3891_ Před 3 měsíci +28

    It can be, but in a much _much_ narrower scope than most people think.
    Also, pretty baller move to have an AI company sponsor this video.

  • @hws888
    @hws888 Před 3 měsíci +1

    Thanks for actually providing links to the papers. This is so rare among YT people 😀

  • @marcocatano554
    @marcocatano554 Před 3 měsíci

    Excellent class! thanks
    No really, I wish I'd had more teachers like you at the Uni
    And this was Free!!!

  • @henson2k
    @henson2k Před 3 měsíci +5

    In IT, junior positions are already heavily affected by layoffs, and AI just makes it harder for new people to get into the industry.

  • @ddhurry4168
    @ddhurry4168 Před 3 měsíci +5

    Air Canada was recently ruled liable for bad advice that its AI chatbot generated in response to customer questions. The company had argued that the chatbot was an entirely separate entity.

    • @greebj
      @greebj Před 3 měsíci +3

      The really interesting case will arise after they introduce a condition of use disclaimer "all responses are for information purposes only and you acknowledge by using the chatbot that you cannot rely on the correctness or truthfulness of any output, which is not a substitute for the wording of the text of the relevant policy or one of our employees"
      Because we all know how many people routinely do not read T&Cs

    • @ddhurry4168
      @ddhurry4168 Před 3 měsíci +10

      @greebj The court basically ruled that there is no reason a consumer would judge one part of the website accurate and another part untrustworthy. So if they want to use it as a website feature, they are liable for what it says.

    • @creepersonspeed5490
      @creepersonspeed5490 Před 3 měsíci

      @@greebj you can use RAG to improve outputs, but my point is: it's meant to help users find information, and if the information it finds isn't accurate, it isn't really finding information... meaning your users either get stuck in chatbot loops and get pissed off, or get the wrong information... which doesn't help you as a business. Just test your fucking software, damnit...
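
      A minimal sketch of the retrieve-then-prompt pattern being referred to; the documents, the keyword-overlap scoring, and the wording are toy placeholders, not anything from an actual chatbot:

          # Toy retrieval-augmented generation (RAG): ground the model's answer in
          # retrieved policy text instead of letting it free-associate.
          DOCS = [
              "Refund policy: bereavement fares may be requested within 90 days of travel.",
              "Baggage policy: one carry-on bag up to 10 kg is included on all fares.",
          ]

          def retrieve(query: str, docs: list[str]) -> str:
              """Return the document sharing the most words with the query."""
              q = set(query.lower().split())
              return max(docs, key=lambda d: len(q & set(d.lower().split())))

          def build_prompt(query: str) -> str:
              context = retrieve(query, DOCS)
              return (
                  "Answer using ONLY the context below. If the context does not "
                  "contain the answer, say you don't know.\n"
                  f"Context: {context}\n"
                  f"Question: {query}"
              )

          print(build_prompt("Can I get a refund on a bereavement fare?"))

      A real system would swap the keyword match for a vector store and send the built prompt to an actual LLM, but the grounding step is the same; whether the retrieved text is accurate is still on the business.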

  • @andresmlinar
    @andresmlinar Před 3 měsíci

    Always excellent Patrick!

  • @expatlifestyle2000
    @expatlifestyle2000 Před 3 měsíci

    Great video about the reality of AI. Thanks Patrick.

  • @user-kv5gh6le6y
    @user-kv5gh6le6y Před 3 měsíci +6

    The real and unsolvable problem with AI is the inability to interrogate and understand the results it generates.

    • @CapedBojji
      @CapedBojji Před 12 dny

      Prove to me you understand what you just wrote

    • @user-kv5gh6le6y
      @user-kv5gh6le6y Před 11 dny

      @@CapedBojji I’m curious to know what, for you, would constitute “proof” and why you are interested in me, specifically, providing it.
      That said, the ability to “interrogate and understand the results” requires a volume of data processing that is beyond human capacity. That being so, one can never be certain whether information in support of a produced result does or does not exist.
      And if, when you do try to interrogate said results, you do find data which supports them, you don’t know whether that data was generated by an AI or collected and processed by people.

    • @unvergebeneid
      @unvergebeneid Před 9 dny

      "Unsolvable" is a strong word. It is an already existing technique to feed a model's output back into the model. A model can in fact find fault with its own output. It can also improve on its own output or decide which of several output variants is best. Far from being unsolvable, models interrogating their own output and iterating over it is one of the more heavily researched avenues towards better results rn.

    • @user-kv5gh6le6y
      @user-kv5gh6le6y Před 7 dny

      @@unvergebeneid It is a strong word, and it is correctly used.
      Your conjecture about the models getting better at interrogating themselves simply illustrates my point that humans cannot.

    • @unvergebeneid
      @unvergebeneid Před 7 dny

      @@user-kv5gh6le6y oh, I completely misunderstood you. I thought you were talking about AI not being able to interrogate its own results! You're talking about interpretability!
      Well, I mean that's also an active area of research with some surprising progress. That being said, while I still find "unsolvable" to be too strong a word, along with all of safety research being in its infancy, this is something that worries me a lot.

  • @bigeteum
    @bigeteum Před 3 měsíci +3

    I do use LLMs, but what I found is that they are good mostly for boilerplate knowledge. I use them like a sophisticated Google search. For example, I do a lot of data graphics. I don't know all the plotting code, but I can ask the AI for a starter. After that, customization is really precarious with AI. TBH, I don't think LLMs can solve this; they would need to understand the graphics packages and the functions under the hood to do good customization.
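
    A minimal example of the kind of plot "starter" described above (matplotlib, with made-up data); any real customization still has to be done by hand:

        # Basic labelled line plot to use as a starting point and then tweak.
        import matplotlib.pyplot as plt

        x = [1, 2, 3, 4, 5]
        y = [2.1, 4.0, 8.3, 15.9, 31.7]

        fig, ax = plt.subplots(figsize=(6, 4))
        ax.plot(x, y, marker="o", label="measurements")
        ax.set_xlabel("trial")
        ax.set_ylabel("value")
        ax.set_title("Starter plot to customize")
        ax.legend()
        fig.tight_layout()
        plt.show()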

  • @lomotil3370
    @lomotil3370 Před 3 měsíci +2

    🎯 Key Takeaways for quick navigation:
    00:00 *Generative AI tipping point.*
    01:49 *AI's surprising capabilities and failures.*
    03:44 *Generative AI's ease of use.*
    04:36 *McKinsey: Generative AI impact by 2030.*
    05:55 *Harvard study on AI's impact.*
    08:15 *AI improves productivity and quality.*
    13:33 *AI benefits lower-skilled workers.*
    17:38 *Successful AI use strategies.*
    18:57 *AI output validation importance.*
    21:08 *AI's impact on employment.*
    22:32 *Future challenges and questions.*
    Made with HARPA AI

  • @user-lo4er8wy9l
    @user-lo4er8wy9l Před 3 měsíci +1

    great perspectives presented.

  • @davidedelson9061
    @davidedelson9061 Před 3 měsíci +16

    The thing I have found AI most helpful for is in reducing time spent on tasks you cannot opt to not do, but are both time consuming and relatively low ROI. Not everything you do in a workday requires high precision or competence, and if you can get through that stuff faster, to more effectively prioritize your labor that *is* high ROI, that's a win, imho. The most obvious one among my colleagues is in the automation of writing one's quarterly performance review, which is something you can easily spend one or more full workdays on, which has relatively little value to anyone, unless you are in that moment attempting to get a significant level up. Otherwise, it's just a tax that can be more easily paid by the AI than by you.

    • @phonyalias7574
      @phonyalias7574 Před 3 měsíci +3

      This just becomes an arms race though, with AI to write your performance review and AI to judge your review. Essentially it's AI judging AI output, much like the current employment market where AI makes your linked in profile, AI writes your resume and cover letter, AI acts as the first hiring screen to let resumes through a filter.

    • @creedolala6918
      @creedolala6918 Před 3 měsíci +2

      My limited experience confirms this. I use AI background removal for some product photos. I have the skill to do it in Photoshop, but the AI website does it in 2 seconds instead of 2 to 5 minutes. ChatGPT also assisted with a script for uploading the images.
      So far it hasn't made my particular job obsolete, it's just made it a little easier.
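
      A rough sketch of what such an upload script might look like; the folder name, endpoint URL, form field, and token are hypothetical placeholders:

          # Loop over processed product photos and POST each one to an API endpoint.
          from pathlib import Path
          import requests

          UPLOAD_URL = "https://example.com/api/products/images"   # placeholder
          TOKEN = "YOUR_API_TOKEN"                                  # placeholder

          for image_path in sorted(Path("processed_photos").glob("*.png")):
              with image_path.open("rb") as f:
                  resp = requests.post(
                      UPLOAD_URL,
                      headers={"Authorization": f"Bearer {TOKEN}"},
                      files={"image": (image_path.name, f, "image/png")},
                      timeout=30,
                  )
              resp.raise_for_status()  # stop loudly if the server rejects an upload
              print(f"uploaded {image_path.name}")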

    • @amicaaranearum
      @amicaaranearum Před 3 měsíci +1

      This is exactly how I use it: for simple tasks that have low stakes, low importance, and low value.

    • @TeamSprocket
      @TeamSprocket Před 11 dny +1

      This is an argument more for removing a process than spending time and money automating it.

  • @jolly-rancher
    @jolly-rancher Před 3 měsíci +3

    Nice blazer Patrick

  • @faroleiro
    @faroleiro Před 3 měsíci

    Great video complementing my previous opinions, thanks

  • @PBoyle
    @PBoyle  Před 3 měsíci +11

    Thanks to our growing list of Patreon Sponsors and Channel Members for supporting the channel. www.patreon.com/PatrickBoyleOnFinance : Paul Rohrbaugh, Douglas Caldwell, Greg Blake, Michal Lacko, Dougald Middleton, David O'Connor, Douglas Caldwell, Carsten Baukrowitz, hyunjung Kim, Robert Wave, Jason Young, Ness Jung, Ben Brown, yourcheapdate, Dorothy Watson, Michael A Mayo, Chris Deister, Fredrick Saupe, Winston Wolfe, Adrian, Aaron Rose, Greg Thatcher, Chris Nicholls, Stephen, Joshua Rosenthal, Corgi, Adi, Alex C, maRiano polidoRi, Joe Del Vicario, Marcio Andreazzi, Stefan Alexander, Stefan Penner, Scott Guthery, Peter Bočan, Luis Carmona, Keith Elkin, Claire Walsh, Marek Novák, Richard Stagg, Stephen Mortimer, Heinrich, Edgar De Sola, Sprite_tm, Wade Hobbs, Julie, Gregory Mahoney, Tom, Andre Michel, MrLuigi1138, sugarfrosted, Justin Sublette, Stephen Walker, Daniel Soderberg, John Tran, Noel Kurth, Alex Do, Simon Crosby, Gary Yrag, Mattia Midali, Dominique Buri, Sebastian, Charles, C.J. Christie, Daniel, David Schirrmacher, Ultramagic, Tim Jamison, Deborah R. Moore, Sam Freed,Mike Farmwald, DaFlesh, Michael Wilson, Peter Weiden, Adam Stickney, Agatha DeStories, Suzy Maclay, scott johnson, Brian K Lee, Jonathan Metter, freebird, Alexander E F, Forrest Mobley, Matthew Colter, lee beville, Fernanda Alario, William j Murphy, Atanas Atanasov, Maximiliano Rios, WhiskeyTuesday, Callum McLean, Christopher Lesner, Ivo Stoicov, William Ching, Georgios Kontogiannis, Arvid, Dru Hill, Todd Gross, D F CICU, michael briggs, JAG, Pjotr Bekkering, Jason Harner, Nesh Hassan, Brainless, Ziad Azam, Ed, Artiom Casapu, Eric Holloman, ML, Meee, Carlos Arellano, Paul McCourt, Simon Bone, Richard Hagen, joel köykkä, Alan Medina, Chris Rock, Vik, Fly Girl, james brummel, Jessie Chiu, M G, Olivier Goemans, Martin Dráb, Boris Badinoff, John Way, eliott, Bill Walsh, Stephen Fotos, Brian McCullough, Sarah, Jonathan Horn, steel, Izidor Vetrih, Brian W Bush, James Hoctor, Eduardo, Jay T, Claude Chevroulet, Davíð Örn Jóhannesson, storm, Janusz Wieczorek, D Vidot, Christopher Boersma, Stephan Prinz, Norman A. 
Letterman, georgejr, Keanu Thierolf, Jeffrey, Matthew Berry, pawel irisik, Daniel Ralea, Chris Davey, Michael Jones, Alfred, Ekaterina Lukyanets, Scott Gardner, Viktor Nilsson, Martin Esser, Paul Hilscher, Eric, Larry, Nam Nguyen, Lukas Braszus, hyeora,Swain Gant, Kirk Naylor-Vane, Earnest Williams, Subliminal Transformation, Kurt Mueller, KoolJBlack, MrDietsam, Saaientist, Shaun Alexander, Angelo Rauseo, Bo Grünberger, Henk S, Okke, Michael Chow, TheGabornator, Andrew Backer, Olivia Ney, Zachary Tu, Andrew Price, Alexandre Mah, Jean-Philippe Lemoussu, Gautham Chandra, Heather Meeker, John Martin, Daniel Taylor, Nishil, Nigel Knight, gavin, Arjun K.S, Louis Görtz, Jordan Millar, Molly Carr,Joshua, Shaun Deanesh, Eric Bowden, Felix Goroncy, helter_seltzer, Zhngy, lazypikachu23, Compuart, Tom Eccles, AT, Adgn, STEPHEN INGRAM, Jeremy King, Clement Schoepfer, M, A M, Benjamin, waziam, Deb-Deb, Dave Jones, Julien Leveille, Piotr Kłos, Chan Mun Kay, Kirandeep Kaur, Reagan Glazier, Jacob Warbrick, David Kavanagh, Kalimero, Omer Secer, Yura Vladimirovich, Alexander List, korede oguntuga, Thomas Foster, Zoe Nolan, Mihai, Bolutife Ogunsuyi, Hong Phuc Luong, Old Ulysses, Kerry McClain Paye Mann, Rolf-Are Åbotsvik, Erik Johansson, Nay Lin Tun, Genji, Tom Sinnott, Sean Wheeler, Tom, Артем Мельников, Matthew Loos, Jaroslav Tupý, The Collier Report, Sola F, Rick Thor, Denis R, jugakalpa das, vicco55, vasan krish, DataLog, Johanes Sugiharto, Mark Pascarella, Gregory Gleason, Browning Mank, lulu minator, Mario Stemmann, Christopher Leigh, Michael Bascom, heathen99, Taivo Hiielaid, TheLunarBear, Scott Guthery, Irmantas Joksas, Leopoldo Silva, Henri Morse, Tiger, Angie at Work, francois meunier, Greg Thatcher, justine waje, Chris Deister, Peng Kuan Soh, Justin Subtle, John Spenceley, Gary Manotoc, Mauricio Villalobos B, Max Kaye, Serene Cynic, Yan Babitski, faraz arabi, Marcos Cuellar, Jay Hart, Petteri Korhonen, Safira Wibawa, Matthew Twomey, Adi Shafir, Dablo Escobud, Vivian Pang, Ian Sinclair, doug ritchie, Rod Whelan, Bob Wang, George O, Zephyral, Stefano Angioletti, Sam Searle, Travis Glanzer, Hazman Elias, Alex Sss, saylesma, Jennifer Settle, Anh Minh, Dan Sellers, David H Heinrich, Chris Chia, David Hay, Sandro, Leona, Yan Dubin, Genji, Brian Shaw, neil mclure, Francis Torok, Jeff Page, Stephen Heiner, Tucker Leavitt, Peter, Tadas Šubonis, Adam, Antonio, Patrick Alexander, Greg L, Paul Roland Carlos Garcia Cabral, NotThatDan, Diarmuid Kelly, Juanita Lantini, hb, Martin, Julius Schulte, Yixuan Zheng, Greater Fool, Katja K, neosama, Shivani N, HoneyBadger, Hamish Ivey-Law, Ed, Richárd Nagyfi, griffll8, First & Last, Oliver Sun and Yoshinao Kumaga

    • @PsRohrbaugh
      @PsRohrbaugh Před 3 měsíci +2

      I'm so proud to be your #1 Patreon. It's a small price to pay for the value of your content.

  • @richdobbs6595
    @richdobbs6595 Před 3 měsíci +19

    I would bet that the improvements in AI will lead to further progression into industrial feudalism. If a lower skilled worker can still get the job done, it will be easier to use favoritism and non-job performance issues like loyalty and conformance in selecting employees to hire and retain.

    • @paulmcgreevy3011
      @paulmcgreevy3011 Před 3 měsíci +1

      Your favourite employee is usually your most productive. However if you choose to retain an employee you like over one you don’t like then that’s a reasonable choice since you probably think that person will be better for the business overall.

    • @richdobbs6595
      @richdobbs6595 Před 3 měsíci

      @@paulmcgreevy3011 Sure, but it sucks if you are trying to compete based on straight-forward job performance. If you have to be royalty to be king, that is sort of the essence of feudalism. Since you can define best for the business with any number of objective functions, that is pretty much a null statement.

  • @captainfatfoot2176
    @captainfatfoot2176 Před 3 měsíci +19

    AI seems potentially useful for employees who are already knowledgeable, but I have to wonder whether it will stunt the growth of employees.

    • @Omniryu
      @Omniryu Před 3 měsíci +8

      It definitely will. No need for a jr to grow, if they can already punch up. Which (like in the video) will also take away from them understanding what's good or bad.

    • @mandisaw
      @mandisaw Před 3 měsíci +7

      Already seeing it with student & aspiring/junior programmers. People are lazy at their core, and while that's a great motivator for innovation, collaboration, and optimization, it also means a lot of folks will use a faulty crutch (or cheat!) instead of doing the difficult work of learning. I'm really worried about how much crappy code is gonna make its way into public-facing and mission-critical systems😞

    • @Omniryu
      @Omniryu Před 3 měsíci +2

      @@mandisaw I feel like it depends on whether wherever they work allows AI. I hope companies are wise enough not to allow AI for coding tests. As an artist, I'm also seeing it from time to time, mostly from people who couldn't draw that well and are now using it. A lot of "look at this cool thing I did" without the ability to actually see everything that's wrong with it or why it's not good. They're missing out on basic art direction and the ability to self-critique.

    • @mandisaw
      @mandisaw Před 3 měsíci +2

      @@Omniryu A few higher ed & workplace surveys have already come out showing that a significant % of students & workers are/would continue to use genAI even if their school/workplace explicitly forbade it. Last summer, there was a massive dip in ChatGPT usage corresponding roughly with the summer school holidays.
      Even among school, organization, & business leaders, they haven't really coalesced around a stance irt. genAI usage, and most haven't released guidance on this stuff (if they've drafted any at all).

    • @mandisaw
      @mandisaw Před 3 měsíci +3

      @@Omniryu As for the lack of self-awareness, that's exactly the problem, on both sides. The folks who are below-the-bar in a field can't assess whether the AI use is helping or harming their skill-growth. And the "audience" presumably will become accustomed to vaguely lousy products, maybe complaining occasionally, but never able to put their finger on just why it doesn't 'click'.

  • @metaman1982
    @metaman1982 Před měsícem

    That was a smooth transition into an ad. Had to do a double take to work out when it was time to tune out!😂

  • @danremenyi1179
    @danremenyi1179 Před 3 měsíci

    Patrick, a very useful description of the current state of the technology's usefulness

  • @jextra1313
    @jextra1313 Před 3 měsíci +4

    AI is just going to make us dumber. Imagine a people that haven't needed to generate ideas with their own mind for 20 generations.

  • @Phantom-mk4kp
    @Phantom-mk4kp Před 3 měsíci +4

    I gave ChatGPT a simple gear ratio and torque task. After ridiculous results, and after prompting it that it had made a mistake (each time followed by "I apologise, you are correct"), I gave up after four attempts.
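
    For reference, the relation an ideal gear pair should obey, as a quick sanity check on that kind of answer (the numbers here are made up, and real gearboxes add losses):

        # Ideal gear pair: output torque scales with the ratio, output speed inversely.
        def gear_output(torque_in_nm: float, rpm_in: float,
                        driven_teeth: int, driving_teeth: int) -> tuple[float, float]:
            ratio = driven_teeth / driving_teeth      # e.g. 60/20 = 3:1 reduction
            torque_out = torque_in_nm * ratio         # torque multiplied by the ratio
            rpm_out = rpm_in / ratio                  # speed divided by the ratio
            return torque_out, rpm_out

        print(gear_output(10.0, 1500, driven_teeth=60, driving_teeth=20))
        # -> (30.0, 500.0): three times the torque at a third of the speed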

    • @mandisaw
      @mandisaw Před 3 měsíci

      It's closer to autocomplete than a calculator. Could probably stumble (or hallucinate) its way through some basic HS Physics textbook examples, but anything tougher is out, and the math would all be wrong anyway.

    • @greebj
      @greebj Před 3 měsíci +3

      I asked about cofactors for a thyroid enzyme and it missed one. I asked it to find a paper outlining the missing nutrient as a cofactor, then asked whether the original list should have included it, and it admitted the original response was incomplete. I immediately asked the original question again and it repeated the original (incomplete) answer. 😂

  • @Gerberbaby922
    @Gerberbaby922 Před 2 měsíci +1

    Having an AI powered software service for your video sponsor was an interesting choice.

  • @Joel-kw7jf
    @Joel-kw7jf Před 3 měsíci +5

    So far, AI seems to be a net negative rather than a positive. Scams, fake images, misinformation and potential copyright infringement. Except for some fields like medicine, most industries will be better off without AI.

  • @Dan.50
    @Dan.50 Před 3 měsíci +25

    In the real world, "AI" translates to "give me government grants then have the media tell everyone I'm a genius."

    • @butwhytharum
      @butwhytharum Před 3 měsíci +2

      Buzz words drive attention.

    • @j3i2i2yl7
      @j3i2i2yl7 Před 3 měsíci

      So I'll just update our business plan. It's simple: find "blockchain" and replace it with "AI" throughout.

  • @Istandby666
    @Istandby666 Před 3 měsíci +17

    They did the Tic Tac Toe calculations in the 80's.
    The AI (Joshua) came to the conclusion the only winning move is to not play the game.
    **Wargames**

    • @johntaggart979
      @johntaggart979 Před 3 měsíci +2

      "Would you like to play a game?"

    • @Istandby666
      @Istandby666 Před 3 měsíci

      @@johntaggart979
      How about a game of Global Thermonuclear War?

    • @roxannebrown3061
      @roxannebrown3061 Před 3 měsíci

      Brilliant

    • @user-xl5kd6il6c
      @user-xl5kd6il6c Před 3 měsíci

      The issue is that we are calling machine learning "AI". These models are just architectures full of parameters; they're part of the field of AI, but they aren't AI.
      What "AI" used to mean is basically what we now call "AGI", and these models that just predict the next token using statistics aren't it.

    • @Istandby666
      @Istandby666 Před 3 měsíci

      @@user-xl5kd6il6c
      We are on the ground floor. The possibilities are what make this interesting.

  • @jeremy____5747
    @jeremy____5747 Před 11 dny +2

    I feel reasonably safe from AI, not because I think 'AI will never be as smart or creative as a person', but because my job (I fix laboratory equipment) involves a ton of tiny physical movements and hands-on tasks that AI can't do, not even if you 'loaded' it into a robot. My job is just too analog. It's like what I've heard said: therapists and software engineers are in trouble, whereas plumbers will be fine.

  • @usausausausa
    @usausausausa Před 3 měsíci

    Thanks for the great video

  • @saernst
    @saernst Před 3 měsíci +3

    Thanks Patrick, great video. I'm a marketing consultant and I've tried to use AI in every part of my role. The tasks where AI is saving me the most time are email correspondence and answering technical questions.

  • @breauseph
    @breauseph Před 3 měsíci +4

    I work in a complex, data-driven part of media and have to train creative employees on technical concepts. I'm currently setting up training materials for a new employer, and I've found that ChatGPT helps a lot with the copy for training decks and glossaries. One of my favorite prompts has become "Explain [concept] to me as if I were a 20-year-old who's not very technically proficient," for example, and most of the time ChatGPT does pretty well with it. I can explain these concepts fluently to technical people, but deconstructing jargon for creatives takes a lot of effort and I'm happy to have the help. That being said, I *always* edit, because it has gotten things wrong or made assumptions that aren't in line with my team's perspective/philosophy. I also use it for Sheets and SQL formulas, but also have to test and edit because it's not always exactly right or particularly efficient. So, very much in line with what these studies found.

  • @jorgerangel2390
    @jorgerangel2390 Před 2 měsíci +1

    Tech leader here with 6 years of experience making software. I use Copilot and have been using it since it launched. It makes me faster when developing, but as it is, or even 10 times more powerful, I do not see it developing software on its own.

  • @IAsimov
    @IAsimov Před 3 měsíci

    Do you have a link to the Harvard study you mention on testing productivity? I'm honestly interested in that comparison, given how much AI has been talked about, and I would like to learn more about whether it benefits or harms.

  • @JM-wm6he
    @JM-wm6he Před 3 měsíci +5

    I've been using AI to learn coding but even I as a beginner frequently spot very trivial errors in the code it provides.

    • @jameshughes3014
      @jameshughes3014 Před 3 měsíci +1

      I'm honestly so glad I learned to code before generative AI was a thing. It must be so frustrating to try to find bugs without knowing what to look for. But I think that is going to end up making you a better coder; you're developing the core thinking skills that really matter.

  • @Darkskindiplo
    @Darkskindiplo Před 3 měsíci +12

    I am in chemical manufacturing and CGPT is incredibly helpful for figuring out components in chemical formulas during product development. It saves me tons of time. Also extremely helpful for my other environmental business.

    • @mutthie
      @mutthie Před 3 měsíci +3

      Interesting. I've had really bad and even dangerous experiences using GPTs for creating protocol outlines. I think the issue was GPT's inability to do simple calculations. But the formatting was alright and saved me some time.

  • @maxb232
    @maxb232 Před 3 měsíci

    Another great day anytime Patrick posts a video :)

  • @ray-mc-l
    @ray-mc-l Před 3 měsíci

    Hey Patrick - could you do a vid on trading/speculation books you'd recommend? I heard you talking about "Education of a Speculator" on a podcast. I'd love to hear what key lessons you took from this and other books.

  • @makemoremusicnow
    @makemoremusicnow Před 3 měsíci +124

    AI is spam generation at an industrial scale and in every medium known to man (text, code, images, audio, video, etc.)
    The bust after this overhyped boom will be spectacular.

    • @andybaldman
      @andybaldman Před 3 měsíci +2

      As long as they aren’t able to self-improve it before that. And you can bet they’re trying.

    • @cameronhoglan
      @cameronhoglan Před 3 měsíci +25

      This 100%. AI tech is fun and all, but it makes scams far worse.

    • @tomlxyz
      @tomlxyz Před 3 měsíci +23

      The output is often so generic and one fear of mine is that everything in the future will be even more generic

    • @Custodian123
      @Custodian123 Před 3 měsíci

      You're using it wrong. No tool works on its own, needs a brain using it.
      These tools are here to stay.

    • @Omniryu
      @Omniryu Před 3 měsíci +19

      Scamming is the only industry that's seen a boom from Ai lol. Everything else is just overhype and speculation

  • @ginger-ale7818
    @ginger-ale7818 Před 3 měsíci +3

    As a writer, I don’t think AI is going to be able to take over real art tasks. It’s too hollow and prone to remixing. Nonetheless, it’s my belief that an ever growing percentage of top 40 pop songs will be AI generated.

  • @admthrawnuru
    @admthrawnuru Před 3 měsíci +2

    For context, I'm a materials scientist. I've used AI a lot personally and a little professionally. I've found that "idea generation" is overblown; if you use it enough you start to see very repetitive trends.
    Professionally, I've found two main uses:
    1. It's good, though not always accurate, at identifying concept terms from descriptions. For example, it was able to tell me the law for a linear relationship between two phenomena that I couldn't find in the literature until I knew the right term. It's sometimes inaccurate, but if you just Google the terms it gives you, this can be very useful, because in research not knowing the right search term can slow down literature searches significantly.
    2. Editing and summarizing. For fun, I've tried having it actually generate sections of a paper I was writing, and even with a lot of prompting and examples it was pretty bad.
    That said, so far I'm unaware of any LLM that's been trained specifically for these tasks. Integrating web or database searches, or else just focusing on content accuracy during training, might solve these issues in a few years.

  • @ucantSQ
    @ucantSQ Před 3 měsíci

    For learning new things or for use as a reference. As you pointed out, it can greatly improve lower-skilled workers' productivity. Using AI, I managed to become a much better programmer overnight.
    I don't expect original ideas from it, although it did give me a banger of an idea once.

  • @SG-nb9go
    @SG-nb9go Před 3 měsíci +7

    It’s useful only for simple things as an assistant, but not at all for aerospace engineering expertise; I can tell it gives out wrong answers even to basic questions.

    • @yds6268
      @yds6268 Před 3 měsíci +2

      You can, but I bet the CEOs can't

    • @andybaldman
      @andybaldman Před 3 měsíci

      The only thing required to change that is time.

    • @papalegba6796
      @papalegba6796 Před 3 měsíci +2

      Same here, it is useless for anything practical. Actively dangerous in fact.

  • @erinfindsen4953
    @erinfindsen4953 Před 3 měsíci +11

    20:11 the Air Canada chatbot did not make up the answer. The erroneous answer was on Air Canada's website.

    • @darthkek1953
      @darthkek1953 Před 3 měsíci +1

      Computers are like Harvard: Garbage In, Garbage Out

    • @zucchinigreen
      @zucchinigreen Před 3 měsíci +1

      Uh oh someone definitely messed up by never updating the website.

    • @hello.claude
      @hello.claude Před 3 měsíci +1

      According to the reporting I’ve read, the information on Air Canada’s website was correct. It was the chatbot that gave the erroneous answer. This was reportedly the basis for Air Canada’s defensive argument in court.

  • @cookie12986
    @cookie12986 Před 3 měsíci

    2:27 My way of looking at it is: Deepmind is, as it suggests, the deeply analytical thinker, while Gemini is the creative aspect of thought, much like how we love to divide these two when discussing cognition in modern media. Perhaps combining the two models in some meaningful way is what we need for a holistic AI.

  • @omerozuzunn
    @omerozuzunn Před 3 měsíci

    I use them as a research partner and it works unbelievably well.

  • @baiweilo136
    @baiweilo136 Před 3 měsíci +8

    I am a biologist and I use ChatGPT to write simple code all the time. I also build neural networks to solve some very complex problems in biology. I definitely think AI is revolutionizing the field.

  • @hydrohasspoken6227
    @hydrohasspoken6227 Před 3 měsíci +7

    I am an experienced medical doctor. I use GPT4 heavily to discuss complex medical cases.
    It is my understanding that it has probably reached medical-expert level. It hardly ever goes wrong.

  • @TheElectronPusher
    @TheElectronPusher Před 3 měsíci +2

    Pat, we're going to need you to start a fashion blog.

  • @teuruti55
    @teuruti55 Před 3 měsíci

    I’ve used AI to teach myself basic programming languages like SQL and M. I’ve been able to write functional scripts for my company at my job. It's able to communicate with me and answer questions as I need. It's a really good teacher because it never just answers your problems; it gives you hints.

  • @Posiman
    @Posiman Před 3 měsíci +7

    I recently tried to ask three different language models (ChatGPT, Gemini and Copilot) if there is a =LAMBDA() function in Microsoft's DAX query language.
    All three of them told me it exists and described in great detail how to use it and which use cases it was suitable for. But all of them refused to provide me with a link to the official documentation discussing it. Which was a good call, because the function does not actually exist in DAX.

  • @chestodor4161
    @chestodor4161 Před 3 měsíci +5

    I find LLMs most useful for translating text into other languages or making summaries of documents and other text-based information.

    • @TeamSprocket
      @TeamSprocket Před 11 dny

      How do you know it's accurate when you don't have any validation of it?

  • @PastaEngineer
    @PastaEngineer Před 3 měsíci +2

    That title got me ready to rage comment lol, but the video is very accurate. AI is useful if you know how to use it. If you expect to prompt just once and get the correct answer, no. It requires the same skill one uses when spending 30-60 minutes writing a very important email. You need to format your request in the exact manner required to optimize its context memory, it takes several prompts, it requires iteration, and you must acknowledge and work around its weaknesses.
    I made a custom bot capable of running external actions to acquire multiple data sources to fill its context memory, and then using an external web request to access custom instructions for how to process the context based on user keywords.
    It's going to make for one hell of a good tabletop adventure assistant, but could it also be useful in the job field for all those people whose main role is writing VBA to generate reports? Probably. I think we will see some really neat, useful tools, even if that use is entertainment.
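
    A rough sketch of that pattern, one possible way to pull external data into the context before asking the model; the data URLs and model name are placeholders, and the openai Python package is assumed as the client:

        # Fetch external sources, stuff them into the prompt, then ask the model.
        import requests
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        def fetch_context(urls: list[str]) -> str:
            """Acquire multiple data sources and concatenate them for the prompt."""
            parts = []
            for url in urls:
                resp = requests.get(url, timeout=15)
                resp.raise_for_status()
                parts.append(resp.text[:4000])  # crude truncation to respect context limits
            return "\n---\n".join(parts)

        def ask(question: str, urls: list[str]) -> str:
            context = fetch_context(urls)
            completion = client.chat.completions.create(
                model="gpt-4o-mini",  # assumed model name
                messages=[
                    {"role": "system", "content": "Answer using only the provided context."},
                    {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
                ],
            )
            return completion.choices[0].message.content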

  • @rpere008
    @rpere008 Před 3 měsíci +1

    In my opinion the future developments around AI might focus on the balance between applicability and accuracy. i.e. a happy medium between generic AI tools with mediocre accuracy and highly specialized AI tools with limited applications outside of their field. As a translator I like the convenience of machine translation for generic texts that I can copy-edit later, and I like the convenience of computer-aided translation supported by a good quality termbase and translation memory for more specialised texts; I don't know of any translation tool that combines applicability and accuracy optimally.

  • @madJesterful
    @madJesterful Před 3 měsíci +4

    It's worth noting that the findings also appear as though they may reflect the bias of the entity doing the research. Remember that management consultants are going to find that they have the answers for your business, and those answers are never to keep doing whatever it is you have been doing - and they usually involve cutting staff by 10%.
    We want to look hip and show industry we are good with AI, so it's got to come out ahead, and it sure would be nice if the best AI usage involved training we can provide, wouldn't it? Oh look, it does!
    And I am not saying the findings are "wrong", you just have to look at the blind spots: are many of these 'ideas' going to turn out to be based on product ideas that already existed and would cause a lot of legal or competitive problems? Patrick comments on a finding that hints they may be, because the outputs were all a lot less diverse, presumably reflecting the training data.

    • @jxh02
      @jxh02 Před 3 měsíci +1

      I would love to see their "quality" metric for this kind of work. Getting metrics right is hard. Even the blithe assertions about people's incentives in the experiment design are suspect. Getting incentives right is also hard.

  • @coldspring22
    @coldspring22 Před 3 měsíci +9

    As an IT professional, I find ChatGPT and similar generative AI most useful as a partner for in-depth discussion of topics that are elusive in Google search. While generative AI often gives wrong answers, it does provide direct answers to questions that Google search cannot seem to interpret correctly, and it often brings up new tangents and new facets of a topic that you hadn't thought of before. So generative AI for me is a winner as a learning tool and for developing an in-depth understanding of topics for which there isn't much clear documentation to be found via Google search.

    • @tomlxyz
      @tomlxyz Před 3 měsíci +6

      While that's useful, it's far less than what was promised in all the hype, and it can't support the valuations of AI companies.

    • @Snaperkid
      @Snaperkid Před 3 měsíci +2

      Except it doesn’t give answers or information at all. All it’s trying to do is tell you what it thinks you want to hear. This includes repeating your own wrong information back to you as authoritative.

  • @nat9521
    @nat9521 Před 2 měsíci +1

    From my experience, current AI models tend to be most useful for more mundane tasks, e.g. OpenAI's Whisper Large for audio transcription, various OCR models which are vastly more accurate than their predecessors, and machine translation (which has also been around for a long time, but current incarnations represent a vast improvement). The more 'flashy' applications of AI such as LLMs can be useful as well, as they do on occasion result in significant time savings compared to using a traditional search engine, but only if the user is not already well versed in the topic of the query. As such, they seem to make information more accessible to a wider audience, although care must be taken not to fall victim to model hallucinations, requiring independent verification of all output.
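
    For example, a minimal transcription script with the open-source openai-whisper package mentioned above (requires ffmpeg; the audio filename is a placeholder):

        # pip install openai-whisper
        import whisper

        model = whisper.load_model("large")        # the "Whisper Large" checkpoint
        result = model.transcribe("meeting.mp3")   # language is auto-detected
        print(result["text"])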