RouteLLM achieves 90% of GPT-4o quality AND is 80% CHEAPER

Comments • 227

  • @matthew_berman
    @matthew_berman 19 days ago +56

    My "AI Stack" is RouteLLM, MoA, and CrewAI. What about you?

    • @craiggriessel1872
      @craiggriessel1872 19 days ago +1

      AISheldon 🤓

    • @shalinluitel1332
      @shalinluitel1332 19 days ago +4

      It would be best to have alternatives to all these which are free and open source. Maybe later down the line.. The video is really cool tho! Thanks Matthew

    • @santiagomartinez3417
      @santiagomartinez3417 19 days ago +8

      Is MoA mixture of agents?

    • @AIGooroo
      @AIGooroo 19 days ago +20

      Matthew, please do the full tutorial on how to set this up. Thank you!

    • @smokewulf
      @smokewulf 19 days ago +15

      RouteLLM, MoA, and Agency Swarm. Should do a video on Agency Swarm. I think it is the best agentic framework

  • @davtech
    @davtech 19 days ago +122

    Would love to see a tutorial on how to set this up.

    • @AlexBrumMachadoPLUS
      @AlexBrumMachadoPLUS 19 days ago +3

      Me too ❤

    • @bamit1979
      @bamit1979 19 days ago

      I think some other AI enthusiast covered it a few days back. It was quite easy. Check YouTube.

    • @ChristianNode
      @ChristianNode 19 days ago

      Get the agents to watch it and do it.

    • @sugaith
      @sugaith 19 days ago

      On how to set this up IN THE CLOUD as well, preferably.

    • @averybrooks2099
      @averybrooks2099 19 days ago +3

      Me too but on a local machine instead of a third party service.

  • @clapppo
    @clapppo 19 days ago +28

    it'd be cool if you did a vid on setting it up and running it locally

  • @cool1297
    @cool1297 19 days ago +72

    Please do a tutorial for local installation for this. Thanks

    • @camelCased
      @camelCased 19 days ago +4

      What exactly? As I understand, RouteLLM is not an LLM itself but just a router.
      You can install local LLMs very easily using Backyard AI.

    • @m8hackr60
      @m8hackr60 19 days ago +2

      Sign me up for the full tutorial!

    • @DihelsonMendonca
      @DihelsonMendonca 19 days ago

      @@camelCased Or LM Studio.

    • @bigglyguy8429
      @bigglyguy8429 18 days ago +1

      @@camelCased But how to use the router with Backyard?

    • @camelCased
      @camelCased 18 days ago

      @@bigglyguy8429 Why would you want to use the router at all, if running LLM models locally?

  • @velocityerp
    @velocityerp 19 days ago +10

    Matthew - for those of us who develop line-of-business apps for SME businesses - local LLM deployment is a must. Would certainly like to see you demo RouteLLM with orchestration - Thanks!

  • @bernieapodaca2912
    @bernieapodaca2912 18 days ago +2

    Yes! Please show us a comprehensive breakdown of this great tool!
    I’m also interested in your sponsor’s product, LangTrace. Can you possibly show us how to use it?

  • @josephremick8286
    @josephremick8286 19 days ago +24

    I am a cybersecurity analyst who knows very little about coding, so between your videos and just straight asking ChatGPT or Claude, I am ham-fisting my way through getting AI to run locally. Please keep making tutorial videos - I am excited to see how to implement RouteLLM!

    • @s2turbine
      @s2turbine 19 days ago +4

      I agree, I'm pretty much in the same boat as you. The problem is that my knowledge is outdated by the time I finally figure things out because there is so much advancement in so little time. I think we need a "checkpoint" how-to on how to do things now, as opposed to 3 months ago.

    • @DihelsonMendonca
      @DihelsonMendonca 19 days ago

      If you don't know much about anything, like me, but want to run LLMs locally, you just need to install LM Studio. No need to understand anything. The software even has options to download, install, and run models. That's what I use. Now that I've learned a bit more, I will try to install Open WebUI, Ollama, and Docker; those are way more complicated. 🎉❤

  • @aiforculture
    @aiforculture 19 days ago +2

    Great breakdown, much appreciated. I definitely foresee local LLMs becoming dominant for organisations as soon as next year. My advice during consults is for them not to invest a massive amount in high-end data secure cloud systems, but just to hang on a little, work with dummy data on current models to build up foundational knowledge, and then once local options exist they can start diving into more sensitive analytics.

  • @caseyvallett8953
    @caseyvallett8953 19 days ago +6

    Absolutely do a detailed tutorial on how to get this up and running!

  • @MichaelLloydMobile
    @MichaelLloydMobile 19 days ago +5

    Yes, please provide a tutorial on setting up the described language model.

  • @AshishKumar-hg2cl
    @AshishKumar-hg2cl 19 days ago +1

    Hey Matt, yes, it would be great if you could show a demo of how to set up this model on Azure OpenAI or Azure Databricks and then use it in an application.

  • @mrbrent62
    @mrbrent62 19 days ago +1

    I also saw that they will have 20TB M.2 drives in a couple of years. Running this LLM locally will be really cool.

  • @CookTheBruce
    @CookTheBruce 18 days ago

    Yes! The tutorial. Great vid. Sharing with my crew... just beginning an AI consulting agency, and cost is an existential threat!!!

  • @AngeloXification
    @AngeloXification 19 days ago +3

    I feel like everyone is realizing things at the same time. I started two projects: the first an LLM coordination system, the second chain-of-thought processing on specific models.

  • @jamesvictor2182
    @jamesvictor2182 19 days ago

    Just popping up to say thanks Matthew. You have become almost my only required source for AI news because your take is right up my street every time. Great work, keep it coming

  • @joe_limon
    @joe_limon 19 days ago +7

    There seems to be a hold up on the highest end models as the leading companies continually try to improve safety while watching their competition. Nobody seems to want to jump in and release a new/better model at risk of the potential "dangerous" label being applied to them. So a lot of the progress remains hidden in the lab, waiting for competition to finally engage.

    • @steveclark9934
      @steveclark9934 19 days ago +1

      Improve safety really means neuter.

    • @davidk.8686
      @davidk.8686 19 days ago

      So far with LLMs, "data is code" ... it is inherently unsafe unless something fundamentally changes.

  • @wardehaj
    @wardehaj 19 days ago +1

    Thanks for this video. Very informative.
    Please make a full tutorial about the RouteLLM setup and what the recommended local PC specs should be. Thank you in advance!

  • @madelles
    @madelles 19 days ago +1

    It would be interesting to see how this works on your AI benchmark. Please do a setup and test.

  • @MarcvitZubieta
    @MarcvitZubieta 19 days ago +1

    Yes! please we need a full tutorial!

  • @socialexperiment8267
    @socialexperiment8267 19 days ago +1

    Thanks! As always, great! 🎯👍

  • @rilum97
    @rilum97 19 days ago

    You are so consistent bro, keep it up 🙌

  • @antonio-urbanculture
    @antonio-urbanculture 19 days ago

    Yes I really like your idea of a complete install and running tutorial. Go for it. 🙏 Thanks 👍

  • @danielhenderson7050
    @danielhenderson7050 19 days ago +1

    I think you misrepresented the graph. The "ideal router" point on the graph is likely just that - the ideal. I don't think that's claiming actual results.

  • @kamilnowak4329
    @kamilnowak4329 19 days ago

    The only channel where I actually watch the ads. Very interesting stuff.

  • @jlwolfhagen
    @jlwolfhagen 18 days ago

    Would love to see a tutorial on setting up RouteLLM! 🙂

  • @limebulls
    @limebulls 18 days ago

    Yes please full set up!

  • @dezigns333
    @dezigns333 19 days ago +16

    It's time people admit that benchmarking against GPT-4 is stupid. When GPT-4 came out it was amazing. Now it's no better than any other LLM. Ever since OpenAI introduced the cheaper Turbo models, the quality has gone downhill. They sacrificed intelligence for speed to the point where they have plateaued in quality, and it's not getting better no matter how many new models they release.

    • @orthodox_gentleman
      @orthodox_gentleman 19 days ago

      Thanks for being real bro. I absolutely agree with you. I barely even use ChatGPT anymore because it sucks.

    • @irql2
      @irql2 19 days ago

      "Now its no better than any other LLM" -- do you really believe this? Seems like you do. That's certainly a take.

    • @kyleabent
      @kyleabent 19 days ago

      I agree man I don't care about speed as much as I care about accuracy. I'll happily wait for a better response than rapidly go through 2-3 quick responses that need more time in the oven.

  • @johngrauel1661
    @johngrauel1661 19 days ago

    Yes - please do a full tutorial on setup and use. Thanks.

  • @sophiophile
    @sophiophile 19 days ago

    After developing exclusively on GPT models, then joining an org with a ridiculous amount of free GCP credits and being pushed to use Gemini family instead- I can honestly say that while differences on benchmarks may seem small, they end up being really extreme in practice. I spent days smashing my head against a wall trying to get Gemini to provide quality responses, and after switching to 4o, I was literally ready to deploy.
    There still don't seem to be great benchmarks that represent performance of generative models well.

  • @parimalthakkar1796
    @parimalthakkar1796 17 days ago

    Would love a local setup tutorial! Thanks 😊

  • @D0J0Master
    @D0J0Master 19 days ago +1

    How would this affect mixture of agents? Could we have multiple RouteLLMs combined together, since they use so much less compute?

  • @Ed-Shibboleth
    @Ed-Shibboleth 19 days ago

    That's good stuff. I will take a look at the codebase. Thanks for sharing

  • @MoadKISSAI
    @MoadKISSAI 18 days ago

    Always yes for full tutorial

  • @NNokia-jz6jb
    @NNokia-jz6jb 19 days ago +5

    So, how do you run it? And on what hardware?

  • @solifugus
    @solifugus 19 days ago

    Yes please... full tutorial on setting this up to run locally. Also, I'd like to know how to set up multi-modal so I can show it images and casually talk to it (locally).

  • @davieslacker
    @davieslacker 19 days ago

    I would love to catch a tutorial of you setting it up!

  • @executivelifehacks6747

    I suspect these features, plus dedicated non-GPU hardware, will eventually reduce energy cost per "thought" to less than the human brain's. Currently Perplexity, using Sonnet 3.5, thinks GPT-4 uses 25x more.

  • @harshshah0203
    @harshshah0203 19 days ago +1

    Yes, do make a whole tutorial on it.

  • @mafo003
    @mafo003 18 days ago

    I've seen you do techdev before and would love to see you do this one as well, please.

  • @phieyl7105
    @phieyl7105 14 days ago

    The problem with this method is that there are some trade-offs. While it may be cheaper at answering a question directly, you sacrifice social intelligence. Even though you get the right answer, the way the answer is phrased can be the difference between a toddler and a graduate student. Personally, I would want to talk with the graduate student.

  • @MagusArtStudios
    @MagusArtStudios 19 days ago

    The first thing I did a year and a half ago was route different LLMs via a zero-shot classifier. Looks like RouteLLM has done the same thing lol. I figured it was common sense.

  • @Alice_Fumo
    @Alice_Fumo 19 days ago

    I really don't find this to be a big deal. I expect people select the model to use themselves on a per-task basis on what they believe is the most appropriate one for the task. For me the decision process is really simple:
    1. is it code or requires complex problem-solving? -> Claude 3.5 Sonnet
    2. Do I want to have a deep conversation with a creative partner -> Claude 3 Opus
    3. Is it anything the other models would refuse? -> GPT-4o
    4. Is it too private for any of the above? -> Local LLM
    I don't need a router for this and I wouldn't trust it to reliably choose the same way I would either.
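The hand-picked decision list above is itself a tiny router. A minimal sketch of the same idea, where the keyword heuristics and model identifiers are illustrative assumptions rather than anything from the comment or from RouteLLM:

```python
# Hand-rolled per-task router mirroring the four-rule decision list above.
# The keyword triggers and model names are illustrative assumptions.
def pick_model(prompt: str, private: bool = False) -> str:
    if private:
        return "local-llm"                       # rule 4: too sensitive for any API
    text = prompt.lower()
    if "def " in prompt or "solve" in text:      # rule 1: code / complex problem-solving
        return "claude-3.5-sonnet"
    if "story" in text or "imagine" in text:     # rule 2: creative conversation
        return "claude-3-opus"
    return "gpt-4o"                              # rule 3: everything else

print(pick_model("Please solve this recurrence"))   # → claude-3.5-sonnet
print(pick_model("anything", private=True))         # → local-llm
```

The point of the comment stands either way: a static rule list like this is predictable, whereas a learned router makes the choice for you.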

  • @AseemChishti
    @AseemChishti 15 days ago

    Yes, give a walkthrough video for RouteLLM

  • @xhy20x
    @xhy20x 19 days ago +1

    Please do a demonstration

  • @hipotures
    @hipotures 19 days ago

    Reading and watching anything about AI is like a live broadcast of the Manhattan Project in 1942. The current year is 1944?

  • @ralfw77
    @ralfw77 19 days ago

    Hi Matthew,
    I love your channel. I'm curious whether you would be willing to explore Pi AI? It doesn't compare to the others in the same way, and maybe it's hard to test, but it's very interesting. It's trained to be empathetic, and you can actually have a voice conversation that feels satisfying.

  • @leonwinkel6084
    @leonwinkel6084 19 days ago

    For coding this would be insane. Mixed local and API endpoints.

  • @geekswithfeet9137
    @geekswithfeet9137 17 days ago

    Every single time I’ve seen a claim like this, the output in real usage never compares

  • @galdakaMusic
    @galdakaMusic 19 days ago

    We need something local for non-difficult purposes. For example, local Home Assistant control.

  • @monnef
    @monnef 18 days ago

    Promising, but a bit of a mess with naming. They use "GPT-4" to mean at least GPT-4 Turbo and GPT-4 Omni in various places. I'm not even sure whether in some places they actually mean the older GPT-4 model.

  • @imramugh
    @imramugh 17 days ago

    I’d love to see a demo if possible.

  • @MEvansMusic
    @MEvansMusic 8 days ago

    can this be used to route between agents as opposed to model instances? for example routing to chain of thought agent vs simple q and a agent?

  • @rafaeldelrey9239
    @rafaeldelrey9239 18 days ago

    The article used GPT-4, not GPT-4o, which is already 50% of GPT-4's cost. Or am I missing something?

  • @3enny3oy
    @3enny3oy 19 days ago

    You should consider including Semantic Kernel and GraphRAG in that ideal stack

  • @aleksandreliott5440
    @aleksandreliott5440 6 days ago

    I would love to see a tutorial on how to get this running locally.

  • @PatrickWriter
    @PatrickWriter 18 days ago

    Yes, please make a tutorial on RouteLLM.

  • @knecting
    @knecting 18 days ago

    Hey Matt, please do a tutorial on setting this up.

  • @audiovisualsoulfood1426

    Would also love to see the tutorial :)

  • @Idea-LabAi
    @Idea-LabAi 19 days ago

    Please do a tutorial. And we need to measure performance to validate the performance-cost graph.

  • @martingauthier5245
    @martingauthier5245 18 days ago

    It would be really cool to have a tutorial on how to implement this with Ollama.

  • @MattReady
    @MattReady 19 days ago

    I’d love a guide to easily set this up for myself

  • @thecatsupdog
    @thecatsupdog 19 days ago

    Does your local model search the internet and summarize a few web pages? That's what ChatGPT does for me, and that's all I need.

  • @macjonesnz
    @macjonesnz 18 days ago

    I think they're saying the brown dot is where an ideal LLM would be placed. I'm not sure that RouteLLM is better than Claude 3 Opus, so I'm not sure where on that chart their router actually is; probably down with Llama 3 8B, since its only job is to route.

  • @mikezooper
    @mikezooper 19 days ago

    It doesn’t change anything. LLMs are good at certain tasks (most of which aren’t as useful as we need, and most don’t help us earn money). AI has plateaued. They haven’t replaced software engineers.

  • @ashtwenty12
    @ashtwenty12 19 days ago

    Could you do a tutorial on RAG (retrieval-augmented generation)? I think it'll be a pretty massive thing in agentic architecture. Also, I think RAG might soon be more than just text and PDFs 😂 in the not-too-distant future.

  • @andresfelipehiguera785

    A tutorial would be great!

  • @parthwagh3607
    @parthwagh3607 18 days ago

    Yes, we need a detailed video.

  • @ritviksinghal9190
    @ritviksinghal9190 19 days ago

    An implementation would be interesting

  • @dantfamily9831
    @dantfamily9831 19 days ago

    I'd be interested in what hardware is needed to run something like this locally. I was waiting until late fall or early next year to buy, but I might need to get an interim system to train up on. I'm big on local control except when I need to reach out.

  • @HawkX189
    @HawkX189 19 days ago +1

    Let me throw this out there... online models are still holding their own because of context.

  • @orthodox_gentleman
    @orthodox_gentleman 19 days ago

    This wasn't just released; it has been around for a while. Now that GPT-4o and Claude 3.5 Sonnet exist, things are much cheaper. I can understand using a local LLM with these two, but overall the cost savings are not as big of a deal as before.

    •  19 days ago

      The API for Claude and GPT is still expensive.

  • @KingMertel
    @KingMertel 19 days ago

    Hey Matt, what are these routers exactly? (They are not LLMs, I understand.) And how do they determine where to route to?

  • @nate2139
    @nate2139 19 days ago

    This sounds interesting, but does it offer the same capability that the OpenAI API offers with customizable assistants, RAG, and function calling? I still have yet to find anything that compares. Would love to see something open source that can do this.

  • @sapito169
    @sapito169 19 days ago

    Wonderful!
    Now you can offer a low-cost service and a premium service at different prices.

  • @angelwallflower
    @angelwallflower 18 days ago

    Yes, I vote for a setup tutorial, please. Thank you!

  • @BradleyKieser
    @BradleyKieser 19 days ago

    Yes please, do the tutorial.

  • @davidk.8686
    @davidk.8686 19 days ago

    When "data = code", how can you have security while having an actually useful / powerful AI?

  • @calvingrondahl1011
    @calvingrondahl1011 19 days ago

    Thank you Matt🖖🤖👍

  • @samuelopoku4868
    @samuelopoku4868 18 days ago

    If I could like and subscribe harder I would. Tutorial would be fantastic thanks 👍🏿

  • @mickmickymick6927
    @mickmickymick6927 19 days ago

    95% of my queries even GPT-4o or Sonnet 3.5 can't answer, so I don't know what your queries are that local models usually handle fine.

  • @nashad6142
    @nashad6142 19 days ago

    Yessss! Go open source

  • @RaedTulefat
    @RaedTulefat 19 days ago

    Yes please, a tutorial!

  • @分享免费AI应用
    @分享免费AI应用 18 days ago

    90% GPT4o Quality? More like 100% snake oil! Where do I sign up for this "RouteLLM" deal?

  • @keithhunt8
    @keithhunt8 18 days ago

    Yes, please.🙏

  • @tytwh
    @tytwh 19 days ago +1

    Do you and Wes Roth collaborate? He uploaded an identically titled video 2 hours ago.

  •  19 days ago

    While this looks promising, it is just a router that forwards simple queries to weak models while forwarding hard queries to strong models. This assumes that the queries can be divided between strong and weak models. If your work is truly intensive, I don't see much reduction here as it still requires querying strong models most of the time.
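That weak/strong split can be sketched as a score threshold. RouteLLM learns such a score from preference data; the crude hardness heuristic below is a made-up stand-in, and the model names are illustrative:

```python
# Toy threshold router: a "hardness" score in [0, 1] decides weak vs. strong.
# RouteLLM learns this score; a crude length/keyword heuristic stands in here.
STRONG, WEAK = "gpt-4o", "mixtral-8x7b"

def hardness(prompt: str) -> float:
    hard_words = {"prove", "refactor", "derive", "optimize"}
    score = min(len(prompt.split()) / 100, 0.5)              # longer ≈ harder
    score += 0.5 if any(w in prompt.lower() for w in hard_words) else 0.0
    return score

def route(prompt: str, threshold: float = 0.4) -> str:
    return STRONG if hardness(prompt) >= threshold else WEAK

print(route("What is the capital of France?"))                   # → mixtral-8x7b
print(route("Prove that the sum of two even numbers is even."))  # → gpt-4o
```

This also illustrates the commenter's point: if nearly every query scores above the threshold, almost everything goes to the strong model and the savings shrink.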

  • @opita
    @opita Před 19 dny

    Can you please look into alloy voice assistant

  • @woszkar
    @woszkar 19 days ago

    Is this an LLM that we can use in LM Studio?

    •  19 days ago

      It's just a proxy that sends queries to two models, weak vs. strong. It's not a new LLM.

  • @heltengundersen
    @heltengundersen 9 days ago

    Claude 3.5 Sonnet is missing from the chart.

  • @mdubbau
    @mdubbau 19 days ago

    Please do a tutorial on setting it up.

  • @WylieWasp
    @WylieWasp 18 days ago

    4:59 you lost me completely with LangTrace! What does it do, and why would I want it?

  • @jackbauer322
    @jackbauer322 19 days ago

    don't ask in the comments each time JUST DO IT !!!

  • @rawleystanhope3251
    @rawleystanhope3251 18 days ago

    Full tutorial pls

  • @user-em2hr4gj1f
    @user-em2hr4gj1f 19 days ago

    Can you make a comparison video with LangGraph X GraphRAG?

  • @MPXVM
    @MPXVM 19 days ago

    If it runs on a local machine, why does it need OPENAI_API_KEY?

    •  19 days ago +1

      Because it still needs to query weak models (like Mistral) and strong models (like GPT).
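A minimal sketch of that two-tier configuration; only OPENAI_API_KEY comes from the thread, while the WEAK_MODEL_BASE_URL variable, the local endpoint, and the model names are assumptions for illustration:

```python
import os

# The router talks to BOTH tiers, so both endpoints must be configured:
# the strong model behind the OpenAI API, and a weak model that may run locally.
config = {
    "strong": {
        "model": "gpt-4o",
        "api_key": os.environ.get("OPENAI_API_KEY", "<required>"),
    },
    "weak": {
        "model": "mistral-7b",
        # e.g. an Ollama-style local endpoint; no OpenAI key needed for this tier
        "base_url": os.environ.get("WEAK_MODEL_BASE_URL", "http://localhost:11434/v1"),
    },
}
print(sorted(config))  # → ['strong', 'weak']
```

So even a fully local weak model leaves one API key in play for the strong tier.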

  • @keithycheung
    @keithycheung 17 days ago

    Please do a tutorial !

  • @chrismann1916
    @chrismann1916 18 days ago

    Now, who has this in production?

  • @trelligan42
    @trelligan42 19 days ago

    @7:07, "causal" not "casual". #FeedTheAlgorithm

  • @nemonomen3340
    @nemonomen3340 19 days ago

    90% quality and 80% cheaper? I'm actually not sure if I should be impressed or not. Sure, on the surface that seems like a small decrease in quality for massively reduced cost, but isn't it normal for that last ~10% quality to be a lot harder to achieve? I think I'd be more impressed to see a model that's just 5% better quality for an 80% increase in cost.

  • @MussawirIftikhar
    @MussawirIftikhar 19 days ago

    Dear Matt, you could create a presentation to show all this rather than just reading from the website. Please put a bit more time into creating videos; I watch your videos to learn things faster, not the opposite. Thank you.