Replace GitHub Copilot with a Local LLM

  • Uploaded 27 Jan 2024
  • If you're a coder, you may have heard of or already be using GitHub Copilot. Recent advances have made running your own LLM for code completions and chat not only possible, but in many ways superior to paid services. In this video you'll see how easy it is to set up, and what it's like to use!
    Please note that while I'm incredibly lucky to have a higher-end MacBook and a 4090, you do not need such high-end hardware to use local LLMs. Everything shown in this video is free, so you've got nothing to lose trying it out yourself! A minimal usage sketch follows the links below.
    LM Studio - lmstudio.ai/
    Continue - continue.dev/docs/intro
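
    For reference, here is a minimal sketch of talking to LM Studio's local server directly; Continue points at the same endpoint. It assumes the server is running at LM Studio's default http://localhost:1234/v1 with a model already loaded; the model name and API key below are placeholders and the port may differ in your setup.

    # Minimal sketch: query LM Studio's OpenAI-compatible local server.
    # Assumptions: server running at the default localhost:1234, a chat model
    # already loaded in LM Studio; adjust base_url/model to your setup.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:1234/v1",  # LM Studio's local server (assumed default port)
        api_key="lm-studio",                  # placeholder; the local server ignores the key
    )

    response = client.chat.completions.create(
        model="local-model",  # illustrative name; LM Studio serves whichever model is loaded
        messages=[
            {"role": "system", "content": "You are a helpful coding assistant."},
            {"role": "user", "content": "Write a Python function that reverses a string."},
        ],
        temperature=0.2,
    )

    print(response.choices[0].message.content)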

Comments • 198

  • @toofaeded5260

    Might want to clarify to potential buyers of new computers that there is a difference between RAM and VRAM. You need lots of "VRAM" on your graphics card if you want to use "GPU Offload" in the software, which makes it run significantly faster than using your CPU and system RAM for the same task. Great video though.
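
    (A rough sketch of what "GPU Offload" controls under the hood, using llama-cpp-python as an example backend, since LM Studio runs GGUF models via llama.cpp. The model path below is a placeholder; with no layers offloaded, the model runs entirely from system RAM on the CPU, which is much slower.)

    # Sketch: GPU offload with llama-cpp-python (placeholder model path).
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/your-model.Q4_K_M.gguf",  # placeholder GGUF file
        n_gpu_layers=-1,  # -1 = offload every layer to VRAM ("Full GPU Offload"); 0 = CPU only
        n_ctx=4096,       # context window size; larger values use more memory
    )

    out = llm("### Instruction: Write hello world in Go\n### Response:", max_tokens=128)
    print(out["choices"][0]["text"])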

  • @Hobbitstomper

    So, you are telling me if I don't want to spend $20/month on copilot, I should instead buy a $2000 graphics card or a $4000 MacBook.

  • @dackerman123

    But local is not free.

  • @BauldyBoys

    I hate inline completion. Within a couple of days of using Copilot I noticed the way I was coding changed: instead of typing through something, I would type a couple of letters and see if the AI would read my mind correctly. Sometimes it would, don't get me wrong, but it was overall a bad habit I didn't want to encourage. This tool seems perfect for me as long as I'm working on my desktop.

  • @jeikovsegovia

    "sorry but i can only assist with programming related questions" 🤦

  • @nasarjafri4299

    Yeah, but doesn't a local LLM need at least 64GB of RAM? How am I supposed to get that as a college student? P.S. correct me if I'm wrong.

  • @CorrosiveCitrus

    "Would you pay a cloud service for spell check?" Well said.

  • @KryptLynx

    I will argue it is faster to write the code than to write a code description for the AI.

  • @ariqahmer

    I was wondering... How about having a dedicated server-like PC at home to run these models and have it connected to the network so it's available to most of the devices on the network?
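
    (That setup works by having the machine with the GPU run the inference server and having every client on the LAN swap "localhost" for that machine's address. A minimal sketch, assuming an OpenAI-compatible server such as LM Studio's is configured to listen on the network, with 192.168.1.50:1234 as a placeholder for its address:)

    from openai import OpenAI

    client = OpenAI(
        base_url="http://192.168.1.50:1234/v1",  # placeholder LAN address of the inference box
        api_key="not-needed-locally",            # local servers typically ignore the key
    )

    resp = client.chat.completions.create(
        model="local-model",  # illustrative; the server uses whatever model it has loaded
        messages=[{"role": "user", "content": "Explain what a mutex is in one sentence."}],
    )
    print(resp.choices[0].message.content)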

  • @CharlesQueiroz25

    can I do it in IntelliJ IDE?

  • @RShakes

    Your channel is going to blow up and you deserve it! Fantastic info, concise, and it even gave me hints about things I may not have known, like the LM Studio UI tip about Full GPU Offload. Also an interesting take on paying for cloud spellcheck; I'd agree with you!

  • @programming8339

    A lot of great knowledge compressed in this 5 min video. Thank you!

  • @RobertLugg

    Your last question was amazing. Never thought about it that way.

  • @phobosmoon4643

    3:15

  • @levvayner4509

    Excellent work. I was planning to write my own vs code extension but you just saved me a great deal of time. Thank you!

  • @mikee2765

    Clear, concise explanation of the pros/cons of using a local LLM for code assist.

  • @MahendraSingh-ko8le

    Only 538 subscribers?

  • @Aegilops

    Hey Matthew. First video of yours that YouTube recommended, and I liked and subbed. I tried ollama with a downloaded model and it ran only on the CPU so was staggeringly slow, but I'm very tempted to try this out (lucky enough to have a 4090). I'm also using AWS Code Whisperer as the price is right, so am thinking your suggestion of local LLM + Code Whisperer might be the cheap way to go. Great pacing of video, great production quality, you have a likeable personality, factual, and didn't waste the viewer's time. Good job. Deserves more subs.

  • @Gabriel-iq6ug

    So much knowledge compressed in only 5 minutes. Great job!

  • @therobpratt

    Thank you for covering these topics - very informative!