Dual 3090Ti Build for 70B AI Models

  • Published 15 Mar 2024
  • In this video, I take you through my exciting journey of upgrading my computer setup by adding an additional Nvidia RTX 3090 Ti, with the ultimate goal of running highly demanding 70B local LLM models and other GPU-intensive applications. For those who share a passion for pushing the boundaries of AI research and computational power, you know how crucial having the right hardware can be. That's exactly why I embarked on this upgrade mission.
    After extensive research and monitoring the market for the best deals on GPUs, I stumbled upon a golden opportunity at my local Micro Center. To my surprise, they had refurbished Nvidia 3090 and 3090Ti Founders Edition cards on offer at prices that undercut even the second-hand market. This was a deal too good to pass up, especially for a high-performance enthusiast like myself looking to bolster my system's capabilities for handling some of the most compute-intensive tasks out there.
    In this detailed build log, I'll show you every step of the process, from the decision-making to the installation and eventual performance testing. We'll explore why the Nvidia 3090Ti is a game-changer for anyone interested in deep learning, AI model training, and running sophisticated algorithms that demand significant GPU resources.
    Furthermore, I'll share insights on how to spot great deals on high-end hardware, the importance of considering refurbished components, and tips for ensuring your system is ready to take on the challenges of next-generation computing. Whether you're a seasoned AI developer, a deep learning enthusiast, or simply someone fascinated by the capabilities of modern technology, this video is packed with valuable information.
    Join me as I boost my computer's performance to new heights, making it capable of running 70B local LLM models and beyond. Don't forget to like, share, and subscribe for more content on AI, technology, and high-performance computing builds. Your support helps me bring more of these in-depth guides and tutorials. Let's dive into the world of high-end computing together!
  • Entertainment

Comments • 35

  • @UpNorth937 · 1 month ago

    Great video!

  • @i6od · 16 days ago +1

    I saw a Reddit post of a guy running 4x P100 16GB for under $1,300 and getting 30 tokens a second with vLLM on 70B Llama 3, lol. I'm so happy to see other builds like dual 3090s too. So far I've managed to pick up one Titan RTX; I'm hoping to shoot for a 3090 or another Titan RTX.

    • @OminousIndustries · 15 days ago

      It's been very cool to see the use cases of older cards for local LLM setups. I want to grab a Tesla P40 at some point and put it in a restomod LLM PC, if for nothing more than the cool factor of how it looks.

  • @cybermazinh0 · 2 months ago

    The video is very cool; the case for the 3090s could be nicer.

    • @OminousIndustries · 2 months ago +1

      Thanks very much! I am going to be swapping everything over into a Thermaltake View 71 case very soon.

    • @jamesvictor2182 · 1 month ago

      Unlike the inside of that case!

  • @mcdazz2011 · 2 months ago +1

    One of the best things you can do in the short term, is to clean the front air filters. I can see one at 11:48, and there's a fair amount of dust between the filter and the fan. You'll get better air intake just by cleaning them, which will help with any heat generated in that case (which is a BIG heat trap).
    Longer term, definitely look at getting a new case with better air flow.
    The way it is at the moment, that case is going to act like an oven and you'll likely find that the CPU/GPUs might thermal throttle and rob you of performance.
    Thermaltake make some pretty big cases (on wheels if that's your thing), so you might like the Core W100 or Core W200.

    • @OminousIndustries · 2 months ago

      Excellent advice. Funnily enough, I recently purchased a Thermaltake View 71 to transfer all the components into. I am excited to do the swap.

  • @cyrigofficial · 7 days ago

    Extremely toasty if you don't lock the power limit under 300 W.
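    For reference, capping the power limit on these cards is a one-liner with `nvidia-smi` (the 300 W value here is just the figure from the comment; the command needs root and an NVIDIA driver, and the limit resets at reboot unless you re-apply it from a startup script):

    ```shell
    # Show current, default, and allowed power limits for GPU 0
    nvidia-smi -i 0 -q -d POWER

    # Cap GPU 0 at 300 W (repeat with -i 1 for the second card)
    sudo nvidia-smi -i 0 -pl 300
    ```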

  • @jamesvictor2182 · 1 month ago

    I am awaiting my second 3090 Ti and will probably end up water cooling. How has heat management been for you?

    • @OminousIndustries · 1 month ago

      I have not seen crazy temps while running local LLaMA models. I did render something in KeyShot Pro that made the cards far too hot, but for any LLM stuff it hasn't been bad at all.

  • @mbe102 · 2 months ago

    What is the aim of using OpenDalle? Is it just... for fun, or is there some monetary gain to be had through this?

    • @OminousIndustries · 2 months ago

      Personally, I just use it for fun. Some people use these uncensored image models to generate NSFW images that they then release on Patreon, etc., to make some money, but that is not in my wheelhouse.

  • @atabekkasimov9702 · 2 months ago +1

    Did you plan to use NVLink with the new Ryzen setup?

    • @OminousIndustries · 2 months ago +2

      It is something I would like to add once I swap over to a Threadripper. I have seen conflicting opinions on how much it helps, but I would like it for "completeness" if nothing more.

  • @mixmax6027 · 1 month ago +1

    How did you increase your swap file? I have the same issue with 72B models running dual 3090s.

    • @OminousIndustries · 1 month ago +1

      These instructions should work, though I have only used them on Ubuntu 22.04: wiki.crowncloud.net/?How_to_Add_Swap_Space_on_Ubuntu_22_04#Add+Swap
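      For anyone reading without the link handy, the standard Ubuntu swap-file recipe looks roughly like the sketch below. The runnable part writes a small demo file in /tmp so it works without root; the commented lines show the real /swapfile steps (the 64 MiB and 32 GiB sizes are just examples, not the video's actual values):

      ```shell
      # Demo: create and format a swap file without root
      SWAPFILE=/tmp/demo-swapfile
      dd if=/dev/zero of="$SWAPFILE" bs=1M count=64 status=none  # 64 MiB demo; e.g. count=32768 for 32 GiB
      chmod 600 "$SWAPFILE"     # swap files must not be world-readable
      mkswap "$SWAPFILE"        # write the swap signature

      # Real steps on Ubuntu 22.04 (root required):
      # sudo swapon /swapfile                                       # enable immediately
      # echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab  # persist across reboots
      # swapon --show                                               # verify
      ```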

  • @M4XD4B0ZZ · 2 months ago +1

    OK, so I am very interested in local LLMs and found that my system is way too weak for my liking. But I really have to ask: what are you doing with this technology? I have no "real" use case for it and wouldn't consider buying two new GPUs for it. What are the actual beneficial use cases? Maybe coding?

    • @OminousIndustries · 2 months ago +2

      I have a business that utilizes LLMs for some of my products, so it is a 50/50 split between business-related research and hobbyist tinkering. The requirements to run LLMs locally depend heavily on the type and size of model you want to run. You don't need a large-VRAM setup like this to fool around with them; I just went for this so that I could run larger models like the 70B ones. Some of the smaller models would run fine on an older card like a 3060, which can be had without breaking the bank. Some of the model "curators" post the VRAM requirements for their models on Hugging Face, bartowski being one who lists them.
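      The sizing logic above can be sketched as a back-of-the-envelope formula (my own rule of thumb, not an official one): weights take roughly params × bits per weight / 8 bytes, plus some headroom for KV cache, activations, and framework overhead.

      ```python
      def estimate_vram_gb(params_billion: float, bits_per_weight: float,
                           overhead_frac: float = 0.2) -> float:
          """Rough VRAM estimate: weight bytes plus ~20% headroom for
          KV cache, activations, and framework overhead."""
          weight_gb = params_billion * bits_per_weight / 8  # 1B params at 8-bit is about 1 GB
          return weight_gb * (1 + overhead_frac)

      # A 70B model at 4-bit quantization lands around 42 GB,
      # tight but workable on 2x 24 GB cards like dual 3090 Tis:
      print(f"{estimate_vram_gb(70, 4):.1f} GB")
      # A 7B model at 4-bit fits easily on a single 12 GB card like a 3060:
      print(f"{estimate_vram_gb(7, 4):.1f} GB")
      ```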

    • @M4XD4B0ZZ · 2 months ago

      @OminousIndustries Thank you for the insights, really appreciate it.

    • @OminousIndustries · 2 months ago

      @M4XD4B0ZZ Of course!

  • @Meoraclee · 4 days ago

    Hi Ominous, I'm looking for a good spec to train LLMs/AI (and game sometimes) on a $3,000 budget. Are 2x 3090 Ti 24GB the best option for my budget?

    • @OminousIndustries · 4 days ago +1

      I believe the consensus is still that dual 24GB cards (like the 3090) are the best "budget move"; however, I would head over to www.reddit.com/r/LocalLLaMA/ and browse/ask there, as there are a lot of knowledgeable people who can provide good insight on this. For what it's worth, I don't believe it will make a huge difference if you get non-Ti cards; I just bought them because I wanted to match the first card I had, which happened to be a Ti.

  • @codescholar7345 · 1 month ago

    What CPU and motherboard? What is the temperature of the cards? Thanks!

    • @OminousIndustries · 1 month ago

      The CPU is an i7-12700K and the mobo is an MSI PRO Z690-A. I purchased them as a Micro Center bundle about a year ago. I have not seen the card temps get over about 75°C when using text-generation-webui. I was using KeyShot Pro for something and decided to use both cards to render the project, and they got far too hot, so cooling is the first priority to upgrade.

    • @codescholar7345 · 1 month ago

      @OminousIndustries Okay, thanks. Yeah, there's not much space in that case. I have a bigger case; I'm looking to get another 3090 or 4090 and possibly water cool them. It would be nice to get an A6000, but that's too much right now.

    • @OminousIndustries · 1 month ago

      @codescholar7345 I have a Thermaltake View 71 to swap them into when I get the time. The A6000 would be awesome, but yeah, that price could get you a dual 4090 setup. A water cooling setup would be very cool and a good move for these situations.

  • @emiribrahimbegovic813 · 24 days ago

    Where did you buy your card?

    • @OminousIndustries · 24 days ago

      I got it at Micro Center, they were selling them refurbished. Not sure if they still have any in stock. They also had 3090s.

  • @MikeHowles · 24 days ago

    Bro, use nvtop. You're welcome.

    • @OminousIndustries · 24 days ago

      I'm going to install that tonight for my Intel GPU build; I previously hadn't found a monitor for that GPU on Linux.

  • @skrebneveugene5918 · 27 days ago

    What about Llama 3?

    • @OminousIndustries · 27 days ago

      I tested a small version of it in one of my more recent videos!

  • @m0ntreeaL · 23 days ago

    BIG price... I guess 200 bucks too high.