Run Llama 3 on Linux | Build with Meta Llama
- Added 23 Jul 2024
- Download Meta Llama 3 ➡️ go. 0mr91h
Navyata Bawa from Meta gives a brief overview of Meta Llama models and shares a step-by-step tutorial on running Llama on Linux: getting the weights from the Llama website and running the model locally using examples from the Llama 3 GitHub repo.
Timestamps
00:00 Introduction
01:00 Llama models and capabilities
03:05 Confirm your Linux setup
03:45 Download Meta Llama models
04:50 Clone the Llama 3 repo and download model weights
07:20 Example text completion script
08:50 Run the text completion example
10:00 Run the chat example
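The steps covered in the video can be sketched as a short shell session. This is a minimal sketch based on the public meta-llama/llama3 repository; the `download.sh` script prompts for the signed URL you receive by email after requesting access on the Llama website, and directory names depend on which model size you download (the 8B model is assumed here):

```shell
# Clone the Llama 3 repo and install its dependencies
git clone https://github.com/meta-llama/llama3.git
cd llama3
pip install -e .

# Download the weights (prompts for the signed URL from the access email)
./download.sh

# Run the text completion example on a single GPU
torchrun --nproc_per_node 1 example_text_completion.py \
    --ckpt_dir Meta-Llama-3-8B/ \
    --tokenizer_path Meta-Llama-3-8B/tokenizer.model \
    --max_seq_len 128 --max_batch_size 4

# Run the chat example (uses the instruction-tuned variant)
torchrun --nproc_per_node 1 example_chat_completion.py \
    --ckpt_dir Meta-Llama-3-8B-Instruct/ \
    --tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model \
    --max_seq_len 512 --max_batch_size 6
```

`--nproc_per_node` should match the model's checkpoint count (1 for 8B, 8 for 70B).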
Additional Resources
• Run Llama 3 on Linux written guide: go. 1cs3p4
• Getting Started Guide: go. 90gu7x
• Running Llama on Mac, Windows or Linux - Notebook: go. m1kn2t
• Fine-tuning, inference and API provider recipes: go. i05hes
• Getting to know Llama - Notebook: go. mhe17z
- - -
Subscribe: czcams.com/users/aiatmeta?sub_...
Learn more about our work: ai.meta.com
Follow us on social media
Follow us on Twitter: / aiatmeta
Follow us on LinkedIn: / aiatmeta
Follow us on Threads: threads.net/aiatmeta
Follow us on Facebook: / aiatmeta
Follow Navyata on Twitter: / navyatabawa
Category: Science & Technology
The issue was resolved after installing the NVIDIA CUDA Toolkit.
I tried the steps you are using, but in my case the model crashed with the error "E0627 13:16:38.866271 140091171120960 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: -9) local_rank: 0 (pid: 3282) of binary:"
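Exit code -9 means the process received SIGKILL, which on Linux is most often the kernel's OOM killer terminating a process that ran out of RAM. A quick way to check, assuming a standard Linux setup (the 8B model needs roughly 16 GB just to load the fp16 weights):

```shell
# Show total and available memory; loading Llama 3 8B in fp16
# requires roughly 16 GB (8B parameters x 2 bytes)
free -h

# Look for OOM-killer activity in the kernel log (may require sudo);
# '|| true' keeps the pipeline from failing when nothing matches
dmesg 2>/dev/null | grep -i 'out of memory' || true
```

If the OOM killer is the culprit, adding swap or moving to a machine with more RAM usually resolves the crash.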