Here is How You Create AI Operating System from Scratch
- Date added: 11 Jun 2024
- 🚀 Welcome to Our Comprehensive Guide on Building an LLM OS with GPT-4! 💻
In this exciting video, we dive into the world of Large Language Model Operating Systems (LLM OS), inspired by Andrej Karpathy's visionary proposal. We'll show you how to build a sophisticated LLM OS using GPT-4, integrating various tools and multiple agents to make your system powerful and versatile.
🛠️ What You'll Learn:
Introduction to LLM OS: Understand the concept and how it functions as a central orchestrator.
Setting Up Tools: Learn to integrate a calculator, Python interpreter, browser, file system, and more.
Creating Multi-Agents: Set up different AI agents, such as an investment assistant, a research assistant, and more.
Knowledge Base & Memory: Store and retrieve information efficiently using a knowledge base.
User Interface: Build a user-friendly interface for seamless interaction with your LLM OS.
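The "central orchestrator" idea above can be sketched without any particular framework: the LLM decides which tool to call, and a dispatcher routes the request. A minimal, library-free Python sketch (all names are hypothetical; the video itself uses phidata and GPT-4, not this code):

```python
# Minimal sketch of the LLM-OS "central orchestrator" idea:
# a plain dict of tools stands in for real calculator / Python /
# browser / file-system tools wired up through a framework.

def calculator(expr: str) -> str:
    # Deliberately tiny "calculator" tool: arithmetic characters only,
    # so eval() cannot be fed arbitrary code.
    allowed = set("0123456789+-*/(). ")
    if not set(expr) <= allowed:
        raise ValueError("non-arithmetic input")
    return str(eval(expr))

def file_reader(path: str) -> str:
    # "File system" tool: read a text file and return its contents.
    with open(path, encoding="utf-8") as f:
        return f.read()

TOOLS = {"calculator": calculator, "file_reader": file_reader}

def orchestrate(tool_name: str, argument: str) -> str:
    """Dispatch a (tool, argument) decision to the matching tool.

    In a real LLM OS the (tool_name, argument) pair would come from
    the model's function-calling output, not be passed in directly.
    """
    if tool_name not in TOOLS:
        return f"unknown tool: {tool_name}"
    return TOOLS[tool_name](argument)

print(orchestrate("calculator", "6 * 7"))  # -> 42
```

The same dispatch loop extends naturally: adding a browser or Python-interpreter tool is just another entry in the dict, which is why the orchestrator pattern scales to multi-agent setups.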
🔗 Resources:
Sponsor a Video: mer.vin/contact/
Do a Demo of Your Product: mer.vin/contact/
Patreon: / mervinpraison
Ko-fi: ko-fi.com/mervinpraison
Discord: / discord
Twitter / X : / mervinpraison
Code: mer.vin/2024/05/phidata-llm-os/
github.com/phidatahq/phidata/...
📌 Timestamps:
0:00 - Introduction to LLM OS Concept
1:01 - Setting Up GPT-4 as the LLM
2:20 - Integrating Tools and Multi-Agents
5:49 - Creating the Knowledge Base and Memory
9:02 - User Interface Setup and Demonstration
12:27 - Testing the LLM OS with Various Queries
🌟 Key Benefits:
Enhanced Efficiency: Automate complex tasks using integrated tools and AI agents.
Customisable: Tailor the LLM OS to meet your specific needs and requirements.
Scalable: Expand functionality by adding more tools and agents as needed.
User-Friendly: Easy setup with a step-by-step guide and clear instructions.
📥 Get Started:
Source Code & Commands: Available in the description for easy setup.
Subscribe & Stay Updated: Regular videos on AI and tech innovations.
Like & Share: Help others discover the power of LLM OS by sharing this video
#AI #operating #system #llmos - How-to & Style
Really the absolute BEST AI presentations and development around! Thanks!
You are from another planet, Mervin... Always few steps ahead in the future 🤯🤯🤯...
this is Phidata Team project czcams.com/video/YMZm7LdGQp8/video.html
Amazing video. Thank you!
If I were to score this spectacular video on a scale from 1 to 10, I would give you a 20! Well done, and many thanks.
very nice video, thanks Mervin
Would love to see a remotely accessible server added to this setup, to act something like Open Interpreter and the O1 lite.
Holy shit, Mervin, I'm impressed. You've just shown how, with a few Python libraries and some code, you can hook into the OS file system and create agents with specialised knowledge. If I'm not mistaken, this could run on any embedded hardware with a Linux OS and a 5G or WiFi connection, because you're using an OpenAI API call. Which means you could extend AI to edge computing right now, to all the millions of connected edge devices out there.
Kudos man 👏👏👏
I’m going to try this on some hardware I have at work
Awesome, thanks for sharing. One question - can we use Llama 3 instead of GPT-4?
This is impressive
Out of all the AI tools and frameworks you’ve used, which one(s) do you find to be the most useful and have the most promise moving forward?
Would it be possible to add approved chat comments to the local knowledge base? Or is that automatic via the Postgres storage? Also, can PraisonAI's automatic multi-agent creation be added, vs. manually/programmatically defining all of the agents/tools in LLM OS?
super cool, I hope the next CPUs will run Llama 3 70B fast
Hi Mervin, what about the Postgres memory? Is it long-term memory, something like AutoGen's teachable agent or MemGPT? Thanks again for your amazing content!
Postgres right now is storing chat history -- but ChatGPT-like personalized memory is in the works :)
@@phidata Great! Amazing! I don't know if it will be possible, but I dream of a long-term memory system in a SQL-like database, with auto-creation of tables for topics that the agent fills in when it finds relevant info to keep (for example preferences, the user's backstory, the company, personal data, ideas and thoughts, etc.). This kind of memory would be very helpful for all kinds of assistants, from office work to psychotherapy, coaching, etc. - and maybe a runtime to reorganise the whole database when needed. And if all that could be managed per user session, it would be the perfect framework for a new kind of agentic system, if you see what I mean. Unfortunately I don't have enough coding skills to build that, or to help build it. Thanks a lot @phidata for... phidata ;) Very great job and a very great gift for the world!
@@christopheboucher127 This is truly amazing! I'm coding the personalized memory piece right now, and your message was like the AI gods speaking to me, showing me what to build. Thank you!
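The kind of long-term store described in this thread can be prototyped with nothing but SQLite: one table of memory records keyed by user and topic. A minimal sketch (schema and function names are hypothetical illustrations, not phidata's actual memory implementation):

```python
import sqlite3

# In-memory DB for the demo; pass a file path instead for real persistence.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE IF NOT EXISTS memories (
           user_id TEXT, topic TEXT, fact TEXT,
           PRIMARY KEY (user_id, topic, fact)
       )"""
)

def remember(user_id: str, topic: str, fact: str) -> None:
    # The agent would call this when it decides a fact is worth keeping;
    # INSERT OR IGNORE makes repeated saves of the same fact harmless.
    conn.execute(
        "INSERT OR IGNORE INTO memories VALUES (?, ?, ?)",
        (user_id, topic, fact),
    )

def recall(user_id: str, topic: str) -> list[str]:
    # Fetch every stored fact for one user and topic.
    rows = conn.execute(
        "SELECT fact FROM memories WHERE user_id = ? AND topic = ?",
        (user_id, topic),
    )
    return [fact for (fact,) in rows]

remember("alice", "preferences", "prefers dark mode")
print(recall("alice", "preferences"))  # -> ['prefers dark mode']
```

Per-session management, as the comment suggests, would just add a session column to the key; the hard part in practice is deciding *what* to store, which is where the LLM itself comes in.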
Cannot express how much I appreciate this guidance, thank you
It's funny: in the film Her, the fictional OS1 appears to take up the whole screen. It does later show documents and other UI elements, so I wonder if it's fair to call it an OS. I will say Apple and Microsoft need to get ahead of this and start allowing LLMs to control their desktops; my guess is both are working feverishly on this. In a few years you won't need to know how an email app like Outlook even works to send and receive email, or to create and update spreadsheets - your AI OS will handle that for you, you'll just tell it what you want.
Would love to make something similar; however, I would want the foundational model to actually be a local SLM that calls on larger models if needed. I would wish to make it local-GPU agnostic (or GPU-free); modular GPUs can be added on-prem or called from providers. My idea is not in any way superior on the face of it, just an iteration/extrapolation on a similar idea. Thanks.
Is there any way i can use these assistants created by phidata in multi agentic frameworks like crewai or autogen?
there is a technical reason to choose phidata instead of langchain ?
Hello, exporting my OpenAI API key isn't working in the terminal - any tips on how to set it?
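A common cause is exporting the key in one terminal and running the app in another, or a quoting mistake. A minimal shell sketch (the key value is a placeholder, assuming a bash-style shell):

```shell
# Set the key for the *current* shell session only (placeholder value).
export OPENAI_API_KEY="sk-your-key-here"

# Verify it actually took before launching the app:
echo "${OPENAI_API_KEY:0:3}"   # prints the first 3 characters: sk-

# Note: the variable exists only in this shell and its child processes.
# A new terminal window needs the export again, or add the line to
# ~/.bashrc / ~/.zshrc to make it permanent.
```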
This would work well with Enso, the interactive programming language.
Is OpenAI the only API? Or can I use a local LLM that mimics the OpenAI API - will that work?
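Many local servers (Ollama, LM Studio, llama.cpp's server) expose an OpenAI-compatible endpoint, so code that speaks the OpenAI chat-completions wire format can usually be pointed at them by changing the base URL. A stdlib-only sketch that just builds the request body (the URL and model name are assumed values for an Ollama-style local server; nothing is actually sent here):

```python
import json

# OpenAI-compatible chat-completions endpoint; a local server such as
# Ollama typically listens on a URL like this one (assumed value --
# check your own server's docs for the real port and path).
BASE_URL = "http://localhost:11434/v1"

def build_chat_request(model: str, user_message: str) -> dict:
    """Build the JSON body an OpenAI-compatible endpoint expects."""
    return {
        "model": model,  # e.g. "llama3" on a local server
        "messages": [{"role": "user", "content": user_message}],
    }

body = build_chat_request("llama3", "Hello from the LLM OS")
# This dict would be POSTed to f"{BASE_URL}/chat/completions";
# local servers usually accept any string as the API key.
print(json.dumps(body)[:40])
```

Because the wire format is the same, swapping GPT-4 for a local model is often just a base-URL and model-name change in whatever client the framework uses.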
I'm having problems using LM Studio or Groq with this.
Legit question: why not just use embedding models like Nomic, for example? Chatting with my LLM, I learned these vector "memories" create connections between nodes, and it links back to those vector memories - meaning its knowledge and memories sort of expand.
How does PhiData relate to CrewAI and PraisonAI? Would we use them all separately and independently? Or do they work together somehow? If they are independent, which do you recommend and why?
1. Start with crewAI
2. Find it was a waste of time
3. Move on with your life 😅
No need to blow your mind with more complex stuff to see this stuff provides zero value
CrewAI is perfect and a little bit illegal with how simple it is.
@@denisblack9897 What do you mean by, "CrewAI is perfect and a little bit illegal with how simple it is."
@@denisblack9897 I'm confused... are you saying CrewAI is too simple to do real work? Or are you saying it's amazing?
🎉🎉👏👏👏
I rewatched this. Now I think the term "from scratch" is misleading - "from scratch" should start with "open a new Python file".
❤❤❤
Can it run doom tho?
Phidata > Praison AI?
Your own video feed is too big. We can't see the code.
You could have shared a link to the original video: czcams.com/video/6g2KLvwHZlU/video.html instead of recording a clone yourself.
Hm 😢
tbh i think mervin explained it better than me :)
If you haven't noticed, he does that a lot in most of his videos.
@@helix8847 I'm actually a big fan of that, because he explains it much better than me :) I hope he continues to do it. Mervin has a way of communicating complex information, and I learn a lot about my own work when he makes a video.
Let's rename an agent "OS" and pretend it's novel… 🤷🏻♂️🙈
This is never going to work, since the LLM has to run on an OS as well. This is an OS on top of an OS. And it always needs lots of power.
depends on what you mean by ‘this’ 🤓
Must have never heard of virtualisation.
Yes, it's true that LLMs have an OS under them, in the same way an ATM or a parking meter has an app running on top of an operating system like Windows Embedded or embedded Linux, but I think you're missing the point. LLMs are black boxes; they have no contact with anything outside their domain. LLM OS is a concept, and Mervin has demonstrated how you can implement that concept, giving it hooks into your OS to access files and specialised agents. In this video Mervin uses an OpenAI API key, which means you are making an API call to OpenAI's servers. So you could run this as-is on a barebones Linux system with a WiFi or 5G connection, on a Raspberry Pi or a higher-end BeagleBone Black.
If you were going to replace the OpenAI LLM component with an open-source LLM like Grok or Llama, then you are correct: you would need a lot more memory and compute power, not to mention a GPU.
Here is how you create AI OS from SCRATCH => Python …. 😂😂😂😂😂😂
What’s next? Building AI rockets from SCRATCH with Python? 😂😂😂😂😂😂
Please stop the nonsense and start educating people with real stuff. Yes, Python is installed by default in the OS, but it's not used to program the OS.