LLM OS with gpt-4o
- added 12 May 2024
- Let's build the `LLM OS`, inspired by the great Andrej Karpathy, using the new GPT-4o model.
Can LLMs be the CPU of a new operating system and solve problems using:
💻 software 1.0 tools
🌎 internet browsing
📕 knowledge retrieval
🤖 communication with other LLMs
Code: git.new/llm-os
⭐️ Phidata: git.new/phidata
Questions on Discord: phidata.link/discord
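The "LLM as CPU" idea above boils down to a dispatch loop: the model picks a tool, the runtime executes it, and the result is fed back into context until the model answers. A toy, model-free illustration of that loop (the mock model, tool names, and routing dict here are made up for the sketch, not phidata's actual API):

```python
# Toy sketch of the "LLM as CPU" loop. The "model" is mocked so the
# example runs offline; a real system would call an LLM at each step.
def calculator(expression: str) -> str:
    # stands in for a "software 1.0" tool
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def mock_llm(context: list) -> dict:
    # a real LLM OS asks the model what to do next; we hard-code one
    # tool call followed by a final answer
    if not any(m["role"] == "tool" for m in context):
        return {"action": "call_tool", "tool": "calculator", "input": "6 * 7"}
    return {"action": "answer", "text": f"The result is {context[-1]['content']}"}

def run(task: str) -> str:
    context = [{"role": "user", "content": task}]
    while True:
        step = mock_llm(context)
        if step["action"] == "call_tool":
            result = TOOLS[step["tool"]](step["input"])
            context.append({"role": "tool", "content": result})  # feed result back
        else:
            return step["text"]

print(run("What is 6 * 7?"))  # -> The result is 42
```

Internet browsing, knowledge retrieval, and delegation to other LLMs all slot into the same loop as additional entries in the tool table.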
woah, just finished the "older" video and BAM, a new one is already available and already updated!
You are a great dev, thanks for making it available for everyone and for the simplicity with which you describe difficult things!
i aim to please :)
I'm very skeptical of all things around AI lately, but this is a really cool implementation/conceptualization of what a powerful LLM can do. I want to build one of these locally and see if I can make it an 'expert' at something niche and traditionally 'difficult' for a computer to do.
I love that you have dived in and produced a practical demo. Thanks for that and for releasing the code. Longer term I wonder if using traditional OS concepts might be limiting and if a more human-centric model has benefits? E.g., a minimal design that grows with the user, uses self-evaluation, and develops its own assistants and tools specific to that user's needs?
This is really great @phidata! Great resource for my project; I am building a system much like this with a twist of the ACE (Autonomous Cognitive Entities) framework, and it can use any open-source models (Groq is super cool and super fast).
Thank you for open sourcing your work.
Amazing job, thank you so much for sharing it!
Security. Reliability. Performance. So many concerns.
fun fun fun :) i think it's good to experiment and then add those on top :)
I was able to get something similar running reliably using llama 3 8b quantized to 4 bits.
Not quite as advanced, it doesn’t have any task delegation, but I don’t see a need for it so I doubt I’ll add it.
But I’m really happy I was able to get it to run on such a relatively “weak” model that can run locally
God Bless you and your work so far . very cool
@liamlarsen9286 🙏🙏thank you for the blessing 🙏🙏
You're the man 👏
Bro, how did you put this together so fast ahahahah. You get leaked news early! Awesome video, I can't wait to try it after work
Great video, what kind of pointer do you use? the red line that disappears
you are on top of it !
thank you, at your service :)
Thank you. I'd love to see the text of the research report it wrote for you.
the father of LLM OS! awesome
Instantly subscribed, thanks
thank you
This is amazing! Is a Llama-3 with Groq version on your roadmap? I'm going to attempt to convert it, but don't know if I'm as skilled...
is there a way to connect this to my VS Code? I'm trying to connect gpt-4o to a project I'm working on (it's an e2e Playwright/TS automation framework for a Next.js project), kind of like a Copilot, but I feel Copilot sometimes doesn't give the best suggestions
Awesome
appreciate you
Since this can see shell, can it also write or create files? Create or edit system settings if I chose to do so?
Technically, LLM OS would work with a local Llama-3 model right? Since you do not need the "omni" multi-modal input.
technically yes, but local models probably aren't good enough yet to pull this off. maybe i'll do a video testing local models with this
I think GPT-4o, which is a multimodal model, has been trained on millions of YouTube videos, and it will be the same for GPT-5. Just think of it: to scale up a model you train it on more data with more parameters. Since the maximum of relevant tokens available online cannot exceed 50 trillion, the biggest source of quality data is YouTube with over 4 billion videos. I think that's why GPT-4o is so good at agent capabilities.
you must take into account that the data has to be increasingly well curated as you grow. Because of how large these multi-parameter models are, their training/test data must become increasingly complex to actually see an increase in reasoning performance. Right now we are experiencing a plateau; even the 400-billion-parameter model everyone is waiting on from Meta has been training for months, which means our constraints are complex data and compute (GPUs). For models to become better at reasoning tasks, they have to complete reasoning tasks in a dataset, which may be significantly more complex than just watching YouTube. They possibly created lots and lots of training data even USING GPT-4! So we can see the models become more efficient, but not grow (model collapse)
The quantity is impressive isn't it?
The quality of the data is subpar nonsense and insanity.
Quality over quantity of course
*Shakes head in disbelief* You people are bloody idiots, innit?
@liamlarsen9286 compute ain't the problem dude.
Just laughable meritless mediocre meaningless mumbojumbo nothing more.
Why is it you people don't grasp any of what is going on?
@Nakatoa0taku did chatgpt help you write that
Fantastic, but please tell me: when it searches, does it read the pages as well? Maybe scrape them with other cheaper LLM team members, or Firecrawl, which can scrape structured data to a vector store? Does it read pages or only search results?
we can program it to work however you like :) currently it reads the page if non-JavaScript, but we have a Firecrawl integration so you can use that as well
@phidata awesome, thanks! And could you technically add other scrapers or integrations as well if needed?
The links in the description don't seem to work at the moment. Great video and coding, however!
sorry, i tried to put em but not sure why its not working. everything should be under the phidata repo: git.new/phidata
How do you recommend I add vision capabilities?
Im hoping to use this as the backend of an Android app that can take plant images and diagnose their health and living conditions.
Thank you in advance! ❤️
the assistant.run() function accepts images, check out github.com/phidatahq/phidata/blob/main/cookbook/assistants/vision.py#L7-L15 for an example
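If it helps, the gist of that cookbook example is passing a content list that mixes text and image parts, in the OpenAI-style content-part format. A minimal sketch of the payload shape (the question, helper name, and URL below are placeholders, not part of phidata itself):

```python
# Build an OpenAI-style multimodal prompt: a text part plus an
# image_url part. Helper and URL are illustrative placeholders.
def build_vision_prompt(question: str, image_url: str) -> list:
    return [
        {"type": "text", "text": question},
        {"type": "image_url", "image_url": {"url": image_url}},
    ]

prompt = build_vision_prompt(
    "Diagnose the health and living conditions of this plant.",
    "https://example.com/plant.jpg",
)
```

A list shaped like `prompt` is what the vision cookbook example hands to the assistant; for the Android use case, the image URL (or an uploaded image) would come from the app's backend.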
can you share with us how we can add a bunch of files (pdf, txt, docx, etc.) so the agents/assistant can respond based on that knowledge base? not necessary for users to upload from the front end.
yup absolutely doable, will add a video to it. For now please check the docs here: docs.phidata.com/knowledge/introduction
How big can the data base be? Can I add multiple doc files?
Yes please confirm this
@jarad4621 you can add as much as you like, it's a postgres database so the limits haven't yet been found 😂
you can add many, many doc files. i've personally built this with 50-70 GB of data and it does fine (of course you need an HNSW index + retrieval fine-tuning) 😂
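For reference, the pattern in the phidata docs for attaching a local-file knowledge base looks roughly like this. Class names follow the phidata cookbook, but the folder path, collection name, and connection string are placeholders, and a running Postgres with pgvector is assumed (a config sketch, not a tested deployment):

```python
from phi.assistant import Assistant
from phi.knowledge.pdf import PDFKnowledgeBase
from phi.vectordb.pgvector import PgVector2

# Placeholder path and db_url; assumes Postgres with the pgvector extension.
knowledge_base = PDFKnowledgeBase(
    path="data/pdfs",  # local folder of files, no front-end upload needed
    vector_db=PgVector2(
        collection="documents",
        db_url="postgresql+psycopg://ai:ai@localhost:5532/ai",
    ),
)
knowledge_base.load(recreate=False)  # index the files once

assistant = Assistant(
    knowledge_base=knowledge_base,
    add_references_to_prompt=True,  # inject retrieved chunks into the prompt
)
assistant.print_response("Answer from my documents, please.")
```

At the 50-70 GB scale mentioned above, an HNSW index on the vector column is what keeps retrieval fast.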
thank you for this valuable video, i've followed all the steps and it also launches the web ui, but gives me this error: NotFoundError: Error code: 404 - {'error': {'message': 'The model `gpt-4o` does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}
API keys are connected and in use as well
Can we use a local open-source model instead of gpt-4o? I mean, how would we code it if we can?
The prompts might have to be tuned, I believe. I've tried multiple good repos that don't work as expected if not used with the same LLM they were built with.
why do you say that it only works with GPT-4o?
gpt-4o or gpt-4-turbo, other models aren't there yet. maybe i'll try with Opus, that should probably work
is that standalone? if so, why is it buried in phidata?
its built using phidata :)
@phidata I just realized that's also your name lol
I guess I'll have to download the whole thing then :D
That is cool, but damn that code looks like a spaghetti spiral. I really wish people would format their code properly and get rid of test things.
sorry :(
@phidata No need to be sorry lol, it's good stuff, I like it! Just don't bait people into downloading code which will eat tokens if you decide to open-source it! Otherwise, keep it up, I'll sub and wait for your next video!
langchain?
ohh no no
wow!