Phidata
United States
Joined 27 Nov 2023
Build AI Apps using open source tools
LLM OS on AWS
Let's run the `LLM OS`, inspired by the great Andrej Karpathy, on AWS.
Can LLMs be the CPU of a new operating system and solve problems using:
- software 1.0 tools
- internet browsing
- knowledge retrieval
- communication with other LLMs
Docs: phidata.link/llmos-aws
Phidata: git.new/phidata
Questions on Discord: phidata.link/discord
Views: 1,161
Video
Build AI Agents with GPT-4o from scratch
6K views · 14 days ago
Let's build AI Agents with GPT-4o from scratch: a Web Search Agent (2:40), a Finance Agent (3:30), a Hackernews Agent (5:50), a Data Analysis Agent (8:10) and a Research Agent (9:35). Code: phidata.link/assistants | Phidata: git.new/phidata | Questions on Discord: phidata.link/discord
Team of AI Agents using gpt-4o
3.7K views · 14 days ago
Let's build a team of AI Agents using the new GPT-4o model. We have a Driver Agent with memory, knowledge & tools, plus sub-agents for dedicated tasks, working together to solve problems. Code: phidata.link/agents | Phidata: git.new/phidata | Questions on Discord: phidata.link/discord
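The "driver agent plus sub-agents" idea from this video can be sketched in plain Python. This is an illustrative stand-in, not phidata's actual API: the class and method names are invented, and each sub-agent's LLM-plus-tools stack is replaced by a simple callable.

```python
# Hypothetical sketch of the driver/sub-agent pattern. In the real
# system each sub-agent wraps an LLM with its own memory, knowledge
# and tools; here a sub-agent is just a named function.

class Agent:
    def __init__(self, name, handle):
        self.name = name
        self.handle = handle  # stands in for an LLM + tools

    def run(self, task: str) -> str:
        return self.handle(task)

class DriverAgent:
    """Routes each task to the sub-agent registered for its topic."""
    def __init__(self):
        self.team: dict[str, Agent] = {}

    def register(self, topic: str, agent: Agent):
        self.team[topic] = agent

    def solve(self, topic: str, task: str) -> str:
        agent = self.team[topic]
        return f"{agent.name}: {agent.run(task)}"

driver = DriverAgent()
driver.register("web", Agent("WebSearch", lambda t: f"results for {t}"))
```

In the video, the driver itself decides which sub-agent to delegate to; the explicit `topic` argument here is a simplification.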
LLM OS with gpt-4o
18K views · 21 days ago
Let's build the `LLM OS`, inspired by the great Andrej Karpathy, using the new GPT-4o model. Can LLMs be the CPU of a new operating system and solve problems using software 1.0 tools, internet browsing, knowledge retrieval, and communication with other LLMs? Code: git.new/llm-os | Phidata: git.new/phidata | Questions on Discord: phidata.link/discord
Build the LLM OS | Autonomous LLMs as the new Operating System
7K views · 21 days ago
Let's build the `LLM OS`, inspired by the great Andrej Karpathy. Can LLMs be the CPU of a new operating system and solve problems using software 1.0 tools, internet browsing, knowledge retrieval, and communication with other LLMs? Code: git.new/llm-os | Phidata: git.new/phidata | Questions on Discord: phidata.link/discord
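The "LLM as CPU" loop behind the LLM OS can be sketched as a tool-dispatch cycle: the model emits a tool call, the runtime executes it, and the result is fed back. This sketch assumes OpenAI-style function calling and stubs out the model step; the tool names are illustrative, not the video's exact tools.

```python
# Minimal sketch of one cycle of an LLM-OS-style loop: dispatching a
# model-issued call to a "software 1.0" tool. The model itself is not
# shown -- it would produce the `call` dict via function calling.
import subprocess

def run_shell(cmd: str) -> str:
    """A 'software 1.0' tool: run a shell command, return stdout."""
    return subprocess.run(cmd, shell=True, capture_output=True,
                          text=True).stdout.strip()

def calculator(expression: str) -> str:
    # eval() is acceptable for a demo; never do this on untrusted input.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"run_shell": run_shell, "calculator": calculator}

def execute_tool_call(call: dict) -> str:
    """Dispatch a model-issued tool call to the matching tool."""
    return TOOLS[call["name"]](call["arguments"])
```

A real LLM OS would loop: send the tool result back to the model and let it decide whether to call another tool or answer.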
Llama3 Autonomous RAG
3.4K views · 28 days ago
Let's build Autonomous RAG where Llama3 decides how to pull the data it needs. Code: git.new/groq-autorag | Phidata: git.new/phidata | Questions on Discord: phidata.link/discord. Here's the flow: the user asks a question; Llama3 decides whether to search its knowledge, memory, or the internet, or make an API call; Llama3 answers with the context.
Autonomous RAG | The next evolution of RAG AI Assistants
4.4K views · a month ago
Let's build an Autonomous RAG Assistant where we let the LLM automatically pull the data it needs. Code: git.new/auto-rag | Phidata: git.new/phidata | Questions on Discord: phidata.link/discord. Here's the flow: the user asks a question; the LLM decides whether to search its knowledge, memory, or the internet, or make an API call; the LLM answers with the context.
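The routing flow described above can be sketched with stubs. In the real assistant the LLM itself chooses the tool via function calling; here the choice is replaced by a keyword heuristic, and all four data sources are fake functions, so everything in this block is illustrative.

```python
# Sketch of the Autonomous RAG routing loop: pick a source, fetch
# context, answer with it. route_question() is a stand-in for the
# LLM's own tool choice.

def search_knowledge(q): return f"knowledge hit for: {q}"
def search_memory(q): return f"memory hit for: {q}"
def search_internet(q): return f"web result for: {q}"
def call_api(q): return f"api response for: {q}"

TOOLS = {
    "knowledge": search_knowledge,
    "memory": search_memory,
    "internet": search_internet,
    "api": call_api,
}

def route_question(question: str) -> str:
    # Keyword heuristic standing in for the LLM's decision.
    if "earlier" in question or "last time" in question:
        return "memory"
    if "latest" in question or "news" in question:
        return "internet"
    if "price" in question or "weather" in question:
        return "api"
    return "knowledge"

def answer(question: str) -> str:
    tool = route_question(question)
    context = TOOLS[tool](question)
    # Real system: feed `context` back to the LLM to compose the answer.
    return f"[{tool}] {context}"
```

For example, `answer("what is the latest news on AI?")` routes to the internet tool, while a plain factual question falls back to the knowledge base.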
Generate Investment Reports using Llama 3 & Groq | AI Assistant | Llama3
1.8K views · a month ago
Let's build an `Investment Researcher` powered by Llama3 on Groq. Our researcher will research the company, pull financials and news, find recommendations, and write an investment report. Fully open-source: git.new/groq-investor | Phidata code: git.new/phidata
Llama3 Research Assistant powered by Groq
1.5K views · a month ago
Build a superfast Research Assistant using Llama3 powered by Groq. Code: git.new/groq-researcher | Phidata: git.new/phidata | Discord: phidata.link/discord
Llama3 local RAG | Step by step chat with websites and PDFs
10K views · a month ago
Learn how to run Llama 3 locally and build a fully local RAG AI Application. Code: git.new/llama3 Phidata: git.new/phidata
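A local RAG app like this one talks to Llama 3 through Ollama's HTTP API (`POST /api/generate`). This sketch follows Ollama's documented request shape, but the context-stuffing is simplified to a prompt template and is not the video's exact implementation; it assumes Ollama is running on its default port.

```python
# Hypothetical sketch: answer a question over retrieved chunks using a
# local Llama 3 served by Ollama. Retrieval itself is out of scope --
# `chunks` would come from a vector database.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_rag_prompt(question: str, chunks: list[str]) -> str:
    """Stuff retrieved chunks into a grounded-answer prompt."""
    context = "\n\n".join(chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

def ask_llama3(question: str, chunks: list[str]) -> str:
    payload = json.dumps({
        "model": "llama3",
        "prompt": build_rag_prompt(question, chunks),
        "stream": False,
    }).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because everything runs on localhost, no data leaves the machine, which is the point of the "fully local" setup.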
Building New Worlds with Local LLMs
583 views · 3 months ago
Building new fictional worlds with OpenHermes running on Ollama. Github: github.com/phidatahq/phidata Discord: discord.gg/4MtYHHrgA8
Fully Local RAG AI App using OpenHermes and Ollama
2.8K views · 3 months ago
In this video we'll build a 100% local RAG AI App using OpenHermes and Ollama. 100% private, 100% fun. Github: github.com/phidatahq/phidata Code: github.com/phidatahq/phidata/tree/main/cookbook/local_rag
Build an AI App in 3 steps | Autonomous Assistants
3K views · 4 months ago
Learn how to build an AI App with a PDF Assistant, Multimodal Assistant and Website Assistant. Github: github.com/phidatahq/phidata Docs: docs.phidata.com/ai-app
Python Engineer
283 views · 4 months ago
Automate my day-to-day scripting using AI. Read more: www.ashpreetbedi.com/blog/python-ai
Data Analyst AI
255 views · 4 months ago
Watch a Data Analyst AI create tables, write and run SQL. Get your own: docs.phidata.com/blocks/agent/duckdb
You've done so many of these and you repeat the same script, instead of showing us interesting things like custom tools and using Llama through Ollama...
Hi, if you give me recommendations I'll make more vids. I did a few videos with Llama3 and Ollama but I'm happy to do more, e.g. czcams.com/video/-8NVHaKKNkM/video.html and Groq: czcams.com/video/ylFEoE42S2w/video.html. I actually just record what I'm working on nowadays, but I'm happy to take requests :)
I would love to see automatic task delegation, or rather agentic workflows, here. I saw in your documentation that it says you are working on it. Is this presently available to play with? The scenario: from a one-sentence query, task a set of agents to complete it.
Can we use any other model, like Claude or Llama?
Check the docs. Llama is available with different providers.
Yup, any model you'd like.
Amazing! The possibility of building intangible AI tools is endless!
snappy! love it!
I was able to get something similar running reliably using Llama 3 8B quantized to 4 bits. Not quite as advanced, it doesn't have any task delegation, but I don't see a need for it, so I doubt I'll add it. But I'm really happy I was able to get it to run on such a relatively "weak" model that can run locally.
This is great! Is there a way to add a lot of documents (hundreds or thousands) into the vector database so that we can have the agent query a large corpus?
Someone said it, I'll say it again. How are these different from conversational agents, which langchain showed how to build like a year ago... is it a nicer API? (Langchain at this point is like bloatware.) But what is the new pattern here? We've always had agents, where agent = LLM + knowledge + memory + tools. Otherwise, nice demo!!
Great video, right on money
Thanks for the video, extremely helpful to learn and build our own applications
What if I want to include RAG in this system? How can I do it?
Since this can see shell, can it also write or create files? Create or edit system settings if I chose to do so?
Very helpful and thanks for making it open source. Can you make a tutorial on implementing this in a Streamlit app? Also, can you explain how we can add descriptions to each PDF file, so when querying, the agent knows which PDF to look for the most accurate data?
Just wanted to let you know that the Discord link doesn't work! I'd love to join!
Why did you make these tutorials so short! There is so much to learn, please make them longer or make more videos :)
I'm very skeptical of all things around AI lately, but this is a really cool implementation/conceptualization of what a powerful LLM can do. I want to build one of these locally and see if I can make it an 'expert' at something niche and traditionally 'difficult' for a computer to do.
Thank you. This is so neat, I am dying to port our agent to this framework soon
Awesome! I could not find the Local RAG under cookbook?
Can you please comment on the advantages of phidata vs autogen vs crewai?
Great. Can you make customized assistants and customized tools with phidata? Any example?
Thanks for the video. Very helpful. I have a question: You mention openDB at 6:55. What is the concept? Can you share a link, please?
This is amazing! Is a LLAMA-3 with Groq version on your roadmap? I'm going to attempt to convert it, but don't know if I'm as skilled...
Phidata reminds me of early stages of Microsoft copilot... As a Self-taught Prompt Engineer, l can relate and join dots...
Had hands on Phidata AI Engineering framework, it simplifies the whole interconnected AI system structure
NotFoundError: Error code: 404 - {'error': {'message': 'The model `gpt-4o` does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}
I was thinking same, to integrate LLM into Linux.
What software do you use for the zoom in/out effect? It looks great.
Can I use a local open-source model instead of gpt-4o? I mean, how do I code it if I can?
The prompts might have to be tuned, I believe. I've tried multiple good repos which don't work as expected if not used with the same LLM they were built with.
If any individual agent needs to use memory, knowledge or the internet, how can it do so?
Is there any way we can create a team of agents with their own memory, knowledge and tools, and allow them to communicate with each other to get a task done?
Do local models with function-calling support work with it?
Yes, you can use local models, but sadly they are not currently good enough for this :(
Finally, a good channel.
wow!
Thanks! Can you make it work with an open LLM like Ollama?
Technically, LLM OS would work with a local Llama-3 model right? Since you do not need the "omni" multi-modal input.
Technically yes, but local models are probably not yet good enough to pull this off. Maybe I'll do a video testing local models with this.
Your vibe is amazing! Thanks a lot, bro!
appreciate your vibe too :)
do you just repost the same video every day?
How does Phidata compare to other frameworks, like CrewAI, Autogen, Agency Swarm, etc.? What makes it standout compared to the competition (strengths/weaknesses)? It would be great if you made a video about this.
Or just use the API, this seems a little pointless.
Hello. This is a great tool. I'm new to coding but logically I completely follow you. Would you be able to point me to instructions for how to add "text" as an option to go along with the correct "pdf" option you mentioned? Thank you.
Brilliant, thank you ❤
thank you :)
Sorry to ask this, but what did you do after selecting the code (1:49) to get that new window running, showing the response for the receipt? I have no clue what to do: did you use a shortcut? And what did you do after switching from gpt-4o to llama3 (2:23) to get a new response in the new window? Another shortcut? Can assistants communicate with each other? Like asking the Python assistant script writer to use the web-search assistant to do research, then write the code?
I think that an entire OS is too much for an LLM, but I expect to use one as a Linux shell in the near future.
Oh, I think this video answers my very recent question :D But Ollama is an extra step I don't see the need for.
Holy cow... well done. I assume this uses an online LLM? Or does it load up a local gguf like I'd like? Can you briefly explain where to make such a switch if needed?
Great job! I just tested the framework and it's fantastic. Just curious why the 1k string-length restriction on the tool instructions?
Thank you for sharing this amazing content!
Amazing job, thank you so much for sharing it!
Thanks for creating great content.
Where do you enter the API key? Apologies, I am pretty new to this.