Autogen Full Beginner Course
- Added July 8, 2024
- Welcome to a Full AutoGen Beginner Course! Whether you know everything there is to know about AI Agents or are a complete beginner, I believe there is something to learn here. We cover topics ranging from an introduction to AutoGen, to group chats, to a Reddit project at the end. There are many topics to go over, and also a few projects for you to do. You can pause the video when you get to them, then unpause to see how I did it.
By the time you are done with this course you will be able to understand what AutoGen is and create your own Multi-Agent workflow.
You can download the IDE I use and you can use the Conda Environment with the following download as well:
🥧 PyCharm Download: www.jetbrains.com/pycharm/dow...
🐍 Anaconda Download: www.anaconda.com/download
Bonus Project URL: / apps
AutoGen Beginner Course Code: github.com/tylerprogramming/a...
Nested Sequential Chat Video: • AutoGen Tutorial | Seq...
Don't forget to sign up for the FREE newsletter below to get updates on AI, what I'm working on, and struggles I've dealt with (which you may have too!):
=========================================================
📰 Newsletter Sign-up: bit.ly/tylerreed
=========================================================
Join me on Discord: / discord
Connect With Me:
🐦 X (twitter): @TylerReedAI
🙋♂️ GitHub: github.com/tylerprogramming/ai
📸 Instagram: TylerReedAI
💼 LinkedIn: / tylerreedai
📆 31 Day Challenge Playlist: • 31 Day Challenge AutoGen
🙋♂️ GitHub 31 Day Challenge: github.com/tylerprogramming/3...
🦙 Ollama Download: ollama.com/
🤖 LM Studio Download: lmstudio.ai/
The paper: arxiv.org/abs/2403.04783
📖 Chapters:
00:00:00 Welcome to the Course!
00:00:46 Autogen Introduction
00:02:59 Download PyCharm
00:04:17 01 - two way chat
00:11:51 01.1 - human interaction
00:13:30 02 - group chat
00:19:38 Project #1 - Snake
00:22:25 04 - sequential chat
00:28:50 05 - nested chat
00:33:54 06 - logging
00:40:16 07 - vision agent
00:46:57 openai vs. local
00:47:31 09 - lm studio
00:53:00 10 - function calling
01:04:48 brief intermission
01:05:15 11 - tools
01:14:51 12 - create images!
01:17:54 Project #2 - autogen + img + save file
01:20:16 Bonus Reddit Project
💬 If you have any issues, let me know in the comments and I will help you out!
Hey! With this course from beginning to end, you will become familiar with AutoGen and be able to create your own AI agent workflow. Like, subscribe and comment 😀 Have a good day coding!
Oh no coding again?
I love that your profile picture features both you and your wife. I'm subscribing not only because you're a great teacher but also because you proudly display the love of your life. 🥳❤🥳❤🥳❤🥳❤🥳❤
@@user-wy6hg9ej6f Thank you! I really appreciate your words! Yeah, it wouldn't be possible without her...that's the truth lol
You earned a new subscriber and loyal follower, gentleman. Great speech modulation and clarity.
Thank you so much appreciate this 🙌
This course is fantastic, thank you!!
Thank you Tyler, Awesome as usual!
Thank you!!
I usually do not comment, but I am commenting because you are just awesome. I was confused for the last 3 days about LangGraph vs AutoGen, but you have cleared up all my doubts with this video, thanks.
Hey thank you so much! I'm so glad to help clear things up 👍
A wonderful beginner's tutorial. Thanks for providing the code as we can copy paste and test it quickly. Appreciate it.
How did you get it working? I get so many issues with it running locally.
@AndyPandy-ni1io what issues are you having?
@@TylerReedAI The main thing is when I make it from scratch but want it to run a local LLM: do I still need the config.json, or do I just put the equivalent API stuff in main.py?
Sounds fantastic. I will take it as soon as I can.
Awesome, thank you 👍
Great summary. I've been looking for more examples of Autogen. I'd love to see a comparison of CrewAI vs Autogen and the code behind the test.
Great explanation and summary of autogen !
Thank you 👍
This is truly a fantastic video. I have been trying to learn CrewAI and I just get crap results every time, plus it's a lot more code. I think I am settling on AutoGen, and your video has been a huge help. Thank you, you just earned another sub :)
Tyler, excellent video. I learned a lot. God bless you.
Thank you, glad it was helpful!
OK so sorry for the caps, mastered it now, THANKS :) The best thing you can do is talk a little slower though haha, it makes it hard to follow when it's all new. EDIT: Just want to say, anyone not getting this at first, do it a few times and suddenly the penny will drop. Just focus on what he's writing and get the structure understood, then it's not so hard anymore :)
Okay noted!
legend
Thank you 🙏
Thank you for these videos. I would like to ask that a YouTuber do a video like this that goes straight into using a local LLM, so that those of us without extensive knowledge don't have to piece it together later and hope we have it all right. Because quite frankly, anyone looking to set this stuff up is more likely headed to the free version of most, if not all, of this.
You’re welcome! And I hear ya and understand. It seemed it might be easier to get something going with OpenAI but I see your point!
This is excellent! Thank you for your efforts in bringing this tutorial out. Can I ask, can we add PDFs to agents? Like, ask agents to digest PDFs at particular points in the workflow and contribute to the discussion based on what they learn there?
Hey thank you, and absolutely you can. This would be using RAG. I will have a video soon on how to do just this!
I don't see the reddit url you mention at 1:20:28. I tried to see the url, but I can't make it out when I zoomed in on the video.
fixed it, thank you!
Thanks Tyler. I see you suggested going with the oai config instead of a .env, and they both appear to do the same thing. What's the difference?
Hey, yeah there really is no functional difference, it's just how they get the properties. You could even just import os and then use os.environ["OPENAI_API_KEY"], something like that, and you would have that set in your configuration via the environment. The oai config JSON is just the way I like to do it.
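To make the two approaches from the reply above concrete, here is a small, self-contained sketch (no AutoGen import needed): it builds the same config_list once from an environment variable and once from an OAI_CONFIG_LIST-style JSON file. The model name and key are placeholders.

```python
# Two equivalent ways to give AutoGen its model configuration.
# The key below is a placeholder; never hard-code a real one.
import json
import os
import tempfile

# Way 1: build the config_list from an environment variable.
os.environ.setdefault("OPENAI_API_KEY", "sk-placeholder")  # demo value only
config_from_env = [
    {"model": "gpt-3.5-turbo", "api_key": os.environ["OPENAI_API_KEY"]}
]

# Way 2: load the same structure from an OAI_CONFIG_LIST-style JSON file.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(config_from_env, f)
    path = f.name
with open(path) as f:
    config_from_file = json.load(f)

# Both routes produce the same list you would pass in llm_config.
print(config_from_env == config_from_file)  # True
```

Either result is what ends up in `llm_config={"config_list": ...}`; the file route just keeps the key out of your source code.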
Thanks for sharing this video, it helps me a a lot.
I have one question: is it possible to dynamically change the base prompt (system_message)?
By "dynamically" I mean I would like to know how to change system_message during the conversation.
Hey I'm glad it could help! And I will look into that. I can think of just adding context in each iteration to shape the output, but you would need to set the human_input_mode="ALWAYS".
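One way to approximate a "dynamic" system message, along the lines of the reply above, is to rebuild the prompt with accumulated context before each turn. The build_system_message helper below is hypothetical, not part of AutoGen. (Recent AutoGen versions also expose a ConversableAgent.update_system_message method; check whether your installed version has it.)

```python
# Hypothetical helper: rebuild the system message with extra context
# gathered during the conversation, then pass the result back to the agent.
BASE_PROMPT = "You are a helpful assistant."

def build_system_message(base: str, extra_context: list[str]) -> str:
    """Append accumulated context lines to the base prompt."""
    if not extra_context:
        return base
    bullets = "\n".join(f"- {c}" for c in extra_context)
    return f"{base}\nAdditional context:\n{bullets}"

context: list[str] = []
print(build_system_message(BASE_PROMPT, context))  # just the base prompt

# After a turn reveals a preference, fold it into the next prompt:
context.append("The user prefers short answers.")
print(build_system_message(BASE_PROMPT, context))
```

The rebuilt string would then be handed to the agent (e.g. via update_system_message, if available) before the next turn.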
Amazing tutorial, very clear and packed full!
Thanks for this awesome tutorial.
Looks like llama70 has some weird issue when trying the sequential chat. !!
You are welcome! Oh really? That's good to know, thank you for the testing! Hopefully they get better lol
Any way to use AutoGen to login on the website and perform a job?
I mean the functionality where I can describe with the text to login on the specific website with my credentials and do specific tasks, without specifying manually CSS or XPath elements and without writing (or generating) code for Selenium or similar tools?
Hey, I don't think you can do that with their native tools just yet; however, I know they are working hard on making things like this happen (as of last week). They mentioned it in a Discord call they had.
I saw a video that had little game devs working together. I want to make a game in Unreal Engine using little AI agents as helpers, but I'm not entirely convinced that AI agents are all the way there yet(?) What all can be done in this regard, to your knowledge? Like, I need one agent to interact with me, as my liaison to the other agents, to help prioritize which agents operate / do tasks; one to give screen reading / keyboard / mouse control to, to operate some specific programs (Unreal, browser, etc.); one to scrape websites for data; one to compile that data into tables; one that can learn Unreal Engine; one that codes in multiple languages; one for front end; one for back end; one to operate local AI image generation on my workstation to make 2D pictures for inventory item sprites, and UI design iterations for me to pick through, etc.
Thank you for this excellent introduction! I have one question: I would like to have two agents performing an interview with a human on a certain topic. The first agent should ask the questions, while the second agent should reflect on their understanding of the topic and decide whether additional messages are needed. This seems like a good case for a Nested Chat. However, the nested chat seems to be bound to the amount of turns you define in the beginning. Is there a way to have the nested agent decide when to finish the interaction?
Hey I'm glad it has helped and thank you! So yeah you can determine how many max_turns a chat could have in the nested chat. It is sequential, but I guess for that...you may just need to say something in the prompt of each agent. For instance, AssistantAgent could say, ".... When task is done, reply TERMINATE". Then the UserAgent checks for that in the termination message.
res = user.initiate_chats(
    [
        {"recipient": assistant_1, "message": tasks[0], "max_turns": 1, "summary_method": "last_msg"},
        {"recipient": assistant_2, "message": tasks[1]},
    ]
)
Here, in this example, you can increase the max turns where the user will initiate a chat with another assistant. I get what you're saying, and I think the answer is...No. Like not exactly. The closest would be with the prompting. Hope this helps, if it didn't let me know!
@@TylerReedAI Thank you very much, I'll give it a try!
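The TERMINATE convention described in the reply above is usually wired up through an is_termination_msg callable passed to the agent. Here is the predicate on its own (no LLM calls), assuming the message arrives as a dict with a "content" field, which is how classic AutoGen passes it:

```python
# A termination check like the one described above: the agent stops
# replying once the other side's message ends with "TERMINATE".
def is_termination_msg(message: dict) -> bool:
    # "content" may be missing or None, so fall back to an empty string.
    content = (message.get("content") or "").strip()
    return content.endswith("TERMINATE")

print(is_termination_msg({"content": "All done. TERMINATE"}))  # True
print(is_termination_msg({"content": "Still working..."}))     # False
```

The callable would be passed as `is_termination_msg=is_termination_msg` when constructing the receiving agent, so the nested exchange can end on content rather than only on max_turns.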
@tylerreedAI: I have a use case at work where I need to load an xlsx file with dealer data and then, based on a user question with year and month for a particular dealer, calculate the 12-month previous rolling average (including the current month) on demand for each part of that dealer, and the previous month's 12-month rolling average, then find the difference between the current and previous rolling averages on demand and come up with a percentage difference. If the difference is 10% or more, create a new file with those entries.
I have it kind of working with a 3-agent group chat for all dealer data, but when I try to add filters like year, month, dealer, or part number, it falls apart.
Would love to take your input and get it fixed, if you are willing to help.
This would prove to your subscribers that it can work in a real scenario rather than just hello worlds.
Thanks
How can I do it in AutoGen Studio? I am in the conda environment but it doesn't run on my desktop.
So once you have it installed with pip install autogenstudio, run it like this:
autogenstudio ui --port 8080
Then it will show a localhost URL in the terminal.
Hey Tyler! Amazing video
I am non-technical, just wanted to know if I can use Jupyter Notebook instead of PyCharm. If yes, do I need to create a separate JSON file to call the OpenAI API key like you did for the two way chat?
Thanks
Thank you! And yes you can absolutely use jupyter notebook instead of pycharm. You do not need to create a separate file, you can just add the model and api key to the config list separately. If you need help, email me and I can give you a sample code! tylerreedytlearning@gmail.com
Is function calling the same as adding skills to agents? If not, can you make a video on adding skills to agents in AutoGen?
yeah its the same idea, give them tools to execute some actions. But yes, I plan on making videos on adding skills with autogen studio.
Well, I wonder: the first program runs with no error, but it doesn't create the coding folder, and it doesn't create or run the files where I would see the chart, not even if I create the folder beforehand, and not in mode "NEVER" either. So it only produces console output, but not the resulting scripts. Any idea? I tried with Python 3.10.
I also don't see the 3 dots in the output log...
Found the reason: after creating the project, I have to select conda as the environment in the third tab and use Python 3.10.11. It seems there is a problem with my 3.10.6 installed for Automatic1111.
Sorry for late reply, I'm glad you got it figured out. Yeah so this is why I'm soon going to be creating docker images so everybody can have the same workflow with same settings I have. Then we won't have issues like this.
Can I request the code written in Next.js (TypeScript) or .NET (C#), or does it strictly work with Python?
you are in luck! They just added .net support!
Hi - While using functions it is answering 2+2 = 4, so how is that different from tools? I am using your exact code from your git.
Tools and functions are very similar; at the end of the day, they're just Python functions. A couple of things though: I think one of the differences is how they interact with OpenAI. Tools are a little more diverse, and tbh, I prefer them over function calling. You can also assign a bunch of tools to an agent and it will decide which is best. A function must be called when assigned to an agent.
Hi, so I want to do this with a locally run LLM. How do I change

[
    {
        "model": "gpt-3.5-turbo",
        "api_key": "sk-proj-1111"
    }
]

to run with, say, LM Studio with Llama 3?
- model: the actual model from LM Studio you are using
- api_key: "lm-studio"
- base_url: the URL found in LM Studio as well
There is a snippet of Python code shown when you start the local server; both the model and the base url can be found there.
import autogen

def main():
    llama3 = {
        "config_list": [
            {
                "model": "Meta-Llama-3-8B-Instruct-GGUF",
                "base_url": "http://localhost:1234/v1",
                "api_key": "lm-studio",
            },
        ],
        "cache_seed": None,
        "max_tokens": 1024,
    }

    phil = autogen.ConversableAgent(
        "Phil (Phi-2)",
        llm_config=llama3,
        system_message="Your name is Phil and you are a comedian.",
    )

    # Create the agent that represents the user in the conversation.
    user_proxy = autogen.UserProxyAgent(
        "user_proxy",
        code_execution_config=False,
        default_auto_reply="...",
        human_input_mode="NEVER",
    )

    user_proxy.initiate_chat(phil, message="Tell me a joke!")

if __name__ == "__main__":
    main()
Hi Tyler I was following along with your repo and it vanished mid tutorial. Any ideas? Great work btw
Hey thank you, what do you mean…like the repo doesn’t exist?
@@TylerReedAI I was following along with your /autogen_beginner_course repo and I refreshed at one point and got a 404. It's gone.
I see, I had a different one and migrated because of some issue, I apologize. Try this: github.com/tylerprogramming/autogen-beginner-course
Hi, I commented on your other SaaS customer survey video. I am using that code of yours, and I keep getting "openai.BadRequestError: Error code: 400 - {'error': "'messages' array must only contain objects with a 'content' field that is not empty."}" even though I have default_auto_reply="...".
A different question: when I followed the exact path of your SaaS customer survey video, it ran the code once and even generated an output, but a couple of things I see are:
- I can't see all those requirement interactions between the agents,
- and once the execution is done with code generation, the last thing I get is the same error mentioned above.
PLS HELP
Oh I see, you also have the default auto reply already. Are you using lm studio or ollama for local integration? Or something else?
So, to get all the interactions, just set the human_input_mode="ALWAYS" so you can be a part of the conversation. And you are still using Mistral-7B for this?
when to use register_for_llm and register_for_execution ?
so, these are decorators to be used at the top of the python method. This will let the framework know this is a tool to be used by an agent!
@@TylerReedAI is it possible to coordinate groupchat ? like I want specific agent to be called
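To illustrate the split the reply above describes, here is a conceptual mimic (not the real AutoGen API) of the register_for_llm / register_for_execution pattern: one registry advertises a tool to the model, the other maps the tool name to the callable that actually runs it.

```python
# Conceptual mimic of AutoGen's two-sided tool registration.
# In real AutoGen these are decorators on the assistant (register_for_llm)
# and the user proxy (register_for_execution); here they are plain dicts
# so the idea can be seen in isolation.
llm_tools = {}   # what the LLM "sees": tool name -> description
executors = {}   # what actually runs: tool name -> callable

def register_for_llm(description):
    def wrap(fn):
        llm_tools[fn.__name__] = description
        return fn
    return wrap

def register_for_execution(fn):
    executors[fn.__name__] = fn
    return fn

@register_for_llm("Add two integers.")
@register_for_execution
def add(a: int, b: int) -> int:
    return a + b

# The LLM picks a tool by name; the executor side runs it.
print(executors["add"](2, 2))   # 4
print(llm_tools["add"])         # Add two integers.
```

The point of the split is that the agent which proposes a tool call and the agent which executes it are different objects, so each needs its own registration.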
I am not getting the .py file at 10:54. The files are not saving for me. I am on Mac too.
What model are you using? You using openai or a local model?
@@TylerReedAI openAI api :)
try gpt-4o, just tried it and it created the image and saved the code, let me know if that worked. Sometimes 3.5-turbo is weird
I'm at 10:45 in your tutorial and my code just popped up a META & TESLA stock price graph! Just one issue: it was a success with Assistant sending "TERMINATE", but then "user" doesn't stop sending empty messages to Assistant, and Assistant keeps responding "good bye" / "feel free to ask more questions"... in an infinite loop; CTRL-C helped me get out of there! (^_^)
I'm glad you got the graph! Yeah sorry, it happens, but I will try to update the code with better termination replies and prompts so you don't run into this issue nearly as often. But yeah ctrl + c gets you out of it :D
where can i find the full code?
Hey it should be in a link to my GitHub in the description! Let me know if it works or not
When I ran the code, I got the following repeated about 12 or more times. Maybe we need to limit replies?
Assistant (to user):
If you have any more questions or need assistance in the future, please feel free to ask. Have a great day! Goodbye!
TERMINATE
--------------------------------------------------------------------------------
user (to Assistant):
I also did not get the same results you did, but I now think I know why. Since I have Docker running, I set "use_docker" to True. When I set "use_docker" to False, I get results closer to yours.
I was thinking I needed to use the docker executor, but that causes other issues. You might want to try using docker and see if there are any differences. If so, it might be the subject of another video.
I had more consistent results when I set the temperature to 0, and set use_docker to false.
Gotcha, we talked in discord but yeah it's really interesting to have differences like this.
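A sketch of the settings the thread above converged on: temperature 0 for more reproducible replies, and use_docker set to False. The key names follow the classic autogen llm_config / code_execution_config conventions, so verify them against your installed version; the model and key are placeholders.

```python
# Settings the commenters above found most reliable for this course's
# examples (assumed classic-autogen config shapes; key is a placeholder).
llm_config = {
    "config_list": [{"model": "gpt-3.5-turbo", "api_key": "sk-placeholder"}],
    "temperature": 0,      # fewer run-to-run differences in agent replies
    "cache_seed": None,    # disable response caching between runs
}

code_execution_config = {
    "work_dir": "coding",  # where generated scripts are written
    "use_docker": False,   # run generated code on the host, not in Docker
}

print(llm_config["temperature"], code_execution_config["use_docker"])
```

These dicts would be passed to the assistant (llm_config) and the user proxy (code_execution_config) respectively.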
After like 3 weeks of fiddling around with AI, the way to go is to fine-tune the model itself directly to create agents. There's no need for any tool. The AI itself has it all already.
😂😂
How?
Using llama 3, that is a viable strategy. However, consider that the AgentOptimizer autogen workflow from Zhang and Zhang allows you to get the effect while still using the top of the line models.
gpt-4-turbo is currently $30 per million tokens. Until the SLM agent swarm gets traction, this is going to be the best option.
Nope