AutoGroq beta v5: One-Click AutoGen and CrewAI agents
- Added May 30, 2024
- Over 4700 developers are using the free online demo:
autogroq.streamlit.app/
...to create teams of agents in seconds, thanks to the blinding speed of Groq™.
And, we're screaming past 700 stars and 250 forks
(that's like... over 900 sporks!) on
GitHub: github.com/jgravelle/AutoGroq
Grab your free Groq developer's API key at:
console.groq.com/keys
How to import agents/workflows into AutoGen™:
• AutoGroqBetaV2 - Put A...
AutoGen: microsoft.github.io/autogen/
CrewAI: github.com/joaomdmoura/crewAI
0:00 Intro, in my real voice (by request)
0:15 Why AutoGroq?
0:20 The problem is: we don't know what the problem is
0:30 Real-world example
0:45 What AutoGroq does
1:05 Getting started
1:35 Multi-model support
1:40 A new feature!
1:55 Making stuff
2:20 Instant agents!
2:25 Exporting to AutoGen or CrewAI
3:00 Test-driving your agents
3:25 It's even faster than it looks!
3:40 CSV support... sort of
4:00 Our Three Display Modes
4:15 1) Most Recent Comment
4:25 2) Whiteboard
4:35 3) Discussion History
4:55 Another new feature!
5:05 Editing your agents
5:15 Self-improving agents!
5:30 Where to GIT your own AutoGroq
5:45 Send in the clones
6:00 The requirements are required
6:25 Like a version
6:35 VS Code
6:50 Environment variables
7:00 Thanks! You can go now...
The great thing about AI voices is that truly knowledgeable people can create content without having to deal with the glass ceiling of being "an influencer". People criticizing this should just go watch the next beast video instead.
My voice via AI seems like a happy compromise...
I've recently jumped into AI and programming, using VS Code, APIs, and such. I set up CrewAI but never really bothered to add agents and do all the extras.
Thank you for this, makes all the difference, and it's on the 6th version already.
I appreciate you sir. You are a miracle worker. Thank you.
I hope to contribute to the open source community soon.
That's very kind of you. The world looks forward to your software...! -jjg
Something wholesome about this project! ❤ Sorry to hear that you are suffering from Parkinson's disease. Your use of ElevenLabs and the way you present are fun and clear! 👌
Lastly, this project is truly the missing link to both Autogen and CrewAI! Nailed it on the fact you need to build the team before building the solution.
Now I wish CrewAI and Autogen had the ability to have stand ups to evaluate how the work is going after deployment of the team to do the work.
Good catch! I shouldn't assume everybody's up to speed on every piece of AI software out there. Not everybody eats and breathes this stuff.
I mean... WE do. Just not everybody... 😎
@@jjgravelle I agree, but I do love eating and breathing this stuff. 😁 This is a great and very creative time we are in! Keep doing what you are doing, it's inspiring!
@@jjgravelle Let's be honest, the only people watching these kinds of videos are us nerds, hungry for new ideas 😆 My coworkers regularly say things like "I just tried out ChatGPT last night for the first time" and I'm like, my guy, where have you been for the last 2 years??? Hell, last week I had a convo with our reporting team and they didn't remember what Hugging Face was... "I'm not sure if this is work safe"... My brother in Christ, what are we doing here?? 😅 I explained that basically it's where you find AI models, and even then only one guy was like "oh yeah"... and these are people who work AI-adjacent in a large IT company lol smdh
Excellent video! I was playing with AutoGen for the first time earlier today, and you correctly listed its shortcomings. I am looking forward to trying AutoGroq.
And as far as I am concerned, you can use whatever voice you want. I am here for the content and it was superb!
Thanks! It IS a work in progress...
This is what AutoGen should be, bravo friend. One request: ability to view the Skills required for each Agent.
Thanks! Skills are a tall order, but one I plan to fill...
I wish all non-native English speaking creators would use an AI voice because most struggle with the "th" sound.
I'm in Wisconsin. Dat's very true, hey...
Sick work.... I know you're trying to cater for a number of agentic platforms, but I think I'd be more tempted to fork one of them and make autogroq the front end for the whole process. THAT, or make the whole agentic process fireable from AutoGroq once you have your agents. The reason I say this, is simply because it's not a stretch to think that AutoGen or CrewAI could take your idea and run with it on their own platforms.
Ripping off other people's ideas is the sincerest form of flattery.
I think that's how that saying goes. Anyway, thanks...!
@@jjgravelle haha, indeed ;)
This is a great tool, thank you for your work
No sweat! Thanks for clicking and liking and sharing and telling all your friends and naming a kid after me and...
Amazing! Thank you so much for sharing... It doesn't make sense indeed: first the team, then the problem. I'm used to working with CrewAI, but with this... I have to try AutoGen.
Another user pointed me toward CrewAI's YAML (their 'golden path') approach. I'll definitely be looking into generating those types of files as well.
Thanks...!
Awesome video, short and to the point. Just how I like it. Subscribed
Thanks! I'll try not to let you down... 😎
You have really done great work, and you have my respect and appreciation ⚡⚡⚡🙏🙏👍👍👌👌. This is Mohamed from Dubai, and for sure I will try it out in my work and projects. I may be no one, but for your knowledge, yours is the first video I have commented on in a long, long time, and I really thank you for this great work.
أنت لطيف جدا يا صديقي. شكرًا... (You're very kind, my friend. Thanks...)
@@jjgravelle thanks for the effort of writing in Arabic :)
Best leveraging of the groq inference speed is to stack the iterative reasoning deeper... which this project does in spades
Thanks for the recognition! Means a lot...
Congratulations bro and thanks for your work. This autoGROQs! Kudos from México
¡Bienvenido y gracias hermano...! (Welcome, and thanks, brother...!)
Can you use this with a local llm instead of groq?
I imagine you could but it would crawl compared to Groq™...
Running a bunch of agents like that would require a gpu farm or ages of time.
@@jjgravelle Wouldn't that depend on the models used and the hardware? I imagine a handful of 7B agents on a HEDT would be feasible.
@@BenDavis78 Yes it would. And my '98 Mustang could do Mach 23 if I put it in the cargo bay of the Space Shuttle... 😎
@@BenDavis78 I haven't tried AutoGroq, but I use both a 128GB M3 as well as 2x 3090 machines running projects like gpt-researcher, which also spawns a bunch of agents. Speed is acceptable for many tasks. Ollama and LM Studio both (I believe) support request queuing as well as parallel requests. Speed boosts (like Flash Attention) help and seem to roll out here and there.
We love your work. I'm teaching it to hundreds of people at my company; keep it up.
My work loves you, too! Thanks... 😎
Good narration. Good explanation, especially for people with prior experience who want to start testing things with agents. I'm downloading and testing your tool now!
Awesome. And thanks...!
Thank you! I had not heard about this anywhere else!
Well I hope some day you DO hear about it in other places! Thanks...
Portlandia Skit:
Fred: I thought you said it was one click.
JG: It's one click per 30 seconds.
Steven Wright, seeing a guy lock up the 7/11:
"I thought you were open 24 hours."
"Not in a ROW..."
Your commentary was amusing, thanks! 😂👍
Thank YOU...!
Great work 👏
Tack så mycket (Thanks so much)... 😎
Greetings from the south of France. Well done my man. Great work.
That's nowhere near the Canadian part of France, is it? 😎 Thanks...!
Well done. That's the right approach to DEV teams
Thanks for weighing in...!
Many thanks. Elevenlabs voice was just fine. No issues here. Thanks again.
Thank YOU...
I am learning a lot from you. Deeply grateful and very appreciative. Thank you!
Nice of you to say! Thanks...
@jjgravelle can you explain this to me? I am confused.
This doesn't run the agents themselves? I have to import them into AutoGen or CrewAI?
You don't HAVE to, but I'd recommend it. AutoGroq™ started out as just an agent generator: Enter your request, get a team with stubbed out placeholders for skills and tools. Then we added: "Click on an agent to talk to it and test it." Then: "Let the next agent respond to the previous agent" and on and on it goes, adding more bells and whistles all the time. Thanks for asking...!
That's amazing, Keep the good work
That's the sort of feedback that makes it all worthwhile. Thanks...!
Yes, it works, and I trust your videos. Thank you.
Thank YOU...!
Exactly what I was looking for about 9 months back, and now I found it. 😁
Thank You for your work.
BTW, do you know if it is possible to send a sequential series of prompts in the agentic frameworks? To let the LLMs reflect on the previous answer before attempting to perform the next step?
No sweat! Sorry it took so long...
@@jjgravelle 😄
Awesome video - thank you so much for sharing this.
Glad you enjoyed it! Thanks so much! Sorry for the late reply. The response has been overwhelming, and some comments have slipped through the cracks...
Thank you. This saves me a lot of time. I love you so much.
I'm very lovable. 😎 Thanks...!
Worked perfectly! Thank you for your hard work! Much appreciated my friend
You've made my day, sir. Thanks...
Narration improved! Thanks man cool app
I appreciate you taking the time to say so. Thanks...!
I love that you include the code in a PDF to provide as context; everyone should. Really great project. What features are on your roadmap?
Thanks! I need to do a major code cleanup. There's a lot of redundancy.
Like I alluded to in the video, I like the potential of the whiteboard as a code testing and validation component. That's a big piece to chew on, though.
The things Groq's speed make possible are worth pursuing: real-time retrospection is probably next...
Is it possible to upload txt and md files too? It would be useful to organize everything within user defined project folders (agents, uploaded and generated files) this way the project can be revisited and expanded upon. The project has huge potential!
@@mrmatari I tell my boss "These days, the answer to any questions that start with 'Is it possible...' is almost always going to be 'yes'."
Thanks. I'll add it to the TO DO:
Merge similar functions:
display_discussion_and_whiteboard() and display_discussion_modal() have overlapping functionality. Consider combining them into a single function.
create_agent_data() is defined in both file_utils.py and ui_utils.py. Consolidate them into a single function in one of the files and update the references accordingly.
Remove unused code:
extract_code_from_response() is defined twice, once in api_utils.py and once in ui_utils.py. Identify which one is actually being used and remove the unused definition.
custom_button() and agent_button() in custom_button.py don't seem to be used anywhere. Consider removing them if they are indeed unused.
Refactor duplicate code:
The code for creating the ZIP files in zip_files_in_memory() has similar logic for Autogen and CrewAI. Consider extracting the common parts into a separate function to avoid duplication.
The code for sending requests to the Groq API is repeated in multiple places (send_request_to_groq_api(), get_agents_from_text(), rephrase_prompt()). Extract the common parts into a single function to reduce duplication.
Simplify complex functions:
display_agents() is quite lengthy. Consider breaking it down into smaller, more focused functions for better readability and maintainability.
process_agent_interaction() also contains a lot of code. Consider splitting it into separate functions for retrieving agent information, constructing the request, sending the request, and updating the discussion and whiteboard.
Improve error handling:
In get_agents_from_text() and rephrase_prompt(), the error handling code is repeated. Consider creating a custom exception class or a utility function to handle errors consistently across the codebase.
Rename functions and variables for clarity:
Some function and variable names could be more descriptive. For example, handle_begin() could be renamed to handle_user_request() to better reflect its purpose.
display_user_input() could be renamed to get_user_input() since it retrieves user input rather than displaying it.
Add comments and docstrings:
While the code has some comments, adding more detailed comments and docstrings to explain the purpose and functionality of each function would improve code readability and maintainability.
Organize imports:
Group the imports in each file based on their origin (standard library, third-party libraries, local modules) and order them alphabetically within each group for better organization.
Remove unnecessary prints and comments:
There are many print statements throughout the code, likely used for debugging. Remove the ones that are no longer needed.
Remove any commented-out code that is no longer relevant.
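As a concrete illustration of the "extract the common parts" item in that list, the shared Groq request helper might be sketched like this (hypothetical: `build_groq_payload` and `parse_groq_response` are illustrative names, not functions from the repo, and the endpoint and model strings are assumptions):

```python
# OpenAI-compatible chat endpoint that Groq exposes (assumed here)
GROQ_API_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_groq_payload(prompt, model="llama3-70b-8192", temperature=0.2):
    """One payload builder that send_request_to_groq_api(),
    get_agents_from_text(), and rephrase_prompt() could all share
    instead of each duplicating this dict literal."""
    return {
        "model": model,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }

def parse_groq_response(body):
    """One place to unwrap the completion text (and, later, the one
    place to handle errors consistently, per the error-handling item
    in the list above)."""
    return body["choices"][0]["message"]["content"]
```

Each caller would then supply only its own prompt and parameters, with the request plumbing and error handling living in one spot.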
Good work guys
Ndatenda (Thank you)...! 😎
Wow, amazing product you've got going here! I was wondering: can I run AutoGen or CrewAI agents with Groq? Or do I have to have a paid OpenAI account for this? Thanks
You can. I've gotten Autogen to work with Groq™ before. You could also run AutoGen or CrewAI locally using Ollama or LM Studio. You'd have hundreds of free LLM options that way. Thanks...!
Very interesting application! I am experimenting with AutoGroq to write stories. Oftentimes five or more agents are generated, which I can click on, and indeed stories are generated!
But what is not clear to me is the sequence in which to invoke these agents! This is also an issue within AutoGen and CrewAI. But what about the sequence in AutoGroq? Could that be automated in some way? Or a user choice between some options?
Thanks! In AutoGroq™, you can simply tap on the agents in whatever order you'd like them to interact. I do plan to automate round-robin and automatic conversations, but be advised: we're going to get 429 limited by Groq™ for exceeding their TOS...
Absolutely brilliant. AI agents are the future.
Preachin' to the choir! Thanks...
Hey thank you for this nice project, I love playing around with it - but after a few tests, I'm getting blocked by groq and need to wait an eternity...
is there any chance to use another vendor than groq? Want to use Claude or a local model....
A local model, unless it was small (like Phi, maybe) would take forever. Understand that, when Groq sploots out nine agents in a few seconds, that represents like a dozen round-trips to the LLM / API. Locally, if the prompt re-engineering took a minute, then each agent took a minute... well, it gets ugly.
Since GPT-4o has a free tier, I might play around with that. Thanks...!
Interesting. I'll wait for local llm support
Fair enough...!
I have been testing this and it is quite impressive! It has some minor flaws, but all in all there is huge potential in it!
You're overly-generous. There are some major flaws, but we're getting there. Thanks...!
@@jjgravelle Thank you very much for uploading this! This is what I was looking for!
Amazing project. I'll be messing around a lot with it, as this summer I will be working on many things since I'm managing the AI society at my university.
Maybe some day AI will manage the AI society and free up your weekends! 😎 Thanks...
Great stuff!
Thanks...!
This is amazing! Thanks 🙏
Thanks for watching! Have fun...
Great work, cool idea!
Thanks...!
This might be one of the few times i've commented on youtube, but well done. This is one of the most clearly articulated videos I've watched in a long time. Keep up the great work!
You're very kind. Thanks...!
good job brother. god bless and thank you
I appreciate it. Thanks...!
AMAZING WORK!!!!!!!!!!!!!!!!!!!!
THANKS...!!!!!!!!!!!!!!!!!!
For importation into Autogen Studio do the tools that the jsons reference exist somewhere or is it calling them and then I would have to write those tools based on what I wanted them to do? Because it is referencing tools that I don't have. The project is great btw! Stuff like this lowers the bar to entry for crewai and autogen.
Right now, they are little more than high-level placeholders. The 'tools' and 'skills' components are major efforts in themselves. It's on the 'to do' list though, definitely. Thanks...!
@@jjgravelle No worries, I'm liking the project so far and enjoying using it; it definitely simplifies things. Great work! Thanks! You piqued my interest in looking at ElevenLabs for voice generation again. 😀
I'm thinking of building a local machine to do most things offline without API restrictions. Do I need an RTX 4090?
I'm running a 4060 on a Costco (Lenovo) laptop. If you don't want it to slow down too much, go with the best GPU you can afford and the smallest LLM that will suit your needs...
This program's fantastic, keep it up. Would love to be able to run it off an LmStudio server with the ability to call different llm's from the server for different Autogroq agents
Thanks! I get this sort of request a lot, and I may have to do it just to show people how painfully slow a non-Groq™ experience would be...
@@jjgravelle Definitely it would be a lot slower compared to Groq, but everyone has different hardware. It doesn't take much to inference locally faster than chat gpt, and most are willing to suffer certain tradeoffs in exchange for the benefits of running purely local
Do you know a way to use AutoGroq and then give it a function as a tool, e.g., using GPT4All to process documents? This would be great, as AutoGroq could be used to generate iterative loops (a research agent using GPT4All, a critic agent reflecting on the GPT4All output, a prompt agent that generates the prompt for the next iteration, and a planning agent that coordinates the three other agents).
That observation goes to the heart of this effort. Google arose when a couple guys didn't let the obstacle of storage space stop them. Phones improved exponentially when toll calling went away. We have to anticipate real-time, no (or low)- cost AI compute, and build tomorrow's software accordingly, today. Thanks for the feedback...
Good stuff! 👍
Thanks...!
Out of interest, have you looked into vrsen's agency swarm framework?
Why, yes. Yes I have...
I confess I can't keep up. Great video.
If a doofus like me can do this, anybody can. 😎 Thanks...
@@jjgravelle I've been in IT since the '90s. Those who do special work see it as normal, unable to see it for what it is. They occasionally lambaste lesser folks because they think their high-level work is normal, easy, something anyone can do.
You ain't no doofus my man :)
@@AdmV0rl0n *Old-fart fist-bump* 😎
So good just so good
ThanksThanks MikeMike... 😎
Super grateful! Thank you! I have noticed, though, that I cannot download the files from Google Drive? It creates example links.
Thanks! The downloadable zips aren't stored on Google Drive. They are created in memory on the server and streamed to the client, so I'm not certain what's happening in your case. If you're on Google Colab, that's uncharted territory for me. Sorry...
@@jjgravelle Thank you for the answer. When I have not created the team, I see a section that says there are no available downloads yet; when the crew is created, that section disappears.
Also, when letting it do research (for example, the top 5 brands in X niche), it hallucinates all of the brands. Any tips?
@@jjgravelle File "/Users/glenn/Autogroq/AutoGroq/autogroq_venv/lib/python3.12/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 561, in _run_script
self._session_state.on_script_will_rerun(rerun_data.widget_states)
File "/Users/glenn/Autogroq/AutoGroq/autogroq_venv/lib/python3.12/site-packages/streamlit/runtime/state/safe_session_state.py", line 68, in on_script_will_rerun
self._state.on_script_will_rerun(latest_widget_states)
File "/Users/glenn/Autogroq/AutoGroq/autogroq_venv/lib/python3.12/site-packages/streamlit/runtime/state/session_state.py", line 482, in on_script_will_rerun
self._call_callbacks()
File "/Users/glenn/Autogroq/AutoGroq/autogroq_venv/lib/python3.12/site-packages/streamlit/runtime/state/session_state.py", line 495, in _call_callbacks
self._new_widget_state.call_callback(wid)
File "/Users/glenn/Autogroq/AutoGroq/autogroq_venv/lib/python3.12/site-packages/streamlit/runtime/state/session_state.py", line 247, in call_callback
callback(*args, **kwargs)
File "/Users/glenn/Autogroq/AutoGroq/autogroq/agent_management.py", line 22, in callback
process_agent_interaction(agent_index)
File "/Users/glenn/Autogroq/AutoGroq/autogroq/agent_management.py", line 222, in process_agent_interaction
request += f" Additional input: {user_input}. Reference URL content: {url_content}."
^^^^^^^^^^^
@@revailing If no agents have yet been created, you can't yet download them. That's all that means.
And you can set the temperature lower to discourage hallucinations...
Awesome... I'm happy I got to know this channel before it even had 1000 subs :)
You're a VIP. I'll teach you the secret handshake... 😎
Subbed!
Heroic...!
Can crewAI or AutoGen web crawl and use the terminal on my machine?
Like say I want the dev ai to use certain boilerplate on GitHub?
That's sort of why I want to allow the whiteboard to run code (in a limited capacity). That can get out of hand quick, though. "You're a hardware tech tasked with formatting every hard drive you can access..."
@@jjgravelle containerization?
Is this available on Mac as well?
Should work. Macs can run Python. Handling your local environment variables is slightly different, but not much...
I am having problems integrating it with autogen. Is it just me? It is quite possible that it is user error.
I'd encourage you to use Autogen Studio. Here's how: czcams.com/video/Jm4UYVTwgBI/video.html
have you considered integration into VS Code as an extension?
Interesting idea. I'd be eager to know what the workflow would be like for something like that...
What a surprise, hahaa!
Thank you for the hard work and tutorial ! Can I create Phind agents with AutoGroq?
Thanks! I asked Phind:
Phind
As of my last update, there isn't a specific JSON model designed exclusively for a "Phind agent." The concept of a "Phind agent" isn't a standard term in the tech industry or a widely recognized entity in the context of programming or AI. It's possible that "Phind agent" could refer to a specific application, service, or concept within a particular project or organization, but without more context, it's challenging to provide a detailed JSON model for it...
@@jjgravelle From what I know Phind is built on CodeLlama-70b + specific code tasks training (and it's one rare example which outputs textbook correct examples in my experiments) ; apart from JSON format, is there any way to use it with AutoGroq using an API key?
@@arianetrek7049 I'd have to do a lot more reading to find out. Sounds interesting, though...
Any good tuto on how to use the zip in Autogen afterwards ?
So I tried AutoGen Studio 2.0... but the agents do not have skills... So I guess that's not the tool you are using :/ I don't get how to proceed with the workflow and the agents...
The agents have what some call "stubbed out" skills... essentially placeholder entries to get you started.
Skill generation via AutoGroq is definitely on the 'to do' list...
Do you have Autogen running with Groq?
Yes! But you have to be careful not to hammer the Groq API too quickly or subsequent requests can get bounced due to the 'one every two seconds' policy...
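To stay under a pacing rule like that, a small client-side throttle is usually enough. A minimal sketch (the two-second figure comes from the comment above, and the names here are made up, not part of AutoGroq):

```python
import time

MIN_INTERVAL = 2.0   # seconds between Groq requests, per the policy above
_last_call = [0.0]   # mutable cell so the timestamp persists between calls

def throttle():
    """Block just long enough that consecutive API calls are at
    least MIN_INTERVAL seconds apart, then record the call time."""
    wait = MIN_INTERVAL - (time.monotonic() - _last_call[0])
    if wait > 0:
        time.sleep(wait)
    _last_call[0] = time.monotonic()
```

Calling throttle() right before each request spreads a burst of agent generations out instead of letting them get bounced.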
Would you be interested in open-sourcing this and creating an API integration for the tool to connect it with an AI marketplace?
If this is potentially going in those directions, I would help you get financing for this project.
My personal interest is seeing if generators could create documentation up to the level of system designers.
Originally, I was piping everything through proprietary APIs on my site, but I opted to go full open source. But by all means, if you have a clever way to capitalize on this, I'm all ears... j.gravelle.us
First-time viewer. It's amazing that you are using AI to create YouTube videos. Haters are always going to hate, especially without context. My mum had a serious car accident in 1993 (caused by my uncle sabotaging her car, btw), and she has had a handicapped parking badge since around that time. Since she was only 25 years old, she used to receive abuse from strangers (I remember being a child in the back of the car), including elderly people who would come up to the car as she was parking and shout at her to leave the handicapped parking space. The abuse was given because they couldn't believe or understand that she rightfully had that badge, which of course she did.
But with that story, I still hope people don't act in an aggressive way for what is really no reason at all. In fact, if everyone had a default compassion-and-understanding mode, even when they perceive they are being wronged in some way, and wanted to find out more instead of going into blame mode, maybe the world would be a nicer place to be.
It's a wonderful time to be alive. Thanks for all your kind words. It means a lot...
Just wow...
Just thanks...! 😎
@@jjgravelle I rarely sub, only when I'm interested 100%.
This information is huge, bro...
Huge time savings with these AIs, for now.
We will see when they take over all our physical activity, besides food or workouts, or whatever kind of physical activity is necessary at every age for the health of the body itself.
But the question is how they will impact us on a much deeper level: the emotional and sensory level, fears, pleasures, anxiety, desires, etc.
All of these impact thinking itself.
Because the more AI progresses, the less we see human activity on all levels, as far as I can see or understand all of these movements in AI in general.
A huge part here is also played by our thinking, because as far as I can see, we programmed those AI models based on how our brain/thinking functions; no matter how advanced or how poorly those AI models are trained/programmed, they respond with a certain level of intelligence, like a human responds.
And I think we are not too far from seeing enormous changes in society and in ourselves.
And yeah, this is a huge topic, I know... but in order to understand the human brain, or AI in general (which is a replication of the human brain's function), it's necessary to cover the whole of it.
@@PcManiac2022 You've given us all a lot to think about. Thanks...
I like your style. You make me chuckle 😂 Also, WOW!
Thanks so much! Sorry for the late reply. The response has been overwhelming, and some comments have slipped through the cracks...
Unfortunately not working ATM. Llama3 is now the default.
Had to reboot on Streamlit's end after the update. Give 'er a shot...
Playing tug-o-war between Groq and Streamlit. Looks okay now: ibb.co/L8P950C
Thanks again for the heads-up...
@@jjgravelle thank you, great piece of software!
It looks like this does everything. Why would someone need/want to export it to Autogen?
AutoGroq™ is-- or was meant to be, an agent generation and testing platform for Autogen. Granted, it's becoming its own... thing. But Autogen still has a lot of features our humble platform lacks. Thanks for asking, though. I imagine more than a few people have wondered that...
I like the AI voice 😂
Me too...!
'If you don't know what any of that means, you probably shouldn't do it' 😂
'If your screen fills up with all this garbage, congratulations, filling your screen up with garbage is a good sign, yeah you'
🤣
Thanks for watching...!
Waiting for when they can direct OpenDevin or similar.
Baby steps...
Amazing! Anything I can do to help, let me know. I'm heavy into Django, CrewAI, XRPL.
Sauce is on GitHub boss. Knock yerself out! And thanks...
@@jjgravelle thank you kindly!
Any real project created with this?
Watch the first video to see the creation of the Bellybutton Lint site via AutoGen/AutoGroq. It's gotten much better since then...
@@jjgravelle what first video
@@hqcart1 Sorry, I was mobile and didn't have the link. And it was the v2, not the first. Sorry. Here 'tis... czcams.com/video/Jm4UYVTwgBI/video.html&t
@@jjgravelle Thank you, I saw the video, but I think you misunderstood me. What I meant by a real project is a project that can't be done normally with some prompts to ChatGPT, as the video you showed can all be done in a single prompt without any additional agents.
@@hqcart1 Agreed. Baby steps. We're still in beta...
Is this free
It surely is, Shirley...!
For some reason, when I upload the agents to AutoGen, this error message pops up: "Connection error 422 Unprocessable Entity". Also, you can't import directly, since the name of the created agent from AutoGroq is "Name Bob" instead of "Name_Bob". Uploading the workflow does work in AutoGen.
I'll look into it. Thanks for the heads-up...
Pushed a fix for the naming conventions. Thanks again. You have to delete the empty skills from the import window (click all the Xs) to lose the 422 error...
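For anyone on an older checkout, the naming part of that fix boils down to making agent names identifier-safe before export. A sketch (`sanitize_agent_name` is an illustrative name, not necessarily the function in the repo):

```python
import re

def sanitize_agent_name(name):
    """Turn a display name like 'Name Bob' into the underscore form
    AutoGen Studio accepts ('Name_Bob'): spaces become underscores,
    and any other non-alphanumeric character is dropped."""
    return re.sub(r"[^0-9A-Za-z_]", "", name.strip().replace(" ", "_"))
```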
@@jjgravelle Thank you! I really enjoy using Autogroq
@@tfitzfritz9411 AutoGroq™ enjoys being used...! 😎
Automate the AI setup work with AI, this is the way.
I look forward to when my AI can set up my AI using AI... 😎
Very valuable and cutting-edge. Just some constructive feedback: please reduce the sarcastic jokes. It gets too confusing to focus on the meat of the video.
Thanks! Sorry if my meat was confusing...
@@jjgravelle yes, it's too small and looks like skin tag. 😂😂
@@aga5979 Still, thanks for taking the time to look...
My friend reversed Parkinson's with the Carnivore diet
This is really great! I have been confused with the autogen stuff (being a relative newbie at AI and AI agents). Very excited for this!
I am getting an error when I try to provide feedback to the individual agents. I entered roughly the same thing you did for the project manager but got an error:
File "/home/adminuser/venv/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 561, in _run_script
self._session_state.on_script_will_rerun(rerun_data.widget_states)
File "/home/adminuser/venv/lib/python3.11/site-packages/streamlit/runtime/state/safe_session_state.py", line 68, in on_script_will_rerun
self._state.on_script_will_rerun(latest_widget_states)
File "/home/adminuser/venv/lib/python3.11/site-packages/streamlit/runtime/state/session_state.py", line 482, in on_script_will_rerun
self._call_callbacks()
File "/home/adminuser/venv/lib/python3.11/site-packages/streamlit/runtime/state/session_state.py", line 495, in _call_callbacks
self._new_widget_state.call_callback(wid)
File "/home/adminuser/venv/lib/python3.11/site-packages/streamlit/runtime/state/session_state.py", line 247, in call_callback
callback(*args, **kwargs)
File "/mount/src/autogroq/AutoGroq/agent_management.py", line 22, in callback
process_agent_interaction(agent_index)
File "/mount/src/autogroq/AutoGroq/agent_management.py", line 222, in process_agent_interaction
request += f" Additional input: {user_input}. Reference URL content: {url_content}."
^^^^^^^^^^^
I cloned the repo and ran it all locally. I replaced:
reference_url = st.session_state.get('reference_url', '')
with
url_content = st.session_state.get('url_content', '')
in the "process_agent_interaction" function and all is well it seems.
Could also just update line 222 to use "reference_url" instead but this seemed (to me) to be the better approach.
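Reduced to its essentials, the patched fragment behaves like this (a simplified stand-in: a plain dict replaces Streamlit's st.session_state, and the rest of the real function body is omitted):

```python
def process_agent_interaction(session_state, user_input, request):
    # Read the fetched page text under the same key the f-string
    # below actually uses ('url_content'), which is what the
    # NameError in the traceback above was complaining about.
    url_content = session_state.get('url_content', '')
    request += f" Additional input: {user_input}. Reference URL content: {url_content}."
    return request
```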
@@beanlover117 I rolled out a fix for something similar this morning. I'll double-check my work. Thanks...!
@@jjgravelle this is a really great tool. I am curious about the "skills" listed in the agent json though. Do those exist somewhere or do they need to be created?
@@beanlover117 For now, those are high-level generic placeholders for where defined skills should go. Thanks...!
Great work 👏
Thanks...!