Great video
Thank you so much, clear explanation with great example. Thanks for sharing.
Can you make a video on AutoGen with multiple tools and multiple agents?
Something so simple that can solve so many problems, thanks for the knowledge
Amazing sharing, thanks. 🎉
This is really awesome 👏
Is there a way to set the next speaker to user_proxy only when an agent calls a function? With Stateflow even though an agent called a function, the conversation is just swept away to the next agent
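One way to do this in code (not Studio) is to have the transition logic inspect the last message for a tool call before applying the normal state-flow order. Here's a minimal pure-Python sketch of that decision logic; the agent names and the message shape are assumptions modeled loosely on autogen-style messages, not the real library objects:

```python
# Sketch: route back to user_proxy only when the last message
# carried a function/tool call; otherwise follow the fixed order.
# All names and the message dict format are illustrative assumptions.

def pick_next_speaker(last_speaker, messages, default_order):
    last = messages[-1] if messages else {}
    # If the last agent message requested a tool call, hand control
    # back to the user proxy so it can execute the function.
    if last.get("tool_calls") or "function_call" in last:
        return "user_proxy"
    # Otherwise fall through to the normal state-flow sequence.
    return default_order.get(last_speaker, "user_proxy")

order = {"user_proxy": "coder", "coder": "reviewer", "reviewer": "user_proxy"}
msgs = [{"name": "coder", "tool_calls": [{"name": "run_tests"}]}]
print(pick_next_speaker("coder", msgs, order))  # -> user_proxy
```

With autogen itself, the equivalent check would live inside the callable you pass as the custom speaker-selection method, looking at the group chat's last message.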
You saved my life
Can the workflow implement a graph? Though it feels clunkier to implement in its current state
would you say this vid / autogen in general is still the way to go, or rather use crewai or llamaindex or langgraph or agency swarm or...?
Is this on Autogen or Autogen studio?
It's autogen. I believe Autogen Studio has not been updated with this "state_transition" feature yet.
Separate question: it’s painful that the state flow function relies on agent instances passed in as global variables. Is that the only way to do it?
There could be another way, I’ll have to fool around with it and see, but it may get updated soon as well
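One pattern that avoids the globals is a factory (closure): build the transition function from the agent list so the function captures them instead of reading module-level names. A minimal sketch, with placeholder agent names:

```python
# Sketch: build the state-transition function from a factory so the
# agents are captured in a closure rather than read from globals.
# Agent names here are illustrative placeholders.

def make_state_transition(agents):
    # Map each agent to the next one in a fixed cycle; 'order' lives
    # inside the closure, not in module scope.
    order = {a: agents[(i + 1) % len(agents)] for i, a in enumerate(agents)}

    def state_transition(last_speaker, groupchat=None):
        return order[last_speaker]

    return state_transition

transition = make_state_transition(["planner", "coder", "executor"])
print(transition("coder"))  # -> executor
```

`functools.partial` would work similarly if you prefer passing the agents as a bound argument instead of closing over them.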
I just saw the video on autogen studio v2, and then this one, but I don't really understand if there's a link between them.
I'm interested in a specifications - coder - executor - unit tester - massive tester (data flow).
Hey, so Studio isn't as up to date with all the features that pyautogen has (the library you install to code with). It's getting there and they have a roadmap, but I don't think they have the ability to do StateFlow in the UI just yet. That would just be done through code for now
thank you
What other "state_transition" logic can there be besides "if last_speaker"?
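The transition function doesn't have to key on the last speaker; it can look at round count, message content, or anything else visible in the chat state. A hedged sketch of a few alternatives (all names are illustrative, not the real autogen API):

```python
# Sketch: transition logic keyed on round count and message content
# instead of last_speaker. Agent names and message format are
# illustrative assumptions.

def state_transition(last_speaker, messages, max_rounds=10):
    # Stop the chat after a round budget is spent.
    if len(messages) >= max_rounds:
        return None  # end the conversation
    last_content = messages[-1].get("content", "") if messages else ""
    if "ERROR" in last_content:
        return "debugger"   # route failures to a fixer agent
    if "APPROVED" in last_content:
        return "executor"   # content-based hand-off
    return "coder"          # default next state

print(state_transition("reviewer", [{"content": "ERROR: test failed"}]))  # -> debugger
```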
Just to confirm, you could have had the exact same output with “auto” , but state flow lets you guarantee the sequence of agents is right?
That’s correct! Sorry for the late response. With auto, I would say it’s correct a very good percentage of the time, but with StateFlow you can guarantee it
how did you manage to make the agent create and fill the test.md?
Sorry for the late reply, I just created that myself. To do it automatically, you would create a function that takes in the output from the LLM and then use a Python library to convert it to markdown, or just have the model output the response in markdown format and save it to a file. In this example, I did not do that
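The "save the model's reply to a file" approach from the reply above can be sketched in a few lines; the filename and title here are illustrative, and the reply text would come from the agent's last message:

```python
# Sketch: save a model's reply to a markdown file automatically.
# The path and title are illustrative assumptions; in practice the
# reply_text would be the agent's last message content.
from pathlib import Path

def save_as_markdown(reply_text, path="test.md", title="Test Plan"):
    md = f"# {title}\n\n{reply_text}\n"
    Path(path).write_text(md, encoding="utf-8")
    return path

save_as_markdown("- case 1: login succeeds\n- case 2: login fails")
print(Path("test.md").read_text(encoding="utf-8"))
```

You could also register a function like this as a tool so the agent calls it itself instead of you saving the file by hand.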
Does anyone know how to make an OpenAI-compatible URL and use AutoGen with Bedrock?
I'm actually taking a bedrock course to understand it and hope to have a video to answer this question!
Where can I download the file?