AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
- Thesis: Use a conversation design pattern to build a multi-agent framework, unifying agent interfaces so they are driven by conversation-based interactions between agents that work together to solve a complex task.
- Contribution: The AutoGen framework, which speeds up the development of LLM-based applications through multi-agent architectures. It also modularizes the system, so each agent can be developed, optimized, tested, and deployed independently.
- Takeaways: Using conversation as the interface through which agents communicate makes the framework generalizable, so it can be extended, customized, and reused across a plethora of applications.
- Notes:
- `ConversableAgent` is the highest-level agent abstraction.
- `UserProxyAgent` solicits human feedback or executes code / makes function calls.
- `AssistantAgent` is an AI assistant agent backed by an LLM.
- Each agent has `send`/`receive` functions that send and receive messages to/from other agents.
- Each agent has `generate_reply`, which takes an action and generates a response based on the received message.
- Once an agent receives a message from another agent, `generate_reply` is automatically invoked; it performs some action such as calling an LLM, a tool, or code execution, and sends a message back to the sender agent (see the sketches after these notes).
- Agents can be controlled by natural language, a programming language, or both. #nlp #llm #agents
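
A minimal usage sketch of the two built-in agent roles above, roughly following the pyautogen quickstart. The model name, API key placeholder, working directory, and task message are assumptions, and exact config keys can vary between AutoGen versions.

```python
import autogen

# Assumed placeholders -- substitute a real model and API key.
config_list = [{"model": "gpt-4", "api_key": "YOUR_API_KEY"}]

# AssistantAgent: LLM-backed agent that drafts answers and proposes code.
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)

# UserProxyAgent: stands in for the human, executes proposed code,
# and can solicit human feedback depending on human_input_mode.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",  # "ALWAYS" would prompt the human each turn
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# Kicks off the conversation loop: messages alternate between the two agents
# until a termination condition is reached.
user_proxy.initiate_chat(
    assistant,
    message="Plot a chart of NVDA and TSLA stock price change YTD.",
)
```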
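And an illustrative toy sketch (not AutoGen's actual implementation) of the `send`/`receive`/`generate_reply` flow in the notes: receiving a message automatically triggers `generate_reply`, and the resulting reply is sent back to the sender. The `MiniAgent` class and `reply_fn` callback are hypothetical names used only for illustration.

```python
class MiniAgent:
    """Toy agent: reply_fn stands in for an LLM call, tool use, or code execution."""

    def __init__(self, name, reply_fn):
        self.name = name
        self.reply_fn = reply_fn

    def send(self, message, recipient):
        # Deliver the message to the other agent.
        recipient.receive(message, sender=self)

    def receive(self, message, sender):
        # Receiving automatically invokes generate_reply ...
        reply = self.generate_reply(message)
        # ... and a non-empty reply is sent back to the sender,
        # continuing the conversation; None terminates it.
        if reply is not None:
            self.send(reply, recipient=sender)

    def generate_reply(self, message):
        return self.reply_fn(message)


# Toy two-agent exchange: the assistant returns a canned answer,
# the user prints it and returns None, ending the conversation.
assistant = MiniAgent("assistant", lambda m: f"Here is a plan for: {m}")
user = MiniAgent("user", lambda m: print(f"assistant replied: {m}"))
user.send("Summarize the AutoGen paper", recipient=assistant)
```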