AutoGen and Open-Source LLMs: A Free Synergy

autogen llama2 openai python Feb 06, 2024
 

Autonomous agents are a game-changer: they take over tasks traditionally done by humans, like answering chats and doing research.

The cost to run these agents daily, however, can add up quickly for developers. This guide offers a cost-effective solution: running AutoGen agents on free, local open-source models instead of paid APIs, without sacrificing the workflow. What follows is a step-by-step walkthrough of setting up and running these agents at zero cost.

This is what we are going to build:

[Figure: agent workflow diagram] The Assistant Agent ‘Bob’ is programmed to tell jokes while ‘Alice’ critiques them, all orchestrated by a central ‘Manager’ and mediated through the ‘UserProxyAgent’.

Setting Up the Environment

Starting on the right foot with any development project means setting up a clean workspace. Have you ever considered the importance of a virtual environment? It’s like having a dedicated room for your project where everything is neatly organized and you won’t trip over someone else’s tools. So, we begin by creating a virtual environment to isolate dependencies. It’s simple: create and activate the environment, then install AutoGen with pip install pyautogen. This is your first step to a tidy, controlled setup for your AutoGen agents.

python3 -m venv venv         # create an isolated virtual environment
source venv/bin/activate     # activate it (macOS/Linux)
pip install pyautogen        # install AutoGen
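
To confirm the install worked before going further, a quick check (assuming your pyautogen release exposes a version string, as recent ones do):

import autogen
print(autogen.__version__)  # any version printed means the install succeeded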

 

Configuring AutoGen and Agents

Configuring your tools can make or break your project. Do you start big or small? Here, you start with a model you trust, like GPT-3.5, to get things running smoothly. But what if you want a free alternative without compromising quality? That’s where open-source models like Llama 2 come in. Because AutoGen talks to any OpenAI-compatible endpoint, switching to a local model is just a second config whose base_url points at a local server instead of the OpenAI API.

import autogen

# Config for the hosted OpenAI model (used later by the group chat manager).
llm_config = {"config_list": [{
    "model": "gpt-3.5-turbo"
}]}

# Config for a local Llama 2 behind an OpenAI-compatible endpoint
# (LM Studio's local server, set up later in this guide). Depending on
# your AutoGen/openai client version, you may also need a placeholder
# "api_key" entry here.
llm_config_local = {"config_list": [{
    "model": "llama2",
    "base_url": "http://localhost:1234/v1"
}]}


Creating Assistant Agents

How about spicing up your AI interactions? Define your assistant agents, Bob and Alice, with distinct personalities and roles within the chat. Bob’s the joker, and Alice? She’s the critic. It’s like setting up a digital comedy club where Bob tries to make Alice laugh. This dynamic adds a layer of human-like interaction to your chat.


bob = autogen.AssistantAgent(
    name="Bob",
    system_message="""
        You love telling jokes. After Alice's feedback, improve
        the joke. Say 'TERMINATE' when you have improved the joke.
    """,
    llm_config=llm_config_local
)

alice = autogen.AssistantAgent(
    name="Alice",
    system_message="Criticise the joke.",
    llm_config=llm_config_local
)


Setting Up the User Proxy

Seamless interaction is key. The user proxy is the behind-the-scenes hero that keeps the conversation flowing by acting as an intermediary between you and the AutoGen agents. It's smart too – programmed to understand when to wrap things up based on cues like the word “TERMINATE.” It ensures your chat doesn’t turn into an endless loop.


def termination_message(msg):
    # The chat stops once any message contains the agreed stop word.
    return "TERMINATE" in str(msg.get("content", ""))

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    code_execution_config={"use_docker": False},  # execute code locally, not in Docker
    is_termination_msg=termination_message,
    human_input_mode="NEVER"  # fully autonomous: never ask the human for input
)


Managing the Group Chat

Now, imagine being the director of a play where each actor knows exactly when to come on stage. That’s what managing a group chat in AutoGen is like. You set the scene, and your agents carry the conversation, with the manager ensuring everyone gets their turn without a hitch.


groupchat = autogen.GroupChat(
    agents=[bob, alice, user_proxy],
    messages=[]  # start with an empty conversation history
)

manager = autogen.GroupChatManager(
    groupchat=groupchat,
    code_execution_config={"use_docker": False},
    llm_config=llm_config,  # the manager uses GPT-3.5 to pick the next speaker
    is_termination_msg=termination_message
)
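
One knob worth knowing about: a GroupChat only runs for a limited number of rounds (the max_round parameter, which defaults to 10 in the AutoGen versions this guide targets). A minimal sketch of raising it, should Bob and Alice need a few more joke/critique cycles:

groupchat = autogen.GroupChat(
    agents=[bob, alice, user_proxy],
    messages=[],
    max_round=12  # allow a couple of extra rounds before the chat is cut off
)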


Integrating Local Language Models with LM Studio

Ever needed a toolshed for your AI project? LM Studio is like that, but for managing and deploying language models. It’s compatible with major operating systems and offers a straightforward setup. Once you have it, downloading models like Llama 2 is a breeze. Load the model, start LM Studio’s local server, and it exposes an OpenAI-compatible API (by default at http://localhost:1234/v1, exactly what llm_config_local points to), ready to plug into your AutoGen setup.
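
Before wiring the endpoint into a chat, it is worth confirming the server is actually reachable. A minimal sanity check, assuming LM Studio’s server is running on its default address with a model loaded:

# Quick sanity check against LM Studio's OpenAI-compatible server.
# Assumes the default address http://localhost:1234; adjust if you changed it.
import requests

resp = requests.get("http://localhost:1234/v1/models")
resp.raise_for_status()
for model in resp.json().get("data", []):
    print(model["id"])  # should list the model you loaded, e.g. a Llama 2 variant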


Starting the Chat

Initiating the chat is as easy as starting a conversation – a simple command and you’re off, focusing on a task like crafting a joke. It's like telling your AI, "Show me what you've got."


# Kick off the conversation; the manager routes messages between the agents.
user_proxy.initiate_chat(
    manager,
    message="Tell a joke"
)

 

Troubleshooting and Optimization

If things get tricky, such as your agents not passing the baton properly, a round-robin speaker selection can smooth things out: instead of letting the manager’s LLM decide who talks next, the agents simply take turns in a fixed order. And remember, just like people, sometimes local models need you to be clear about what you’re asking of them. Adjust your prompts to make sure they hit the mark (an example follows the snippet below).


groupchat = autogen.GroupChat(
    agents=[bob, alice, user_proxy],
    speaker_selection_method="round_robin",  # agents take turns in a fixed order
    messages=[]
)
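
On the prompt side, smaller local models usually respond better to explicit, bounded instructions than to a one-liner. An illustrative rewrite of Alice’s system message (the wording here is a suggestion, not part of the original setup):

# Illustrative only: a more explicit critic prompt for smaller local models.
alice = autogen.AssistantAgent(
    name="Alice",
    system_message=(
        "You are a comedy critic. Read the most recent joke, point out "
        "exactly one weakness, and suggest one concrete improvement. "
        "Keep your answer under three sentences."
    ),
    llm_config=llm_config_local
)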


Conclusion

By following these steps, you’re not just setting up AutoGen agents; you're paving the way for free, innovative AI development. You're breaking down barriers and opening up a world where developers can experiment and create without the worry of costs. It's about making the AI landscape a little more accessible for everyone.


Source Code:


import autogen

# Hosted model config (used by the group chat manager).
llm_config = {"config_list": [{
    "model": "gpt-3.5-turbo"
}]}

# Local Llama 2 served through LM Studio's OpenAI-compatible endpoint.
llm_config_local = {"config_list": [{
    "model": "llama2",
    "base_url": "http://localhost:1234/v1"
}]}

bob = autogen.AssistantAgent(
    name="Bob",
    system_message="You love telling jokes. After Alice's feedback, improve the joke. Say 'TERMINATE' when you have improved the joke.",
    llm_config=llm_config_local
)

alice = autogen.AssistantAgent(
    name="Alice",
    system_message="Criticise the joke.",
    llm_config=llm_config_local
)

def termination_message(msg):
    return "TERMINATE" in str(msg.get("content", ""))

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    code_execution_config={"use_docker": False},
    is_termination_msg=termination_message,
    human_input_mode="NEVER"
)

groupchat = autogen.GroupChat(
    agents=[bob, alice, user_proxy],
    speaker_selection_method="round_robin",
    messages=[]
)

manager = autogen.GroupChatManager(
    groupchat=groupchat,
    code_execution_config={"use_docker": False},
    llm_config=llm_config,
    is_termination_msg=termination_message
)

user_proxy.initiate_chat(
    manager,
    message="Tell a joke"
)
