OpenAI API Usage with Autogen-like Function Calls

gpt openai python Feb 05, 2024

Have you ever wondered about the potential of AI chatbots beyond simple conversation? We’re diving into how to elevate a basic chatbot by adding custom functions for actions like telling jokes or updating users on the weather.

Laying the Groundwork for Chat Completion

Where do we start? First things first: create a virtual environment to keep everything tidy. It's like giving your chatbot its own room where it won't be disturbed. Then, install the OpenAI library—this is your chatbot's brain, essentially. Here’s how you can do this in your terminal:

python3 -m venv venv
source venv/bin/activate 
pip install openai


Initializing the OpenAI Client

Once your space is set up, how do you get the chatbot to listen? By initializing the OpenAI client with an API key from your environment variables. Think of it as whispering the secret password to your chatbot so it knows you’re the boss:

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ.get('OPEN_AI_KEY')
)

 

Testing the Waters with a Simple Greeting

Ready for a mic check? Send a "Hello" to the GPT-3.5 Turbo model and see how it responds. This is your chatbot's first "hello" to the world:

chat_completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "Hello"}],
    model="gpt-3.5-turbo"
)
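The reply itself is nested inside the response object, at `chat_completion.choices[0].message.content`. Since the real values depend on a live API call, the sketch below uses a stand-in object that mimics the SDK's response shape just to show the access pattern:

```python
from types import SimpleNamespace

# Stand-in for the SDK's response object (same attribute shape,
# hypothetical content) so the access pattern can be shown offline.
chat_completion = SimpleNamespace(
    choices=[SimpleNamespace(
        message=SimpleNamespace(role="assistant",
                                content="Hello! How can I assist you today?")
    )]
)

# The assistant's reply lives on the first choice's message:
print(chat_completion.choices[0].message.content)
```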


Crafting Custom Functions

Now, let's get creative. Want your chatbot to crack a joke or give the weather forecast? Craft functions for that. Start with something simple like a static joke:

def joke_of_the_day(arguments):
    # arguments is unused here, but keeps the same call
    # signature as the other tool functions
    return "Random joke of the day"


Then maybe you want to get a little more complex, like checking the weather based on a location:

def get_current_weather(arguments):
    location = arguments.get('location')
    return f"It's hot in {location}"


Remember, these are just placeholders. For more practical and detailed implementations, consider consulting ChatGPT for tailored advice and examples.


Defining Tools for the Chatbot

How does GPT know when to tell a joke or check the weather? You define tools for it. These are like instructions you're embedding into GPT's thoughts so it knows what to do and when:

tools = [
    {
        "type": "function",
        "function": {
            "name": "joke_of_the_day",
            "description": "Get a random joke of the day",
            "parameters": {
                "type": "object",
                "properties": {},
                "required": []
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g., San Francisco, CA"
                    }
                },
                "required": ["location"]
            }
        }
    },
]

You'll give each tool a name, a description, and any parameters it needs. It's like programming shortcuts for your chatbot to use during your chats.

 

Adding Tool Definitions to the Chat Completion Process 

Incorporating tool definitions into the chat completion process transforms it from a basic response unit into a dynamic assistant capable of executing specialized tasks.

We embed functions like joke_of_the_day and get_current_weather into GPT's flow. This way, when a user asks about the weather in a city like Berlin, GPT doesn't just reply; it also suggests executing the relevant function to provide a detailed answer.

Here’s a snippet showing how these tools are integrated into a chat request:

chat_completion = client.chat.completions.create(
    messages=[{
        "role": "user",
        "content": "How is the weather in Berlin?"
    }],
    model="gpt-3.5-turbo",
    tools=tools  # Utilizing our custom-defined functions
)


After receiving such a request, GPT assesses the query, matches it with the appropriate tool, and suggests a function call:

ChatCompletionMessageToolCall(
    id='call_zd6Lr6pq5kiqXaHHEPavmHzM',
    function=Function(
        arguments='{\n "location": "Berlin"\n}',
        name='get_current_weather'
    ),
    type='function'
)

 

Executing Function Calls Based on GPT Response

Once a chat completion request is sent and we've got the response, we're ready to interpret any function calls GPT suggests. This step requires us to carefully extract the specific function name and its necessary arguments based on what GPT proposes.

import json

tool_call = chat_completion.choices[0].message.tool_calls[0]
function_name = tool_call.function.name
arguments = json.loads(tool_call.function.arguments)


By using a function map, we match the function name recommended by GPT with its corresponding code in our system. Then, we execute this function, applying the arguments GPT suggests. This approach allows us to dynamically respond to user queries with specific, action-oriented tasks.

function_map = {
    "joke_of_the_day": joke_of_the_day,
    "get_current_weather": get_current_weather
}

func = function_map.get(function_name)
result = func(arguments)
print(result)
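The whole dispatch path can be exercised locally, with a hand-written arguments string standing in for GPT's suggestion, so no API key is needed:

```python
import json

def joke_of_the_day(arguments):
    return "Random joke of the day"

def get_current_weather(arguments):
    location = arguments.get('location')
    return f"It's hot in {location}"

function_map = {
    "joke_of_the_day": joke_of_the_day,
    "get_current_weather": get_current_weather,
}

# Pretend GPT suggested this call (same JSON shape as tool_call.function):
function_name = "get_current_weather"
raw_arguments = '{\n "location": "Berlin"\n}'

func = function_map.get(function_name)
result = func(json.loads(raw_arguments))
print(result)  # It's hot in Berlin
```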

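One step this walkthrough stops short of: to turn the function's raw result into a conversational answer, you typically send it back in a second request as a "tool" message. Here's a sketch of the message list for that follow-up call; the tool-call id is the example one from earlier, and the final create call is commented out since it needs a live key:

```python
import json

tool_call_id = "call_zd6Lr6pq5kiqXaHHEPavmHzM"  # example id from above
result = "It's hot in Berlin"                   # our function's output

messages = [
    {"role": "user", "content": "How is the weather in Berlin?"},
    # The assistant's turn that requested the tool, echoed back verbatim:
    {"role": "assistant", "content": None, "tool_calls": [{
        "id": tool_call_id,
        "type": "function",
        "function": {"name": "get_current_weather",
                     "arguments": json.dumps({"location": "Berlin"})},
    }]},
    # Our function's result, linked to the call by its id:
    {"role": "tool", "tool_call_id": tool_call_id, "content": result},
]

# final = client.chat.completions.create(
#     model="gpt-3.5-turbo", messages=messages
# )
# print(final.choices[0].message.content)
```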
 

Comparing Our Custom Function Implementation to AutoGen's ConversableAgent

When comparing our approach to AutoGen's ConversableAgent class, both rely on a function map to dispatch the right function in response to user queries, a shared pattern for extending chatbot capabilities. This strategy enriches the OpenAI chatbot's functionality, allowing more sophisticated interactions through custom tasks like humor and weather updates.



For developers aiming to craft interactive and adaptable AI chatbots, mastering these function calls marks a crucial step towards creating more engaging and responsive applications, moving closer to the advanced capabilities seen in AutoGen's framework.

 

 
