Introduction
One of the most interesting aspects of modern chat models is their ability to make use of external tools that are defined by the caller.
When making a chat request to the chat model, the caller advertises one or more tools (defined by their function name, description, and a list of expected arguments), and the chat model can choose to respond with one or more “tool calls”. These tool calls are requests from the chat model to the caller to execute the function with the given arguments; the caller is expected to execute the functions and “return” the results by submitting another chat request with the conversation so far, plus the results. The chat model can then use those results in formulating its response, or, it may decide to make additional tool calls.
Note that the chat model does not directly execute any external tools! It only makes requests for the caller to execute them. It’s easy to imagine that the model reaches out, runs the tool itself, and hands you a finished answer; in reality, the model merely responds with a tool call, the caller runs the function and sends the result back, and only then does the model compose its answer.
The value that the chat model brings is not in helping with execution, but with knowing when it makes sense to call a tool, what values to pass as arguments, and how to use the results in formulating its response.
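To make that round trip concrete, here is a schematic sketch of the exchange written as plain Python data. The field names (`tools`, `tool_calls`, `arguments`, and so on) are illustrative assumptions loosely modeled on common chat APIs rather than any provider’s exact wire format, and chatlas manages this exchange for you behind the scenes.

```python
# Schematic only: illustrative field names, not a real provider's wire format.

# 1. The caller advertises a tool alongside the user's message.
request = {
    "messages": [{"role": "user", "content": "What time is it in Tokyo?"}],
    "tools": [{
        "name": "get_current_time",
        "description": "Gets the current time in the given time zone.",
        "parameters": {"tz": {"type": "string", "description": "IANA time zone name"}},
    }],
}

# 2. The model may respond not with text, but with a request to call the tool.
model_response = {
    "tool_calls": [
        {"id": "call_1", "name": "get_current_time", "arguments": {"tz": "Asia/Tokyo"}}
    ]
}

# 3. The caller (not the model!) runs the function and sends the result back,
#    along with the conversation so far, in a follow-up request.
followup_request = {
    "messages": request["messages"] + [
        {"role": "assistant", "tool_calls": model_response["tool_calls"]},
        {"role": "tool", "tool_call_id": "call_1", "content": "2024-11-23 09:06:11 JST"},
    ],
    "tools": request["tools"],
}

# 4. The model now answers in plain text, using the tool result.
```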
Motivating example
Let’s take a look at an example where we really need an external tool. Chat models generally do not know the current time, which makes questions like the following impossible to answer correctly.
```python
from chatlas import ChatOpenAI

chat = ChatOpenAI(model="gpt-4o")
_ = chat.chat("How long ago exactly was the moment Neil Armstrong touched down on the moon?")
```
Neil Armstrong touched down on the moon on July 20, 1969. To calculate the exact time elapsed since that moment, subtract July 20, 1969, from the current date. For example, if today is October 1, 2023, it’s been 54 years, 2 months, and 11 days since the landing.
Unfortunately, the LLM doesn’t know the current date, so it simply hallucinates one. Let’s give the chat model the ability to determine the current time and try again.
Defining a tool function
The first thing we’ll do is define a Python function that returns the current time. This will be our tool.
```python
def get_current_time(tz: str = "UTC") -> str:
    """
    Gets the current time in the given time zone.

    Parameters
    ----------
    tz
        The time zone to get the current time in. Defaults to "UTC".

    Returns
    -------
    str
        The current time in the given time zone.
    """
    from datetime import datetime
    from zoneinfo import ZoneInfo

    return datetime.now(ZoneInfo(tz)).strftime("%Y-%m-%d %H:%M:%S %Z")
```
Note that we’ve gone through the trouble of adding the following to our function:
- Type hints for arguments and the return value
- A docstring that explains what the function does and what arguments it expects
Providing these hints and context is very important, as it helps the chat model understand how to use your tool correctly!
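chatlas takes care of turning that metadata into the schema the model sees, but it helps to appreciate that the function name, type hints, and docstring are essentially all the model has to go on. The helper below is not part of chatlas; it is just a quick standard-library sketch of the information those hints and the docstring expose.

```python
import inspect
from typing import get_type_hints

def describe_tool(fn):
    """Gather the kind of metadata a chat model is given about a tool function."""
    hints = get_type_hints(fn)
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),  # pulled from the docstring
        "parameters": {name: hints.get(name) for name in inspect.signature(fn).parameters},
        "returns": hints.get("return"),
    }

describe_tool(get_current_time)
# {'name': 'get_current_time', 'description': 'Gets the current time in ...',
#  'parameters': {'tz': <class 'str'>}, 'returns': <class 'str'>}
```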
Let’s test it:
```python
get_current_time()
```
'2024-11-23 00:06:11 UTC'
Using the tool
In order for the LLM to make use of our tool, we need to register it with the chat object, which we do by calling its `register_tool()` method.
```python
chat.register_tool(get_current_time)
```
Now let’s retry our original question:
```python
_ = chat.chat("How long ago exactly was the moment Neil Armstrong touched down on the moon?")
```
Neil Armstrong touched down on the moon on July 20, 1969. As of November 23, 2024, it has been exactly 55 years, 4 months, and 3 days since that historic event.
That’s correct! Without any further guidance, the chat model decided to call our tool function and successfully used its result in formulating its response.
(Full disclosure: I originally tried this example with the default model of `gpt-4o-mini` and it got the tool calling right but the date math wrong, hence the explicit `model="gpt-4o"`.)
This tool example was extremely simple, but you can imagine doing much more interesting things from tool functions: calling APIs, reading from or writing to a database, kicking off a complex simulation, or even calling a complementary GenAI model (like an image generator). Or if you are using chatlas in a Shiny app, you could use tools to set reactive values, setting off a chain of reactive updates.
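As a slightly more substantial (and entirely hypothetical) sketch, here is what a database-backed tool might look like. The `orders` table, the `get_order_status` function, and the question asked are all invented for illustration; the pattern is the same as above: plain, typed arguments in, a JSON-friendly value out, registered via `register_tool()`.

```python
import sqlite3

# A toy in-memory database standing in for your real data store (hypothetical schema).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
con.executemany("INSERT INTO orders VALUES (?, ?)", [(1, "shipped"), (2, "pending")])

def get_order_status(order_id: int) -> str:
    """
    Looks up the status of an order by its numeric ID.

    Parameters
    ----------
    order_id
        The ID of the order to look up.

    Returns
    -------
    str
        The order's status, or "not found" if no such order exists.
    """
    row = con.execute("SELECT status FROM orders WHERE id = ?", (order_id,)).fetchone()
    return row[0] if row else "not found"

chat.register_tool(get_order_status)
_ = chat.chat("Has order 2 shipped yet?")
```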
Tool limitations
Remember that tool arguments come from the chat model, and tool results are returned to the chat model. That means that only simple, JSON-compatible data types can be used as inputs and outputs. It’s highly recommended that you stick to basic types for each function parameter (e.g. `str`, `float`/`int`, `bool`, `None`, `list`, `tuple`, `dict`). And you can forget about using functions, classes, external pointers, and other complex (i.e., non-serializable) Python objects as arguments or return values. Returning data frames seems to work OK (as long as you return the JSON representation via `.to_json()`), although be careful not to return too much data, as it all counts as tokens (i.e., it counts against your context window limit and also costs you money).
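For instance, a tool that returns tabular data might look something like the hypothetical sketch below. The `sales_summary` function and its hard-coded data are invented for illustration; the key point is that it returns a small, JSON-serialized result rather than the DataFrame object itself.

```python
import pandas as pd

def sales_summary(region: str) -> str:
    """
    Returns recent sales figures for the given region as JSON.

    Parameters
    ----------
    region
        The sales region to summarize.

    Returns
    -------
    str
        A JSON string with one record per product.
    """
    # In a real tool this would come from a database or API; hard-coded here for illustration.
    df = pd.DataFrame({
        "region": ["north", "north", "south"],
        "product": ["widget", "gadget", "widget"],
        "units": [120, 45, 80],
    })
    subset = df[df["region"] == region]
    # Return a JSON string (not the DataFrame itself), and keep it small to limit token usage.
    return subset.head(20).to_json(orient="records")
```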