Introduction

One of the most interesting aspects of modern chat models is their ability to make use of external tools that are defined by the caller.

When making a chat request to the chat model, the caller advertises one or more tools (defined by their function name, description, and a list of expected arguments), and the chat model can choose to respond with one or more “tool calls”. These tool calls are requests from the chat model to the caller to execute the function with the given arguments; the caller is expected to execute the functions and “return” the results by submitting another chat request with the conversation so far, plus the results. The chat model can then use those results in formulating its response, or, it may decide to make additional tool calls.
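To make that round trip concrete, here's a minimal sketch of what these messages look like at the wire level, assuming OpenAI-style payloads (field names vary by provider, the call id is made up, and the tool shown is the weather tool we'll build later in this post; chatlas constructs and parses these messages for you):

# Step 1 (caller -> model): send the user message along with the
# advertised tool definitions.
# Step 2 (model -> caller): the model replies with a tool call
# instead of ordinary text:
assistant_turn = {
    "role": "assistant",
    "tool_calls": [{
        "id": "call_abc123",  # provider-assigned id (made up here)
        "type": "function",
        "function": {
            "name": "get_current_temperature",
            "arguments": '{"latitude": 46.7867, "longitude": -92.1005}',
        },
    }],
}

# Step 3 (caller -> model): execute the function, then send the
# conversation so far plus a "tool" message carrying the result,
# matched to the request by its call id:
tool_result_turn = {
    "role": "tool",
    "tool_call_id": "call_abc123",
    "content": '{"temperature_2m": -12.8, "wind_speed_10m": 2.3}',
}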

Note that the chat model does not directly execute any external tools! It only makes requests for the caller to execute them. It’s easy to think that tool calling might work like this:

Diagram showing the wrong mental model of tool calls: a user initiates a request that flows to the assistant, which then runs the code and returns the result back to the user.

But in fact it works like this:

Diagram showing the correct mental model for tool calls: a user sends a request that needs a tool call, the assistant asks the user to run that tool, the user runs it and returns the result to the assistant, which uses it to generate the final answer.

The value that the chat model brings is not in helping with execution, but with knowing when it makes sense to call a tool, what values to pass as arguments, and how to use the results in formulating its response.

from chatlas import ChatOpenAI

Motivating example

Let’s take a look at an example where we really need an external tool. Chat models generally do not have access to “real-time” information, such as current events, weather, etc. Let’s see what happens when we ask the chat model about the weather in a specific location:

chat = ChatOpenAI(model="gpt-4o-mini")
_ = chat.chat("What's the weather like today in Duluth, MN?")


I don’t have access to real-time data, including current weather information. To find out the weather in Duluth, MN for today, I recommend checking a reliable weather website or using a weather app for the most accurate and up-to-date information.

Fortunately, the model is smart enough to know that it doesn’t have access to real-time information, and it doesn’t try to make up an answer. However, we can help it out by providing a tool that can fetch the weather for a given location.

Defining a tool function

As it turns out, LLMs are pretty good at figuring out ‘structure’ like latitude and longitude from ‘unstructured’ things like a location name. So we can write a tool function that takes a latitude and longitude and returns the current temperature at that location. Here’s an example of how you might write such a function using the Open-Meteo API:

import requests

def get_current_temperature(latitude: float, longitude: float):
    """
    Get the current weather given a latitude and longitude.

    Parameters
    ----------
    latitude
        The latitude of the location.
    longitude
        The longitude of the location.
    """
    lat_lng = f"latitude={latitude}&longitude={longitude}"
    url = f"https://api.open-meteo.com/v1/forecast?{lat_lng}&current=temperature_2m,wind_speed_10m&hourly=temperature_2m,relative_humidity_2m,wind_speed_10m"
    response = requests.get(url)
    json = response.json()
    return json["current"]

Note that we’ve gone through the trouble of adding the following to our function:

  • Type hints for function arguments
  • A docstring that explains what the function does and what arguments it expects (as well as descriptions for the arguments themselves)

Providing these hints and documentation is very important, as it helps the chat model understand how to use your tool correctly!
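chatlas turns that signature and docstring into a schema that gets advertised to the model. As a rough sketch (the exact structure is provider-specific, and chatlas builds it for you), the model sees something like this:

# A hedged sketch of the tool definition derived from the function's
# type hints and docstring (exact structure varies by provider):
tool_schema = {
    "name": "get_current_temperature",
    "description": "Get the current weather given a latitude and longitude.",
    "parameters": {
        "type": "object",
        "properties": {
            "latitude": {
                "type": "number",
                "description": "The latitude of the location.",
            },
            "longitude": {
                "type": "number",
                "description": "The longitude of the location.",
            },
        },
        "required": ["latitude", "longitude"],
    },
}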

Let’s test it:

get_current_temperature(46.7867, -92.1005)
{'time': '2024-12-21T17:30',
 'interval': 900,
 'temperature_2m': -12.8,
 'wind_speed_10m': 2.3}

Using the tool

In order for the LLM to make use of our tool, we need to register it with the chat object. This is done by calling the register_tool method on the chat object.

chat.register_tool(get_current_temperature)

Now let’s retry our original question:

_ = chat.chat("What's the weather like today in Duluth, MN?")



Today in Duluth, MN, the weather is quite cold with a temperature of -11.7°C. There is also a wind speed of 6.6 km/h. Make sure to dress warmly if you’re heading outside!

That’s correct! Without any further guidance, the chat model decided to call our tool function and successfully used its result in formulating its response.

This tool example was extremely simple, but you can imagine doing much more interesting things from tool functions: calling APIs, reading from or writing to a database, kicking off a complex simulation, or even calling a complementary GenAI model (like an image generator). Or, if you are using chatlas in a Shiny app, a tool could set reactive values, triggering a cascade of reactive updates (see the sketch below). This is precisely what the sidebot dashboard does to allow for an AI-assisted “drill-down” into the data.
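Here's a rough sketch of that Shiny idea; the names are hypothetical, and this assumes the tool function runs inside a Shiny server context where setting a reactive value is allowed:

from shiny import reactive

# A reactive value the rest of the app reads from (hypothetical).
current_city = reactive.value("Duluth, MN")

def set_city(city: str):
    """
    Set the city the dashboard is currently focused on.

    Parameters
    ----------
    city
        The city name, e.g. "Duluth, MN".
    """
    current_city.set(city)  # downstream outputs re-render reactively
    return f"Dashboard now focused on {city}"

chat.register_tool(set_city)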

Troubleshooting

When the execution of a tool function fails, chatlas sends the exception message back to the chat model, which gives the model a chance to handle the error gracefully. However, this can also make it confusing when a response doesn’t come back as expected. If you encounter such a situation, you can set echo="all" in the chat.chat() method to see the full conversation, including tool calls and their results.

def get_current_temperature(latitude: float, longitude: float):
    "Get the current weather given a latitude and longitude."
    raise ValueError("Failed to get current temperature")

chat.register_tool(get_current_temperature)

_ = chat.chat("What's the weather like today in Duluth, MN?")



I am currently unable to retrieve the weather information for Duluth, MN. However, you can check a reliable weather source, such as a weather website or app, for the latest updates on the weather in that area.
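To diagnose what went wrong, re-run the failing request with echoing turned on; this prints the tool call and the error message that was sent back to the model (output omitted here):

_ = chat.chat("What's the weather like today in Duluth, MN?", echo="all")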

Tool limitations

Remember that tool arguments come from the chat model, and tool results are returned to the chat model. That means that only simple, JSON-compatible data types can be used as inputs and outputs. It’s highly recommended that you stick to basic types for each function parameter (e.g. str, float/int, bool, None, list, tuple, dict), and forget about using functions, classes, external pointers, and other complex (i.e., non-serializable) Python objects as arguments or return values. Returning data frames seems to work OK, as long as you return the JSON representation (.to_json()); just be careful not to return too much data, as it all counts as tokens (i.e., it counts against your context window limit and also costs you money).
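For example, here’s a hedged sketch of a tool that returns a small data frame as JSON (the function name and data are made up for illustration):

import pandas as pd

def get_hourly_temperatures(latitude: float, longitude: float):
    """
    Get recent hourly temperatures for a location.

    Parameters
    ----------
    latitude
        The latitude of the location.
    longitude
        The longitude of the location.
    """
    # Imagine this came from an API or database query (made up here):
    df = pd.DataFrame({
        "time": pd.date_range("2024-12-21", periods=3, freq="h"),
        "temperature_2m": [-12.8, -12.5, -12.1],
    })
    # Return the JSON representation, not the DataFrame itself, and
    # keep it small: every character counts against the context window.
    return df.to_json(orient="records", date_format="iso")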