A chat object that can be used to interact with a language model.
A Chat is a sequence of user and assistant Turns sent to a specific Provider. A Chat takes care of managing the state associated with the chat; i.e., it records the messages that you send to the server and the messages that you receive back. If you register a tool (i.e., a function that the assistant can call on your behalf), it also takes care of the tool loop.
You should generally not create this object yourself; instead, call ChatOpenAI or friends.
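For example, a minimal round trip looks like this (a sketch; it assumes an OpenAI API key is available in the environment):

from chatlas import ChatOpenAI

chat = ChatOpenAI()
chat.chat("What is the capital of France?")  # sends a user turn; echoes the reply
print(chat.get_turns())  # the user/assistant turns recorded so far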
One of the following (defaults to "none" when stream=True and "text" when stream=False):
- "text": Echo just the text content of the response.
- "output": Echo text and tool call content.
- "all": Echo both the assistant and user turn.
- "none": Do not echo any content.
One of the following (default is "output"):
- "text": Echo just the text content of the response.
- "output": Echo text and tool call content.
- "all": Echo both the assistant and user turn.
- "none": Do not echo any content.
data_model
A Pydantic model describing the structure of the data to extract.
required
echo
EchoOptions
One of the following (default is "none"):
- "text": Echo just the text content of the response.
- "output": Echo text and tool call content.
- "all": Echo both the assistant and user turn.
- "none": Do not echo any content.
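As a sketch of how these parameters fit together (the Person model and the input text are invented for illustration):

from chatlas import ChatOpenAI
from pydantic import BaseModel

class Person(BaseModel):
    name: str
    age: int

chat = ChatOpenAI()
data = chat.extract_data("Alice is 31 years old.", data_model=Person)
print(data)  # data matching the structure of Person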
Whether to include the system prompt in the turns. Default: False.
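For example (assuming the parameter is named include_system_prompt, which this excerpt does not confirm):

from chatlas import ChatOpenAI

chat = ChatOpenAI(system_prompt="Be terse.")
chat.chat("Hello!", echo="none")

turns = chat.get_turns()  # excludes the system prompt by default
turns_all = chat.get_turns(include_system_prompt=True)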
on_tool_request
Chat.on_tool_request(callback)
Register a callback for a tool request event.
A tool request event occurs when the assistant requests a tool to be called on its behalf. Before invoking the tool, on_tool_request handlers are called with the relevant ContentToolRequest object. This is useful if you want to handle tool requests in a custom way, such as logging them or requiring user approval before invoking the tool.
A function to be called when a tool request event occurs. This function must have a single argument, which will be the tool request (i.e., a ContentToolRequest object).
required
Returns
A callable that can be used to remove the callback later.
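For instance, a minimal logging handler might look like this (the .name and .arguments attributes on the request object are assumptions, not confirmed by this page):

from chatlas import ChatOpenAI

def log_request(request):
    # request is a ContentToolRequest; .name/.arguments are assumed attributes
    print(f"Tool requested: {request.name}({request.arguments})")

chat = ChatOpenAI()
cancel = chat.on_tool_request(log_request)
# ...later, remove the callback:
cancel()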
on_tool_result
Chat.on_tool_result(callback)
Register a callback for a tool result event.
A tool result event occurs when a tool has been invoked and the result is ready to be provided to the assistant. After the tool has been invoked, on_tool_result handlers are called with the relevant ContentToolResult object. This is useful if you want to handle tool results in a custom way, such as logging them.
A function to be called when a tool result event occurs. This function must have a single argument, which will be the tool result (i.e., a ContentToolResult object).
required
Returns
A callable that can be used to remove the callback later.
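A similar sketch for logging results (the .value attribute on the result object is an assumption):

from chatlas import ChatOpenAI

def log_result(result):
    # result is a ContentToolResult; .value is an assumed attribute
    print(f"Tool returned: {result.value}")

chat = ChatOpenAI()
cancel = chat.on_tool_result(log_result)
# ...later, remove the callback:
cancel()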
register_tool
Chat.register_tool(func, *, model=None)
Register a tool (function) with the chat.
The function will always be invoked in the current Python process.
Examples
If your tool has straightforward input parameters, you can just register the function directly (type hints and a docstring explaining both what the function does and what the parameters are for are strongly recommended):
from chatlas import ChatOpenAI

def add(a: int, b: int) -> int:
    """
    Add two numbers together.

    Parameters
    ----------
    a : int
        The first number to add.
    b : int
        The second number to add.
    """
    return a + b

chat = ChatOpenAI()
chat.register_tool(add)
chat.chat("What is 2 + 2?")
If your tool has more complex input parameters, you can provide a Pydantic model that corresponds to the input parameters for the function. This way, you can have fields that hold other models (for more complex input parameters) and also more directly document the input parameters:
from chatlas import ChatOpenAI
from pydantic import BaseModel, Field

class AddParams(BaseModel):
    """Add two numbers together."""

    a: int = Field(description="The first number to add.")
    b: int = Field(description="The second number to add.")

def add(a: int, b: int) -> int:
    return a + b

chat = ChatOpenAI()
chat.register_tool(add, model=AddParams)
chat.chat("What is 2 + 2?")
Parameters
func
The function to be invoked when the tool is called.
model
A Pydantic model that describes the input parameters for the function. If not provided, the model will be inferred from the function's type hints. The primary reason to provide a model is to document the input parameters more precisely (as in the example above). Note that the name and docstring of the model take precedence over the name and docstring of the function.
Whether to yield just text content or include rich content objects (e.g., tool calls) when relevant.
'text'
echo
EchoOptions
One of the following (default is "none"):
- "text": Echo just the text content of the response.
- "output": Echo text and tool call content.
- "all": Echo both the assistant and user turn.
- "none": Do not echo any content.
An (unconsumed) response from the chat. Iterate over this object to consume the response.
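For example, to consume the response incrementally (a sketch assuming the method is Chat.stream):

from chatlas import ChatOpenAI

chat = ChatOpenAI()
response = chat.stream("Write a haiku about Python.")
for chunk in response:  # iterating consumes the response
    print(chunk, end="")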
token_count
Chat.token_count(*args, data_model=None)
Get an estimated token count for the given input.
Estimate the token size of input content. This can help determine whether input(s) and/or conversation history (i.e., .get_turns()) should be reduced in size before sending it to the model.
If the input is meant for data extraction (i.e., .extract_data()), then this should be the Pydantic model that describes the structure of the data to extract.
Remember that the token count is an estimate. Also, models based on ChatOpenAI() currently do not take tools into account when estimating token counts.
Examples
from chatlas import ChatAnthropic

chat = ChatAnthropic()

# Estimate the token count before sending the input
print(chat.token_count("What is 2 + 2?"))

# Once input is sent, you can get the actual input and output
# token counts from the chat object
chat.chat("What is 2 + 2?", echo="none")
print(chat.token_usage())
token_count_async
Chat.token_count_async(*args, data_model=None)
Get an estimated token count for the given input asynchronously.
Estimate the token size of input content. This can help determine whether input(s) and/or conversation history (i.e., .get_turns()) should be reduced in size before sending it to the model.
If this input is meant for data extraction (i.e., .extract_data_async()), then this should be the Pydantic model that describes the structure of the data to extract.
If “cumulative” (the default), the result can be summed to get the chat’s overall token usage (helpful for computing overall cost of the chat). If “discrete”, the result can be summed to get the number of tokens the turns will cost to generate the next response (helpful for estimating cost of the next response, or for determining if you are about to exceed the token limit).
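A sketch of the cumulative case (the "input"/"output" keys below are assumed for illustration; inspect the actual return value for the real shape):

from chatlas import ChatOpenAI

chat = ChatOpenAI()
chat.chat("What is 2 + 2?", echo="none")

usage = chat.token_usage()  # "cumulative" by default
# Summing cumulative entries estimates the chat's overall token cost.
total = sum(u["input"] + u["output"] for u in usage)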
If the chat’s turns (i.e., .get_turns()) are not in an expected format. This may happen if the chat history is manually set (i.e., .set_turns()). In this case, you can inspect the “raw” token values via the .get_turns() method (each turn has a .tokens attribute).
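For example, to inspect those raw values:

from chatlas import ChatOpenAI

chat = ChatOpenAI()
chat.chat("What is 2 + 2?", echo="none")

for turn in chat.get_turns():
    print(turn.tokens)  # raw token counts recorded on each turn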