A chat object that can be used to interact with a language model.
A Chat is a sequence of user and assistant Turns sent to a specific Provider. A Chat takes care of managing the state associated with the chat; i.e., it records the messages that you send to the server and the messages that you receive back. If you register a tool (i.e., a function that the assistant can call on your behalf), it also takes care of the tool loop.
You should generally not create this object yourself; instead, call ChatOpenAI or friends.
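For example, a typical round trip looks like this (a minimal sketch using the ChatOpenAI constructor and the .chat() method shown in the examples below):

```python
from chatlas import ChatOpenAI

chat = ChatOpenAI()

# Each .chat() call records the user turn and the assistant's reply,
# so follow-up questions can build on the accumulated history.
chat.chat("What is 2 + 2?")
chat.chat("Now multiply that by 10.")
```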
Whether to include the system prompt in the turns. Default: False.
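For illustration, a minimal sketch of retrieving turns with and without the system prompt. This assumes the flag above is an include_system_prompt argument to the .get_turns() method referenced elsewhere in these docs; the parameter name is an assumption, only the behavior described above is documented:

```python
from chatlas import ChatOpenAI

chat = ChatOpenAI(system_prompt="Be terse.")
chat.chat("What is 2 + 2?", echo="none")

turns = chat.get_turns()  # system prompt excluded by default
# Assumed parameter name; include the system prompt in the result
all_turns = chat.get_turns(include_system_prompt=True)
```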
register_tool
Chat.register_tool(func, *, model=None)
Register a tool (function) with the chat.
The function will always be invoked in the current Python process.
Examples
If your tool has straightforward input parameters, you can just register the function directly (type hints and a docstring explaining both what the function does and what the parameters are for are strongly recommended):
```python
from chatlas import ChatOpenAI, Tool

def add(a: int, b: int) -> int:
    '''
    Add two numbers together.

    Parameters
    ----------
    a : int
        The first number to add.
    b : int
        The second number to add.
    '''
    return a + b

chat = ChatOpenAI()
chat.register_tool(add)
chat.chat("What is 2 + 2?")
```
If your tool has more complex input parameters, you can provide a Pydantic model that corresponds to the input parameters for the function. This way, you can have fields that hold other models (for more complex input parameters), and also more directly document the input parameters:
```python
from chatlas import ChatOpenAI, Tool
from pydantic import BaseModel, Field

class AddParams(BaseModel):
    '''Add two numbers together.'''

    a: int = Field(description="The first number to add.")
    b: int = Field(description="The second number to add.")

def add(a: int, b: int) -> int:
    return a + b

chat = ChatOpenAI()
chat.register_tool(add, model=AddParams)
chat.chat("What is 2 + 2?")
```
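Fields can also hold other Pydantic models, which is useful for nested input. A sketch of what that might look like (the Point and DistanceParams names are hypothetical, and Point.model_validate() is used defensively so the tool works whether the arguments arrive as dicts or as model instances):

```python
from chatlas import ChatOpenAI
from pydantic import BaseModel, Field

class Point(BaseModel):
    '''A point in 2-D space.'''

    x: float = Field(description="The x coordinate.")
    y: float = Field(description="The y coordinate.")

class DistanceParams(BaseModel):
    '''Compute the distance between two points.'''

    p1: Point = Field(description="The first point.")
    p2: Point = Field(description="The second point.")

def distance(p1, p2) -> float:
    # Coerce dicts (or model instances) into Point objects
    p1, p2 = Point.model_validate(p1), Point.model_validate(p2)
    return ((p1.x - p2.x) ** 2 + (p1.y - p2.y) ** 2) ** 0.5

chat = ChatOpenAI()
chat.register_tool(distance, model=DistanceParams)
chat.chat("How far apart are (0, 0) and (3, 4)?")
```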
Parameters
func
The function to be invoked when the tool is called.

model
A Pydantic model that describes the input parameters for the function. If not provided, the model will be inferred from the function's type hints. The primary reason to provide a model is to support more complex input parameters and/or to document them more directly. Note that the name and docstring of the model take precedence over the name and docstring of the function.
An (unconsumed) response from the chat. Iterate over this object to consume the response.
token_count
Chat.token_count(*args, data_model=None)
Get an estimated token count for the given input.
Estimate the token size of input content. This can help determine whether input(s) and/or conversation history (i.e., .get_turns()) should be reduced in size before sending it to the model.
If the input is meant for data extraction (i.e., .extract_data()), then this should be the Pydantic model that describes the structure of the data to extract.
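For example, a sketch of estimating the size of a data-extraction request before sending it (the Person model here is hypothetical):

```python
from chatlas import ChatOpenAI
from pydantic import BaseModel

class Person(BaseModel):
    name: str
    age: int

chat = ChatOpenAI()

# Estimate tokens for a structured-extraction request
print(chat.token_count("Alice is 42 years old.", data_model=Person))
```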
Remember that the token count is an estimate. Also, models based on ChatOpenAI() currently do not take tools into account when estimating token counts.
Examples
```python
from chatlas import ChatAnthropic

chat = ChatAnthropic()

# Estimate the token count before sending the input
print(chat.token_count("What is 2 + 2?"))

# Once input is sent, you can get the actual input and output
# token counts from the chat object
chat.chat("What is 2 + 2?", echo="none")
print(chat.token_usage())
```
token_count_async
Chat.token_count_async(*args, data_model=None)
Get an estimated token count for the given input asynchronously.
Estimate the token size of input content. This can help determine whether input(s) and/or conversation history (i.e., .get_turns()) should be reduced in size before sending it to the model.
If this input is meant for data extraction (i.e., .extract_data_async()), then this should be the Pydantic model that describes the structure of the data to extract.
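For example, a minimal sketch of the async variant (mirroring the synchronous example above):

```python
import asyncio
from chatlas import ChatAnthropic

async def main():
    chat = ChatAnthropic()
    # Same estimate as token_count(), but awaitable
    n = await chat.token_count_async("What is 2 + 2?")
    print(n)

asyncio.run(main())
```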
If “cumulative” (the default), the result can be summed to get the chat’s overall token usage (helpful for computing overall cost of the chat). If “discrete”, the result can be summed to get the number of tokens the turns will cost to generate the next response (helpful for estimating cost of the next response, or for determining if you are about to exceed the token limit).
If the chat’s turns (i.e., .get_turns()) are not in an expected format. This may happen if the chat history is manually set (i.e., .set_turns()). In this case, you can inspect the “raw” token values via the .get_turns() method (each turn has a .tokens attribute).
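A sketch of that inspection, assuming each Turn exposes the .tokens attribute noted above:

```python
# Inspect raw per-turn token values, e.g. after .set_turns()
for turn in chat.get_turns():
    print(turn.tokens)
```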