Chat
Chat(provider, system_prompt=None)
A chat object that can be used to interact with a language model.
A Chat is a sequence of user and assistant Turns sent to a specific Provider. A Chat takes care of managing the state associated with the chat; i.e., it records the messages that you send to the server and the messages that you receive back. If you register a tool (i.e., a function that the assistant can call on your behalf), it also takes care of the tool loop.
You should generally not create this object yourself; instead, call ChatOpenAI() or friends.
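For example, a minimal sketch (assuming an OpenAI API key is available in the OPENAI_API_KEY environment variable):

```python
from chatlas import ChatOpenAI

# ChatOpenAI() constructs the Chat for you; there is no need to
# instantiate Chat directly
chat = ChatOpenAI(system_prompt="You are a terse assistant.")

chat.chat("What is 2 + 2?")            # first user turn
chat.chat("Now multiply that by 10.")  # turns accumulate on the chat
```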
Attributes
Name | Description |
---|---|
current_display | Get the currently active markdown display, if any. |
system_prompt | A property to get (or set) the system prompt for the chat. |
Methods
Name | Description |
---|---|
add_turn | Add a turn to the chat. |
app | Enter a web-based chat app to interact with the LLM. |
chat | Generate a response from the chat. |
chat_async | Generate a response from the chat asynchronously. |
cleanup_mcp_tools | Close MCP server connections (and their corresponding tools). |
console | Enter a chat console to interact with the LLM. |
export | Export the chat history to a file. |
extract_data | Extract structured data from the given input. |
extract_data_async | Extract structured data from the given input asynchronously. |
get_cost | Estimate the cost of the chat. |
get_last_turn | Get the last turn in the chat with a specific role. |
get_tokens | Get the tokens for each turn in the chat. |
get_tools | Get the list of registered tools. |
get_turns | Get all the turns (i.e., message contents) in the chat. |
on_tool_request | Register a callback for a tool request event. |
on_tool_result | Register a callback for a tool result event. |
register_mcp_tools_http_stream_async | Register tools from an MCP server using streamable HTTP transport. |
register_mcp_tools_stdio_async | Register tools from an MCP server using stdio (standard input/output) transport. |
register_tool | Register a tool (function) with the chat. |
set_echo_options | Set echo styling options for the chat. |
set_model_params | Set common model parameters for the chat. |
set_tools | Set the tools for the chat. |
set_turns | Set the turns of the chat. |
stream | Generate a response from the chat in a streaming fashion. |
stream_async | Generate a response from the chat in a streaming fashion asynchronously. |
token_count | Get an estimated token count for the given input. |
token_count_async | Get an estimated token count for the given input asynchronously. |
add_turn
Chat.add_turn(turn)
Add a turn to the chat.
Parameters
Name | Type | Description | Default |
---|---|---|---|
turn | Turn | The turn to add. Turns with the role “system” are not allowed. | required |
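As a rough sketch of manually recording history (assuming Turn is importable from chatlas and accepts a role plus contents):

```python
from chatlas import ChatOpenAI, Turn

chat = ChatOpenAI()

# Record an exchange without contacting the provider; note that
# turns with the role "system" are not allowed here
chat.add_turn(Turn("user", "What is 2 + 2?"))
chat.add_turn(Turn("assistant", "4"))
```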
app
```python
Chat.app(
    stream=True,
    port=0,
    host='127.0.0.1',
    launch_browser=True,
    bg_thread=None,
    echo=None,
    content='all',
    kwargs=None,
)
```
Enter a web-based chat app to interact with the LLM.
Parameters
Name | Type | Description | Default |
---|---|---|---|
stream | bool | Whether to stream the response (i.e., have the response appear in chunks). | True |
port | int | The port to run the app on (the default is 0, which will choose a random port). | 0 |
host | str | The host to run the app on (the default is “127.0.0.1”). | '127.0.0.1' |
launch_browser | bool | Whether to launch a browser window. | True |
bg_thread | Optional[bool] | Whether to run the app in a background thread. If None, the app will run in a background thread if the current environment is a notebook. | None |
echo | Optional[EchoOptions] | One of the following (defaults to "none" when stream=True and "text" when stream=False): - "text": Echo just the text content of the response. - "output": Echo text and tool call content. - "all": Echo both the assistant and user turn. - "none": Do not echo any content. | None |
content | Literal['text', 'all'] | Whether to display text content or all content (i.e., tool calls). | 'all' |
kwargs | Optional[SubmitInputArgsT] | Additional keyword arguments to pass to the method used for requesting the response. | None |
chat
Chat.chat(*args, echo='output', stream=True, kwargs=None)
Generate a response from the chat.
Parameters
Name | Type | Description | Default |
---|---|---|---|
args | Content | str | The user input(s) to generate a response from. | () |
echo | EchoOptions | One of the following (default is “output”): - "text": Echo just the text content of the response. - "output": Echo text and tool call content. - "all": Echo both the assistant and user turn. - "none": Do not echo any content. | 'output' |
stream | bool | Whether to stream the response (i.e., have the response appear in chunks). | True |
kwargs | Optional[SubmitInputArgsT] | Additional keyword arguments to pass to the method used for requesting the response. | None |
Returns
Name | Type | Description |
---|---|---|
ChatResponse | A (consumed) response from the chat. Apply str() to this object to get the text content of the response. |
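A small usage sketch (the model’s exact reply will vary):

```python
from chatlas import ChatOpenAI

chat = ChatOpenAI()

# With echo="none", nothing is printed; the consumed response can
# still be converted to text with str()
response = chat.chat("What is the capital of France?", echo="none")
print(str(response))
```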
chat_async
Chat.chat_async(*args, echo='output', stream=True, kwargs=None)
Generate a response from the chat asynchronously.
Parameters
Name | Type | Description | Default |
---|---|---|---|
args | Content | str | The user input(s) to generate a response from. | () |
echo | EchoOptions |
One of the following (default is “output”): - "text" : Echo just the text content of the response. - "output" : Echo text and tool call content. - "all" : Echo both the assistant and user turn. - "none" : Do not echo any content. |
'output' |
stream | bool | Whether to stream the response (i.e., have the response appear in chunks). | True |
kwargs | Optional[SubmitInputArgsT] | Additional keyword arguments to pass to the method used for requesting the response. | None |
Returns
Name | Type | Description |
---|---|---|
ChatResponseAsync | A (consumed) response from the chat. Apply str() to this object to get the text content of the response. |
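The asynchronous variant is used the same way, just awaited. A minimal sketch:

```python
import asyncio

from chatlas import ChatOpenAI

async def main():
    chat = ChatOpenAI()
    response = await chat.chat_async("What is the capital of France?", echo="none")
    print(str(response))

asyncio.run(main())
```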
cleanup_mcp_tools
Chat.cleanup_mcp_tools(names=None)
Close MCP server connections (and their corresponding tools).
This method closes the MCP client sessions and removes the tools registered from the MCP servers. If specific names are provided, only the tools and sessions associated with those names are cleaned up. If no names are provided, all registered MCP tools and sessions are cleaned up.
Parameters
Name | Type | Description | Default |
---|---|---|---|
names | Optional[Sequence[str]] | If provided, only clean up the tools and session associated with these names. If not provided, clean up all registered MCP tools and sessions. | None |
Returns
Name | Type | Description |
---|---|---|
None |
console
Chat.console(echo='output', stream=True, kwargs=None)
Enter a chat console to interact with the LLM.
To quit, input ‘exit’ or press Ctrl+C.
Parameters
Name | Type | Description | Default |
---|---|---|---|
echo | EchoOptions | One of the following (default is “output”): - "text": Echo just the text content of the response. - "output": Echo text and tool call content. - "all": Echo both the assistant and user turn. - "none": Do not echo any content. | 'output' |
stream | bool | Whether to stream the response (i.e., have the response appear in chunks). | True |
kwargs | Optional[SubmitInputArgsT] | Additional keyword arguments to pass to the method used for requesting the response | None |
Returns
Name | Type | Description |
---|---|---|
None |
export
```python
Chat.export(
    filename,
    *,
    turns=None,
    title=None,
    content='text',
    include_system_prompt=True,
    overwrite=False,
)
```
Export the chat history to a file.
Parameters
Name | Type | Description | Default |
---|---|---|---|
filename | str | Path | The filename to export the chat to. Currently this must be a .md or .html file. | required |
turns | Optional[Sequence[Turn]] | The turns (i.e., .get_turns()) to export. If not provided, the chat’s current turns will be used. | None |
title | Optional[str] | A title to place at the top of the exported file. | None |
overwrite | bool | Whether to overwrite the file if it already exists. | False |
content | Literal['text', 'all'] | Whether to include text content, all content (i.e., tool calls), or no content. | 'text' |
include_system_prompt | bool | Whether to include the system prompt in the export. | True |
Returns
Name | Type | Description |
---|---|---|
Path | The path to the exported file. |
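For instance, a sketch that exports a short conversation to markdown (the filename and title here are arbitrary):

```python
from chatlas import ChatOpenAI

chat = ChatOpenAI()
chat.chat("What is 2 + 2?", echo="none")

# Returns the path to the exported file
path = chat.export("conversation.md", title="Arithmetic chat", overwrite=True)
print(path)
```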
extract_data
Chat.extract_data(*args, data_model, echo='none', stream=False)
Extract structured data from the given input.
Parameters
Name | Type | Description | Default |
---|---|---|---|
args | Content | str | The input to extract data from. | () |
data_model | type[BaseModel] | A Pydantic model describing the structure of the data to extract. | required |
echo | EchoOptions | One of the following (default is “none”): - "text": Echo just the text content of the response. - "output": Echo text and tool call content. - "all": Echo both the assistant and user turn. - "none": Do not echo any content. | 'none' |
stream | bool | Whether to stream the response (i.e., have the response appear in chunks). | False |
Returns
Name | Type | Description |
---|---|---|
dict[str, Any] | The extracted data. |
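A sketch of structured extraction with a simple Pydantic model (the model fields and input text are illustrative):

```python
from chatlas import ChatOpenAI
from pydantic import BaseModel, Field

class Person(BaseModel):
    """A person mentioned in the text."""

    name: str = Field(description="The person's full name.")
    age: int = Field(description="The person's age in years.")

chat = ChatOpenAI()
data = chat.extract_data(
    "Ada Lovelace died at age 36.",
    data_model=Person,
)
print(data)  # e.g., {'name': 'Ada Lovelace', 'age': 36}
```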
extract_data_async
Chat.extract_data_async(*args, data_model, echo='none', stream=False)
Extract structured data from the given input asynchronously.
Parameters
Name | Type | Description | Default |
---|---|---|---|
args | Content | str | The input to extract data from. | () |
data_model | type[BaseModel] | A Pydantic model describing the structure of the data to extract. | required |
echo | EchoOptions | One of the following (default is “none”): - "text": Echo just the text content of the response. - "output": Echo text and tool call content. - "all": Echo both the assistant and user turn. - "none": Do not echo any content. | 'none' |
stream | bool | Whether to stream the response (i.e., have the response appear in chunks). Defaults to True if echo is not “none”. | False |
Returns
Name | Type | Description |
---|---|---|
dict[str, Any] | The extracted data. |
get_cost
Chat.get_cost(options='all', token_price=None)
Estimate the cost of the chat.
Note
This is a rough estimate, treat it as such. Providers may change their pricing frequently and without notice.
Parameters
Name | Type | Description | Default |
---|---|---|---|
options | Literal['all', 'last'] | One of the following (default is “all”): - "all": Return the total cost of all turns in the chat. - "last": Return the cost of the last turn in the chat. | 'all' |
token_price | Optional[tuple[float, float]] | An optional tuple in the format of (input_token_cost, output_token_cost) for bringing your own cost information. - "input_token_cost": The cost per user token in USD per million tokens. - "output_token_cost": The cost per assistant token in USD per million tokens. | None |
Returns
Name | Type | Description |
---|---|---|
float | The cost of the chat, in USD. |
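A usage sketch (the prices passed to token_price below are made-up numbers, not real rates):

```python
from chatlas import ChatOpenAI

chat = ChatOpenAI()
chat.chat("What is 2 + 2?", echo="none")

print(chat.get_cost())                # total cost of all turns, in USD
print(chat.get_cost(options="last"))  # cost of the last turn only

# Bring your own pricing: (input, output) USD per million tokens
print(chat.get_cost(token_price=(2.50, 10.00)))
```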
get_last_turn
Chat.get_last_turn(role='assistant')
Get the last turn in the chat with a specific role.
Parameters
Name | Type | Description | Default |
---|---|---|---|
role | Literal['assistant', 'user', 'system'] | The role of the turn to return. | 'assistant' |
get_tokens
Chat.get_tokens()
Get the tokens for each turn in the chat.
Returns
Name | Type | Description |
---|---|---|
list[TokensDict] | A list of dictionaries with the token counts for each (non-system) turn. |
Raises
Name | Type | Description |
---|---|---|
ValueError | If the chat’s turns (i.e., .get_turns() ) are not in an expected format. This may happen if the chat history is manually set (i.e., .set_turns() ). In this case, you can inspect the “raw” token values via the .get_turns() method (each turn has a .tokens attribute). |
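A quick sketch (the exact keys in each dictionary come from TokensDict):

```python
from chatlas import ChatOpenAI

chat = ChatOpenAI()
chat.chat("What is 2 + 2?", echo="none")

# One dictionary per (non-system) turn
for tokens in chat.get_tokens():
    print(tokens)
```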
get_tools
Chat.get_tools()
Get the list of registered tools.
Returns
Name | Type | Description |
---|---|---|
list[Tool] | A list of Tool instances that are currently registered with the chat. |
get_turns
Chat.get_turns(include_system_prompt=False)
Get all the turns (i.e., message contents) in the chat.
Parameters
Name | Type | Description | Default |
---|---|---|---|
include_system_prompt | bool | Whether to include the system prompt in the turns. | False |
on_tool_request
Chat.on_tool_request(callback)
Register a callback for a tool request event.
A tool request event occurs when the assistant requests a tool to be called on its behalf. Before invoking the tool, on_tool_request handlers are called with the relevant ContentToolRequest object. This is useful if you want to handle tool requests in a custom way, such as logging them or requiring user approval before invoking the tool.
Parameters
Name | Type | Description | Default |
---|---|---|---|
callback | Callable[[ContentToolRequest], None] | A function to be called when a tool request event occurs. This function must have a single argument, which will be the tool request (i.e., a ContentToolRequest object). | required |
Returns
Name | Type | Description |
---|---|---|
A callable that can be used to remove the callback later. |
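A logging sketch. The attribute names read off the request object (name, arguments) are assumptions here; inspect the ContentToolRequest you receive to confirm them:

```python
from chatlas import ChatOpenAI

chat = ChatOpenAI()

def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"It is sunny in {city}."

chat.register_tool(get_weather)

def log_request(request):
    # `name` and `arguments` are assumed attribute names
    print(f"Tool requested: {request.name}({request.arguments})")

unregister = chat.on_tool_request(log_request)
chat.chat("What's the weather in Paris?")
unregister()  # remove the callback when it is no longer needed
```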
on_tool_result
Chat.on_tool_result(callback)
Register a callback for a tool result event.
A tool result event occurs when a tool has been invoked and the result is ready to be provided to the assistant. After the tool has been invoked, on_tool_result handlers are called with the relevant ContentToolResult object. This is useful if you want to handle tool results in a custom way, such as logging them.
Parameters
Name | Type | Description | Default |
---|---|---|---|
callback | Callable[[ContentToolResult], None] | A function to be called when a tool result event occurs. This function must have a single argument, which will be the tool result (i.e., a ContentToolResult object). | required |
Returns
Name | Type | Description |
---|---|---|
A callable that can be used to remove the callback later. |
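A matching sketch for results; the value attribute is likewise an assumption to be confirmed against the ContentToolResult you receive:

```python
from chatlas import ChatOpenAI

chat = ChatOpenAI()

def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"It is sunny in {city}."

chat.register_tool(get_weather)

def log_result(result):
    # `value` is an assumed attribute name
    print(f"Tool returned: {result.value}")

chat.on_tool_result(log_result)
chat.chat("What's the weather in Oslo?")
```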
register_mcp_tools_http_stream_async
```python
Chat.register_mcp_tools_http_stream_async(
    url,
    include_tools=(),
    exclude_tools=(),
    name=None,
    namespace=None,
    transport_kwargs=None,
)
```
Register tools from an MCP server using streamable HTTP transport.
Connects to an MCP server (that communicates over a streamable HTTP transport) and registers the available tools. This is useful for utilizing tools provided by an MCP server running on a remote server (or locally) over HTTP.
Pre-requisites
Requires the mcp package to be installed. Install it with:
pip install mcp
Parameters
Name | Type | Description | Default |
---|---|---|---|
url | str | URL endpoint where the Streamable HTTP server is mounted (e.g., http://localhost:8000/mcp). | required |
name | Optional[str] | A unique name for the MCP server session. If not provided, the name is derived from the MCP server information. This name is primarily useful for cleanup purposes (i.e., to close a particular MCP session). | None |
include_tools | Sequence[str] | List of tool names to include. By default, all available tools are included. | () |
exclude_tools | Sequence[str] | List of tool names to exclude. This parameter and include_tools are mutually exclusive. | () |
namespace | Optional[str] | A namespace to prepend to tool names (i.e., namespace.tool_name) from this MCP server. This is primarily useful to avoid name collisions with other tools already registered with the chat. This namespace applies when tools are advertised to the LLM, so try to use a meaningful name that describes the server and/or the tools it provides. For example, if you have a server that provides tools for mathematical operations, you might use math as the namespace. | None |
transport_kwargs | Optional[dict[str, Any]] | Additional keyword arguments for the transport layer (i.e., mcp.client.streamable_http.streamablehttp_client). | None |
Returns
Name | Type | Description |
---|---|---|
None |
See Also
- .cleanup_mcp_tools_async(): Cleanup registered MCP tools.
- .register_mcp_tools_stdio_async(): Register tools from an MCP server using stdio transport.
Note
Unlike the .register_mcp_tools_stdio_async() method, this method does not launch an MCP server. Instead, it assumes an HTTP server is already running at the specified URL. This is useful for connecting to an existing MCP server that is already running and serving tools.
Examples
Assuming you have a Python script my_mcp_server.py that implements an MCP server like so:
```python
from mcp.server.fastmcp import FastMCP

app = FastMCP("my_server")

@app.tool(description="Add two numbers.")
def add(x: int, y: int) -> int:
    return x + y

app.run(transport="streamable-http")
```
You can launch this server like so:
python my_mcp_server.py
Then, you can register this server with the chat as follows:
```python
await chat.register_mcp_tools_http_stream_async(
    url="http://localhost:8080/mcp"
)
```
register_mcp_tools_stdio_async
```python
Chat.register_mcp_tools_stdio_async(
    command,
    args,
    name=None,
    include_tools=(),
    exclude_tools=(),
    namespace=None,
    transport_kwargs=None,
)
```
Register tools from an MCP server using stdio (standard input/output) transport.
Useful for launching an MCP server and registering its tools with the chat – all from the same Python process.
In more detail, this method:
- Executes the given command with the provided args. This should start an MCP server that communicates via stdio.
- Establishes a client connection to the MCP server using the mcp package.
- Registers the available tools from the MCP server with the chat.
- Returns a cleanup callback to close the MCP session and remove the tools.
Pre-requisites
Requires the mcp package to be installed. Install it with:
pip install mcp
Parameters
Name | Type | Description | Default |
---|---|---|---|
command | str | System command to execute to start the MCP server (e.g., python). | required |
args | list[str] | Arguments to pass to the system command (e.g., ["-m", "my_mcp_server"]). | required |
name | Optional[str] | A unique name for the MCP server session. If not provided, the name is derived from the MCP server information. This name is primarily useful for cleanup purposes (i.e., to close a particular MCP session). | None |
include_tools | Sequence[str] | List of tool names to include. By default, all available tools are included. | () |
exclude_tools | Sequence[str] | List of tool names to exclude. This parameter and include_tools are mutually exclusive. | () |
namespace | Optional[str] | A namespace to prepend to tool names (i.e., namespace.tool_name) from this MCP server. This is primarily useful to avoid name collisions with other tools already registered with the chat. This namespace applies when tools are advertised to the LLM, so try to use a meaningful name that describes the server and/or the tools it provides. For example, if you have a server that provides tools for mathematical operations, you might use math as the namespace. | None |
transport_kwargs | Optional[dict[str, Any]] | Additional keyword arguments for the stdio transport layer (i.e., mcp.client.stdio.stdio_client). | None |
Returns
Name | Type | Description |
---|---|---|
None |
See Also
- .cleanup_mcp_tools_async(): Cleanup registered MCP tools.
- .register_mcp_tools_http_stream_async(): Register tools from an MCP server using streamable HTTP transport.
Examples
Assuming you have a Python script my_mcp_server.py that implements an MCP server like so:
```python
from mcp.server.fastmcp import FastMCP

app = FastMCP("my_server")

@app.tool(description="Add two numbers.")
def add(y: int, z: int) -> int:
    return y + z

app.run(transport="stdio")
```
You can register this server with the chat as follows:
```python
from chatlas import ChatOpenAI

chat = ChatOpenAI()

await chat.register_mcp_tools_stdio_async(
    command="python",
    args=["-m", "my_mcp_server"],
)
```
register_tool
Chat.register_tool(func, *, force=False, model=None)
Register a tool (function) with the chat.
The function will always be invoked in the current Python process.
Examples
If your tool has straightforward input parameters, you can just register the function directly (type hints, and a docstring explaining both what the function does and what the parameters are for, are strongly recommended):
```python
from chatlas import ChatOpenAI

def add(a: int, b: int) -> int:
    """
    Add two numbers together.

    Parameters
    ----------
    a : int
        The first number to add.
    b : int
        The second number to add.
    """
    return a + b

chat = ChatOpenAI()
chat.register_tool(add)
chat.chat("What is 2 + 2?")
```
If your tool has more complex input parameters, you can provide a Pydantic model that corresponds to the input parameters for the function. This way, you can have fields that hold other model(s) (for more complex input parameters), and also more directly document the input parameters:
```python
from chatlas import ChatOpenAI
from pydantic import BaseModel, Field

class AddParams(BaseModel):
    """Add two numbers together."""

    a: int = Field(description="The first number to add.")
    b: int = Field(description="The second number to add.")

def add(a: int, b: int) -> int:
    return a + b

chat = ChatOpenAI()
chat.register_tool(add, model=AddParams)
chat.chat("What is 2 + 2?")
```
Parameters
Name | Type | Description | Default |
---|---|---|---|
func | Callable[…, Any] | Callable[…, Awaitable[Any]] | The function to be invoked when the tool is called. | required |
force | bool | If True, overwrite any existing tool with the same name. If False (the default), raise an error if a tool with the same name already exists. | False |
model | Optional[type[BaseModel]] | A Pydantic model that describes the input parameters for the function. If not provided, the model will be inferred from the function’s type hints. The primary reason to provide a model is to more directly document the input parameters (as in the example above). Note that the name and docstring of the model take precedence over the name and docstring of the function. | None |
Raises
Name | Type | Description |
---|---|---|
ValueError | If a tool with the same name already exists and force is False. |
set_echo_options
Chat.set_echo_options(rich_markdown=None, rich_console=None, css_styles=None)
Set echo styling options for the chat.
Parameters
Name | Type | Description | Default |
---|---|---|---|
rich_markdown | Optional[dict[str, Any]] | A dictionary of options to pass to rich.markdown.Markdown(). This is only relevant when outputting to the console. | None |
rich_console | Optional[dict[str, Any]] | A dictionary of options to pass to rich.console.Console(). This is only relevant when outputting to the console. | None |
css_styles | Optional[dict[str, str]] | A dictionary of CSS styles to apply to IPython.display.Markdown(). This is only relevant when outputting to the browser. | None |
set_model_params
```python
Chat.set_model_params(
    temperature=MISSING,
    top_p=MISSING,
    top_k=MISSING,
    frequency_penalty=MISSING,
    presence_penalty=MISSING,
    seed=MISSING,
    max_tokens=MISSING,
    log_probs=MISSING,
    stop_sequences=MISSING,
    kwargs=MISSING,
)
```
Set common model parameters for the chat.
A unified interface for setting common model parameters across different providers. This method is useful for setting parameters that are commonly supported by most providers, such as temperature, top_p, etc.
By default, if the parameter is not set (i.e., set to MISSING), the provider’s default value is used. If you want to reset a parameter to its default value, set it to None.
Parameters
Name | Type | Description | Default |
---|---|---|---|
temperature | float | None | MISSING_TYPE | Temperature of the sampling distribution. | MISSING |
top_p | float | None | MISSING_TYPE | The cumulative probability for token selection. | MISSING |
top_k | int | None | MISSING_TYPE | The number of highest probability vocabulary tokens to keep. | MISSING |
frequency_penalty | float | None | MISSING_TYPE | Frequency penalty for generated tokens. | MISSING |
presence_penalty | float | None | MISSING_TYPE | Presence penalty for generated tokens. | MISSING |
seed | int | None | MISSING_TYPE | Seed for random number generator. | MISSING |
max_tokens | int | None | MISSING_TYPE | Maximum number of tokens to generate. | MISSING |
log_probs | bool | None | MISSING_TYPE | Include the log probabilities in the output? | MISSING |
stop_sequences | list[str] | None | MISSING_TYPE | A list of strings to stop generation on. | MISSING |
kwargs | SubmitInputArgsT | None | MISSING_TYPE | Additional keyword arguments to use when submitting input to the model. When calling this method repeatedly with different parameters, only the parameters from the last call will be used. | MISSING |
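For example, a sketch of tightening and then resetting a parameter:

```python
from chatlas import ChatOpenAI

chat = ChatOpenAI()

# Lower the temperature and cap the response length
chat.set_model_params(temperature=0.2, max_tokens=200)
chat.chat("Describe the moon in one sentence.")

# None resets a parameter to the provider's default; parameters
# left at MISSING are not touched
chat.set_model_params(temperature=None)
```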
set_tools
Chat.set_tools(tools)
Set the tools for the chat.
This replaces any previously registered tools with the provided list of tools. This is for advanced usage – typically, you would use .register_tool() to register individual tools as needed.
Parameters
Name | Type | Description | Default |
---|---|---|---|
tools | list[Callable[…, Any] | Callable[…, Awaitable[Any]] | Tool] | A list of Tool instances to set as the chat’s tools. | required |
set_turns
Chat.set_turns(turns)
Set the turns of the chat.
Replaces the current chat history state (i.e., turns) with the provided turns. This can be useful for:
- Clearing (or trimming) the chat history (i.e., .set_turns([])).
- Restoring context from a previous chat.
Parameters
Name | Type | Description | Default |
---|---|---|---|
turns | Sequence[Turn] | The turns to set. Turns with the role “system” are not allowed. | required |
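A sketch of trimming and clearing history:

```python
from chatlas import ChatOpenAI

chat = ChatOpenAI()
chat.chat("What is 2 + 2?", echo="none")
chat.chat("Multiply that by 10.", echo="none")

# Keep only the most recent user/assistant exchange
chat.set_turns(chat.get_turns()[-2:])

# Or clear the history entirely
chat.set_turns([])
```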
stream
Chat.stream(*args, content='text', echo='none', kwargs=None)
Generate a response from the chat in a streaming fashion.
Parameters
Name | Type | Description | Default |
---|---|---|---|
args | Content | str | The user input(s) to generate a response from. | () |
content | Literal['text', 'all'] | Whether to yield just text content or include rich content objects (e.g., tool calls) when relevant. | 'text' |
echo | EchoOptions | One of the following (default is “none”): - "text": Echo just the text content of the response. - "output": Echo text and tool call content. - "all": Echo both the assistant and user turn. - "none": Do not echo any content. | 'none' |
kwargs | Optional[SubmitInputArgsT] | Additional keyword arguments to pass to the method used for requesting the response. | None |
Returns
Name | Type | Description |
---|---|---|
ChatResponse | An (unconsumed) response from the chat. Iterate over this object to consume the response. |
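A consumption sketch:

```python
from chatlas import ChatOpenAI

chat = ChatOpenAI()

# The response is unconsumed until you iterate over it
for chunk in chat.stream("Tell me a short joke."):
    print(chunk, end="")
```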
stream_async
Chat.stream_async(*args, content='text', echo='none', kwargs=None)
Generate a response from the chat in a streaming fashion asynchronously.
Parameters
Name | Type | Description | Default |
---|---|---|---|
args | Content | str | The user input(s) to generate a response from. | () |
content | Literal['text', 'all'] | Whether to yield just text content or include rich content objects (e.g., tool calls) when relevant. | 'text' |
echo | EchoOptions | One of the following (default is “none”): - "text": Echo just the text content of the response. - "output": Echo text and tool call content. - "all": Echo both the assistant and user turn. - "none": Do not echo any content. | 'none' |
kwargs | Optional[SubmitInputArgsT] | Additional keyword arguments to pass to the method used for requesting the response. | None |
Returns
Name | Type | Description |
---|---|---|
ChatResponseAsync | An (unconsumed) response from the chat. Iterate over this object to consume the response. |
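An async consumption sketch (assuming the awaited call returns the async iterable to consume):

```python
import asyncio

from chatlas import ChatOpenAI

async def main():
    chat = ChatOpenAI()
    # Await the call to obtain the response, then iterate asynchronously
    response = await chat.stream_async("Tell me a short joke.")
    async for chunk in response:
        print(chunk, end="")

asyncio.run(main())
```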
token_count
Chat.token_count(*args, data_model=None)
Get an estimated token count for the given input.
Estimate the token size of input content. This can help determine whether input(s) and/or conversation history (i.e., .get_turns()) should be reduced in size before sending it to the model.
Parameters
Name | Type | Description | Default |
---|---|---|---|
args | Content | str | The input to get a token count for. | () |
data_model | Optional[type[BaseModel]] | If the input is meant for data extraction (i.e., .extract_data()), then this should be the Pydantic model that describes the structure of the data to extract. | None |
Returns
Name | Type | Description |
---|---|---|
int | The token count for the input. |
Note
Remember that the token count is an estimate. Also, models based on ChatOpenAI() currently do not take tools into account when estimating token counts.
Examples
```python
from chatlas import ChatAnthropic

chat = ChatAnthropic()

# Estimate the token count before sending the input
print(chat.token_count("What is 2 + 2?"))

# Once input is sent, you can get the actual input and output
# token counts from the chat object
chat.chat("What is 2 + 2?", echo="none")
print(chat.token_usage())
```
token_count_async
Chat.token_count_async(*args, data_model=None)
Get an estimated token count for the given input asynchronously.
Estimate the token size of input content. This can help determine whether input(s) and/or conversation history (i.e., .get_turns()) should be reduced in size before sending it to the model.
Parameters
Name | Type | Description | Default |
---|---|---|---|
args | Content | str | The input to get a token count for. | () |
data_model | Optional[type[BaseModel]] | If this input is meant for data extraction (i.e., .extract_data_async()), then this should be the Pydantic model that describes the structure of the data to extract. | None |
Returns
Name | Type | Description |
---|---|---|
int | The token count for the input. |