# Monitor
As mentioned in the debugging section, chatlas supports gaining more insight into the behavior of your application through things like logging.

However, in a production setting, you may want to go beyond simple logging and use more sophisticated observability tools (e.g., Datadog, Logfire, etc.) to monitor your application. These tools give you a more structured way to view and monitor app performance, including latency, error rates, and other metrics. They also tend to integrate well with open standards like OpenTelemetry (OTel), meaning that if you "instrument" your app with OTel, you can view its telemetry data in any observability tool that supports OTel. There are at least a few different ways to do this, but we'll cover some of the simpler approaches here.
## OpenLLMetry
The simplest (and most model-agnostic) way to instrument your app with OTel is to leverage openllmetry, which can be as easy as installing the SDK and adding the following code to your app:
```bash
pip install traceloop-sdk
```

```python
from traceloop.sdk import Traceloop

Traceloop.init(
    app_name="my app name",
    # Send spans immediately rather than batching (handy in development)
    disable_batch=True,
    # Opt out of Traceloop's own anonymous telemetry
    telemetry_enabled=False,
)
```
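Once initialized, openllmetry instruments the provider SDKs that chatlas builds on (e.g., openai and anthropic) at the library level, so your chats should start producing traces without any further changes to your chatlas code.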
This approach does have the downside of requiring a Traceloop account, but it does provide a free tier, and it makes it quite easy to get started visualizing your app's telemetry data.

If you want to avoid the Traceloop account, you can also use their OTel instrumentation libraries (e.g., for openai and anthropic) directly, as sketched below. And if Traceloop is not for you at all, you may prefer the "official" OTel libraries covered in the next section, which are more truly vendor agnostic.
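For instance, here's a minimal sketch of the account-free route, combining Traceloop's openai instrumentor with the vanilla OTel SDK. The console exporter (and the `pip install` line in the comment) are illustrative assumptions; in production you'd likely swap in an OTLP exporter pointed at your own collector:

```python
# Minimal sketch (no Traceloop account required).
# Assumes: pip install opentelemetry-sdk opentelemetry-instrumentation-openai
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry.instrumentation.openai import OpenAIInstrumentor

# Print spans to the console; swap in an OTLP exporter for production
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

# Instrument the openai SDK (which ChatOpenAI, ChatOllama, etc. use under the hood)
OpenAIInstrumentor().instrument()
```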
## OpenTelemetry
To use OpenTelemetry’s “official” instrumentation libraries, you’ll need to first install the relevant instrumentation packages for the model providers you are using.
### OpenAI
More than a handful of chatlas' model providers use the `openai` Python SDK under the hood (e.g., `ChatOpenAI()`, `ChatOllama()`, etc.). To be sure a particular provider uses the `openai` SDK, make sure the class of the `.provider` attribute is `OpenAIProvider`:
```python
from chatlas import ChatOpenAI

chat = ChatOpenAI()
chat.provider
# <chatlas._openai.OpenAIProvider object at 0x103d2fdd0>
```
As a result, you can use the opentelemetry-instrumentation-openai-v2 package to add OTel instrumentation to your app. It even provides a way to add instrumentation without modifying your code (i.e., zero-code). To tweak the zero-code example to work with chatlas, just change the `requirements.txt` and `main.py` files to use chatlas instead of openai directly:
`main.py`

```python
from chatlas import ChatOpenAI

chat = ChatOpenAI()
chat.chat("Hello world!")
```
You may also want to tweak the environment variables in `.env` to target the relevant OTel collector and service name.
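For example, a `.env` along these lines uses the standard OTel environment variables (the values here are placeholders, assuming a local OTLP collector):

```bash
OTEL_SERVICE_NAME=my-chatlas-app
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
```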
### Anthropic
Both the `ChatAnthropic()` and `ChatBedrockAnthropic()` providers use the anthropic Python SDK under the hood. As a result, you can use the opentelemetry-instrumentation-anthropic package to add OTel instrumentation to your app.
To do this, you’ll need to install the package:
```bash
pip install opentelemetry-instrumentation-anthropic
```
Then, add the following instrumentation code to your app:
```python
from opentelemetry.instrumentation.anthropic import AnthropicInstrumentor

AnthropicInstrumentor().instrument()
```
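With that in place, chats routed through the anthropic SDK should emit spans automatically. Here's a minimal end-to-end sketch (assuming you've already configured a tracer provider and exporter, as in the OpenLLMetry section above):

```python
from chatlas import ChatAnthropic
from opentelemetry.instrumentation.anthropic import AnthropicInstrumentor

# Instrument the anthropic SDK before making any chat calls
AnthropicInstrumentor().instrument()

chat = ChatAnthropic()
chat.chat("Hello world!")  # this call now emits OTel spans
```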
### Google

Both the `ChatGoogle()` and `ChatVertex()` providers use the google-genai Python SDK under the hood. As a result, you can use the opentelemetry-instrumentation-google-genai package to add OTel instrumentation to your app. It even provides a way to add instrumentation without modifying your code (i.e., zero-code). To tweak the zero-code example to work with chatlas, just change the `requirements.txt` and `main.py` files to use chatlas instead of google-genai directly:
`main.py`

```python
from chatlas import ChatGoogle

chat = ChatGoogle()
chat.chat("Hello world!")
```