Overview


chatlas

Your friendly guide to building LLM chat apps in Python with less effort and more clarity.


Quick start

Get started in 3 simple steps:

  1. Choose a model provider, such as ChatOpenAI or ChatAnthropic.
  2. Visit the provider's reference page to get set up with the necessary credentials.
  3. Create the relevant Chat client and start chatting!
from chatlas import ChatOpenAI

# Optional (but recommended) model and system_prompt
chat = ChatOpenAI(
    model="gpt-4o-mini",
    system_prompt="You are a helpful assistant.",
)

# Optional tool registration
def get_current_weather(lat: float, lng: float):
    "Get the current weather for a given location."
    return "sunny"

chat.register_tool(get_current_weather)

# Send user prompt to the model for a response.
chat.chat("How's the weather in San Francisco?")
# ๐Ÿ› ๏ธ tool request
get_current_weather(37.7749, -122.4194)
# โœ… tool result
sunny

The current weather in San Francisco is sunny.

Install

Install the latest stable release from PyPI:

pip install -U chatlas

Why chatlas?

🚀 Opinionated design: most problems just need the right model, system prompt, and tool calls. Spend more time mastering the fundamentals and less time navigating needless complexity.

🧩 Model agnostic: try different models with minimal code changes.

🌊 Stream output: automatically in notebooks, at the console, and in your favorite IDE. You can also stream responses into bespoke applications (e.g., chatbots).
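Streaming a response means consuming it chunk by chunk as it arrives rather than waiting for the full reply. A minimal sketch of that consumption pattern, using a stub generator in place of a real model call (the stub and its chunks are illustrative; consult the chatlas docs for the actual streaming API, e.g. a `chat.stream(...)` iterator):

```python
def stream_stub(prompt: str):
    """Stand-in for a streamed model response: yields the reply in chunks."""
    for chunk in ["The ", "weather ", "is ", "sunny."]:
        yield chunk

# Consume the stream chunk by chunk, as a chatbot UI would:
reply = "".join(stream_stub("How's the weather?"))
print(reply)  # The weather is sunny.
```

The same loop shape works for any iterator of chunks, which is what makes it easy to pipe streamed output into a custom application.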

๐Ÿ› ๏ธ Tool calling: give the LLM โ€œagenticโ€ capabilities by simply writing Python function(s).

🔄 Multi-turn chat: history is retained by default, making the common case easy.
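Conceptually, "history is retained" means every call appends to the same list of turns, so follow-up prompts can refer to earlier ones. A toy sketch of that bookkeeping (this is an illustration of the idea, not chatlas internals):

```python
class ToyChat:
    """Toy illustration of multi-turn history, not a real client."""

    def __init__(self, system_prompt: str = ""):
        self.turns = []
        if system_prompt:
            self.turns.append(("system", system_prompt))

    def chat(self, prompt: str) -> list:
        self.turns.append(("user", prompt))
        # A real client would send self.turns to the model here,
        # so the reply is conditioned on the whole conversation.
        self.turns.append(("assistant", f"(reply to: {prompt})"))
        return self.turns

toy = ToyChat("You are helpful.")
toy.chat("What's 2 + 2?")
history = toy.chat("And times 3?")
print(len(history))  # 5: one system turn plus two user/assistant pairs
```

Because the second prompt travels with the first exchange, the model can resolve "And times 3?" against the earlier question.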

๐Ÿ–ผ๏ธ Multi-modal input: submit input like images, pdfs, and more.

📂 Structured output: easily extract structure from unstructured input.
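Structured output works by declaring the shape you want back as a typed model. chatlas examples typically use a Pydantic model; a stdlib dataclass stands in below to keep the sketch dependency-free (the `Person` fields and the extraction call are illustrative; check the docs for the exact method name in your version):

```python
from dataclasses import dataclass

@dataclass
class Person:
    """The target shape: the model's reply is parsed into these fields."""
    name: str
    age: int

# With chatlas, extraction would look roughly like (sketch, not run here):
# person = chat.extract_data("John, age 15, won the award", data_model=Person)

person = Person(name="John", age=15)
print(person.age)  # 15
```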

โฑ๏ธ Async: supports async operations for efficiency and scale.

โœ๏ธ Autocomplete: easily discover and use provider-specific parameters like temperature, max_tokens, and more.

๐Ÿ” Inspectable: tools for debugging and monitoring in production.

🔌 Extensible: add new model providers, content types, and more.

Next steps

Next we'll learn more about which model providers are available and how to pick a particular model. If you already have a model in mind, or just want to see what chatlas can do, skip ahead to hello chat.