Models

Under the hood, querychat is powered by chatlas, a library for building chat-based applications with large language models (LLMs). chatlas supports a wide range of LLM providers – see here for a full list.

Specify a model

To use a particular model, pass a "{provider}/{model}" string to the client parameter. This string gets passed along to chatlas.ChatAuto().

from querychat import QueryChat
from querychat.data import titanic

qc = QueryChat(
    titanic(),
    "titanic",
    client="anthropic/claude-sonnet-4-5"
)
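To illustrate the string format, here's a minimal sketch of how a "{provider}/{model}" string splits into its two parts (parse_client is an illustrative helper, not part of querychat's API):

```python
def parse_client(spec: str) -> tuple[str, str]:
    # Split on the first "/" only, since a model name may itself
    # contain slashes in some hosted-model identifiers.
    provider, _, model = spec.partition("/")
    return provider, model

print(parse_client("anthropic/claude-sonnet-4-5"))
# → ('anthropic', 'claude-sonnet-4-5')
```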

If you'd like to set a different default model, you can use the QUERYCHAT_CLIENT environment variable:

export QUERYCHAT_CLIENT="anthropic/claude-sonnet-4-5"
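The variable can also be set from Python rather than the shell; a minimal sketch using only the standard library (presumably this must happen before QueryChat is constructed, since the variable is read at that point):

```python
import os

# Setting QUERYCHAT_CLIENT programmatically has the same effect as
# exporting it in the shell for this process and its children.
os.environ["QUERYCHAT_CLIENT"] = "anthropic/claude-sonnet-4-5"
print(os.environ["QUERYCHAT_CLIENT"])
```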

For more advanced use cases (e.g., custom parameters, tools, etc.), you can instead pass a full Chat object to the client parameter. This approach also gives you helpful autocomplete of the available models.

from chatlas import ChatAnthropic
from querychat import QueryChat
from querychat.data import titanic

client = ChatAnthropic(model="claude-sonnet-4-5")
qc = QueryChat(titanic(), "titanic", client=client)

Credentials

Most models require an API key or some other form of authentication. See the reference page for the relevant model provider (e.g., ChatAnthropic) to learn more about how to set up credentials.

GitHub model marketplace

If you're already set up with GitHub credentials, the GitHub model marketplace provides a free and easy way to get started. See here for more details on how to get set up.

github-model.py
from chatlas import ChatGithub

# Just works if GITHUB_TOKEN is set in your environment
client = ChatGithub(model="gpt-4.1")

In general, most providers expect credentials stored as environment variables, and a common practice is to use a .env file to manage them. For example, for ChatOpenAI(), you might create a .env file like so:

.env
OPENAI_API_KEY="your_api_key_here"

Then, install the python-dotenv package and load the environment variables at startup:

pip install python-dotenv

from dotenv import load_dotenv
load_dotenv()
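If you'd rather avoid the extra dependency, the core behavior of load_dotenv() can be sketched in a few lines of standard-library Python (load_env_file is an illustrative helper, not part of the dotenv package):

```python
import os
import tempfile

def load_env_file(path):
    # Parse KEY="value" lines into os.environ, skipping blank lines
    # and comments. setdefault means existing variables win, which
    # mirrors load_dotenv()'s default of not overriding the shell.
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip().strip('"'))

# Demonstrate with a throwaway .env file
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write('OPENAI_API_KEY="your_api_key_here"\n')
    env_path = f.name

load_env_file(env_path)
print(os.environ["OPENAI_API_KEY"])
```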