ChatOllama

ChatOllama(
    model=None,
    *,
    system_prompt=None,
    turns=None,
    base_url='http://localhost:11434/v1',
    seed=None,
    kwargs=None,
)

Chat with a local Ollama model.

Ollama makes it easy to run a wide variety of open-source models locally, which makes it a great choice when privacy and security are a concern.

Prerequisites

Ollama runtime

ChatOllama requires the ollama executable to be installed and running on your machine.

Pull model(s)

Once ollama is running locally, download a model from the command line (e.g. ollama pull llama3.2).
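
As noted under the model parameter below, constructing ChatOllama without a model prints the locally installed models, which is a quick way to verify that a pull succeeded:

from chatlas import ChatOllama

# With model=None, a list of locally installed models is printed
# (per the model parameter documentation below).
ChatOllama()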

Examples

from chatlas import ChatOllama

chat = ChatOllama(model="llama3.2")
chat.chat("What is the capital of France?")
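
A slightly fuller sketch using the documented system_prompt and seed parameters (the model name is just an example; any pulled model works):

from chatlas import ChatOllama

chat = ChatOllama(
    model="llama3.2",
    system_prompt="You are a terse assistant.",  # sets assistant behavior
    seed=42,  # integer seed helps make output more reproducible
)
chat.chat("What is the capital of France?")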

Parameters

model : Optional[str]
    The model to use for the chat. If None, a list of locally installed
    models is printed. Default: None

system_prompt : Optional[str]
    A system prompt to set the behavior of the assistant. Default: None

turns : Optional[list[Turn]]
    A list of turns to start the chat with (i.e., continuing a previous
    conversation). If not provided, the conversation begins from scratch.
    Do not provide non-None values for both turns and system_prompt.
    Default: None

base_url : str
    The base URL of the endpoint; the default points at Ollama's
    OpenAI-compatible API. Default: 'http://localhost:11434/v1'

seed : Optional[int]
    Optional integer seed that helps make output more reproducible.
    Default: None

kwargs : Optional['ChatClientArgs']
    Additional arguments to pass to the openai.OpenAI() client
    constructor. Default: None
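
Since kwargs is forwarded to the openai.OpenAI() client constructor, standard client options can be passed through. A minimal sketch, assuming the OpenAI client's timeout option (the value shown is illustrative):

from chatlas import ChatOllama

# kwargs goes straight to openai.OpenAI(); timeout (in seconds) is a
# standard OpenAI client option. The value here is illustrative.
chat = ChatOllama(model="llama3.2", kwargs={"timeout": 60})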

Note

This function is a lightweight wrapper around ChatOpenAI with the defaults tweaked for Ollama.
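
Because ChatOllama is an OpenAI-compatible client pointed at Ollama's API, base_url can be redirected to an Ollama server running elsewhere. A sketch with a hypothetical host; keep the /v1 suffix, matching the default, so the OpenAI-compatible endpoint is used:

from chatlas import ChatOllama

# Talk to an Ollama server on another machine (the host below is
# hypothetical). The /v1 suffix selects the OpenAI-compatible API.
chat = ChatOllama(
    model="llama3.2",
    base_url="http://192.168.1.50:11434/v1",
)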

Limitations

ChatOllama currently doesn't work with streaming tool calls, and tool calling more generally doesn't seem to work very well with currently available models.