ui.Chat

ui.Chat(self, id, *, messages=(), on_error='auto', tokenizer=None)

Create a chat interface.

A UI component for building conversational interfaces. With it, end users can submit messages, which will cause a .on_user_submit() callback to run. In that callback, a response can be generated based on the chat's .messages(), and appended to the chat using .append_message() or .append_message_stream().

Here's a rough outline for how to implement a Chat:

from shiny.express import ui

# Create and display chat instance
chat = ui.Chat(id="my_chat")
chat.ui()

# Define a callback to run when the user submits a message
@chat.on_user_submit
async def _():
    # Get messages currently in the chat
    messages = chat.messages()
    # Create a response message stream
    response = await my_model.generate_response(messages, stream=True)
    # Append the response into the chat
    await chat.append_message_stream(response)

In the outline above, my_model.generate_response() is a placeholder for the function that generates a response based on the chat's messages. This function will look different depending on the model you're using, but it generally involves passing the messages to the model and getting a response back. You'll also typically have the option to stream the response generation (e.g., stream=True), in which case you'll use .append_message_stream() instead of .append_message() to append the response to the chat. Streaming is preferable when available since it allows for more responsive and scalable chat interfaces.
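Since my_model.generate_response() is a placeholder, here is a minimal sketch of the kind of value a streamed response resolves to: an async generator yielding string chunks, which is one of the chunk formats .append_message_stream() accepts. fake_response_stream() and collect() are hypothetical stand-ins for illustration, not part of the Shiny API:

```python
import asyncio

# Hypothetical stand-in for my_model.generate_response(..., stream=True):
# an async generator that yields response chunks as strings.
async def fake_response_stream(messages):
    for chunk in ["Hello", ", ", "world", "!"]:
        await asyncio.sleep(0)  # simulate latency between chunks
        yield chunk

async def collect(stream):
    # In a real app you would pass the stream to chat.append_message_stream();
    # here we just gather the chunks to show the shape of the data.
    return [chunk async for chunk in stream]

chunks = asyncio.run(collect(fake_response_stream([])))
print("".join(chunks))  # → Hello, world!
```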

Parameters

id : str

A unique identifier for the chat session. In Shiny Core, make sure this id matches a corresponding chat_ui call in the UI.

messages : Sequence[Any] = ()

A sequence of messages to display in the chat. Each message can be a dictionary with content and role keys. The content key should contain the message text, and the role key can be "assistant", "user", or "system". Note that system messages are not actually displayed in the chat, but will still be stored in the chat's .messages().

on_error : Literal['auto', 'actual', 'sanitize', 'unhandled'] = 'auto'

How to handle errors that occur in response to user input. When "unhandled", the app will stop running when an error occurs. Otherwise, a notification is displayed to the user and the app continues to run.

  * "auto": Sanitize the error message if the app is set to sanitize errors, otherwise display the actual error message.
  * "actual": Display the actual error message to the user.
  * "sanitize": Sanitize the error message before displaying it to the user.
  * "unhandled": Do not display any error message to the user.

tokenizer : TokenEncoding | None = None

The tokenizer to use for calculating token counts, which is required to impose token_limits in .messages(). If not provided, an attempt is made to load a default generic tokenizer from the tokenizers library. A specific tokenizer may also be provided by following the TokenEncoding (tiktoken or tokenizers) protocol (e.g., tiktoken.encoding_for_model("gpt-4o")).
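To illustrate the shape of a custom tokenizer, here is a toy sketch. The real TokenEncoding protocol is defined by Shiny; the assumption here is only that it requires an encode() method returning a list of token ids, as tiktoken encodings do. This whitespace-splitting version is purely for illustration and is not a real tokenization:

```python
# Toy tokenizer sketch (hypothetical; not a real TokenEncoding implementation).
class WhitespaceTokenizer:
    def encode(self, text: str) -> list[int]:
        # One "token" per whitespace-separated word (illustration only)
        return list(range(len(text.split())))

tok = WhitespaceTokenizer()
print(len(tok.encode("Hello there, how are you?")))  # → 5
```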

Examples

#| standalone: true
#| components: [editor, viewer]
#| layout: vertical
#| viewerHeight: 400

## file: app.py
from shiny import App, ui

app_ui = ui.page_fillable(
    ui.panel_title("Hello Shiny Chat"),
    ui.chat_ui("chat"),
    fillable_mobile=True,
)

# Create a welcome message
welcome = ui.markdown(
    """
    Hi! This is a simple Shiny `Chat` UI. Enter a message below and I will
    simply repeat it back to you. For more examples, see this
    [folder of examples](https://github.com/posit-dev/py-shiny/tree/main/examples/chat).
    """
)


def server(input, output, session):
    chat = ui.Chat(id="chat", messages=[welcome])

    # Define a callback to run when the user submits a message
    @chat.on_user_submit
    async def _():
        # Get the user's input
        user = chat.user_input()
        # Append a response to the chat
        await chat.append_message(f"You said: {user}")


app = App(app_ui, server)

Methods

Name Description
append_message Append a message to the chat.
append_message_stream Append a message as a stream of message chunks.
clear_messages Clear all chat messages.
destroy Destroy the chat instance.
messages Reactively read chat messages
on_user_submit Define a function to invoke when user input is submitted.
set_user_message Deprecated. Use update_user_input(value=value) instead.
transform_assistant_response Transform assistant responses.
transform_user_input Transform user input.
ui Place a chat component in the UI.
update_user_input Update the user input.
user_input Reactively read the user’s message.

append_message

ui.Chat.append_message(message)

Append a message to the chat.

Parameters

message : Any

The message to append. A variety of message formats are supported including a string, a dictionary with content and role keys, or a relevant chat completion object from platforms like OpenAI, Anthropic, Ollama, and others.

Note

Use .append_message_stream() instead of this method when stream=True (or similar) is specified in the model's completion method.
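As a quick illustration of the simpler message formats, the same assistant message can be expressed either as a bare string or as a dictionary with content and role keys (provider-specific completion objects are also accepted, per the parameter description, but are not shown here):

```python
# Two equivalent plain-data forms of the same assistant message.
as_string = "Hello! How can I help?"
as_dict = {"content": "Hello! How can I help?", "role": "assistant"}

# In a callback you would write, e.g.:
#   await chat.append_message(as_dict)
print(as_dict["role"])  # → assistant
```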

append_message_stream

ui.Chat.append_message_stream(message)

Append a message as a stream of message chunks.

Parameters

message : Iterable[Any] | AsyncIterable[Any]

An iterable or async iterable of message chunks to append. A variety of message chunk formats are supported, including a string, a dictionary with content and role keys, or a relevant chat completion object from platforms like OpenAI, Anthropic, Ollama, and others.

Note

Use this method (over .append_message()) when stream=True (or similar) is specified in the model's completion method.

clear_messages

ui.Chat.clear_messages()

Clear all chat messages.

destroy

ui.Chat.destroy()

Destroy the chat instance.

messages

ui.Chat.messages(
    format=MISSING,
    token_limits=None,
    transform_user='all',
    transform_assistant=False,
)

Reactively read chat messages

Obtain chat messages within a reactive context. The default behavior is intended for passing messages along to a model for response generation where you typically want to:

  1. Cap the number of tokens sent in a single request (i.e., token_limits).
  2. Apply user input transformations (i.e., transform_user), if any.
  3. Not apply assistant response transformations (i.e., transform_assistant) since these are predominantly for display purposes (i.e., the model shouldn't concern itself with how the responses are displayed).

Parameters

format : MISSING_TYPE | ProviderMessageFormat = MISSING

The message format to return. The default value of MISSING means chat messages are returned as ChatMessage objects (a dictionary with content and role keys). Other supported formats include:

  * "anthropic": Anthropic message format.
  * "google": Google message (aka content) format.
  * "langchain": LangChain message format.
  * "openai": OpenAI message format.
  * "ollama": Ollama message format.

token_limits : tuple[int, int] | None = None

Limit the conversation history based on token limits. If specified, only the most recent messages that fit within the token limits are returned. This is useful for avoiding "exceeded token limit" errors when sending messages to the relevant model, while still providing the most recent context available. A specified value must be a tuple of two integers. The first integer is the maximum number of tokens that can be sent to the model in a single request. The second integer is the amount of tokens to reserve for the model's response. Note that token counts are based on the tokenizer provided to the Chat constructor.

transform_user : Literal['all', 'last', 'none'] = 'all'

Whether to return user input messages with transformation applied. This only matters if a transform_user_input was provided to the chat constructor. The default value of "all" means all user input messages are transformed. The value of "last" means only the last user input message is transformed. The value of "none" means no user input messages are transformed.

transform_assistant : bool = False

Whether to return assistant messages with transformation applied. This only matters if a transform_assistant_response was provided to the chat constructor.

Note

Messages are listed in the order they were added. As a result, when this method is called in a .on_user_submit() callback (as it most often is), the last message will be the most recent one submitted by the user.

Returns

: tuple[ChatMessage, …]

A tuple of chat messages.
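The token_limits trimming described above can be sketched in plain Python: keep only the most recent messages whose combined token count fits within the first limit minus the reserved amount. This is a hypothetical reimplementation to illustrate the documented behavior, not Shiny's actual code; count_tokens stands in for the configured tokenizer:

```python
# Hypothetical sketch of token_limits trimming (not Shiny's actual code).
def trim_messages(messages, token_limits, count_tokens):
    max_tokens, reserve = token_limits
    budget = max_tokens - reserve
    kept = []
    # Walk from the most recent message backwards, keeping while budget lasts
    for msg in reversed(messages):
        cost = count_tokens(msg["content"])
        if cost > budget:
            break
        budget -= cost
        kept.append(msg)
    return tuple(reversed(kept))

msgs = [
    {"role": "user", "content": "aaaa"},      # 4 "tokens" when counted with len()
    {"role": "assistant", "content": "bbb"},  # 3
    {"role": "user", "content": "cc"},        # 2
]
# A budget of 8 - 2 = 6 keeps only the two most recent messages (3 + 2 = 5)
trimmed = trim_messages(msgs, (8, 2), len)
print([m["content"] for m in trimmed])  # → ['bbb', 'cc']
```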

on_user_submit

ui.Chat.on_user_submit(fn=None)

Define a function to invoke when user input is submitted.

Apply this method as a decorator to a function (fn) that should be invoked when the user submits a message. The function should take no arguments.

In many cases, the implementation of fn should do at least the following:

  1. Call .messages() to obtain the current chat history.
  2. Generate a response based on those messages.
  3. Append the response to the chat history using .append_message() (or .append_message_stream() if the response is streamed).

Parameters

fn : SubmitFunction | SubmitFunctionAsync | None = None

A function to invoke when user input is submitted.

Note

This method creates a reactive effect that only gets invalidated when the user submits a message. Thus, the function fn can read other reactive dependencies, but it will only be re-invoked when the user submits a message.

set_user_message

ui.Chat.set_user_message(value)

Deprecated. Use update_user_input(value=value) instead.

transform_assistant_response

ui.Chat.transform_assistant_response(fn=None)

Transform assistant responses.

Use this method as a decorator on a function (fn) that transforms assistant responses before displaying them in the chat. This is useful for post-processing model responses before displaying them to the user.

Parameters

fn : TransformAssistantResponseFunction | None = None

A function that takes a string and returns either a string, shiny.ui.HTML, or None. If fn returns a string, it gets interpreted and parsed as markdown on the client (and the resulting HTML is then sanitized). If fn returns shiny.ui.HTML, it will be displayed as-is. If fn returns None, the response is effectively ignored.

Note

When doing an .append_message_stream(), fn gets called on every chunk of the response (thus, it should be performant), and can optionally access more information (i.e., arguments) about the stream. The 1st argument (required) contains the accumulated content, the 2nd argument (optional) contains the current chunk, and the 3rd argument (optional) is a boolean indicating whether this chunk is the last one in the stream.
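The three-argument signature described above can be simulated outside of Shiny. In this sketch the transform receives the accumulated content, the current chunk, and a done flag, and appends a footer only when the final chunk arrives (the loop below is a stand-in for how the stream might drive the calls, not Shiny's actual invocation):

```python
# Sketch of a three-argument assistant-response transform.
def transform(content: str, chunk: str, done: bool) -> str:
    if done:
        # Only decorate the fully accumulated response
        return content + "\n\n_(generated by a model)_"
    return content

# Simulate the per-chunk calls a stream might produce
accumulated = ""
chunks = ["Hel", "lo", "!"]
for i, chunk in enumerate(chunks):
    accumulated += chunk
    result = transform(accumulated, chunk, i == len(chunks) - 1)

print(result)  # → Hello! with the footer appended
```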

transform_user_input

ui.Chat.transform_user_input(fn=None)

Transform user input.

Use this method as a decorator on a function (fn) that transforms user input before storing it in the chat messages returned by .messages(). This is useful for implementing RAG workflows, like taking a URL and scraping it for text before sending it to the model.

Parameters

fn : TransformUserInput | TransformUserInputAsync | None = None

A function to transform user input before storing it in the chat .messages(). If fn returns None, the user input is effectively ignored, and .on_user_submit() callbacks are suspended until more input is submitted. This behavior is often useful to catch and handle errors that occur during transformation. In this case, the transform function should append an error message to the chat (via .append_message()) to inform the user of the error.
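The error-handling pattern described above can be sketched as follows. fetch_context() is a hypothetical helper (not part of Shiny) standing in for a RAG step like scraping a URL; on failure the transform returns None so downstream .on_user_submit() callbacks are skipped:

```python
# Hypothetical RAG-style helper; raises on inputs it cannot handle.
def fetch_context(url: str) -> str:
    if not url.startswith("https://"):
        raise ValueError("not a valid https URL")
    return f"<scraped text from {url}>"

def transform(user_input: str):
    try:
        context = fetch_context(user_input)
    except ValueError:
        # In a real app you would also inform the user, e.g.:
        #   await chat.append_message("Sorry, I couldn't read that URL.")
        return None  # suspends .on_user_submit() until new input arrives
    return f"{user_input}\n\nContext:\n{context}"

print(transform("ftp://example.com"))  # → None
```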

ui

ui.Chat.ui(
    messages=None,
    placeholder='Enter a message...',
    width='min(680px, 100%)',
    height='auto',
    fill=True,
    **kwargs,
)

Place a chat component in the UI.

This method is only relevant for Shiny Express. In Shiny Core, use chat_ui instead to insert the chat UI.

Parameters

messages : Optional[Sequence[str | ChatMessage]] = None

A sequence of messages to display in the chat. Each message can be either a string or a dictionary with content and role keys. The content key should contain the message text, and the role key can be “assistant” or “user”.

placeholder : str = 'Enter a message...'

Placeholder text for the chat input.

width : CssUnit = 'min(680px, 100%)'

The width of the chat container.

height : CssUnit = 'auto'

The height of the chat container.

fill : bool = True

Whether the chat should vertically take available space inside a fillable container.

kwargs : TagAttrValue = {}

Additional attributes for the chat container element.

update_user_input

ui.Chat.update_user_input(value=None, placeholder=None)

Update the user input.

Parameters

value : str | None = None

The value to set the user input to.

placeholder : str | None = None

The placeholder text for the user input.

user_input

ui.Chat.user_input(transform=False)

Reactively read the user's message.

Parameters

transform : bool = False

Whether to apply the user input transformation function (if one was provided).

Returns

: str | None

The user input message (transformed if transform=True).

Note

Most users shouldn’t need to use this method directly since the last item in .messages() contains the most recent user input. It can be useful for:

  1. Taking a reactive dependency on the user’s input outside of a .on_user_submit() callback.
  2. Maintaining message state separately from .messages().