express.Chat
express.Chat(id, *, messages=(), on_error='auto', tokenizer=None)
Attributes
Name | Description |
---|---|
latest_message_stream | React to changes in the latest message stream. |
Methods
Name | Description |
---|---|
append_message | Append a message to the chat. |
append_message_stream | Append a message as a stream of message chunks. |
clear_messages | Clear all chat messages. |
destroy | Destroy the chat instance. |
enable_bookmarking | Enable bookmarking for the chat instance. |
message_stream_context | Message stream context manager. |
messages | Reactively read chat messages |
on_user_submit | Define a function to invoke when user input is submitted. |
set_user_message | Deprecated. Use update_user_input(value=value) instead. |
transform_assistant_response | Transform assistant responses. |
transform_user_input | Transform user input. |
ui | Create a UI element for this Chat . |
update_user_input | Update the user input. |
user_input | Reactively read the user’s message. |
append_message
express.Chat.append_message(message, *, icon=None)
Append a message to the chat.
Parameters
Name | Type | Description | Default |
---|---|---|---|
message | Any | A given message can be one of the following: * A string, which is interpreted as markdown and rendered to HTML on the client. * To prevent interpreting as markdown, mark the string as :class:~shiny.ui.HTML . * A UI element (specifically, a :class:~shiny.ui.TagChild ). * This includes :class:~shiny.ui.TagList , which take UI elements (including strings) as children. In this case, strings are still interpreted as markdown as long as they’re not inside HTML. * A dictionary with content and role keys. The content key can contain content as described above, and the role key can be “assistant” or “user”. NOTE: content may include specially formatted input suggestion links (see note below). | required |
icon | HTML | Tag | TagList | None | An optional icon to display next to the message, currently only used for assistant messages. The icon can be any HTML element (e.g., an :func:~shiny.ui.img tag) or a string of HTML. | None |
Note
Input suggestions are special links that send text to the user input box when clicked (or accessed via keyboard). They can be created in the following ways:

* `<span class='suggestion'>Suggestion text</span>`: An inline text link that places 'Suggestion text' in the user input box when clicked.
* `<img data-suggestion='Suggestion text' src='image.jpg'>`: An image link with the same functionality as above.
* `<span data-suggestion='Suggestion text'>Actual text</span>`: An inline text link that places 'Suggestion text' in the user input box when clicked.

A suggestion can also be submitted automatically by doing one of the following:

* Adding a `submit` CSS class or a `data-suggestion-submit="true"` attribute to the suggestion element.
* Holding the `Ctrl/Cmd` key while clicking the suggestion link.

Note that a user may also opt-out of submitting a suggestion by holding the `Alt/Option` key while clicking the suggestion link.
Use .append_message_stream() instead of this method when stream=True (or similar) is specified in the model's completion method.
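As a rough sketch (the id, messages, and handler below are illustrative, not part of this reference), .append_message() is typically awaited inside an .on_user_submit() callback:

```python
from shiny.express import ui

chat = ui.Chat(id="my_chat")  # "my_chat" is an illustrative id
chat.ui()

@chat.on_user_submit
async def _(user_input: str):
    # A plain string is rendered as markdown on the client
    await chat.append_message(f"You said: **{user_input}**")
    # The dictionary form sets the role explicitly and may embed a suggestion link
    await chat.append_message({
        "content": "Try asking about <span class='suggestion'>the weather</span>.",
        "role": "assistant",
    })
```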
append_message_stream
express.Chat.append_message_stream(message, *, icon=None)
Append a message as a stream of message chunks.
Parameters
Name | Type | Description | Default |
---|---|---|---|
message | Iterable[Any] | AsyncIterable[Any] | An (async) iterable of message chunks. Each chunk can be one of the following: * A string, which is interpreted as markdown and rendered to HTML on the client. * To prevent interpreting as markdown, mark the string as :class:~shiny.ui.HTML . * A UI element (specifically, a :class:~shiny.ui.TagChild ). * This includes :class:~shiny.ui.TagList , which take UI elements (including strings) as children. In this case, strings are still interpreted as markdown as long as they’re not inside HTML. * A dictionary with content and role keys. The content key can contain content as described above, and the role key can be “assistant” or “user”. NOTE: content may include specially formatted input suggestion links (see note below). | required |
icon | HTML | Tag | None | An optional icon to display next to the message, currently only used for assistant messages. The icon can be any HTML element (e.g., an :func:~shiny.ui.img tag) or a string of HTML. | None |
Note
Input suggestions are special links that send text to the user input box when
clicked (or accessed via keyboard). They can be created in the following ways:
* `<span class='suggestion'>Suggestion text</span>`: An inline text link that
places 'Suggestion text' in the user input box when clicked.
* `<img data-suggestion='Suggestion text' src='image.jpg'>`: An image link with
the same functionality as above.
* `<span data-suggestion='Suggestion text'>Actual text</span>`: An inline text
link that places 'Suggestion text' in the user input box when clicked.
A suggestion can also be submitted automatically by doing one of the following:
* Adding a `submit` CSS class or a `data-suggestion-submit="true"` attribute to
the suggestion element.
* Holding the `Ctrl/Cmd` key while clicking the suggestion link.
Note that a user may also opt-out of submitting a suggestion by holding the
`Alt/Option` key while clicking the suggestion link.
Use this method (over `.append_message()`) when `stream=True` (or similar) is specified in the model's completion method.
Returns
Name | Type | Description |
---|---|---|
An extended task that represents the streaming task. The .result() method of the task can be called in a reactive context to get the final state of the stream. |
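A minimal sketch of streaming an assistant reply (the async generator below is a stand-in for a model client that yields chunks, e.g., a chatlas stream; the names are illustrative):

```python
import asyncio

from shiny.express import ui

chat = ui.Chat(id="my_chat")
chat.ui()

@chat.on_user_submit
async def _(user_input: str):
    # Stand-in for a model client's streaming response, yielding markdown chunks
    async def pretend_stream():
        for chunk in ["Thinking about ", f"*{user_input}*", "..."]:
            await asyncio.sleep(0.2)
            yield chunk

    await chat.append_message_stream(pretend_stream())
```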
clear_messages
express.Chat.clear_messages()
Clear all chat messages.
destroy
express.Chat.destroy()
Destroy the chat instance.
enable_bookmarking
express.Chat.enable_bookmarking(client, /, *, bookmark_store=None, bookmark_on='response')
Enable bookmarking for the chat instance.
This method registers on_bookmark and on_restore hooks on session.bookmark (:class:shiny.bookmark.Bookmark) to save/restore chat state on both the Chat and client= instances. In order for this method to work correctly, a bookmark_store= must be specified in shiny.express.app_opts().
Parameters
Name | Type | Description | Default |
---|---|---|---|
client | ClientWithState | chatlas.Chat[Any, Any] | The chat client instance to use for bookmarking. This can be a Chat model provider from chatlas, or more generally, an instance following the ClientWithState protocol. | required |
bookmark_store | Optional[BookmarkStore] | A convenience parameter to set the shiny.express.app_opts(bookmark_store=) which is required for bookmarking (and .enable_bookmarking()). If None, no value will be set. | None |
bookmark_on | Optional[Literal['response']] | The event to trigger the bookmarking on. Supported values include: - "response" (the default): a bookmark is triggered when the assistant is done responding. - None: no bookmark is triggered. When this method triggers a bookmark, it also updates the URL query string to reflect the bookmarked state. | 'response' |
Raises
Name | Type | Description |
---|---|---|
ValueError | If the Shiny app does not have bookmarking enabled. |
Returns
Name | Type | Description |
---|---|---|
CancelCallback | A callback to cancel the bookmarking hooks. |
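A rough sketch of wiring this up with a chatlas client (this assumes chatlas's ChatOpenAI and stream_async() APIs, an available OpenAI API key, and that "url" is an acceptable bookmark store for the app):

```python
from chatlas import ChatOpenAI
from shiny.express import ui

chat_client = ChatOpenAI()  # assumes an OpenAI API key is configured

chat = ui.Chat(id="my_chat")
chat.ui()

# Save/restore both the chat UI and the client's history on bookmark.
# bookmark_store="url" also sets shiny.express.app_opts(bookmark_store="url").
chat.enable_bookmarking(chat_client, bookmark_store="url")

@chat.on_user_submit
async def _(user_input: str):
    response = await chat_client.stream_async(user_input)
    await chat.append_message_stream(response)
```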
message_stream_context
express.Chat.message_stream_context()
Message stream context manager.
A context manager for appending streaming messages into the chat. This context manager can:

1. Be used in isolation to append a new streaming message to the chat.
    * Compared to `.append_message_stream()`, this method is more flexible but isn't non-blocking by default (i.e., it doesn't launch an extended task).
2. Be nested within itself.
    * Nesting is primarily useful for making checkpoints to `.clear()` back to (see the example below).
3. Be used from within an `.append_message_stream()`.
    * Useful for inserting additional content from another context into the stream (e.g., see the note about tool calls below).
Yields

A `MessageStream` class instance, which has a method for `.append()`ing message content chunks to the stream, as well as a way to `.clear()` the stream back to its initial state. Note that `.append()` supports the same message content types as `.append_message()`.
Example
```python
import asyncio

from shiny import reactive
from shiny.express import ui

chat = ui.Chat(id="my_chat")
chat.ui()

@reactive.effect
async def _():
    async with chat.message_stream_context() as msg:
        await msg.append("Starting stream...\n\nProgress:")
        async with chat.message_stream_context() as progress:
            for x in [0, 50, 100]:
                await progress.append(f" {x}%")
                await asyncio.sleep(1)
            await progress.clear()
        await msg.clear()
        await msg.append("Completed stream")
```
Note
A useful pattern for displaying tool calls in a chatbot is for the tool to display using `.message_stream_context()` while the response generation is happening through `.append_message_stream()`. This allows the tool to display things like progress updates (or other "ephemeral" content) and optionally `.clear()` the stream back to its initial state when ready to display the "final" content.
messages
express.Chat.messages(format=MISSING, token_limits=None, transform_user='all', transform_assistant=False)
Reactively read chat messages
Obtain chat messages within a reactive context. The default behavior is intended for passing messages along to a model for response generation, where you typically want to:

- Cap the number of tokens sent in a single request (i.e., token_limits).
- Apply user input transformations (i.e., transform_user), if any.
- Not apply assistant response transformations (i.e., transform_assistant) since these are predominantly for display purposes (i.e., the model shouldn’t concern itself with how the responses are displayed).
Parameters
Name | Type | Description | Default |
---|---|---|---|
format | MISSING_TYPE | ProviderMessageFormat | The message format to return. The default value of MISSING means chat messages are returned as :class:ChatMessage objects (a dictionary with content and role keys). Other supported formats include: * "anthropic" : Anthropic message format. * "google" : Google message (aka content) format. * "langchain" : LangChain message format. * "openai" : OpenAI message format. * "ollama" : Ollama message format. | MISSING |
token_limits | tuple[int, int] | None | Limit the conversation history based on token limits. If specified, only the most recent messages that fit within the token limits are returned. This is useful for avoiding “exceeded token limit” errors when sending messages to the relevant model, while still providing the most recent context available. A specified value must be a tuple of two integers. The first integer is the maximum number of tokens that can be sent to the model in a single request. The second integer is the amount of tokens to reserve for the model’s response. Note that token counts are based on the tokenizer provided to the Chat constructor. | None |
transform_user | Literal['all', 'last', 'none'] | Whether to return user input messages with transformation applied. This only matters if a transform_user_input was provided to the chat constructor. The default value of "all" means all user input messages are transformed. The value of "last" means only the last user input message is transformed. The value of "none" means no user input messages are transformed. | 'all' |
transform_assistant | bool | Whether to return assistant messages with transformation applied. This only matters if a transform_assistant_response was provided to the chat constructor. | False |
Note
Messages are listed in the order they were added. As a result, when this method is called in a .on_user_submit()
callback (as it most often is), the last message will be the most recent one submitted by the user.
Returns
Name | Type | Description |
---|---|---|
tuple[ChatMessage, …] | A tuple of chat messages. |
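As a rough sketch, history in "openai" format can be forwarded to a completion call and the resulting stream appended back. The OpenAI client, model name, and token limits below are illustrative assumptions, not part of this reference, and token_limits presumes a tokenizer is available to the Chat instance:

```python
from openai import AsyncOpenAI
from shiny.express import ui

client = AsyncOpenAI()  # assumes an OpenAI API key is configured

chat = ui.Chat(id="my_chat")
chat.ui()

@chat.on_user_submit
async def _():
    # Most recent messages that fit within ~4096 tokens, reserving 1024 for the reply
    history = chat.messages(format="openai", token_limits=(4096, 1024))
    response = await client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=list(history),
        stream=True,
    )
    await chat.append_message_stream(response)
```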
on_user_submit
express.Chat.on_user_submit(fn=None)
Define a function to invoke when user input is submitted.
Apply this method as a decorator to a function (fn) that should be invoked when the user submits a message. This function can take an optional argument, which will be the user input message.

In many cases, the implementation of fn should also do the following (see the sketch after the note below):

- Generate a response based on the user input.
- If the response should be aware of chat history, use a package like chatlas to manage the chat state, or use the .messages() method to get the chat history.
- Append that response to the chat component using .append_message() (or .append_message_stream() if the response is streamed).
Parameters
Name | Type | Description | Default |
---|---|---|---|
fn | UserSubmitFunction | None | A function to invoke when user input is submitted. | None |
Note
This method creates a reactive effect that only gets invalidated when the user submits a message. Thus, the function fn
can read other reactive dependencies, but it will only be re-invoked when the user submits a message.
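A minimal sketch of the decorator in use with a chatlas client that manages the chat history (the chatlas pieces are assumptions; any streaming client could stand in):

```python
from chatlas import ChatOpenAI
from shiny.express import ui

chat_client = ChatOpenAI()  # chatlas keeps the conversation history for us

chat = ui.Chat(id="my_chat")
chat.ui()

@chat.on_user_submit
async def _(user_input: str):
    # Generate (and stream) a response based on the submitted input
    response = await chat_client.stream_async(user_input)
    await chat.append_message_stream(response)
```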
set_user_message
express.Chat.set_user_message(value)
Deprecated. Use update_user_input(value=value)
instead.
transform_assistant_response
express.Chat.transform_assistant_response(fn=None)
Transform assistant responses.
Use this method as a decorator on a function (fn) that transforms assistant responses before displaying them in the chat. This is useful for post-processing model responses before displaying them to the user.
Parameters
Name | Type | Description | Default |
---|---|---|---|
fn | TransformAssistantResponseFunction | None | A function that takes a string and returns either a string, :class:shiny.ui.HTML , or None . If fn returns a string, it gets interpreted and parsed as markdown on the client (and the resulting HTML is then sanitized). If fn returns :class:shiny.ui.HTML , it will be displayed as-is. If fn returns None , the response is effectively ignored. | None |
Note
When doing an .append_message_stream(), fn gets called on every chunk of the response (thus, it should be performant), and can optionally access more information (i.e., arguments) about the stream. The 1st argument (required) contains the accumulated content, the 2nd argument (optional) contains the current chunk, and the 3rd argument (optional) is a boolean indicating whether this chunk is the last one in the stream.
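A minimal sketch of a simple transform (the substitution below is illustrative post-processing, using only the required first argument):

```python
from shiny.express import ui

chat = ui.Chat(id="my_chat")
chat.ui()

@chat.transform_assistant_response
def _(content: str) -> str:
    # Illustrative post-processing of the (accumulated) response text
    return content.replace("As an AI language model, ", "")
```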
transform_user_input
express.Chat.transform_user_input(fn=None)
Transform user input.
Use this method as a decorator on a function (fn) that transforms user input before storing it in the chat messages returned by .messages(). This is useful for implementing RAG workflows, like taking a URL and scraping it for text before sending it to the model.
Parameters
Name | Type | Description | Default |
---|---|---|---|
fn | TransformUserInput | TransformUserInputAsync | None | A function to transform user input before storing it in the chat .messages() . If fn returns None , the user input is effectively ignored, and .on_user_submit() callbacks are suspended until more input is submitted. This behavior is often useful to catch and handle errors that occur during transformation. In this case, the transform function should append an error message to the chat (via .append_message() ) to inform the user of the error. | None |
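A hedged sketch of a RAG-style transform (scrape_text() is a hypothetical helper, not part of this API; returning None drops the submission):

```python
from shiny.express import ui

chat = ui.Chat(id="my_chat")
chat.ui()

@chat.transform_user_input
async def _(user_input: str) -> str | None:
    if not user_input.startswith("http"):
        return user_input
    try:
        page_text = await scrape_text(user_input)  # hypothetical helper
    except Exception:
        # Inform the user and drop the submission (None suspends .on_user_submit())
        await chat.append_message("Sorry, I couldn't read that URL. Try another one.")
        return None
    return f"Use this page as context:\n\n{page_text}"
```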
ui
express.Chat.ui(messages=None, placeholder='Enter a message...', width='min(680px, 100%)', height='auto', fill=True, icon_assistant=None, **kwargs)
Create a UI element for this Chat
.
Parameters
Name | Type | Description | Default |
---|---|---|---|
messages | Optional[Sequence[str | ChatMessageDict]] | A sequence of messages to display in the chat. Each message can be either a string or a dictionary with content and role keys. The content key should contain the message text, and the role key can be “assistant” or “user”. | None |
placeholder | str | Placeholder text for the chat input. | 'Enter a message...' |
width | CssUnit | The width of the UI element. | 'min(680px, 100%)' |
height | CssUnit | The height of the UI element. | 'auto' |
fill | bool | Whether the chat should vertically take available space inside a fillable container. | True |
icon_assistant | HTML | Tag | TagList | None | The icon to use for the assistant chat messages. Can be a HTML or a tag in the form of :class:~htmltools.HTML or :class:~htmltools.Tag . If None , a default robot icon is used. | None |
kwargs | TagAttrValue | Additional attributes for the chat container element. | {} |
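For instance, a sketch of customizing the rendered element (the starting message, placeholder, and sizing below are arbitrary illustrations):

```python
from shiny.express import ui

chat = ui.Chat(id="my_chat")

chat.ui(
    messages=["**Hello!** How can I help you today?"],
    placeholder="Ask me anything...",
    height="500px",
    fill=False,
)
```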
update_user_input
express.Chat.update_user_input(value=None, placeholder=None, submit=False, focus=False)
Update the user input.
Parameters
Name | Type | Description | Default |
---|---|---|---|
value | str | None | The value to set the user input to. | None |
placeholder | str | None | The placeholder text for the user input. | None |
submit | bool | Whether to automatically submit the text for the user. Requires value . | False |
focus | bool | Whether to move focus to the input element. Requires value . | False |
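A sketch of pre-filling a follow-up prompt after each reply (the values are illustrative, and this assumes update_user_input() can be called like other non-awaited update helpers):

```python
from shiny.express import ui

chat = ui.Chat(id="my_chat")
chat.ui()

@chat.on_user_submit
async def _(user_input: str):
    await chat.append_message(f"You said: {user_input}")
    # Pre-fill a suggested follow-up and focus the input, without submitting it
    chat.update_user_input(value="Tell me more", focus=True)
```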
user_input
express.Chat.user_input(transform=False)
Reactively read the user’s message.
Parameters
Name | Type | Description | Default |
---|---|---|---|
transform | bool | Whether to apply the user input transformation function (if one was provided). | False |
Returns
Name | Type | Description |
---|---|---|
str | None | The user input message (before any transformation). |
Note
Most users shouldn’t need to use this method directly since the last item in .messages() contains the most recent user input. It can be useful for:

- Taking a reactive dependency on the user’s input outside of a .on_user_submit() callback.
- Maintaining message state separately from .messages().
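For example, a sketch of reacting to each submission outside of an .on_user_submit() callback (the logging here is purely illustrative):

```python
from shiny import reactive
from shiny.express import ui

chat = ui.Chat(id="my_chat")
chat.ui()

@reactive.effect
def _():
    latest = chat.user_input()  # reactive dependency on the submitted input
    if latest is not None:
        print(f"User submitted: {latest}")
```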