# parallel_chat_text

```python
parallel_chat_text(
    chat,
    prompts,
    *,
    max_active=10,
    rpm=500,
    on_error='return',
    kwargs=None,
)
```

Submit multiple chat prompts in parallel and return text responses.

This is a convenience function that wraps parallel_chat() and extracts just the text content from each response.
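Conceptually, it behaves like the sketch below. Note this is illustrative only: `get_last_turn()` and `.text` are assumed chatlas accessors, not necessarily what the function uses internally, and the sketch ignores failed or unsubmitted requests.

```python
# Rough conceptual equivalence (assumed accessors; not the actual internals):
chats = await ctl.parallel_chat(chat, prompts)
texts = [c.get_last_turn().text for c in chats]  # assumes every request succeeded
```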
## Parameters

| Name | Type | Description | Default |
|---|---|---|---|
| chat | Chat | A base chat object. | required |
| prompts | list[ContentT] \| list[list[ContentT]] | A list of prompts. Each prompt can be a string or a list of string/Content objects. | required |
| max_active | int | The maximum number of simultaneous requests to send. | 10 |
| rpm | int | The maximum number of requests per minute. | 500 |
| on_error | Literal['return', 'continue', 'stop'] | What to do when a request fails. "return" (the default) stops processing new requests, waits for in-flight requests to finish, then returns; "continue" keeps going, performing every request; "stop" stops processing and raises an error. | 'return' |
| kwargs | Optional[dict[str, Any]] | Additional keyword arguments to pass to the chat method. | None |
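For instance, to stay under a tighter rate limit and keep going past individual failures, you can combine these parameters (a sketch assuming a `chat` and `prompts` like those in the Examples below):

```python
responses = await ctl.parallel_chat_text(
    chat,
    prompts,
    max_active=5,         # at most 5 requests in flight at once
    rpm=100,              # stay under 100 requests per minute
    on_error="continue",  # perform every request even if some fail
)
```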
## Returns

| Name | Type | Description |
|---|---|---|
| | | A list with one element for each prompt. Each element is either a string (if successful), None (if the request wasn't submitted), or an error object (if it failed). |
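Because the three outcomes have distinct shapes, you can branch on them directly. A sketch, given `prompts` and `responses` as in the Examples below (it assumes failed requests come back as objects that are neither strings nor None, which matches the description above):

```python
for prompt, res in zip(prompts, responses):
    if isinstance(res, str):
        print(f"ok: {res}")          # successful text response
    elif res is None:
        print(f"skipped: {prompt}")  # request wasn't submitted
    else:
        print(f"failed: {res!r}")    # error object for a failed request
```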
## Examples

```python
import chatlas as ctl

chat = ctl.ChatOpenAI()

countries = ["Canada", "New Zealand", "Jamaica", "United States"]
prompts = [f"What's the capital of {country}?" for country in countries]

# NOTE: if running from a script, you'd need to wrap this in an async function
# and call asyncio.run(main())
responses = await ctl.parallel_chat_text(chat, prompts)

for country, response in zip(countries, responses):
    print(f"{country}: {response}")
```
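As the note in the example says, top-level `await` only works in a REPL or notebook. From a script, the same example would be wrapped like this (a minimal sketch):

```python
import asyncio

import chatlas as ctl

async def main():
    chat = ctl.ChatOpenAI()
    countries = ["Canada", "New Zealand", "Jamaica", "United States"]
    prompts = [f"What's the capital of {country}?" for country in countries]
    responses = await ctl.parallel_chat_text(chat, prompts)
    for country, response in zip(countries, responses):
        print(f"{country}: {response}")

asyncio.run(main())
```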
## See Also

- parallel_chat: Get full Chat objects
- parallel_chat_structured: Extract structured data