Positron
Posit Assistant comes pre-installed in recent versions of Positron.
1. Enable Posit Assistant
Open your settings.json (Cmd+Shift+P / Ctrl+Shift+P, then Open User Settings (JSON)) and add:
"positron.assistant.enable": true,
"assistant.sidebarView": true,
This adds the Posit Assistant icon to the activity bar. Click it to open the chat panel.
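These settings are fragments to merge into your existing configuration; a minimal complete settings.json using only the two keys above might look like this (a sketch, assuming no other settings are present):

```jsonc
{
  // Enable Posit Assistant and show its view in the sidebar
  "positron.assistant.enable": true,
  "assistant.sidebarView": true
}
```

If your settings.json already has other entries, add these keys alongside them rather than replacing the file.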
2. Configure a Model Provider
You need at least one model provider to use Posit Assistant.
Once your providers are enabled, sign in or enter your credentials using the provider configuration dialog. To open it:
- Open the Command Palette (Cmd+Shift+P / Ctrl+Shift+P) and run Configure Language Model Providers, or
- Click the gear icon in the top bar and select Setup Instructions.
On older versions of Positron, you may need to enable LLM providers manually. Open your settings.json (Cmd+Shift+P / Ctrl+Shift+P → Open User Settings (JSON)) and add the setting for each provider you want to use:
```jsonc
// Posit AI (managed service)
"positron.assistant.provider.positAI.enable": true,
// Amazon Bedrock (uses AWS CLI credentials)
"positron.assistant.provider.amazonBedrock.enable": true,
// OpenAI (requires API key)
"positron.assistant.provider.openAI.enable": true,
// Snowflake Cortex
"positron.assistant.provider.snowflakeCortex.enable": true,
```
3. Start a Conversation
Click the Posit Assistant icon in the activity bar (or run Open Positron Assistant in Editor Panel from the Command Palette), type a question, and press Enter.
Further Configuration
See Positron Settings for model routing, experimental features, and other Positron-specific options.
Known Issues
GitHub Copilot Premium Request Usage
When using GitHub Copilot models through Posit Assistant, each tool call and follow-up request made during a conversation counts as a separate premium request against your Copilot quota. This means a single user message may consume multiple premium requests if the assistant uses tools to complete the task.
This is a limitation of the VS Code Language Model API that Posit Assistant uses to access Copilot models. The API does not currently distinguish between an initial user request and the follow-up calls needed to fulfill it. To reduce consumption, you can use models from other providers (such as Anthropic or OpenAI) configured with direct API access, which do not have this limitation.