# Profile Management
Kollabor provides flexible profile systems to manage LLM configurations, chat personalities, transcription settings, and notes preferences across both CLI and App.
## Overview
Profiles in Kollabor allow you to:
- Manage multiple LLM configurations - switch between OpenAI, Anthropic, local models, and custom endpoints
- Customize chat personalities - create distinct AI assistants with different system prompts and behaviors
- Integrate environment variables - override profile settings without editing config files
- Control tool calling per profile - set function execution behavior for each configuration
## Profile Types

### LLM Profiles (CLI)

API endpoint configurations with provider settings, model selection, temperature, max tokens, and authentication. Stored globally in ~/.kollabor-cli/config.json.

### LLM Profiles (App)

API configurations for chat, notes, orb, and title generation. Each feature can use a different LLM profile with independent provider and model settings.

### Chat Profiles (App)

Chat personality configurations with system prompts, linked LLM profiles, tool calling preferences, and icon customization.

### Transcription & Notes Profiles (App)

Specialized profiles for audio transcription formatting and notes organization with custom templates and LLM links.
## CLI Profile Management
The Kollab CLI uses the /profile command (aliases: /prof, /llm) for managing LLM profiles.
### Listing Profiles
```shell
/profile list
# Shows all available profiles with details:
# - Profile name and description
# - Provider type (openai, anthropic, azure_openai, gemini, custom)
# - Model identifier
# - Temperature and max_tokens settings
# - Active profile indicator (*)
```

### Switching Profiles
```shell
/profile set claude
# Activates the 'claude' profile
# Changes persist across sessions (saved to config)
```

### Creating Profiles
The profile wizard guides you through creating a new LLM configuration:
```shell
/profile create
# Interactive wizard prompts:
# 1. Profile name (alphanumeric, hyphens, underscores)
# 2. Provider type (auto/openai/anthropic/azure_openai/gemini/openrouter/custom)
# 3. API key (masked: sk-...xyz)
# 4. Model (e.g., gpt-4, claude-sonnet-4)
# 5. Temperature (0.0-2.0, default: 0.7)
# 6. Max tokens (default: 4096)
# 7. Base URL (for custom providers, optional)
# 8. Organization ID (OpenAI only, optional)
# 9. Advanced settings prompt (y/n)
#    - Description
#    - Timeout (milliseconds, 0 = no timeout)
#    - Tool calling support (y/n)
#    - Streaming enabled (y/n)
#
# Profile validation and auto-detection:
# - Provider auto-detected from API key format
# - Profile name uniqueness check
# - Configuration validation before saving
```

### Built-in Profiles
| Profile | Description |
|---|---|
| default | Local LLM (http://localhost:1234/v1, qwen/qwen3-4b, temp: 0.7) |
| fast | Fast local model (qwen/qwen3-0.6b, temp: 0.3) |
| claude | Anthropic Claude (claude-sonnet-4-20250514, temp: 0.7, max: 4096) |
| openai | OpenAI GPT-4 (gpt-4-turbo, temp: 0.7, max: 4096) |
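The wizard's provider auto-detection (step 2 above) can be sketched as a key-prefix check. This is an illustrative sketch, not Kollabor's actual logic; the function name and prefix map are assumptions:

```python
def detect_provider(api_key: str) -> str:
    """Guess a provider from the API key format (illustrative prefixes only)."""
    if api_key.startswith("sk-ant-"):
        return "anthropic"    # Anthropic keys use the sk-ant- prefix
    if api_key.startswith("sk-or-"):
        return "openrouter"   # OpenRouter keys use the sk-or- prefix
    if api_key.startswith("sk-"):
        return "openai"       # plain sk- keys default to OpenAI
    return "custom"           # anything else: custom / local endpoint

print(detect_provider("sk-ant-abc123"))  # anthropic
```

Anything that does not match a known prefix (local servers often accept a dummy key) falls through to the custom provider, where the wizard asks for a base URL.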
## Multi-Provider Support
Kollabor supports multiple LLM providers through a unified profile system:
| Provider | API Type | Features |
|---|---|---|
| openai | OpenAI API | Function calling, streaming, organization support |
| anthropic | Anthropic API | Tool use blocks, API versioning, streaming |
| azure_openai | Azure OpenAI | Azure endpoints, deployment IDs |
| gemini | Google Gemini | Gemini models via API |
| openrouter | OpenRouter | Multi-model routing, unified API |
| custom | OpenAI-compatible | Local models (Ollama, LM Studio), custom endpoints |
### Environment Variable Overrides
CLI profiles support environment variable overrides using the pattern KOLLABOR_{PROFILE_NAME}_{FIELD}:
```shell
# Override model for 'claude' profile
export KOLLABOR_CLAUDE_MODEL="claude-opus-4"

# Override API key for 'openai' profile
export KOLLABOR_OPENAI_API_KEY="sk-..."

# Override temperature for custom profile
export KOLLABOR_MY_PROFILE_TEMPERATURE="0.9"

# Override base URL for local profile
export KOLLABOR_LOCAL_BASE_URL="http://localhost:11434/v1"

# Override max tokens
export KOLLABOR_FAST_MAX_TOKENS="2048"

# Override timeout (milliseconds, 0 = no timeout)
export KOLLABOR_CLAUDE_TIMEOUT="120000"

# Override streaming (true/false)
export KOLLABOR_OPENAI_STREAMING="false"

# Override tool support (true/false)
export KOLLABOR_CUSTOM_SUPPORTS_TOOLS="true"

# Special characters in profile names become underscores
# "my-local-llm" → KOLLABOR_MY_LOCAL_LLM_MODEL
# "fast.api" → KOLLABOR_FAST_API_API_KEY
```

Environment variables take priority over config.json values. This allows temporary overrides without modifying saved configurations.
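The override mechanics above can be sketched as follows; `env_var_name` and `resolve_field` are hypothetical names, but the name mangling and precedence rules match the description:

```python
import os

def env_var_name(profile: str, field: str) -> str:
    """KOLLABOR_{PROFILE}_{FIELD}: non-alphanumeric characters become underscores."""
    safe = "".join(c if c.isalnum() else "_" for c in profile)
    return f"KOLLABOR_{safe.upper()}_{field.upper()}"

def resolve_field(profile: str, field: str, config_value):
    """Environment variables win over config.json values."""
    return os.environ.get(env_var_name(profile, field), config_value)

print(env_var_name("my-local-llm", "model"))  # KOLLABOR_MY_LOCAL_LLM_MODEL

os.environ["KOLLABOR_CLAUDE_MODEL"] = "claude-opus-4"
print(resolve_field("claude", "model", "claude-sonnet-4"))  # claude-opus-4 (env beats config)
```

With no matching variable set, `resolve_field` simply falls back to whatever config.json holds.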
### Creating Profiles from Environment Variables
If a profile doesn't exist in config.json, the CLI can auto-create it from environment variables:
```shell
# Define profile via environment (MODEL is required)
export KOLLABOR_PRODUCTION_MODEL="gpt-4-turbo"
export KOLLABOR_PRODUCTION_API_KEY="sk-..."
export KOLLABOR_PRODUCTION_PROVIDER="openai"
export KOLLABOR_PRODUCTION_TEMPERATURE="0.5"

# Activate profile (auto-created if not in config)
/profile set production

# Profile is created in memory for the session
# Not saved to config.json unless explicitly created via wizard
```

## App Profile System
Kowork provides a comprehensive profile system with visual management through the Settings UI.
### LLM API Profiles
Configure LLM endpoints for different features:
```json
{
  "id": "main",
  "name": "Main Model",
  "description": "Production LLM for general use",
  "enabled": true,
  "provider": "openai-compatible",
  "apiUrl": "http://localhost:1234/v1/chat/completions",
  "apiKey": "not-needed",
  "model": "ibm/granite-4-h-tiny",
  "temperature": 0.7,
  "maxTokens": 4096,
  "customHeaders": {
    "X-Custom-Header": "value"
  },
  "streaming": {
    "streamTimeout": 90000,
    "retryAttempts": 3,
    "retryDelay": 1000
  }
}
```

### Chat Profiles
Chat profiles define AI personalities with system prompts and behavior settings:
```json
{
  "id": "code-helper",
  "name": "Code Helper",
  "icon": "code",
  "systemPrompt": "You are an expert programming assistant...",
  "llmProfileId": "main",
  "enableToolCalling": true,
  "toolConfirmation": "auto",
  "maxToolIterations": 10,
  "builtin": false
}
```

### Profile-LLM Linking
Chat profiles link to LLM profiles via llmProfileId. The system validates that:
- The LLM profile exists and is enabled
- The API URL and API key are configured
- A model identifier is specified
- Provider settings are valid
If validation fails, the UI displays clear error messages and disables the send button until configuration is corrected.
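A minimal sketch of the validation steps above (the function name and error strings are assumptions, not the App's actual messages):

```python
def validate_chat_profile(chat_profile: dict, llm_profiles: dict) -> list:
    """Return a list of human-readable errors; an empty list means the profile is usable."""
    errors = []
    llm = llm_profiles.get(chat_profile.get("llmProfileId"))
    if llm is None:
        return ["Linked LLM profile does not exist"]
    if not llm.get("enabled"):
        errors.append("Linked LLM profile is disabled")
    if not llm.get("apiUrl") or not llm.get("apiKey"):
        errors.append("API URL and API key must be configured")
    if not llm.get("model"):
        errors.append("Model identifier is missing")
    return errors
```

A UI layer would then render the returned messages and disable the send button whenever the list is non-empty.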
### Feature-Specific Profile Assignment
Different features can use different LLM profiles:
| Feature | Profile Assignment |
|---|---|
| chat | Assigned via ChatProfile.llmProfileId |
| notes | Global assignment via activeApiProfiles.notes |
| orb | Global assignment via activeApiProfiles.orb |
| titles | Global assignment via activeApiProfiles.titles |
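Resolution might look like this sketch, assuming settings are shaped like the storage layout described under Profile Storage (apiProfiles keyed by id, activeApiProfiles mapping features to ids); the function name is an assumption:

```python
def llm_profile_for(feature, settings, chat_profile=None):
    """Chat resolves through the chat profile; other features use global assignments."""
    if feature == "chat" and chat_profile is not None:
        profile_id = chat_profile["llmProfileId"]
    else:
        profile_id = settings["activeApiProfiles"][feature]
    return settings["apiProfiles"][profile_id]

settings = {
    "apiProfiles": {"main": {"model": "gpt-4-turbo"}, "small": {"model": "qwen/qwen3-0.6b"}},
    "activeApiProfiles": {"notes": "small", "orb": "small", "titles": "small"},
}
print(llm_profile_for("notes", settings)["model"])  # qwen/qwen3-0.6b
```

The design lets a heavyweight model back the chat while cheap models handle background tasks like title generation.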
## Tool Calling Configuration
Chat profiles in the App support per-profile tool calling configuration:
| Setting | Type | Default | Description |
|---|---|---|---|
| enableToolCalling | boolean | true | Enable/disable function calling |
| toolConfirmation | enum | auto | 'auto', 'confirm', 'never' |
| maxToolIterations | number | 10 | Max consecutive tool calls |
The CLI provides a global permission system via the /permissions command with modes: CONFIRM_ALL, DEFAULT, AUTO_APPROVE_EDITS, TRUST_ALL.
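The maxToolIterations cap can be illustrated with a schematic loop; `send` and `execute_tool` stand in for the real model call and tool runner, and the stop message is invented:

```python
def run_with_tools(send, execute_tool, max_tool_iterations=10):
    """Call the model, executing requested tools until it answers or the cap is hit."""
    reply = send(None)                      # initial model call
    for _ in range(max_tool_iterations):
        if reply.get("tool_call") is None:
            return reply["text"]            # final answer, no more tools
        reply = send(execute_tool(reply["tool_call"]))  # feed tool result back
    if reply.get("tool_call") is None:
        return reply["text"]
    return "(stopped: maxToolIterations reached)"
```

The cap prevents a model that keeps requesting tools from looping indefinitely; after the limit, the run ends even if no final answer was produced.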
## Configuration Reference

### CLI Profile Fields (LLMProfile)
| Field | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | Profile identifier (alphanumeric, hyphens, underscores) |
| provider | string | Yes | Provider type (openai, anthropic, etc.) |
| model | string | Yes | Model identifier |
| api_key | string | No | API authentication key (or via env var) |
| base_url | string | No | Custom endpoint URL |
| temperature | float | No | Sampling temperature (0.0-2.0, default: 0.7) |
| max_tokens | int | No | Maximum tokens to generate |
| timeout | int | No | Request timeout (ms, 0 = no timeout) |
| streaming | bool | No | Enable streaming responses (default: true) |
| supports_tools | bool | No | Enable tool/function calling (default: true) |
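Applying the documented defaults to a partial profile might look like this; the merge itself is an assumed implementation detail, and the timeout default of 0 is a guess (the wizard only states that 0 means no timeout):

```python
CLI_PROFILE_DEFAULTS = {
    "temperature": 0.7,      # documented default
    "max_tokens": 4096,      # wizard default
    "streaming": True,       # documented default
    "supports_tools": True,  # documented default
    "timeout": 0,            # assumption: 0 = no timeout
}

def with_defaults(profile: dict) -> dict:
    """Fill optional LLMProfile fields with defaults; explicit values win."""
    return {**CLI_PROFILE_DEFAULTS, **profile}

p = with_defaults({"name": "claude", "provider": "anthropic",
                   "model": "claude-sonnet-4-20250514", "temperature": 0.2})
print(p["temperature"], p["streaming"])  # 0.2 True
```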
### App LLM Profile Fields
| Field | Type | Required | Description |
|---|---|---|---|
| id | string | Yes | Unique identifier |
| name | string | Yes | Display name |
| description | string | No | Human-readable description |
| enabled | boolean | Yes | Enable/disable profile |
| provider | enum | Yes | 'anthropic' or 'openai-compatible' |
| apiUrl | string | Yes | API endpoint URL |
| apiKey | string | Yes | API authentication key |
| model | string | Yes | Model identifier |
| temperature | number | No | Sampling temperature |
| maxTokens | number | No | Maximum tokens to generate |
| customHeaders | object | No | Custom HTTP headers (key-value pairs) |
| streaming | object | No | Streaming configuration (timeout, retries, delay) |
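One way apiKey and customHeaders could combine into request headers, assuming bearer-token auth for an openai-compatible provider (Anthropic's API uses an x-api-key header instead, which this sketch ignores); the function name is an assumption:

```python
def request_headers(profile: dict) -> dict:
    """Assemble HTTP headers from a profile: bearer auth plus any customHeaders."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {profile['apiKey']}",
    }
    headers.update(profile.get("customHeaders", {}))  # profile headers override defaults
    return headers

h = request_headers({"apiKey": "not-needed", "customHeaders": {"X-Custom-Header": "value"}})
print(h["X-Custom-Header"])  # value
```

Local servers like LM Studio typically ignore the Authorization header, which is why a placeholder key such as "not-needed" works.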
## Profile Storage

### CLI Storage
```text
# Global profiles (user-wide)
~/.kollabor-cli/config.json
└── core.llm.profiles
    ├── profile_name
    │   ├── provider
    │   ├── model
    │   ├── temperature
    │   ├── max_tokens
    │   ├── api_key
    │   ├── base_url
    │   └── ...
    └── active_profile

# Profiles are user-level settings
# Available across all projects
# Persisted in JSON format
```

### App Storage
```text
# App configuration
~/.kollabor/config.json
└── llmSettings
    ├── apiProfiles (LLM API profiles)
    ├── activeApiProfiles (feature assignments)
    ├── chatProfiles (chat personalities)
    ├── transcriptionProfiles
    └── notesProfiles

# Profile objects stored as { id: profile }
# Composables inject 'id' field when iterating
# Backend uses settings module for defaults
```
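The { id: profile } storage and id injection noted above can be sketched as (the helper name is an assumption):

```python
def profiles_as_list(profile_map: dict) -> list:
    """Flatten { id: profile } storage into a list, injecting the id field."""
    return [{"id": pid, **profile} for pid, profile in profile_map.items()]

stored = {"main": {"name": "Main Model"}, "fast": {"name": "Fast local model"}}
print(profiles_as_list(stored)[0])  # {'id': 'main', 'name': 'Main Model'}
```

Keying storage by id keeps lookups O(1) and prevents duplicate ids, while the list form is convenient for rendering in the Settings UI.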