Profile Management

Kollabor provides flexible profile systems to manage LLM configurations, chat personalities, transcription settings, and notes preferences across both the CLI and the App.

Overview

Profiles in Kollabor allow you to:

  • Manage multiple LLM configurations - switch between OpenAI, Anthropic, local models, and custom endpoints
  • Customize chat personalities - create distinct AI assistants with different system prompts and behaviors
  • Override settings via environment variables - adjust profile values without editing config files
  • Control tool calling per profile - set function execution behavior for each configuration

Profile Types

LLM Profiles (CLI)

API endpoint configurations with provider settings, model selection, temperature, max tokens, and authentication. Stored globally in ~/.kollabor-cli/config.json.

LLM Profiles (App)

API configurations for chat, notes, orb, and title generation. Each feature can use a different LLM profile with independent provider and model settings.

Chat Profiles (App)

Chat personality configurations with system prompts, linked LLM profiles, tool calling preferences, and icon customization.

Transcription & Notes Profiles (App)

Specialized profiles for audio transcription formatting and notes organization with custom templates and LLM links.

CLI Profile Management

The Kollabor CLI uses the /profile command (aliases: /prof, /llm) for managing LLM profiles.

Listing Profiles

/profile list

# Shows all available profiles with details:
# - Profile name and description
# - Provider type (openai, anthropic, azure_openai, gemini, custom)
# - Model identifier
# - Temperature and max_tokens settings
# - Active profile indicator (*)

Switching Profiles

/profile set claude

# Activates the 'claude' profile
# Changes persist across sessions (saved to config)

Creating Profiles

The profile wizard guides you through creating a new LLM configuration:

/profile create

# Interactive wizard prompts:
# 1. Profile name (alphanumeric, hyphens, underscores)
# 2. Provider type (auto/openai/anthropic/azure_openai/gemini/openrouter/custom)
# 3. API key (masked: sk-...xyz)
# 4. Model (e.g., gpt-4, claude-sonnet-4)
# 5. Temperature (0.0-2.0, default: 0.7)
# 6. Max tokens (default: 4096)
# 7. Base URL (for custom providers, optional)
# 8. Organization ID (OpenAI only, optional)
# 9. Advanced settings prompt (y/n)
#    - Description
#    - Timeout (milliseconds, 0 = no timeout)
#    - Tool calling support (y/n)
#    - Streaming enabled (y/n)

# Profile validation and auto-detection:
# - Provider auto-detected from API key format
# - Profile name uniqueness check
# - Configuration validation before saving
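
The provider auto-detection step can be pictured as a key-prefix heuristic. This is an illustrative sketch, not Kollabor's actual detection logic; the prefixes are assumptions based on common provider key formats:

```python
def detect_provider(api_key: str) -> str:
    """Guess the provider from the API key format (heuristic sketch).

    The prefixes below are illustrative assumptions, not Kollabor's rules.
    """
    if api_key.startswith("sk-ant-"):
        return "anthropic"
    if api_key.startswith("sk-or-"):
        return "openrouter"
    if api_key.startswith("AIza"):
        return "gemini"
    if api_key.startswith("sk-"):
        return "openai"
    return "custom"
```

The wizard would then offer the detected provider as the default and let you override it.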

Built-in Profiles

default

Local LLM (http://localhost:1234/v1, qwen/qwen3-4b, temp: 0.7)

fast

Fast local model (qwen/qwen3-0.6b, temp: 0.3)

claude

Anthropic Claude (claude-sonnet-4-20250514, temp: 0.7, max: 4096)

openai

OpenAI GPT-4 (gpt-4-turbo, temp: 0.7, max: 4096)

Multi-Provider Support

Kollabor supports multiple LLM providers through a unified profile system:

Provider       API Type            Features
openai         OpenAI API          Function calling, streaming, organization support
anthropic      Anthropic API       Tool use blocks, API versioning, streaming
azure_openai   Azure OpenAI        Azure endpoints, deployment IDs
gemini         Google Gemini       Gemini models via API
openrouter     OpenRouter          Multi-model routing, unified API
custom         OpenAI-compatible   Local models (Ollama, LM Studio), custom endpoints

Environment Variable Overrides

CLI profiles support environment variable overrides using the pattern KOLLABOR_{PROFILE_NAME}_{FIELD}:

# Override model for 'claude' profile
export KOLLABOR_CLAUDE_MODEL="claude-opus-4"

# Override API key for 'openai' profile
export KOLLABOR_OPENAI_API_KEY="sk-..."

# Override temperature for custom profile
export KOLLABOR_MY_PROFILE_TEMPERATURE="0.9"

# Override base URL for local profile
export KOLLABOR_LOCAL_BASE_URL="http://localhost:11434/v1"

# Override max tokens
export KOLLABOR_FAST_MAX_TOKENS="2048"

# Override timeout (milliseconds, 0 = no timeout)
export KOLLABOR_CLAUDE_TIMEOUT="120000"

# Override streaming (true/false)
export KOLLABOR_OPENAI_STREAMING="false"

# Override tool support (true/false)
export KOLLABOR_CUSTOM_SUPPORTS_TOOLS="true"

# Special characters in profile names become underscores
# "my-local-llm" → KOLLABOR_MY_LOCAL_LLM_MODEL
# "fast.api" → KOLLABOR_FAST_API_API_KEY

Environment variables take priority over config.json values. This allows temporary overrides without modifying saved configurations.
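
The naming and precedence rules above can be sketched in a few lines. `env_var_name` and `resolve` are hypothetical helper names, not part of the CLI:

```python
import os
import re

def env_var_name(profile: str, field: str) -> str:
    """Build KOLLABOR_{PROFILE_NAME}_{FIELD}, mapping non-alphanumerics to '_'."""
    normalized = re.sub(r"[^A-Za-z0-9]", "_", profile).upper()
    return f"KOLLABOR_{normalized}_{field.upper()}"

def resolve(profile: str, field: str, config_value):
    """Environment variables take priority over the config.json value."""
    return os.environ.get(env_var_name(profile, field), config_value)
```

For example, `resolve("claude", "model", "claude-sonnet-4-20250514")` returns the saved model unless `KOLLABOR_CLAUDE_MODEL` is exported.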

Creating Profiles from Environment Variables

If a profile doesn't exist in config.json, the CLI can auto-create it from environment variables:

# Define profile via environment (MODEL is required)
export KOLLABOR_PRODUCTION_MODEL="gpt-4-turbo"
export KOLLABOR_PRODUCTION_API_KEY="sk-..."
export KOLLABOR_PRODUCTION_PROVIDER="openai"
export KOLLABOR_PRODUCTION_TEMPERATURE="0.5"

# Activate profile (auto-created if not in config)
/profile set production

# Profile is created in memory for the session
# Not saved to config.json unless explicitly created via wizard
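
A minimal sketch of that auto-creation step, assuming a flat KOLLABOR_{NAME}_* lookup (the real CLI also normalizes special characters in profile names, which this sketch skips):

```python
import os

# Fields the sketch looks up; the actual CLI may support more.
FIELDS = ("model", "provider", "api_key", "base_url",
          "temperature", "max_tokens", "timeout")

def profile_from_env(name: str):
    """Build an in-memory profile from KOLLABOR_{NAME}_* variables.

    Returns None when the required MODEL variable is missing.
    """
    prefix = f"KOLLABOR_{name.upper()}_"
    values = {f: os.environ.get(prefix + f.upper()) for f in FIELDS}
    if not values["model"]:
        return None  # MODEL is required to auto-create a profile
    return {k: v for k, v in values.items() if v is not None}
```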

App Profile System

The Kollabor App provides a comprehensive profile system with visual management through the Settings UI.

LLM API Profiles

Configure LLM endpoints for different features:

{
  "id": "main",
  "name": "Main Model",
  "description": "Production LLM for general use",
  "enabled": true,
  "provider": "openai-compatible",
  "apiUrl": "http://localhost:1234/v1/chat/completions",
  "apiKey": "not-needed",
  "model": "ibm/granite-4-h-tiny",
  "temperature": 0.7,
  "maxTokens": 4096,
  "customHeaders": {
    "X-Custom-Header": "value"
  },
  "streaming": {
    "streamTimeout": 90000,
    "retryAttempts": 3,
    "retryDelay": 1000
  }
}
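
The streaming block's retry settings can be read as a fixed-delay retry loop. `with_retries` is an illustrative helper mirroring `retryAttempts` and `retryDelay`, not the App's actual implementation:

```python
import time

def with_retries(request, attempts: int = 3, delay_ms: int = 1000):
    """Retry a streaming request per a profile's streaming settings.

    `request` is any zero-argument callable. Fixed-delay backoff is an
    assumption here; the App may use a different strategy.
    """
    last_error = None
    for _ in range(attempts):
        try:
            return request()
        except Exception as exc:  # sketch only; real code would be narrower
            last_error = exc
            time.sleep(delay_ms / 1000)
    raise last_error
```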

Chat Profiles

Chat profiles define AI personalities with system prompts and behavior settings:

{
  "id": "code-helper",
  "name": "Code Helper",
  "icon": "code",
  "systemPrompt": "You are an expert programming assistant...",
  "llmProfileId": "main",
  "enableToolCalling": true,
  "toolConfirmation": "auto",
  "maxToolIterations": 10,
  "builtin": false
}

Profile-LLM Linking

Chat profiles link to LLM profiles via llmProfileId. The system validates:

  • LLM profile exists and is enabled
  • API URL and API key are configured
  • Model identifier is specified
  • Provider settings are valid

If validation fails, the UI displays clear error messages and disables the send button until configuration is corrected.
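
The validation rules above can be sketched as a function that returns a list of error messages. Field names follow the JSON examples earlier in this page; the messages themselves are illustrative:

```python
def validate_chat_profile(chat_profile: dict, llm_profiles: dict) -> list:
    """Collect validation errors for a chat profile's LLM link."""
    errors = []
    llm = llm_profiles.get(chat_profile.get("llmProfileId"))
    if llm is None:
        errors.append("Linked LLM profile does not exist")
        return errors
    if not llm.get("enabled"):
        errors.append("Linked LLM profile is disabled")
    if not llm.get("apiUrl") or not llm.get("apiKey"):
        errors.append("API URL and API key must be configured")
    if not llm.get("model"):
        errors.append("Model identifier is missing")
    return errors  # an empty list means the send button stays enabled
```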

Feature-Specific Profile Assignment

Different features can use different LLM profiles:

Feature   Profile Assignment
chat      Assigned via ChatProfile.llmProfileId
notes     Global assignment via activeApiProfiles.notes
orb       Global assignment via activeApiProfiles.orb
titles    Global assignment via activeApiProfiles.titles
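
Resolution can be sketched as: chat follows the active chat profile's link, while every other feature reads the global map. `llm_profile_for_feature` is a hypothetical helper, not App code:

```python
def llm_profile_for_feature(feature: str, settings: dict,
                            active_chat_profile=None):
    """Resolve which LLM API profile a feature should use.

    `settings` mirrors the llmSettings structure described below
    (activeApiProfiles maps feature name -> LLM profile id).
    """
    if feature == "chat" and active_chat_profile:
        return active_chat_profile.get("llmProfileId")
    return settings.get("activeApiProfiles", {}).get(feature)
```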

Tool Calling Configuration

Chat profiles in the App support per-profile tool calling configuration:

Setting             Type      Default   Description
enableToolCalling   boolean   true      Enable/disable function calling
toolConfirmation    enum      auto      'auto', 'confirm', 'never'
maxToolIterations   number    10        Max consecutive tool calls
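
The three settings combine into a bounded tool-call loop. The control flow below is a sketch, assuming 'never' blocks tool execution entirely; the hook functions are caller-supplied stand-ins, not Kollabor's implementation:

```python
def run_tool_loop(send, execute_tool, confirm, profile: dict):
    """Bounded tool-calling loop driven by a chat profile's settings.

    send(result) -> reply dict, execute_tool(call) -> result, and
    confirm(call) -> bool are caller-supplied hooks.
    """
    if not profile.get("enableToolCalling", True):
        return send(None)
    mode = profile.get("toolConfirmation", "auto")
    reply, result = None, None
    for _ in range(profile.get("maxToolIterations", 10)):
        reply = send(result)
        call = reply.get("tool_call")
        if call is None:
            return reply  # model answered without requesting a tool
        if mode == "never" or (mode == "confirm" and not confirm(call)):
            return reply  # tool execution blocked or declined
        result = execute_tool(call)
    return reply  # iteration cap reached
```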

The CLI provides a global permission system via the /permissions command with modes: CONFIRM_ALL, DEFAULT, AUTO_APPROVE_EDITS, TRUST_ALL.

Configuration Reference

CLI Profile Fields (LLMProfile)

Field            Type     Required   Description
name             string   Yes        Profile identifier (alphanumeric, hyphens, underscores)
provider         string   Yes        Provider type (openai, anthropic, etc.)
model            string   Yes        Model identifier
api_key          string   No         API authentication key (or via env var)
base_url         string   No         Custom endpoint URL
temperature      float    No         Sampling temperature (0.0-2.0, default: 0.7)
max_tokens       int      No         Maximum tokens to generate
timeout          int      No         Request timeout (ms, 0 = no timeout)
streaming        bool     No         Enable streaming responses (default: true)
supports_tools   bool     No         Enable tool/function calling (default: true)

App LLM Profile Fields

Field           Type     Required   Description
id              string   Yes        Unique identifier
name            string   Yes        Display name
enabled         boolean  Yes        Enable/disable profile
provider        enum     Yes        'anthropic' or 'openai-compatible'
apiUrl          string   Yes        API endpoint URL
apiKey          string   Yes        API authentication key
customHeaders   object   No         Custom HTTP headers (key-value pairs)
streaming       object   No         Streaming configuration (timeout, retries, delay)

Profile Storage

CLI Storage

# Global profiles (user-wide)
~/.kollabor-cli/config.json
  └── core.llm.profiles
      ├── profile_name
      │   ├── provider
      │   ├── model
      │   ├── temperature
      │   ├── max_tokens
      │   ├── api_key
      │   ├── base_url
      │   └── ...
      └── active_profile

# Profiles are user-level settings
# Available across all projects
# Persisted in JSON format
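
Reading the active profile back out follows the tree above. A minimal sketch, assuming the active_profile pointer sits alongside the named profiles under core.llm.profiles, as drawn:

```python
import json
from pathlib import Path

def load_active_profile(config_path: str = "~/.kollabor-cli/config.json"):
    """Return the active LLM profile dict from the global CLI config.

    Key layout follows the tree sketched above; returns None when the
    config file or active profile is missing.
    """
    path = Path(config_path).expanduser()
    if not path.exists():
        return None
    config = json.loads(path.read_text())
    profiles = config.get("core", {}).get("llm", {}).get("profiles", {})
    return profiles.get(profiles.get("active_profile"))
```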

App Storage

# App configuration
~/.kollabor/config.json
  └── llmSettings
      ├── apiProfiles (LLM API profiles)
      ├── activeApiProfiles (feature assignments)
      ├── chatProfiles (chat personalities)
      ├── transcriptionProfiles
      └── notesProfiles

# Profile objects stored as { id: profile }
# Composables inject 'id' field when iterating
# Backend uses settings module for defaults