# XAIBO Configuration Reference

Every agent is defined by an `agent.yaml` file that declares its LLM connections, cognitive modules, and wiring. This page documents the full schema.
## Full Schema Example
```yaml
llms:
  main-llm:
    model: "kimi-k2.6"
    base_url: "https://relay.public.cloud.xpress.ai/v1"
    api_key: "${XPRESSAI_API_TOKEN}"
    temperature: 0.7
    max_tokens: 8192
    timeout: 600
  memory-llm:
    model: "mercury-2"
    base_url: "https://relay.public.cloud.xpress.ai/v1"
    api_key: "${XPRESSAI_API_TOKEN}"
    temperature: 0.3
    max_tokens: 4096
    timeout: 300
  system2-llm:
    model: "claude-opus-4-6"
    base_url: "https://relay.public.cloud.xpress.ai/v1"
    api_key: "${XPRESSAI_API_TOKEN}"
    temperature: 0.5
    max_tokens: 16384
    timeout: 600

modules:
  pfc:
    class: "modules.pfc.PFCOrchestrator"
    llm: main-llm
    system_prompt_file: "prompts/system.txt"
    max_thoughts: 10
    context_token_budget: 150000
  hippo:
    class: "modules.hippo.MemoryOrchestrator"
    llm: memory-llm
    memory_size: 10
    mid_term_memory_size: 3
    consolidation_interval: 300
  system2:
    class: "modules.system2.System2ThinkingProvider"
    llm: system2-llm
  thalamus:
    class: "modules.thalamus.Thalamus"
  meeseeks:
    class: "modules.meeseeks.MeeseeksProvider"
    llm: system2-llm
    meeseeks:
      - name: research
        description: "Delegate a focused research task."
        system_prompt: "You are a research specialist..."
  response:
    class: "modules.response_tool_provider.ResponseToolProvider"
  desktop:
    class: "modules.desktop.DesktopProvider"
  tool_logger:
    class: "modules.tool_logger.ToolLogger"
    max_result_chars: 30000

entry_point: pfc
```
## Sections

### llms

Declares named LLM connections. Each entry defines a model endpoint that modules can reference by key.
| Field | Type | Required | Description |
|---|---|---|---|
| `model` | string | Yes | Model identifier (e.g., `kimi-k2.6`, `mercury-2`, `claude-opus-4-6`) |
| `base_url` | string | Yes | API endpoint URL |
| `api_key` | string | Yes | API key. Supports `${ENV_VAR}` syntax for environment variable substitution |
| `temperature` | float | No | Sampling temperature (default varies by model) |
| `max_tokens` | int | No | Maximum tokens per response |
| `timeout` | int | No | Request timeout in seconds |
Use environment variable substitution (`${XPRESSAI_API_TOKEN}`) for API keys. Never hardcode secrets in `agent.yaml`.
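The loader's actual substitution code isn't shown in this reference, but the behavior can be sketched as a regex pass over string values; the function name and failure behavior here are illustrative assumptions, not XAIBO's API:

```python
import os
import re

_ENV_PATTERN = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def substitute_env(value: str) -> str:
    """Replace each ${VAR} occurrence with os.environ["VAR"].

    Raising KeyError on an unset variable surfaces a missing secret
    at config-load time rather than at request time.
    """
    return _ENV_PATTERN.sub(lambda m: os.environ[m.group(1)], value)

os.environ["XPRESSAI_API_TOKEN"] = "sk-example"  # for demonstration only
print(substitute_env("${XPRESSAI_API_TOKEN}"))   # sk-example
```

Strings without a `${...}` pattern pass through unchanged, so ordinary values like model names are unaffected.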
### modules

Declares the cognitive components that make up the agent. Each module has a `class` (Python import path) and module-specific configuration.

See Cognitive Architecture for detailed documentation of each module.
#### PFC (Prefrontal Cortex)

| Field | Type | Default | Description |
|---|---|---|---|
| `class` | string | -- | `modules.pfc.PFCOrchestrator` |
| `llm` | string | -- | Key from the `llms` section |
| `system_prompt_file` | string | -- | Path to the system prompt text file, relative to the agent's directory (e.g., `prompts/system.txt`) |
| `max_thoughts` | int | 10 | Maximum reasoning iterations per turn |
| `context_token_budget` | int | 150000 | Token budget for the context window |
#### Hippo (Memory Orchestrator)

| Field | Type | Default | Description |
|---|---|---|---|
| `class` | string | -- | `modules.hippo.MemoryOrchestrator` |
| `llm` | string | -- | Key from the `llms` section |
| `memory_size` | int | 10 | Number of recent messages in short-term memory |
| `mid_term_memory_size` | int | 3 | Number of mid-term memory results to retrieve |
| `consolidation_interval` | int | 300 | Seconds between memory consolidation cycles |
#### System 2

| Field | Type | Default | Description |
|---|---|---|---|
| `class` | string | -- | `modules.system2.System2ThinkingProvider` |
| `llm` | string | -- | Key from the `llms` section (typically a high-capability model) |
#### Thalamus

| Field | Type | Default | Description |
|---|---|---|---|
| `class` | string | -- | `modules.thalamus.Thalamus` |

No additional configuration. Acts as a safety validation layer.
#### Meeseeks

| Field | Type | Default | Description |
|---|---|---|---|
| `class` | string | -- | `modules.meeseeks.MeeseeksProvider` |
| `llm` | string | -- | Key from the `llms` section |
| `meeseeks` | list | -- | List of sub-agent definitions |

Each entry in `meeseeks`:

| Field | Type | Description |
|---|---|---|
| `name` | string | Tool name exposed to the PFC (e.g., `research`) |
| `description` | string | Tool description shown to the LLM |
| `system_prompt` | string | System prompt for the sub-agent |
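Multiple sub-agents are declared as additional list entries; the `summarize` entry below is an illustrative example, not part of the reference schema:

```yaml
meeseeks:
  class: "modules.meeseeks.MeeseeksProvider"
  llm: system2-llm
  meeseeks:
    - name: research
      description: "Delegate a focused research task."
      system_prompt: "You are a research specialist..."
    - name: summarize            # illustrative second sub-agent
      description: "Condense a long document into key points."
      system_prompt: "You are a summarization specialist..."
```

Each entry becomes a separate tool (`research`, `summarize`) that the PFC can invoke.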
#### Response Tool Provider

| Field | Type | Default | Description |
|---|---|---|---|
| `class` | string | -- | `modules.response_tool_provider.ResponseToolProvider` |

Provides the `respond` tool that the PFC uses to deliver its final answer.
#### Desktop Provider

| Field | Type | Default | Description |
|---|---|---|---|
| `class` | string | -- | `modules.desktop.DesktopProvider` |

Enables computer use via Claude Sonnet (a screenshot-action loop).
#### Tool Logger

| Field | Type | Default | Description |
|---|---|---|---|
| `class` | string | -- | `modules.tool_logger.ToolLogger` |
| `max_result_chars` | int | 30000 | Maximum characters per tool result before truncation |
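The truncation rule can be sketched as simple character slicing; the function name and the `[truncated]` marker are assumptions for illustration, and the real logger may annotate clipped output differently:

```python
def truncate_result(result: str, max_result_chars: int = 30000) -> str:
    """Clip a tool result to the configured character budget.

    Results at or under the budget pass through unchanged; longer
    results are cut and tagged so the LLM can see that content is missing.
    """
    if len(result) <= max_result_chars:
        return result
    return result[:max_result_chars] + "\n[truncated]"
```

Capping tool output keeps a single verbose tool call (e.g., a large file read) from consuming the PFC's entire `context_token_budget`.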
### entry_point

Specifies which module receives incoming messages. Almost always `pfc`.

```yaml
entry_point: pfc
```

The entry point module's `handle_message` method is called first. It then orchestrates calls to other modules (memory, tools, System 2) as needed.
## Validation Notes
- `entry_point` reference: The `entry_point` value must match a key defined in the `modules` section. If it references a nonexistent module, the agent will fail to start with a configuration error.
- `llm` references in modules: Each module's `llm` field must match a key defined in the `llms` section. A missing or misspelled LLM reference will cause a startup error.
- Required modules: The `pfc` and `response` modules are required for a functional agent (the PFC needs a way to deliver responses). All other modules are optional -- omit `system2` if you don't need deep reasoning, omit `desktop` if the agent doesn't need computer use, etc. The `thalamus` module is strongly recommended for safety validation but technically optional.
- `system_prompt_file`: Paths are resolved relative to the agent's directory (e.g., `/data/home/agents/{agentName}/`). A missing file will cause an error when the module initializes.
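The reference checks above can be expressed as a small pre-flight pass over the parsed config. This is a sketch of the rules, not XAIBO's actual validator; the function name and error messages are illustrative:

```python
def validate_config(config: dict) -> list[str]:
    """Return a list of reference errors found in a parsed agent.yaml dict."""
    errors = []
    llms = config.get("llms", {})
    modules = config.get("modules", {})

    # entry_point must name a declared module
    entry = config.get("entry_point")
    if entry not in modules:
        errors.append(f"entry_point {entry!r} is not a key in modules")

    # every module's llm field must name a declared LLM connection
    for name, spec in modules.items():
        llm = spec.get("llm")
        if llm is not None and llm not in llms:
            errors.append(f"module {name!r} references unknown llm {llm!r}")

    # pfc and response are required for a functional agent
    for required in ("pfc", "response"):
        if required not in modules:
            errors.append(f"required module {required!r} is missing")

    return errors
```

Running such a pass before instantiating any modules turns misspelled keys into a single clear error list instead of a startup crash partway through initialization.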