Create a Custom Agent

Template agents are a great starting point, but sometimes you need full control over an agent's behavior -- its LLM, reasoning strategy, memory, and tools. This tutorial shows you how to build a custom agent from scratch using XAIBO, the platform's declarative agent framework.

XAIBO (eXpress AI Bot Orchestration) lets you configure agents entirely in YAML. No framework code to write -- you declare what modules your agent uses, wire them together, and the platform handles the rest.

Prerequisites

  • A platform account (Getting Started)
  • A running agent workspace (you can start from any template and modify it, or use a blank workspace)
  • Familiarity with YAML syntax
  • A platform token for API access (Authenticate with the API)

Steps

1. Understand the agent directory structure

Every agent has its own directory containing its configuration, prompts, tools, and knowledge. The key file is agent.yaml -- this is the configuration that XAIBO reads at startup.

To access the agent's directory, open the agent's page in the platform UI and launch Xircuits Studio, which provides a file browser and editor for all files in the agent's workspace. You can also access agent files via the platform API if you prefer working from the command line.

A typical agent directory looks like this:

```
agents/my-agent/
├── agent.yaml            # Agent configuration (this is what you'll edit)
├── prompts/
│   └── system.txt        # System prompt defining the agent's role
├── tools/
│   └── my_tools.py       # Custom tool files (Python)
└── knowledge/
    └── ...               # Agent's personal knowledge base
```
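If you are starting from a blank workspace, the skeleton above can be created with a few lines of code. A minimal sketch (the `agents/my-agent` path is an example; substitute your own agent name):

```python
import os

# Create the agent directory skeleton described above.
base = "agents/my-agent"
for subdir in ("prompts", "tools", "knowledge"):
    os.makedirs(os.path.join(base, subdir), exist_ok=True)

# Empty placeholder files so the structure is complete from the start.
for relpath in ("agent.yaml", "prompts/system.txt"):
    path = os.path.join(base, relpath)
    if not os.path.exists(path):
        open(path, "w").close()
```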

2. Learn the key sections of agent.yaml

The agent.yaml file has four main sections:

| Section | Purpose |
| --- | --- |
| `llms` | Define LLM connections -- model, endpoint, API key, and generation parameters |
| `modules` | Wire together cognitive components that make up the agent's reasoning pipeline |
| `entry_point` | Specify which module handles incoming messages |
| `tools` | Reference tool provider modules |

3. Configure your LLM connections

The llms section defines the language models your agent can use. All agents connect to the Platform Relay at relay.public.cloud.xpress.ai/v1, which provides an OpenAI-compatible API.

```yaml
llms:
  main-llm:
    model: "kimi-k2.6"
    base_url: "https://relay.public.cloud.xpress.ai/v1"
    api_key: "${XPRESSAI_API_TOKEN}"
    temperature: 0.7
    max_tokens: 8192
```

You can define multiple LLM connections for different purposes:

| Connection name | Recommended model | Use case |
| --- | --- | --- |
| `main-llm` | kimi-k2.6 | Primary reasoning -- fast, good at tool use and general orchestration |
| `memory-llm` | mercury-2 | Memory operations -- indexing, retrieval, summarization |
| `system2-llm` | claude-opus-4-6 | Deep thinking -- complex analysis, planning, code generation |
info

The ${XPRESSAI_API_TOKEN} environment variable is automatically injected into agent containers at startup. You do not need to set it manually.
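The substitution itself is the familiar `${VAR}` expansion. As an illustration only (a sketch of how such a placeholder could be resolved against the environment, not XAIBO's actual loader):

```python
import os
import re

def expand_env(value: str) -> str:
    """Replace ${VAR} placeholders with environment variable values.
    Unknown variables are left untouched."""
    return re.sub(
        r"\$\{(\w+)\}",
        lambda m: os.environ.get(m.group(1), m.group(0)),
        value,
    )

# In practice the platform injects this variable into the container.
os.environ["XPRESSAI_API_TOKEN"] = "tok-123"
print(expand_env("${XPRESSAI_API_TOKEN}"))  # -> tok-123
```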

tip

Model names (such as kimi-k2.6, mercury-2, and claude-opus-4-6) may change over time as new models are added. Check the platform for the current list of available models.

4. Wire the cognitive modules

The modules section is where you assemble the agent's brain. XAIBO provides several cognitive components inspired by neuroscience:

| Module | Role |
| --- | --- |
| PFC (Prefrontal Cortex) | The orchestrator -- receives messages, plans actions, calls tools, and generates responses |
| Hippo (Hippocampus) | Long-term memory -- stores and retrieves past interactions and learned knowledge |
| System2 | Deep thinking -- activated for complex reasoning tasks that need step-by-step analysis |
| Thalamus | Safety and filtering -- screens inputs and outputs for policy compliance |
| Meeseeks | Sub-agents -- spawns temporary agents to handle specific subtasks |

Here is a minimal configuration with just the PFC orchestrator and a response tool:

```yaml
modules:
  pfc:
    class: "modules.pfc.PFCOrchestrator"
    llm: main-llm
    system_prompt_file: "prompts/system.txt"
    max_thoughts: 10

  response:
    class: "modules.response_tool_provider.ResponseToolProvider"
```
The max_thoughts parameter controls how many reasoning steps the PFC takes before it must produce a final response. Set it higher for agents that need to chain many tool calls.
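To build intuition for how this budget behaves, here is a sketch of a bounded reasoning loop. The `think` callback is a hypothetical stand-in for one PFC reasoning step (plan, tool call, or final answer), not a real XAIBO API:

```python
def run_pfc(message, think, max_thoughts=10):
    """Call `think` repeatedly until it signals completion or the
    thought budget runs out, then fall back to a forced answer."""
    thoughts = []
    for _ in range(max_thoughts):
        thought = think(message, thoughts)
        thoughts.append(thought)
        if thought.get("done"):
            return thought["response"]
    return "I could not finish reasoning within the thought budget."

# Example: a fake `think` that needs three steps before answering.
def fake_think(message, thoughts):
    if len(thoughts) < 2:
        return {"done": False}          # still planning / calling tools
    return {"done": True, "response": f"Answer to: {message}"}

print(run_pfc("What were Q3 sales?", fake_think))
```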

5. Set the entry point

The entry_point tells XAIBO which module receives incoming messages:

```yaml
entry_point: pfc
```

For most agents, this is the PFC module. Messages come in, the PFC reasons about them, calls tools as needed, and generates a response.
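Conceptually, the entry point is a lookup into the module table. A sketch with illustrative handlers (not real XAIBO modules):

```python
# Each module exposes a handler; entry_point selects which one
# receives incoming messages.
modules = {
    "pfc": lambda msg: f"PFC handled: {msg}",
    "response": lambda msg: f"Response tool: {msg}",
}
entry_point = "pfc"

def handle(message):
    if entry_point not in modules:
        raise KeyError(f"entry_point '{entry_point}' is not a defined module")
    return modules[entry_point](message)

print(handle("hello"))  # -> PFC handled: hello
```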

6. Write your system prompt

Create a file at prompts/system.txt that defines who the agent is and how it should behave:

```
You are a data analyst agent. Your job is to help the team understand
trends in sales data, generate reports, and answer questions about
business metrics.

When asked to analyze data:
1. Clarify the time range and metrics of interest.
2. Pull the relevant data using your tools.
3. Summarize findings with specific numbers.
4. Suggest follow-up questions or actions.

Keep responses concise and data-driven. Use tables when comparing
multiple values. If you are uncertain about a number, say so rather
than guessing.
```

The system prompt is the single most impactful piece of configuration. Spend time getting it right -- it shapes every interaction.

7. Put it all together

Here is a complete minimal agent.yaml. Note that a minimal agent does not require a tools section -- the PFC module discovers tools automatically from .py files in the tools/ directory:

```yaml
llms:
  main-llm:
    model: "kimi-k2.6"
    base_url: "https://relay.public.cloud.xpress.ai/v1"
    api_key: "${XPRESSAI_API_TOKEN}"
    temperature: 0.7
    max_tokens: 8192

modules:
  pfc:
    class: "modules.pfc.PFCOrchestrator"
    llm: main-llm
    system_prompt_file: "prompts/system.txt"
    max_thoughts: 10

  response:
    class: "modules.response_tool_provider.ResponseToolProvider"

entry_point: pfc
```
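To build intuition for the automatic tool discovery mentioned above, here is a sketch of directory-based discovery: load each `.py` file and collect its public functions. The real XAIBO loader may use different criteria (decorators, metadata, and so on):

```python
import importlib.util
import inspect
import pathlib

def discover_tools(tools_dir="tools"):
    """Load every .py file in tools_dir and return a mapping of
    "<file>.<function>" names to the public functions they define."""
    tools = {}
    for path in pathlib.Path(tools_dir).glob("*.py"):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        for name, func in inspect.getmembers(module, inspect.isfunction):
            if not name.startswith("_"):
                tools[f"{path.stem}.{name}"] = func
    return tools
```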

8. Deploy the agent

The platform reads agent.yaml from the agent's directory and starts the XAIBO runtime automatically. To deploy:

  1. Save your agent.yaml and prompts/system.txt in the agent's directory.
  2. Go to the Agents page in the platform UI.
  3. If the agent is already running, click Restart to pick up your changes.
  4. If this is a new agent, the platform provisions it on creation.

The agent container mounts your agent directory and starts the XAIBO runtime, which parses agent.yaml, initializes the modules, and begins listening for messages.
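The startup parsing step typically includes sanity checks on the configuration. A sketch of the kinds of checks involved (illustrative, not XAIBO's actual validation), run against the minimal config from step 7 represented as a parsed dict:

```python
def validate_config(config):
    """Return a list of human-readable configuration errors
    (empty list means the config passes these basic checks)."""
    errors = []
    llms = config.get("llms", {})
    modules = config.get("modules", {})
    if not modules:
        errors.append("no modules defined")
    entry = config.get("entry_point")
    if entry not in modules:
        errors.append(f"entry_point '{entry}' is not a defined module")
    for name, mod in modules.items():
        llm = mod.get("llm")
        if llm is not None and llm not in llms:
            errors.append(f"module '{name}' references unknown llm '{llm}'")
    return errors

# The minimal agent.yaml from step 7, as a parsed dict:
config = {
    "llms": {"main-llm": {"model": "kimi-k2.6"}},
    "modules": {
        "pfc": {"class": "modules.pfc.PFCOrchestrator", "llm": "main-llm"},
        "response": {"class": "modules.response_tool_provider.ResponseToolProvider"},
    },
    "entry_point": "pfc",
}
print(validate_config(config))  # -> []
```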

tip

When iterating on your agent's configuration, use the Restart button rather than deleting and re-creating the agent. Restarts pick up changes to agent.yaml, prompts, and tools without losing the agent's identity or conversation history.

9. Add more capabilities (optional)

Once you have a basic agent working, you can layer in additional modules:

Add memory:

```yaml
modules:
  # ... existing modules ...
  hippo:
    class: "modules.hippo.HippoMemory"
    llm: memory-llm

llms:
  # ... existing LLMs ...
  memory-llm:
    model: "mercury-2"
    base_url: "https://relay.public.cloud.xpress.ai/v1"
    api_key: "${XPRESSAI_API_TOKEN}"
    temperature: 0.3
    max_tokens: 4096
```

Add deep thinking for complex tasks:

```yaml
modules:
  # ... existing modules ...
  system2:
    class: "modules.system2.System2DeepThinking"
    llm: system2-llm

llms:
  # ... existing LLMs ...
  system2-llm:
    model: "claude-opus-4-6"
    base_url: "https://relay.public.cloud.xpress.ai/v1"
    api_key: "${XPRESSAI_API_TOKEN}"
    temperature: 0.5
    max_tokens: 16384
```

What you've done

  • Learned the XAIBO agent directory structure and configuration format
  • Configured LLM connections through the Platform Relay
  • Wired cognitive modules (PFC orchestrator, response tool provider)
  • Wrote a system prompt defining the agent's role
  • Deployed the agent and learned how to iterate on its configuration

Next steps

Your agent can reason and respond, but it does not have any custom tools yet. Head to Build a Custom Tool to give it domain-specific capabilities.


See also