User Guide 02: Configuration and Customization

Configuration and Customization

This guide covers common configuration tasks for developers familiar with LLMs who want to customize their assistant's behavior.

For complete configuration options, see Configuration-Reference.


Configuration Layers

The AI Assistant uses a three-layer configuration system:

Layer              Stored                        Purpose
Layer 1: Static    script.db.execution_config    Persistent settings that survive restarts
Layer 2: Context   Per-context overrides         Different behavior for different situations
Layer 3: LLM       Runtime assessment            LLM decides optimal approach per-task

Key principle: Each layer can only restrict, never expand. A dangerous tool disabled in Layer 1 cannot be enabled in Layer 2.
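
To make the restrict-only rule concrete, here is a minimal sketch in Python (the function and argument names are illustrative, not the actual implementation): the effective tool set can only shrink as layers are applied.

# Illustrative sketch of restrict-only layering: each layer can only
# narrow what the previous layer allows, never widen it.
def effective_tools(static_tools, context_filter=None):
    allowed = set(static_tools)          # Layer 1: persistent config
    if context_filter is not None:       # Layer 2: per-context override
        allowed &= set(context_filter)   # intersection can only shrink the set
    return allowed                       # Layer 3 (the LLM) chooses from whatever is left

# A tool missing from Layer 1 stays unavailable even if a context filter names it.
print(effective_tools({"say", "pose"}, {"say", "spawn_prototype"}))  # {'say'}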


Common Configuration Tasks

Changing the Tick Rate

The tick rate controls how often the assistant "wakes up" to process messages and make decisions.

> aisetup/config mybot set tick_rate 10
Value (seconds)   Use Case
3-5               Interactive testing, debugging
10-15             Normal operation
30-60             Background tasks, low-urgency assistants

Lower tick rates increase responsiveness but use more API calls.
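
As a rough feel for the trade-off, assuming one LLM call per tick (the single-action default), the arithmetic looks like this:

# Back-of-the-envelope API usage: smaller tick_rate (seconds) means more calls per hour.
for tick_rate in (3, 10, 30, 60):
    calls_per_hour = 3600 / tick_rate
    print(f"tick_rate={tick_rate:>2}s -> ~{calls_per_hour:.0f} calls/hour")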

Adjusting Temperature

Temperature controls randomness in LLM responses.

> aisetup/config mybot set llm_temperature 0.7
Value     Effect
0.0       Deterministic, consistent responses
0.5       Balanced creativity
0.7-1.0   More creative, varied responses
1.5+      Very creative, potentially erratic

Leave llm_temperature unset (None) to use the provider's default.

Customizing the System Prompt

The system prompt defines the assistant's persona and behavior.

View current prompt:

> aisetup/config mybot show system_prompt

Set a custom prompt:

> aisetup/template mybot create my_persona

This opens an editor or creates a template you can customize. The prompt can include placeholders:

  • {tick_rate} - Current tick rate
  • {reflection_interval} - Ticks between reflections
  • {tool_schemas} - Auto-generated tool documentation

Apply a template:

> aisetup/template mybot apply my_persona

Example custom prompt:

You are {name}, a wise wizard assistant living in the game world.
You speak in an archaic, formal manner and offer cryptic but helpful advice.

You run on a tick loop every {tick_rate} seconds.
When you have nothing to do, contemplate the mysteries of the universe.

Available tools: {tool_schemas}
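
The placeholders above are presumably filled by ordinary string substitution when the prompt is rendered; a minimal sketch (the values and the rendering call are illustrative, not the actual template engine):

# Illustrative placeholder substitution for a prompt template.
template = (
    "You are {name}, a wise wizard assistant living in the game world.\n"
    "You run on a tick loop every {tick_rate} seconds.\n"
    "Available tools: {tool_schemas}"
)
print(template.format(
    name="mybot",                              # assistant name
    tick_rate=10,                              # current tick rate
    tool_schemas="say, pose, search_object",   # stand-in for auto-generated docs
))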

Enabling Multi-Action Loops (ReAct)

By default, the assistant executes one tool per tick. Enable ReAct loops for multi-step tasks:

> aisetup/config mybot set multi_action_enabled true
> aisetup/config mybot set max_iterations_per_tick 5

With this enabled, the assistant can:

  • Chain multiple SAFE_CHAIN tools (inspections, searches)
  • Stop automatically at TERMINAL tools (say, page)
  • Execute DANGEROUS tools once then wait

When to enable:

  • Complex multi-step tasks
  • Research and exploration workflows
  • Autonomous building operations

When to keep disabled:

  • Simple Q&A assistants
  • Cost-sensitive deployments
  • Heavily rate-limited APIs

Configuring Sleep Schedules

Sleep mode allows the assistant to consolidate memories and reduce API usage during quiet periods.

Enable automatic sleep:

> aisetup/config mybot set sleep_schedule.enabled true
> aisetup/config mybot set sleep_schedule.sleep_start_hour 2
> aisetup/config mybot set sleep_schedule.sleep_duration_hours 4

This schedules sleep from 2:00 AM to 6:00 AM (server time).
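
The window is simply the start hour plus the duration, wrapping past midnight when needed; a sketch of that arithmetic (not the actual scheduler code):

# Check whether a given server-time hour falls inside the sleep window.
def in_sleep_window(hour, start_hour=2, duration_hours=4):
    end_hour = (start_hour + duration_hours) % 24
    if start_hour <= end_hour:
        return start_hour <= hour < end_hour
    return hour >= start_hour or hour < end_hour   # window wraps past midnight

print(in_sleep_window(3))                      # True  (2:00-6:00)
print(in_sleep_window(7))                      # False
print(in_sleep_window(23, start_hour=22))      # True  (22:00-02:00 wraps midnight)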

Sleep phases:

  1. compacting - Actively processing memories, cannot be woken
  2. dreaming - Consolidation complete, can be woken by urgent events or tools

Manual sleep control:

> aisetup/config mybot set operating_mode sleep   # Force sleep
> aisetup/config mybot set operating_mode awake   # Force wake

Token Budget Management

Control how much context the assistant can use:

> aisetup/config mybot set max_context_tokens 50000
> aisetup/config mybot set max_history 30

Compaction thresholds:

  • compact_sleep_threshold (default 0.7) - Trigger during sleep at 70% capacity
  • compact_emergency_threshold (default 0.8) - Force compaction at 80%

Preserve recent messages during compaction:

> aisetup/config mybot set compact_preserve_window 25
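
Taken together, the budget settings drive a decision roughly like the sketch below (the function is illustrative; only the setting names and default thresholds come from above):

# Illustrative compaction decision based on the thresholds above.
def compaction_action(estimated_tokens, max_context_tokens=50_000,
                      sleep_threshold=0.7, emergency_threshold=0.8,
                      asleep=False):
    usage = estimated_tokens / max_context_tokens
    if usage >= emergency_threshold:
        return "emergency_compact"      # forced at 80% regardless of state
    if asleep and usage >= sleep_threshold:
        return "sleep_compact"          # opportunistic at 70% while sleeping
    return "none"

print(compaction_action(36_000, asleep=True))   # sleep_compact (72% usage)
print(compaction_action(41_000))                # emergency_compact (82% usage)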

Understanding Execution Contexts

The assistant operates in different contexts with independent configurations:

Context            When                      Description
tick_event         User message received     Processing user input
tick_autonomous    No pending events         Self-directed behavior
reflection         Every N ticks             Self-assessment
reflection_cont    After reflection tool     Continuing reflection
sleep_consolidate  During sleep              Memory processing
goal_decompose     Goal breakdown            Task planning

Context-Specific Configuration

Each context can have its own settings:

> aisetup/context mybot tick_event set max_iterations 3
> aisetup/context mybot tick_autonomous set max_iterations 1

This allows:

  • More iterations when responding to users
  • Conservative behavior during autonomous periods
  • Restricted tools in sensitive contexts

Tool Filtering Per Context

Restrict available tools for specific contexts:

> aisetup/context mybot tick_autonomous set tool_filter say,pose,search_object

This limits autonomous behavior to safe, non-destructive tools.


Memory Systems

Session Memory

Session memory stores facts and patterns within the current context window.

The assistant manages this automatically, but you can:

  • Clear it: aihistory/clear/conversation mybot
  • View it: aihistory/conversation mybot

Entity Profiles

Entity profiles track persistent knowledge about players, NPCs, and objects.

Structure:

{
    "attributes": {},           # Consolidated facts (stable knowledge)
    "observations": [],         # Recent observations (episodic)
    "relationship": {
        "state": "acquaintance",  # stranger/acquaintance/friend/ally
        "favorability": 0.5,      # 0.0 to 1.0
        "interaction_count": 15
    }
}

Relationship thresholds (Dunbar-inspired):

State         Favorability
stranger      < 0.25
acquaintance  0.25 - 0.50
friend        0.50 - 0.75
ally          ≥ 0.75
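
Expressed as code, the thresholds amount to a simple mapping; a sketch (how exact boundary values like 0.50 or 0.75 are assigned is an assumption here):

# Illustrative mapping from favorability to relationship state.
# Which state gets exactly 0.50 or 0.75 is assumed, not confirmed.
def relationship_state(favorability):
    if favorability >= 0.75:
        return "ally"
    if favorability >= 0.50:
        return "friend"
    if favorability >= 0.25:
        return "acquaintance"
    return "stranger"

print(relationship_state(0.6))   # friend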

Entity consolidation happens during sleep mode - observations become attributes.

Journal System

The journal stores episodic memories with importance scores.

View journal:

> aihistory/journal mybot

Importance scoring:

  • 1-3: Low importance (routine, repetitive)
  • 4-6: Medium importance (notable events)
  • 7-9: High importance (significant discoveries, achievements)
  • 10: Critical (major events, breakthroughs)

Entries with importance ≤ 3 are pruned during sleep consolidation.
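
That pruning rule is effectively a filter over the journal during consolidation; a sketch (the entry shape here is hypothetical):

# Illustrative pruning pass: importance <= 3 is dropped during sleep consolidation.
journal = [
    {"text": "Greeted a passing player", "importance": 2},
    {"text": "Discovered the hidden vault", "importance": 8},
]
kept = [entry for entry in journal if entry["importance"] > 3]
print(kept)   # only the high-importance entry survives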

Optional: Semantic Memory (Mem0)

For persistent, semantically-searchable memory across sessions:

Install:

pip install mem0ai

Enable:

> aisetup/config mybot set memory_enabled true

Configure extraction (important for persona protection):

In mygame/server/conf/settings.py:

AI_MEMORY_CONFIG = {
    "llm": {
        "provider": "openai",
        "config": {"model": "gpt-4o-mini", "temperature": 0.1}
    },
    "custom_fact_extraction_prompt": """
    Extract only objective facts about the world, users, and events.
    Use third-person, factual language.
    Never extract assistant identity or personality information.
    """,
    "version": "v1.1"
}

The custom extraction prompt prevents the assistant's persona from being contaminated by stored memories.


Tool Categories and Control

Understanding Tool Categories

Category        Behavior                     Examples
SAFE_CHAIN      Can loop freely              inspect_location, search_object, get_attributes
TERMINAL        Ends loop, awaits response   say, pose, page, whisper
DANGEROUS       Single execution per tick    command, set_attribute, spawn_prototype
ASYNC_REQUIRED  Network/IO operations        store_memory, delegate_task

Why Categories Matter

In a ReAct loop (see the sketch after these lists):

  1. SAFE_CHAIN tools execute, loop continues
  2. TERMINAL tools execute, loop ends (awaiting user response)
  3. DANGEROUS tools execute once, then tick ends
  4. ASYNC_REQUIRED tools have special handling

This prevents:

  • Infinite loops of dangerous operations
  • Multiple responses to a single message
  • Blocking on I/O operations
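
A minimal sketch of that control flow (tool names and categories come from the table above; the loop itself is illustrative, not the actual scheduler):

# Illustrative ReAct loop control driven by tool category.
CATEGORIES = {
    "inspect_location": "SAFE_CHAIN",
    "say": "TERMINAL",
    "set_attribute": "DANGEROUS",
}

def run_tick(planned_tools, max_iterations=5):
    executed = []
    for tool in planned_tools[:max_iterations]:
        executed.append(tool)                         # pretend execution
        category = CATEGORIES.get(tool, "SAFE_CHAIN")
        if category == "TERMINAL":
            break                                     # await the user's response
        if category == "DANGEROUS":
            break                                     # one dangerous action, then wait
    return executed

print(run_tick(["inspect_location", "say", "inspect_location"]))
# ['inspect_location', 'say'] -- the TERMINAL tool ends the loop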

Restricting Tools

Globally disable a tool:

> aisetup/context mybot all set disabled_tools spawn_prototype,execute_batchcode

Per-context restrictions:

> aisetup/context mybot tick_autonomous set tool_filter say,pose,search_object

LLM Provider Options

OpenRouter

Access multiple providers through one API:

> aisetup/config mybot set llm_provider openrouter
> aisetup/config mybot set llm_auth_token sk-or-v1-YOUR_KEY
> aisetup/config mybot set llm_model openai/gpt-4o-mini

Provider routing control:

> aisetup/config mybot set-extra provider_order ["Anthropic", "OpenAI"]
> aisetup/config mybot set-extra provider_allow_fallbacks true
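
These extras map onto OpenRouter's provider routing options; as an assumption about how they end up in the outgoing request, the payload would look roughly like this (field names follow OpenRouter's documented "provider" object; the surrounding assembly is illustrative):

# Illustrative chat-completions payload with OpenRouter provider routing.
payload = {
    "model": "openai/gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}],
    "provider": {
        "order": ["Anthropic", "OpenAI"],   # from provider_order
        "allow_fallbacks": True,            # from provider_allow_fallbacks
    },
}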

OpenAI (Direct)

> aisetup/config mybot set llm_provider openai
> aisetup/config mybot set llm_auth_token sk-YOUR_KEY
> aisetup/config mybot set llm_model gpt-4o

Anthropic (Direct)

> aisetup/config mybot set llm_provider anthropic
> aisetup/config mybot set llm_auth_token sk-ant-YOUR_KEY
> aisetup/config mybot set llm_model claude-3-5-sonnet-20241022

Ollama (Local)

For local, privacy-focused deployment:

> aisetup/config mybot set llm_provider ollama
> aisetup/config mybot set llm_api_url http://localhost:11434/v1/chat/completions
> aisetup/config mybot set llm_model llama3.2

Note: Local models may have limited tool-calling support.

Provider Comparison

Provider    Tool Calling        Caching        Best For
OpenRouter  Yes (most models)   Via Anthropic  Flexibility, cost optimization
OpenAI      Yes                 Automatic      Reliability, GPT-4 access
Anthropic   Yes                 Explicit       Claude models, long context
Ollama      Limited             No             Privacy, local testing

Monitoring and Debugging

aihistory Command

> aihistory mybot                    # Summary
> aihistory/conversation mybot=20    # Last 20 messages
> aihistory/execution mybot=10       # Last 10 tool executions
> aihistory/goals mybot              # Goal history
> aihistory/journal mybot            # Journal entries

Execution Log

Each tool execution is logged with the following fields (an example record is sketched after the list):

  • Timestamp
  • Tool name
  • Parameters (sanitized)
  • Success/failure
  • Execution time
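
An individual record might look something like this (the field names are hypothetical, shown only to illustrate what gets captured):

# Hypothetical shape of one execution-log record.
log_entry = {
    "timestamp": "2025-12-09T04:12:38-05:00",
    "tool": "search_object",
    "params": {"query": "rusty key"},   # sanitized before logging
    "success": True,
    "duration_ms": 412,
}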

Understanding Emergency Stop

Emergency stop triggers after 5 consecutive failures (configurable).

Check status:

> aisetup mybot

Look for:

  • emergency_stop: True/False
  • consecutive_errors: N

Adjust threshold:

> aisetup/config mybot set max_consecutive_errors 10

Clear and recover:

> aisetup/reset mybot
> aisetup/start mybot
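
Behind the scenes this is a consecutive-failure counter; a sketch under that assumption (class and method names are illustrative, though consecutive_errors and emergency_stop mirror the status fields above):

# Illustrative consecutive-error tracking behind emergency stop.
class ErrorTracker:
    def __init__(self, max_consecutive_errors=5):
        self.max_consecutive_errors = max_consecutive_errors
        self.consecutive_errors = 0
        self.emergency_stop = False

    def record(self, success):
        if success:
            self.consecutive_errors = 0          # any success resets the streak
            return
        self.consecutive_errors += 1
        if self.consecutive_errors >= self.max_consecutive_errors:
            self.emergency_stop = True           # further ticks halt until reset

tracker = ErrorTracker()
for _ in range(5):
    tracker.record(False)
print(tracker.emergency_stop)   # True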

Debug Tips

  1. Increase verbosity: Set DEBUG = True in settings.py
  2. Watch execution log: aihistory/execution mybot=20
  3. Check token usage: aisetup mybot shows estimated_tokens
  4. Review conversation: aihistory/conversation mybot

Next Steps

For the complete list of configuration options, see Configuration-Reference.

Last updated: 2025-12-09