Table of Contents
- Architecture: Core Engine
- Overview
- 1. AssistantScript
- Key Lifecycle Hooks
- Persistent Configuration (db.*)
- Persistent State (db.*)
- Non-Persistent State (ndb.*)
- Tick Loop Flow (at_tick)
- 2. ReAct Loop (tool_execution.py)
- Execute React Loop Flow
- Termination Conditions
- Retry Configuration
- Token Advisory System
- Tool Caching
- 3. Message Classification (assistant_character.py)
- Classification Results
- 8-Rule Classification System
- OOB-First Security Model
- Source Trust Scoring
- Key Patterns
- Conversation History Format
- Key Files
Architecture: Core Engine
Layer 2 - Tick Loop, ReAct Execution, and Message Classification
Overview
The core engine consists of three main components:
- AssistantScript (1,589 lines) - Main tick loop orchestrator
- tool_execution.py (654 lines) - ReAct loop and tool dispatch
- assistant_character.py (1,499 lines) - Message classification and entity memory
1. AssistantScript
The AssistantScript is the heart of the AI assistant, managing configuration, state, and the tick loop.
Key Lifecycle Hooks
| Hook | Purpose | Lines |
|---|---|---|
| at_script_creation() | Initialize 80+ persistent attributes | 67-299 |
| at_start() | Restore NDB, register tools, init RAG/memory | 300-458 |
| at_tick() | Main async loop: process events, call LLM, execute tools | 568-832 |
Persistent Configuration (db.*)
```python
# Core identity
db.assistant_key        # Unique identifier
db.tick_rate            # Seconds between ticks (default: 5)
db.enabled              # Whether tick loop is active

# LLM settings
db.llm_provider         # Provider name
db.llm_model            # Model identifier
db.system_prompt        # Fallback prompt template
db.system_prompt_native # Native tool-calling prompt
db.max_context_tokens   # Context window limit

# Execution config (Layer 1 of 3-layer system)
db.execution_config = {
    "multi_action_enabled": True,     # Enable ReAct loops
    "max_iterations_per_tick": 5,     # Max tool calls per tick (1-10)
    "task_assessment_enabled": False, # Enable LLM complexity assessment
    "sub_agents_enabled": False,      # Enable delegation
    "sub_agent_budget": 3,            # Max concurrent sub-agents
}

# Sleep configuration
db.sleep_schedule = {
    "enabled": False,
    "sleep_start_hour": 2,
    "sleep_duration_hours": 4,
    "tick_rate_sleep": 60,
}
```
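To show how these Layer-1 settings might be consumed, here is a minimal sketch; the helper name, defaults dictionary, and clamping logic are illustrative assumptions, not the actual accessor:

```python
# Hypothetical helper: read an execution_config value with safe defaults
# and enforce the documented 1-10 range on iterations per tick.
_EXECUTION_DEFAULTS = {
    "multi_action_enabled": True,
    "max_iterations_per_tick": 5,
    "task_assessment_enabled": False,
    "sub_agents_enabled": False,
    "sub_agent_budget": 3,
}

def get_execution_setting(config, key):
    """Return a setting from the stored config, falling back to defaults."""
    value = (config or {}).get(key, _EXECUTION_DEFAULTS[key])
    if key == "max_iterations_per_tick":
        value = max(1, min(10, int(value)))  # clamp to the documented 1-10 range
    return value
```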
Persistent State (db.*)
```python
db.pending_events         # FIFO queue of events to process
db.conversation_history   # Chat Completion format messages
db.current_tool_call      # Tool execution state for continuation
db.current_goals          # Goal hierarchy with version
db.operating_mode         # "awake" or "sleep"
db.sleep_phase            # "compacting" or "dreaming"
db.scheduled_wake_time    # ISO timestamp for agent-initiated wake
db.activity_since_wake    # Tool executions since last wake
db.estimated_tokens       # Current context token usage
db.memory_links           # A-MEM style memory link graph
db.consolidation_progress # Atomic compaction state tracking
db.reflection_state       # Cumulative importance for reflection
```
Non-Persistent State (ndb.*)
```python
# Recreated in at_start() after server restart
ndb.mem0_client              # Memory client (network connection)
ndb.circuit_breaker_registry # Per-tool circuit breakers
ndb.tool_cache               # Tool result cache
ndb.active_sub_agents        # Count of running delegates
```
Tick Loop Flow (at_tick)
```
+-------------------------------------------------------------------------+
|                              at_tick() Entry                            |
+------------------------------------+------------------------------------+
                                     |
                    +----------------v----------------+
                    |        Early Exit Checks        |
                    |   - is_ticking? (collision)     |
                    |   - enabled?                    |
                    |   - emergency_stopped?          |
                    +----------------+----------------+
                                     |
                    +----------------v----------------+
                    |      Emergency Compaction       |
                    |   If awake AND tokens >= 80%    |
                    |   → pre_compact_extraction()    |
                    |   → compact_conversation()      |
                    +----------------+----------------+
                                     |
                    +----------------v----------------+
                    |          Mode Dispatch          |
                    |   sleep → _run_sleep_tick()     |
                    |   awake → continue below        |
                    +----------------+----------------+
                                     |
                    +----------------v----------------+
                    |        Reflection Check         |
                    |   ticks_since_reflection >=     |
                    |   reflection_interval?          |
                    +----------------+----------------+
                                     |
                    +----------------v----------------+
                    |     Determine Context Type      |
                    |   - tick_event (has event)      |
                    |   - tick_autonomous (no event)  |
                    +----------------+----------------+
                                     |
                    +----------------v----------------+
                    |      Get Execution Pattern      |
                    |       3-layer composition       |
                    |    (static → context → LLM)     |
                    +----------------+----------------+
                                     |
               +---------------------+---------------------+
               | single_action                  react_loop |
               v                                           v
      +-----------------+                         +-----------------+
      | Execute Single  |                         | execute_react_  |
      |   Tool Call     |                         |     loop()      |
      +-----------------+                         +-----------------+
```
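The gating at the top of the loop can be expressed as plain predicates. This is a simplified sketch; the function names and flag plumbing are illustrative assumptions, not the real method signatures:

```python
def should_skip_tick(is_ticking, enabled, emergency_stopped):
    """Mirror the early-exit checks at the top of at_tick().
    Returns a reason string when the tick should be skipped, else None."""
    if is_ticking:            # collision: previous tick still running
        return "collision"
    if not enabled:           # tick loop disabled
        return "disabled"
    if emergency_stopped:     # operator hit the emergency stop
        return "emergency_stop"
    return None

def needs_emergency_compaction(estimated_tokens, max_context_tokens, mode):
    """Awake agents compact when usage reaches 80% of the context window."""
    return mode == "awake" and estimated_tokens >= 0.8 * max_context_tokens
```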
2. ReAct Loop (tool_execution.py)
The ReAct loop enables multi-action execution within a single tick.
Execute React Loop Flow
```
+-------------------------------------------------------------------------+
|                        execute_react_loop() Entry                       |
+------------------------------------+------------------------------------+
                                     |
                    +----------------v----------------+
                    |     For i in max_iterations     |<--- next iteration
                    +----------------+----------------+     (SAFE_CHAIN)
                                     |
                    +----------------v----------------+
                    |      Pre-iteration Checks       |
                    |   - Token usage < 80%?          |
                    |   - Iteration < max?            |
                    +----------------+----------------+
                                     | Continue
                    +----------------v----------------+
                    |       Build LLM Messages        |
                    |       (context-specific)        |
                    +----------------+----------------+
                                     |
                    +----------------v----------------+
                    |       Call LLM for Action       |
                    +----------------+----------------+
                                     |
                    +----------------v----------------+
                    |         Parse Tool Call         |
                    +----------------+----------------+
                                     |
               +---------------------+---------------------+
               | noop                            tool_call |
               v                                           v
      +-----------------+                         +-----------------+
      |    TERMINATE    |                         |  Validate Tool  |
      |    (LLM done)   |                         |  Check Category |
      +-----------------+                         +--------+--------+
                                                           |
                                           +---------------v---------------+
                                           |          Execute Tool         |
                                           |       (with cache check)      |
                                           |       (with retry logic)      |
                                           +---------------+---------------+
                                                           |
                                           +---------------v---------------+
                                           |     Inject Token Advisory     |
                                           +---------------+---------------+
                                                           |
                          +--------------------+-----------+----------+
                          | TERMINAL           | DANGEROUS            | SAFE_CHAIN
                          v                    v                      v
                 +-----------------+  +-----------------+  +-----------------+
                 |    TERMINATE    |  |    TERMINATE    |  | Next Iteration  |
                 | (await response)|  |  (safety limit) |  | (back to top)   |
                 +-----------------+  +-----------------+  +-----------------+
```
Termination Conditions
| Condition | Reason Code | Behavior |
|---|---|---|
| noop returned | noop | LLM explicitly signaled completion |
| TERMINAL tool | terminal_tool | Communication tool (say, page) - await response |
| DANGEROUS tool | dangerous_tool | State modification - single execution per tick |
| Token critical | critical_tokens | 80%+ context usage - force response |
| Max iterations | max_iterations | Config limit reached |
| LLM error | llm_error | Network/API failure |
| Parse error | parse_error | Invalid LLM response format |
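These reason codes lend themselves to an enum plus a category lookup. The sketch below is illustrative: the string codes come from the table above, but the enum and helper are assumptions, not the actual implementation:

```python
from enum import Enum

class TerminationReason(Enum):
    NOOP = "noop"
    TERMINAL_TOOL = "terminal_tool"
    DANGEROUS_TOOL = "dangerous_tool"
    CRITICAL_TOKENS = "critical_tokens"
    MAX_ITERATIONS = "max_iterations"
    LLM_ERROR = "llm_error"
    PARSE_ERROR = "parse_error"

def reason_for_category(category):
    """Map a tool's safety category to a termination reason, if any.
    SAFE_CHAIN tools return None: the loop continues to the next iteration."""
    return {
        "TERMINAL": TerminationReason.TERMINAL_TOOL,
        "DANGEROUS": TerminationReason.DANGEROUS_TOOL,
    }.get(category)
```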
Retry Configuration
```python
_TOOL_RETRY_CONFIG = RetryConfig(
    max_attempts=4,
    backoff_base=1.0,        # Initial delay
    backoff_max=10.0,        # Maximum delay
    backoff_multiplier=2.0,  # Exponential factor
    jitter=True,             # Randomize to prevent thundering herd
)
```
An error is treated as retryable when its message contains one of: timeout, connection, network, temporary, rate limit, or try again.
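Under that configuration, the delay sequence between attempts can be computed as follows. This is a sketch; the exact jitter formula (uniform 0.5x-1.5x) is an assumption:

```python
import random

def backoff_delays(max_attempts=4, base=1.0, maximum=10.0,
                   multiplier=2.0, jitter=False):
    """Compute the sleep before each retry under exponential backoff.
    There is no delay after the final attempt, so max_attempts - 1 values."""
    delays = []
    delay = base
    for _ in range(max_attempts - 1):
        d = delay
        if jitter:
            d *= random.uniform(0.5, 1.5)  # assumed jitter strategy
        delays.append(min(d, maximum))     # cap at backoff_max
        delay *= multiplier
    return delays
```

With the defaults above (jitter off for determinism), the attempts are spaced 1s, 2s, and 4s apart; a longer run of attempts is capped at the 10s maximum.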
Token Advisory System
Dual-trigger warning injected into tool results:
| Level | Threshold | Message | Loop Behavior |
|---|---|---|---|
| Warning | 60% | "Consider concluding soon" | Continue |
| Critical | 80% | "Provide final response now" | Terminate |
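The dual-trigger check can be sketched as a single function. The advisory strings here are paraphrased from the table above, not the exact injected text:

```python
def token_advisory(estimated_tokens, max_context_tokens):
    """Return (advisory_text, terminate) for the dual-trigger thresholds.
    The advisory is appended to the tool result seen by the LLM."""
    usage = estimated_tokens / max_context_tokens
    if usage >= 0.8:   # critical: force a final response
        return "[TOKEN CRITICAL] Provide final response now", True
    if usage >= 0.6:   # warning: nudge toward wrapping up
        return "[TOKEN WARNING] Consider concluding soon", False
    return None, False
```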
Tool Caching
```python
# Cache configuration per tool
tool.cacheable = True
tool.cache_scope = "tick"     # Cleared each tick
tool.cache_scope = "session"  # Persists until restart
tool.cache_ttl = 300.0        # Time-to-live in seconds

# Cache lookup before execution (params normalized into a hashable key;
# a tuple cannot splat **params directly)
cache_key = (tool_name, tuple(sorted(params.items())))
cached_result = tool_cache.get(cache_key, scope=tool.cache_scope)
```
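A minimal cache along these lines might look like the following. This is an illustrative sketch, not the real ToolResultCache; the method names and TTL handling are assumptions:

```python
import time

class ToolResultCache:
    """Sketch of a tick/session-scoped tool result cache with TTL expiry."""

    def __init__(self):
        self._stores = {"tick": {}, "session": {}}

    def make_key(self, tool_name, params):
        # Order-stable, hashable key from tool name and parameters
        return (tool_name, tuple(sorted(params.items())))

    def put(self, key, result, scope="tick", ttl=300.0):
        self._stores[scope][key] = (result, time.monotonic() + ttl)

    def get(self, key, scope="tick"):
        entry = self._stores[scope].get(key)
        if entry is None:
            return None
        result, expires = entry
        if time.monotonic() >= expires:    # lazy TTL expiry on read
            del self._stores[scope][key]
            return None
        return result

    def clear_tick_scope(self):
        """Called at the start of each tick for 'tick'-scoped entries."""
        self._stores["tick"].clear()
```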
3. Message Classification (assistant_character.py)
The AssistantCharacter handles message perception with a security-first approach.
Classification Results
| Result | Meaning | Action |
|---|---|---|
| TRIGGER | Message requires AI response | Add to pending_events, wake if sleeping |
| CONTEXT | Informational, store for awareness | Add to context buffer (no response) |
| CAPTURE | Command output being captured | Append to tool buffer |
| IGNORE | Discard message | No action |
8-Rule Classification System
Rules are evaluated in order. First match wins.
Rule 0: INTERACTION DISABLED
Condition: interaction_enabled == False
Result: IGNORE
Rule 1: SELF-MESSAGE DETECTION
Condition: sender == self
1a. If assistant_message flag → IGNORE (prevent loops)
1b. If command_output_buffer active → CAPTURE
1c. Otherwise → CONTEXT
Rule 2: DIRECT ADDRESSING (@mention)
Condition: enable_addressing AND @mention matches key
Result: TRIGGER
Rule 3: DIRECT MESSAGES
Condition: source_type in ["whisper", "page"]
Result: TRIGGER
Rule 4: ASSISTANT CROSS-TALK
Condition: assistant_message flag from another assistant
Result: CONTEXT (prevent assistant-to-assistant loops)
Rule 5: SENDER HAS TRIGGER PERMISSION
Condition: sender in trigger_permissions
Result: TRIGGER
Rule 6: CHANNEL CONFIGURATION
Condition: source_type == "channel" AND channel in configs
Result: Based on channel config (trigger/context/ignore)
Rule 7: DEFAULT FALLBACK
Condition: No other rules matched
Result: IGNORE
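Because the rules are evaluated in order with first-match-wins, the chain reduces to a straight-line function. This sketch uses illustrative field names on a plain dict rather than the real message API:

```python
def classify(msg):
    """Simplified first-match-wins sketch of the 8-rule chain.
    `msg` is a dict of the fields each rule inspects (names illustrative)."""
    if not msg.get("interaction_enabled", True):          # Rule 0
        return "IGNORE"
    if msg.get("from_self"):                              # Rule 1
        if msg.get("assistant_message"):
            return "IGNORE"       # 1a: prevent self-trigger loops
        if msg.get("command_output_buffer_active"):
            return "CAPTURE"      # 1b: capturing command output
        return "CONTEXT"          # 1c
    if msg.get("addressed_to_me"):                        # Rule 2: @mention
        return "TRIGGER"
    if msg.get("source_type") in ("whisper", "page"):     # Rule 3: DMs
        return "TRIGGER"
    if msg.get("assistant_message"):                      # Rule 4: cross-talk
        return "CONTEXT"
    if msg.get("sender_has_trigger_permission"):          # Rule 5
        return "TRIGGER"
    channel_cfg = msg.get("channel_config")               # Rule 6
    if msg.get("source_type") == "channel" and channel_cfg:
        return channel_cfg.upper()  # "trigger"/"context"/"ignore"
    return "IGNORE"                                       # Rule 7: default
```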
OOB-First Security Model
Message metadata is prioritized by trustworthiness:
Priority (Highest to Lowest):
1. **kwargs (SERVER-GENERATED)
- Set by Evennia's message routing
- Cannot be spoofed by users
2. tuple OOB data
- Embedded by Evennia's message system
- from_channel, type, senders keys
3. Pattern matching (HEURISTIC ONLY)
- Fallback for legacy systems
- NEVER use for security decisions
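The priority order can be sketched as a short resolver. Field names and the pattern heuristic below are illustrative assumptions; only the kwargs-first, OOB-second, pattern-last ordering comes from the model above:

```python
def resolve_source_type(kwargs_meta, oob_meta, text):
    """OOB-first resolution: trust server-generated kwargs first, then
    tuple OOB data, and fall back to pattern matching only as a heuristic
    (never for security decisions). Returns (source_type, evidence_tier)."""
    if "source_type" in kwargs_meta:      # 1. server-generated, unspoofable
        return kwargs_meta["source_type"], "kwargs"
    if "type" in oob_meta:                # 2. embedded OOB data
        return oob_meta["type"], "oob"
    if text.startswith("You paged"):      # 3. hypothetical pattern fallback
        return "page", "pattern"
    return "say", "pattern"
```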
Source Trust Scoring
Messages include trust metadata for memory storage:
| Source Type | Trust Score | Rationale |
|---|---|---|
| Whisper/Page | 0.9 | Direct, private communication |
| @Addressed | 0.7 | Intentional public address |
| Channel | 0.6 | Configurable per channel |
| Say | 0.4 | Room-level public speech |
| Pose | 0.3 | Actions, may be ambiguous |
| Emit | 0.2 | Anonymous, untraceable |
Trust score formula includes sender history and permission overrides.
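As a sketch, the base scores reduce to a lookup table; the sender_bonus term below is an illustrative stand-in for the real history and permission adjustments:

```python
_BASE_TRUST = {
    "whisper": 0.9, "page": 0.9,  # direct, private communication
    "addressed": 0.7,             # intentional public address
    "channel": 0.6,               # configurable per channel
    "say": 0.4,                   # room-level public speech
    "pose": 0.3,                  # actions, may be ambiguous
    "emit": 0.2,                  # anonymous, untraceable
}

def trust_score(source_type, sender_bonus=0.0):
    """Base trust from the table, nudged by sender history, clamped to [0, 1].
    sender_bonus is a placeholder for the real history/permission formula."""
    base = _BASE_TRUST.get(source_type, 0.2)  # unknown sources score lowest
    return round(max(0.0, min(1.0, base + sender_bonus)), 3)
```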
Key Patterns
1. Delegation Pattern
The script delegates to specialized modules rather than implementing inline:
```python
# In AssistantScript (generators using defer.returnValue need @inlineCallbacks)
@inlineCallbacks
def _build_llm_messages(self, context_type=None):
    from .llm_interaction import build_llm_messages
    result = yield build_llm_messages(self, context_type)
    defer.returnValue(result)
```
2. Defensive NDB Initialization
NDB attributes are recreated in at_start() after each server restart:
```python
def at_start(self):
    # NDB is cleared on reload - recreate
    if not hasattr(self.ndb, "tool_cache") or self.ndb.tool_cache is None:
        self.ndb.tool_cache = ToolResultCache()
```
3. Async Pattern (Twisted @inlineCallbacks)
All LLM and tool calls use Twisted's deferred pattern:
```python
from twisted.internet import defer
from twisted.internet.defer import inlineCallbacks

@inlineCallbacks
def at_tick(self):
    messages = yield self._build_llm_messages()
    result = yield self._execute_tool_call(character, tool_call)
    defer.returnValue(result)
```
Conversation History Format
Native Tool Calling
```python
# Assistant message with tool call
{
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_abc123",
        "type": "function",
        "function": {
            "name": "inspect_location",
            "arguments": "{\"detail_level\": \"full\"}"
        }
    }]
}

# Tool result message
{
    "role": "tool",
    "tool_call_id": "call_abc123",
    "content": "{\"success\": true, ...}"
}
```
Fallback Format
```python
# Assistant message with JSON
{
    "role": "assistant",
    "content": "{\"tool\": \"inspect_location\", \"parameters\": {...}}"
}

# Tool result as user message
{
    "role": "user",
    "content": "[TOOL RESULT: inspect_location]\n{\"success\": true, ...}"
}
```
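Parsing the fallback format is a plain JSON decode plus a shape check; a failure here feeds the parse_error termination path. A sketch, where the function name and return convention are assumptions:

```python
import json

def parse_fallback_tool_call(content):
    """Parse the JSON fallback format into (tool_name, parameters).
    Returns None when the assistant message is not a valid tool call."""
    try:
        data = json.loads(content)
    except (json.JSONDecodeError, TypeError):
        return None  # prose or malformed JSON
    if not isinstance(data, dict) or "tool" not in data:
        return None  # valid JSON but not the expected shape
    return data["tool"], data.get("parameters", {})
```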
Key Files
| File | Lines | Purpose |
|---|---|---|
| assistant_script.py | 568-832 | at_tick() main loop |
| assistant_script.py | 67-299 | at_script_creation() config |
| assistant_script.py | 300-458 | at_start() NDB init |
| tool_execution.py | 381-573 | execute_react_loop() |
| tool_execution.py | 52-277 | execute_tool_call() |
| assistant_character.py | 722-870 | _classify_message() |
| assistant_character.py | 522-582 | _detect_source_type() |
| assistant_character.py | 1009-1100 | at_msg_receive() |
See also: Architecture-Overview | Architecture-Context-System