Architecture Logging
blightbow edited this page 2025-12-08 04:50:35 +00:00
Architecture: Logging System
Infrastructure - Execution Logging and Metrics
Overview
The logging system provides structured execution tracking and metrics collection:
- Execution log - Ring buffer of tool executions
- Metrics aggregation - Tool call counts, success rates, timing
- External logging - JSON-formatted entries for analytics
1. Execution Log
log_execution()
Records each tool execution (script_logging.py:15-86):
from evennia.contrib.base_systems.ai.script_logging import log_execution
log_execution(
    script,
    tool_name="say",
    tool_call={"parameters": {...}, "reasoning": "..."},
    success=True,
    error_msg=None,
    duration_ms=150,
)
Log Entry Schema
{
    "tick": int,           # Current tick number
    "timestamp": str,      # ISO 8601 timestamp
    "tool": str,           # Tool name
    "parameters": dict,    # Tool parameters
    "reasoning": str,      # LLM reasoning for tool choice
    "success": bool,       # Execution result
    "error": str,          # Error message if failed
    "duration_ms": int,    # Execution time in milliseconds
    "provider": str,       # LLM provider used
    "model": str,          # Model name
}
Ring Buffer
Logs are stored as a ring buffer with configurable max size:
logs = script.db.execution_log or []
logs.append(log_entry)
if len(logs) > script.db.max_log_entries:
    logs = logs[-script.db.max_log_entries:]
script.db.execution_log = logs
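The trim step above can be exercised in isolation. A minimal sketch, with a plain `max_entries` argument standing in for `script.db.max_log_entries` (the helper name is illustrative, not part of the module):

```python
def trim_ring_buffer(logs, max_entries):
    """Keep only the newest max_entries items, dropping the oldest first."""
    if len(logs) > max_entries:
        logs = logs[-max_entries:]
    return logs

# Oldest entries fall off once the cap is exceeded.
buffer = [0, 1, 2, 3, 4]
buffer = trim_ring_buffer(buffer, 3)
# buffer is now [2, 3, 4]
```

Slicing returns a new list, so the trimmed result must be written back to `script.db.execution_log`, as the snippet above does.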
2. Metrics Tracking
Metrics Schema
Stored in script.db.metrics:
{
    "tool_calls": {                   # Per-tool call counts
        "say": 50,
        "inspect_location": 30,
        ...
    },
    "tool_errors": {                  # Per-tool error counts
        "execute_command": 2,
        ...
    },
    "tasks_attempted": int,           # Total tool executions
    "tasks_completed": int,           # Successful executions
    "tasks_failed": int,              # Failed executions
    "total_execution_time_ms": int,   # Cumulative time
}
Metrics Update Flow
On each tool execution:
if tool_name and tool_name != "noop":
    metrics["tool_calls"][tool_name] = metrics["tool_calls"].get(tool_name, 0) + 1
    if success:
        metrics["tasks_completed"] += 1
    else:
        metrics["tasks_failed"] += 1
        metrics["tool_errors"][tool_name] = metrics["tool_errors"].get(tool_name, 0) + 1
metrics["tasks_attempted"] += 1
metrics["total_execution_time_ms"] += duration_ms
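The update flow can be packaged as a standalone helper for testing. This is a sketch under the metrics schema above; `update_metrics` is an illustrative name, not the module's actual API, and per-tool keys are initialized lazily with `dict.get`:

```python
def update_metrics(metrics, tool_name, success, duration_ms):
    """Apply one tool-execution result to the aggregate metrics dict."""
    if tool_name and tool_name != "noop":
        metrics["tool_calls"][tool_name] = metrics["tool_calls"].get(tool_name, 0) + 1
        if success:
            metrics["tasks_completed"] += 1
        else:
            metrics["tasks_failed"] += 1
            metrics["tool_errors"][tool_name] = metrics["tool_errors"].get(tool_name, 0) + 1
    metrics["tasks_attempted"] += 1
    metrics["total_execution_time_ms"] += duration_ms
    return metrics

m = {"tool_calls": {}, "tool_errors": {},
     "tasks_attempted": 0, "tasks_completed": 0,
     "tasks_failed": 0, "total_execution_time_ms": 0}
update_metrics(m, "say", True, 150)
update_metrics(m, "say", False, 80)
# m["tool_calls"]["say"] == 2, m["tasks_failed"] == 1
```

Note that a `"noop"` execution still counts toward `tasks_attempted` and total time, but not toward the per-tool counters.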
3. Query Helpers
get_top_tools()
Returns most frequently used tools:
from evennia.contrib.base_systems.ai.script_logging import get_top_tools
top = get_top_tools(script, limit=3)
# "say(50), inspect_location(30), recall_memories(25)"
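A plausible reconstruction of the ranking logic, built on the `tool_calls` counts from the metrics schema above (`format_top_tools` is a hypothetical name, not the module's function):

```python
from collections import Counter

def format_top_tools(tool_calls, limit=3):
    """Render the most-called tools as 'name(count)' pairs, busiest first."""
    top = Counter(tool_calls).most_common(limit)
    return ", ".join(f"{name}({count})" for name, count in top)

calls = {"say": 50, "inspect_location": 30, "recall_memories": 25}
format_top_tools(calls)
# → "say(50), inspect_location(30), recall_memories(25)"
```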
get_recent_errors()
Returns recent error messages for diagnostics:
from evennia.contrib.base_systems.ai.script_logging import get_recent_errors
errors = get_recent_errors(script, limit=3)
# "execute_command: Permission denied; say: Rate limited"
Implementation:
logs = script.db.execution_log[-20:] if script.db.execution_log else []
errors = [log for log in logs if not log.get("success") and log.get("error")]
recent = errors[-limit:]
return "; ".join([f"{log['tool']}: {log['error'][:50]}" for log in recent])
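The same logic, wrapped as a standalone function over a plain list of log entries so it can be run against sample data (the wrapper name is illustrative; the real helper takes the script and reads `script.db.execution_log`):

```python
def recent_errors(execution_log, limit=3):
    """Summarize the newest failures from the last 20 log entries."""
    logs = execution_log[-20:] if execution_log else []
    errors = [log for log in logs if not log.get("success") and log.get("error")]
    recent = errors[-limit:]
    return "; ".join(f"{log['tool']}: {log['error'][:50]}" for log in recent)

sample = [
    {"tool": "say", "success": True},
    {"tool": "execute_command", "success": False, "error": "Permission denied"},
    {"tool": "say", "success": False, "error": "Rate limited"},
]
recent_errors(sample)
# → "execute_command: Permission denied; say: Rate limited"
```

Error messages are truncated to 50 characters each, keeping the summary short enough to embed in a prompt or status line.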
4. External Logging
Each execution is also logged to Evennia's logger for external analytics:
logger.log_info(f"[EXECUTION] {json.dumps(log_entry)}")
This enables:
- Log aggregation systems
- External monitoring
- Historical analysis beyond ring buffer
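Because each entry is emitted as JSON behind a fixed `[EXECUTION]` marker, downstream tooling can recover the structured records from plain log files. A sketch of such a consumer (the parsing function is an assumption, not part of the contrib; only the marker format comes from the snippet above):

```python
import json

def parse_execution_lines(lines):
    """Extract JSON payloads from '[EXECUTION]' log lines for analytics."""
    marker = "[EXECUTION] "
    entries = []
    for line in lines:
        idx = line.find(marker)
        if idx != -1:
            entries.append(json.loads(line[idx + len(marker):]))
    return entries

log_lines = [
    '2025-12-08 [EXECUTION] {"tool": "say", "success": true, "duration_ms": 150}',
    "2025-12-08 unrelated server message",
]
entries = parse_execution_lines(log_lines)
# → [{"tool": "say", "success": True, "duration_ms": 150}]
```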
5. Event Sourcing Integration
Tool executions are recorded to the event sourcing system:
from evennia.contrib.base_systems.ai.helpers import record_eventsourcing_event
record_eventsourcing_event(
    script,
    "tool execution",
    "execute_tool",
    tool_name=tool_name,
    parameters=parameters,
    reasoning=reasoning,
    success=success,
    error=error_msg,
    execution_time_ms=duration_ms,
)
Key Files
| File | Lines | Purpose |
|---|---|---|
| script_logging.py | 15-86 | log_execution() main function |
| script_logging.py | 88-106 | get_top_tools() helper |
| script_logging.py | 109-129 | get_recent_errors() helper |
See also: Architecture-Event-Sourcing | Architecture-Core-Engine