# Configuration Reference

Complete reference for all configurable attributes in the AI Assistant system. Unless noted otherwise, attributes are stored on `script.db.*` and persist across server restarts.
## Core Settings

| Attribute | Type | Default | Description |
| --- | --- | --- | --- |
| `assistant_key` | str | `None` | Unique identifier for this assistant instance |
| `tick_rate` | int | `5` | Seconds between ticks (awake mode) |
| `enabled` | bool | `False` | Whether the assistant is active |
| `max_history` | int | `50` | Maximum conversation turns to keep |
| `default_tool_timeout` | int | `30` | Default tool execution timeout (seconds) |
| `character_dbref` | str | `None` | Database reference to the AssistantCharacter |
## LLM Provider Settings

### Connection Settings

| Attribute | Type | Default | Description |
| --- | --- | --- | --- |
| `llm_provider` | str | `"openai"` | Provider type: `openai`, `anthropic`, `openrouter`, `ollama`, `local` |
| `llm_api_url` | str | OpenRouter URL | Full URL of the Chat Completion API endpoint |
| `llm_auth_token` | str | `""` | Bearer token for API authentication |
| `llm_model` | str | `""` | Model name (e.g., `gpt-4`, `claude-3-opus`) |
### Generation Parameters

| Attribute | Type | Default | Description |
| --- | --- | --- | --- |
| `llm_temperature` | float | `None` | Sampling temperature (0.0-2.0). `None` = provider default |
| `llm_top_p` | float | `None` | Nucleus sampling (0.0-1.0). `None` = provider default |
| `llm_max_tokens` | int | `None` | Maximum tokens in the response. `None` = provider default |
| `llm_reasoning_effort` | str | `None` | OpenAI o1 models: `"low"`, `"medium"`, `"high"` |
| `llm_extra_params` | dict | `{}` | Arbitrary additional parameters (e.g., `{"min_p": 0.05}`) |
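To illustrate how these attributes might combine into a request payload, here is a minimal sketch (not the actual implementation): a `None` value means the key is omitted so the provider applies its own default, and `llm_extra_params` is merged last so it can add or override provider-specific keys.

```python
def build_generation_params(temperature=None, top_p=None, max_tokens=None,
                            reasoning_effort=None, extra_params=None):
    """Merge generation attributes into a Chat Completion payload (sketch)."""
    params = {}
    if temperature is not None:
        params["temperature"] = temperature
    if top_p is not None:
        params["top_p"] = top_p
    if max_tokens is not None:
        params["max_tokens"] = max_tokens
    if reasoning_effort is not None:
        params["reasoning_effort"] = reasoning_effort
    # extra_params merges last, so {"min_p": 0.05} or an override wins.
    params.update(extra_params or {})
    return params
```

With all attributes left at `None`, the payload carries no sampling keys at all.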
### OpenRouter-Specific

| Attribute | Type | Default | Description |
| --- | --- | --- | --- |
| `llm_app_name` | str | `"Evennia AI Assistant"` | `X-Title` header for the OpenRouter dashboard |
| `llm_site_url` | str | `""` | `HTTP-Referer` header for tracking |
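A sketch of how the auth token and the two OpenRouter headers above might be assembled into request headers; the header names follow the OpenRouter convention, and this is illustrative rather than the system's actual request code. Empty values are simply omitted.

```python
def build_request_headers(auth_token, app_name="", site_url=""):
    """Assemble HTTP headers for a Chat Completion request (sketch)."""
    headers = {"Content-Type": "application/json"}
    if auth_token:
        headers["Authorization"] = f"Bearer {auth_token}"
    if app_name:
        headers["X-Title"] = app_name       # shown in the OpenRouter dashboard
    if site_url:
        headers["HTTP-Referer"] = site_url  # used by OpenRouter for tracking
    return headers
```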
## Prompt Templates

| Attribute | Type | Default | Description |
| --- | --- | --- | --- |
| `system_prompt` | str | Fallback template | System prompt for JSON fallback mode |
| `system_prompt_native` | str | Default template | System prompt for native tool calling |
| `applied_template` | str | `"default"` | Name of the currently applied template |
| `reflection_prompt_template` | str | Reflection template | Template for reflection prompts |
| `context_configs` | dict | `{}` | Per-context component/tool customizations |
## Context Window Management

### Token Limits

| Attribute | Type | Default | Description |
| --- | --- | --- | --- |
| `max_context_tokens` | int | `100000` | Maximum context window size |
| `estimated_tokens` | int | `0` | Running token count estimate (updated each tick) |
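For intuition, a common rough heuristic is about four characters per token; the sketch below estimates token usage that way. This is an assumption for illustration only; the actual `estimated_tokens` counter may use a different method.

```python
def estimate_tokens(messages):
    """Rough token estimate for a list of Chat Completion messages.

    Assumes ~4 characters per token (a common rule of thumb).
    """
    chars = sum(len(m.get("content") or "") for m in messages)
    return chars // 4
```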
### Context Compaction

| Attribute | Type | Default | Description |
| --- | --- | --- | --- |
| `compact_enabled` | bool | `True` | Enable context compaction |
| `compact_sleep_threshold` | float | `0.7` | Token usage fraction that triggers compaction during sleep (70%) |
| `compact_emergency_threshold` | float | `0.8` | Token usage fraction that forces emergency compaction (80%) |
| `compact_preserve_window` | int | `20` | Messages to keep intact during compaction |
| `compact_model` | str | `None` | Model for summarization. `None` = use the main LLM |
| `compact_prompt` | str | `None` | Custom compaction prompt. `None` = use the built-in |
| `last_compaction` | str | `None` | ISO timestamp of the last compaction |
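The two thresholds interact as follows: the emergency threshold fires regardless of mode, while the sleep threshold only fires while asleep. A minimal sketch of that decision, assuming the function names are hypothetical:

```python
def compaction_action(estimated_tokens, max_context_tokens,
                      sleep_threshold=0.7, emergency_threshold=0.8,
                      compact_enabled=True, asleep=False):
    """Decide whether to compact, per the thresholds above (sketch).

    Returns "emergency", "sleep", or None.
    """
    if not compact_enabled or max_context_tokens <= 0:
        return None
    usage = estimated_tokens / max_context_tokens
    if usage >= emergency_threshold:
        return "emergency"           # forced, regardless of mode
    if asleep and usage >= sleep_threshold:
        return "sleep"               # opportunistic compaction while sleeping
    return None
```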
### Pre-Compaction Extraction

| Attribute | Type | Default | Description |
| --- | --- | --- | --- |
| `pre_compact_extraction_enabled` | bool | `True` | Enable pre-compaction fact extraction |
| `pre_compact_max_iterations` | int | `5` | Maximum ReAct loop cycles for extraction |
| `pre_compact_prompt` | str | `None` | Custom extraction prompt. `None` = use the default |
## RAG/Vector Search

### Qdrant Configuration

| Attribute | Type | Default | Description |
| --- | --- | --- | --- |
| `rag_enabled` | bool | `False` | Enable RAG/vector search |
| `qdrant_host` | str | `"localhost"` | Qdrant server hostname |
| `qdrant_port` | int | `6333` | Qdrant HTTP port |
| `qdrant_grpc_port` | int | `None` | Qdrant gRPC port (optional; better performance) |
| `qdrant_api_key` | str | `None` | Qdrant Cloud API key |
| `qdrant_use_tls` | bool | `False` | Enable HTTPS for cloud/production |
| `qdrant_timeout` | int | `10` | Connection timeout in seconds |
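As a sketch of how the host/port/TLS attributes combine, the snippet below builds the REST base URL a Qdrant client would talk to. This is illustrative only; the actual client may accept host and port directly rather than a URL.

```python
def qdrant_url(host="localhost", port=6333, use_tls=False):
    """Build the Qdrant REST base URL from the attributes above (sketch)."""
    scheme = "https" if use_tls else "http"
    return f"{scheme}://{host}:{port}"
```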
### Embedding Configuration

| Attribute | Type | Default | Description |
| --- | --- | --- | --- |
| `rag_embedding_provider` | str | `"auto"` | `auto`, `fastembed`, `openai`, or `ollama` |
| `rag_embedding_url` | str | `None` | Custom embedding API URL |
| `rag_embedding_token` | str | `None` | Embedding API token (falls back to `llm_auth_token`) |
| `rag_embedding_model` | str | `None` | Custom embedding model name |
## Semantic Memory (Mem0)

| Attribute | Type | Default | Description |
| --- | --- | --- | --- |
| `memory_enabled` | bool | `False` | Enable Mem0 semantic memory |
| `memory_config` | dict | `{}` | Mem0 configuration (`llm.provider`, `vector_store`, etc.) |
## Operating Mode / Sleep

### Mode State

| Attribute | Type | Default | Description |
| --- | --- | --- | --- |
| `operating_mode` | str | `"awake"` | Current mode: `"awake"` or `"sleep"` |
| `mode_transition_reason` | str | `None` | Why the last transition occurred |
| `last_mode_transition` | str | `None` | ISO timestamp of the last mode change |
### Sleep Schedule (Automatic)

| Attribute | Type | Default | Description |
| --- | --- | --- | --- |
| `sleep_schedule.enabled` | bool | `False` | Enable automatic sleep scheduling |
| `sleep_schedule.sleep_start_hour` | int | `2` | Hour to enter sleep (0-23, server timezone) |
| `sleep_schedule.sleep_duration_hours` | int | `4` | How long to stay in sleep mode |
| `sleep_schedule.tick_rate_sleep` | int | `60` | Seconds between sleep ticks |
| `sleep_schedule.min_awake_ticks` | int | `10` | Minimum ticks before auto-sleep |
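With the defaults above, the sleep window runs from 02:00 to 06:00 server time. A minimal sketch of the window check, including the case where `sleep_start_hour + sleep_duration_hours` wraps past midnight (e.g. a 22:00 start):

```python
def in_sleep_window(hour, sleep_start_hour=2, sleep_duration_hours=4):
    """True if the given hour (0-23) falls in the scheduled sleep window.

    Handles windows that wrap past midnight (sketch).
    """
    end = (sleep_start_hour + sleep_duration_hours) % 24
    if sleep_start_hour < end:
        return sleep_start_hour <= hour < end
    return hour >= sleep_start_hour or hour < end  # window wraps midnight
```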
### Agent-Initiated Sleep

| Attribute | Type | Default | Description |
| --- | --- | --- | --- |
| `scheduled_wake_time` | str | `None` | ISO timestamp of when to wake (agent-set via `go_to_sleep`) |
| `sleep_depth` | str | `None` | `"light"` (wake on urgent input) or `"deep"` (timer only) |
| `sleep_phase` | str | `None` | `"compacting"` or `"dreaming"` (`None` when awake) |
| `sleep_initiated_by` | str | `None` | `"tool"`, `"schedule"`, or `"manual"` |
| `sleep_reason` | str | `None` | Agent's stated reason for sleeping |
| `sleep_cooldown_until` | str | `None` | ISO timestamp; prevents immediate re-sleep |
| `sleep_cooldown_minutes` | int | `60` | Cooldown duration after waking |
| `min_activity_before_sleep` | int | `10` | Minimum interactions before sleep is allowed |
| `activity_since_wake` | int | `0` | Counter reset on wake, incremented on tool use |
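The cooldown and activity attributes act as guards on `go_to_sleep`. A hedged sketch of that check (function name and signature are hypothetical, not the system's API):

```python
from datetime import datetime, timezone

def can_go_to_sleep(activity_since_wake, cooldown_until_iso,
                    min_activity_before_sleep=10, now=None):
    """Sketch of the agent-initiated sleep guards described above:
    enough activity since waking, and the post-wake cooldown expired."""
    if activity_since_wake < min_activity_before_sleep:
        return False
    if cooldown_until_iso:
        now = now or datetime.now(timezone.utc)
        if now < datetime.fromisoformat(cooldown_until_iso):
            return False  # still inside sleep_cooldown_until
    return True
```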
### Consolidation Progress

| Attribute | Type | Default | Description |
| --- | --- | --- | --- |
| `consolidation_progress.started_at` | str | `None` | ISO timestamp of when compaction began |
| `consolidation_progress.total_entries` | int | `0` | Total journal entries to consolidate |
| `consolidation_progress.processed_entries` | int | `0` | Entries consolidated so far |
| `consolidation_progress.completed` | bool | `False` | Whether consolidation is complete |
### Wait Mode

| Attribute | Type | Default | Description |
| --- | --- | --- | --- |
| `wait_until` | str | `None` | ISO timestamp at which to resume processing |
| `wait_mode` | str | `None` | `"soft"` (wake on messages) or `"hard"` (timer only) |
| `wait_reason` | str | `None` | Reason for waiting |
## Execution Configuration

All of the following keys are stored under the `execution_config` dict:

### Multi-Action Loop

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `multi_action_enabled` | bool | `False` | Master toggle for ReAct loops |
| `max_iterations_per_tick` | int | `5` | Hard limit on actions per tick |
### Task Assessment (Layer 3)

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `task_assessment_enabled` | bool | `False` | Enable LLM task complexity assessment |
| `use_quick_assessment` | bool | `True` | Use heuristics instead of the LLM when assessment is disabled |
### Sub-Agent Delegation

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `sub_agents_enabled` | bool | `False` | Toggle for sub-agent delegation |
| `sub_agent_budget` | int | `3` | Maximum concurrent sub-agents |
### Personality Preservation

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `personality_insulation` | str | `"none"` | `"none"`, `"orchestrator"`, or `"delegate"` |
| `delegate_assistant_tag` | str | `None` | Tag used to find the delegate assistant |
| `result_summarization` | bool | `False` | Return a summary instead of raw tool output |
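Put together, an `execution_config` dict populated with the defaults from the tables above would look like this (layout is illustrative; only the keys and defaults come from the tables):

```python
execution_config = {
    # Multi-action loop
    "multi_action_enabled": False,
    "max_iterations_per_tick": 5,
    # Task assessment (Layer 3)
    "task_assessment_enabled": False,
    "use_quick_assessment": True,
    # Sub-agent delegation
    "sub_agents_enabled": False,
    "sub_agent_budget": 3,
    # Personality preservation
    "personality_insulation": "none",
    "delegate_assistant_tag": None,
    "result_summarization": False,
}
```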
## Reflection and Metrics

### Reflection Scheduling

| Attribute | Type | Default | Description |
| --- | --- | --- | --- |
| `tick_count` | int | `0` | Total ticks executed |
| `reflection_interval` | int | `10` | Reflect every N ticks |
| `last_reflection_tick` | int | `0` | Tick count at the last reflection |
### Generative Agents Reflection State

| Attribute | Type | Default | Description |
| --- | --- | --- | --- |
| `reflection_state.cumulative_importance` | float | `0.0` | Sum of importance scores since the last reflection |
| `reflection_state.last_reflection_time` | str | `None` | ISO timestamp of the last reflection |
| `reflection_state.reflection_count` | int | `0` | Total reflections completed |
| `reflection_state.threshold` | int | `150` | Cumulative importance needed to trigger a reflection |
| `reflection_state.entries_since_reflection` | list | `[]` | Journal entry IDs contributing to the cumulative score |
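In the Generative Agents pattern these fields capture, importance accumulates per journal entry until the threshold trips a reflection, after which the accumulator resets. A minimal sketch with hypothetical helper names:

```python
def record_entry(state, entry_id, importance):
    """Accumulate an entry's importance into reflection_state (sketch).

    Returns True when cumulative importance crosses the threshold.
    """
    state["cumulative_importance"] += importance
    state["entries_since_reflection"].append(entry_id)
    return state["cumulative_importance"] >= state["threshold"]

def reset_after_reflection(state, now_iso):
    """Reset the accumulator after a reflection completes (sketch)."""
    state["cumulative_importance"] = 0.0
    state["entries_since_reflection"] = []
    state["reflection_count"] += 1
    state["last_reflection_time"] = now_iso
```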
### Metrics Tracking

| Attribute | Type | Default | Description |
| --- | --- | --- | --- |
| `metrics.tasks_attempted` | int | `0` | Total tasks attempted |
| `metrics.tasks_completed` | int | `0` | Tasks successfully completed |
| `metrics.tasks_failed` | int | `0` | Tasks that failed |
| `metrics.tool_calls` | dict | `{}` | `tool_name` → call count |
| `metrics.tool_errors` | dict | `{}` | `tool_name` → error count |
| `metrics.tool_retries` | dict | `{}` | `tool_name` → retry count |
| `metrics.total_execution_time_ms` | int | `0` | Cumulative execution time in milliseconds |
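A sketch of how the per-tool dicts and the cumulative timer might be updated after each tool execution (the helper name is hypothetical):

```python
def record_tool_call(metrics, tool_name, elapsed_ms, error=False):
    """Update the per-tool counters described above (sketch)."""
    calls = metrics.setdefault("tool_calls", {})
    calls[tool_name] = calls.get(tool_name, 0) + 1
    if error:
        errors = metrics.setdefault("tool_errors", {})
        errors[tool_name] = errors.get(tool_name, 0) + 1
    metrics["total_execution_time_ms"] = (
        metrics.get("total_execution_time_ms", 0) + elapsed_ms
    )
```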
## Logging

| Attribute | Type | Default | Description |
| --- | --- | --- | --- |
| `execution_log` | list | `[]` | Last N structured log entries |
| `max_log_entries` | int | `100` | Maximum log entries to retain |
## Error Handling

| Attribute | Type | Default | Description |
| --- | --- | --- | --- |
| `emergency_stop` | bool | `False` | Emergency stop flag |
| `max_consecutive_errors` | int | `5` | Errors before emergency stop |
| `consecutive_errors` | int | `0` | Current consecutive error count |
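These three attributes form a simple circuit breaker: a success resets the error count, while enough consecutive failures latch the emergency stop. A minimal sketch, assuming the latch is only cleared manually:

```python
def record_result(state, success, max_consecutive_errors=5):
    """Track consecutive errors and trip the emergency stop (sketch).

    Returns the current emergency_stop flag.
    """
    if success:
        state["consecutive_errors"] = 0  # any success resets the streak
    else:
        state["consecutive_errors"] = state.get("consecutive_errors", 0) + 1
        if state["consecutive_errors"] >= max_consecutive_errors:
            state["emergency_stop"] = True  # latched until manually cleared
    return state.get("emergency_stop", False)
```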
## Event Sourcing

| Attribute | Type | Default | Description |
| --- | --- | --- | --- |
| `eventsourcing_enabled` | bool | `False` | Enable event sourcing |
| `eventsourcing_id` | str | `None` | UUID of the event-sourced aggregate |
| `event_archive_dir` | str | `"server/archives/ai_events"` | Directory for archived events |
## State Attributes (Runtime)

These are managed automatically and should not be set directly:

| Attribute | Type | Description |
| --- | --- | --- |
| `conversation_history` | list | Chat Completion message history |
| `pending_events` | list | Queue of user input events |
| `current_tool_call` | dict | State of the currently executing tool |
| `is_ticking` | bool | Whether a tick is currently in progress |
| `current_goals` | list | Active goals |
| `goal_history` | list | Completed/abandoned goals |
## NDB Attributes (Non-Persistent)

These are stored in `script.ndb.*` and recreated on server restart:

| Attribute | Type | Description |
| --- | --- | --- |
| `rag_client` | QdrantRAGClient | RAG client instance (if enabled) |
| `memory_client` | TwistedMem0Client | Mem0 client instance (if enabled) |
| `last_provider` | str | Provider from the last LLM call |
| `last_model` | str | Model from the last LLM call |
| `last_supports_tools` | bool | Whether the last provider supported tool calling |
| `last_rate_limit` | RateLimitInfo | Rate-limit info from the last call |
| `in_reflection` | bool | Whether a reflection session is in progress |
Document created: 2025-12-06