
Architecture: Helpers

Layer 2 - Cross-Layer Utilities Package


Overview

The helpers/ package centralizes reusable logic across all layers:

  • Commands, API views, and core engine all import from helpers
  • Eliminates code duplication
  • Provides canonical implementations

All public symbols are re-exported from helpers/__init__.py for backward compatibility:

from evennia.contrib.base_systems.ai.helpers import get_assistant_script

Direct module imports also work:

from evennia.contrib.base_systems.ai.helpers.lookup import get_assistant_script

Package Structure

Module | Purpose | Key Exports
lookup.py | Script/character retrieval, NDB access | get_assistant_script(), get_assistant_character()
execution.py | Config composition, state retrieval | get_execution_config(), get_execution_state()
delegation.py | Sub-agent status helpers | get_delegation_status_for_command()
entity_profiles.py | O-Mem persona attributes & observations | create_entity_profile(), add_entity_observation()
working_memory.py | Active conversations, pending actions | start_conversation(), add_pending_action()
entity_context.py | Prompt component assembly | format_entity_context_for_prompt()
episodic_index.py | Hybrid-scored journal search | search_episodic_memory(), build_keyword_index()
service_health.py | Circuit breaker health monitoring | get_service_health()
error_utils.py | Error sanitization, eventsourcing | sanitize_error_for_response(), logger

1. Lookup & Validation (lookup.py)

Script Lookup

def get_assistant_script(assistant_key: str) -> AssistantScript | None:
    """
    Tag-based script lookup (most robust method).

    Uses: search.search_script_tag(key=f"ai_assistant:{assistant_key}", category="system")
    """

Character Lookup

def get_assistant_character(script) -> AssistantCharacter | None:
    """
    Get character from script's stored dbref.

    Uses: search.search_object(script.db.character_dbref)
    """

Validated Lookup

def get_validated_assistant(key) -> tuple[script, character, validation]:
    """
    Full validation via registry.

    Returns: (script, character, validation_dict)
    """

Defensive NDB Access

def get_ndb_client(obj, attr_name, check_initialized=False):
    """
    Safely retrieve NDB client attribute.

    NDB attributes are wiped on server reload; this helper gives a consistent existence check.

    Example:
        client = get_ndb_client(script, "memory_client", check_initialized=True)
        if not client:
            return {"error": "Memory client not initialized"}
    """

2. Execution Patterns (execution.py)

Config Defaults

EXECUTION_CONFIG_DEFAULTS = {
    "multi_action_enabled": False,
    "max_iterations_per_tick": 5,
    "task_assessment_enabled": False,
    "use_quick_assessment": True,
    "sub_agents_enabled": False,
    "sub_agent_budget": 3,
    "personality_insulation": "none",
    "delegate_assistant_tag": None,
    "result_summarization": False,
    "trusted_api": False,
}

Key Functions

def get_execution_config(script) -> dict:
    """Get config with defaults applied."""

def set_execution_config_field(script, field, value) -> tuple[bool, str]:
    """Validated config setter with type checking."""

def get_effective_execution_pattern(script, context_type) -> ExecutionPattern:
    """3-layer composition: static -> context -> LLM assessment."""

def get_execution_state(script) -> dict:
    """Current state: tokens, errors, signals."""

def count_conversation_tokens(history: list) -> int:
    """Count tokens in conversation history for compaction decisions."""

3. Delegation (delegation.py)

Key Functions

def get_delegation_status_for_command(script) -> dict:
    """Get delegation status for display in commands."""

def get_tool_categories() -> dict:
    """Get category descriptions for tools."""

def get_tools_by_category(category: str) -> list:
    """Get filtered tool list by category."""

def get_tool_category_counts() -> dict:
    """Get count of tools per category."""

def format_context_signals(state: dict) -> dict:
    """Format context signals for API response."""

4. Entity Profiles (entity_profiles.py)

O-Mem-inspired persona attributes and observations.

Relationship Thresholds (Dunbar)

RELATIONSHIP_STATE_THRESHOLDS = {
    "stranger": 0.0,      # 150+ people (outer network)
    "acquaintance": 0.25, # ~50 people (affinity group)
    "friend": 0.50,       # ~15 people (sympathy group)
    "ally": 0.75,         # ~5 people (support clique)
}

Profile Structure

profile = {
    "entity_id": "#123",
    "entity_type": "player",  # player | npc | object

    # Pa - Persona Attributes (stable characteristics)
    "attributes": {
        "communication_style": None,  # formal | casual | terse
        "interests": [],               # ["combat", "crafting"]
        "temperament": None,           # friendly | suspicious
        "preferences": {},             # Key-value specifics
    },

    # Pf - Persona Fact Events (observations, compactable)
    "observations": [
        {"content": "...", "timestamp": "...", "source": "direct"}
    ],

    # Relationship Metrics
    "relationship": {
        "favorability": 0.0,  # -1.0 to +1.0
        "trust": 0.5,         # 0.0 to 1.0
        "rapport": 0.0,       # 0.0 to 1.0
        "state": "stranger",  # Computed from score
        "interaction_count": 0,
    },

    "created_at": "...",
    "updated_at": "...",
}

Key Functions

def create_entity_profile(character, entity_id, entity_type="player"):
    """Create new profile with O-Mem structure."""

def get_entity_profile(character, entity_id):
    """Retrieve profile by ID."""

def update_entity_profile(character, entity_id, updates):
    """Update profile fields."""

def delete_entity_profile(character, entity_id):
    """Remove profile."""

def add_entity_observation(character, entity_id, content, source="observation"):
    """Add observation to Pf (persona facts)."""

def update_relationship_metric(character, entity_id, metric, value):
    """Update specific relationship metric."""

def calculate_relationship_score(profile) -> float:
    """Weighted score: favorability (0.5) + trust (0.3) + rapport (0.2)."""

def get_relationship_state_from_score(score) -> str:
    """Map score to Dunbar relationship state."""

5. Working Memory (working_memory.py)

Active conversation tracking for ongoing interactions.

Conversation Structure

working_memory = {
    "#123": {
        "topic": "quest_help",
        "started_at": "2025-12-06T10:00:00Z",
        "last_message_at": "2025-12-06T10:05:00Z",
        "messages": [
            {"role": "user", "content": "...", "timestamp": "..."},
            {"role": "assistant", "content": "...", "timestamp": "..."},
        ],
        "pending_actions": [
            {"description": "Explain quest", "priority": 1, "created_at": "..."},
        ],
    },
}

Key Functions

CONVERSATION_STALE_HOURS = 24  # Default staleness threshold

def get_working_memory(character) -> dict:
    """Get full working memory dict."""

def start_conversation(character, entity_id, topic=None):
    """Initialize or resume working memory for entity."""

def update_conversation_topic(character, entity_id, topic):
    """Update conversation topic."""

def add_message_to_conversation(character, entity_id, content, role="user"):
    """Append message to conversation."""

def end_conversation(character, entity_id):
    """End and archive conversation."""

def add_pending_action(character, entity_id, description, priority=1):
    """Queue action for entity."""

def complete_pending_action(character, entity_id, action_id):
    """Mark action as completed."""

def get_pending_actions(character, entity_id) -> list:
    """Get pending actions for entity."""

def get_active_conversation(character, entity_id) -> dict | None:
    """Get active conversation if not stale."""

def clear_stale_conversations(character, max_age_hours=24):
    """Remove conversations older than threshold."""

def get_entities_by_topic(character, topic) -> list:
    """Find entities with matching conversation topic."""

6. Entity Context Formatting (entity_context.py)

Formats entity data for prompt components.

Key Functions

def format_entity_context_for_prompt(character, max_entities=5, max_observations=3):
    """
    Format active entities for prompt injection.

    Returns formatted string for COMPONENT_ENTITY_CONTEXT (ID 1500).
    Prioritizes by: conversation recency, relationship state, interaction count.
    """

def get_speaker_entity_id_from_event(event) -> str | None:
    """Extract entity ID from event metadata."""

ENTITY_CONSOLIDATION_PROMPT = "..."  # System prompt for consolidation

def consolidate_entity_observations(script, character, entity_id, min_observations=5):
    """LLM-powered Pf -> Pa consolidation (O-Mem pattern)."""

def run_entity_consolidation_batch(script, character, max_entities=5):
    """Batch consolidation for sleep phase."""

Output Format:

[ACTIVE ENTITIES]
Player Alice (#123) - friend
  Observations: Prefers formal speech, Expert in combat
  Conversation: quest_help (5 messages)
  Pending: Explain quest mechanics

NPC Guard (#456) - acquaintance
  Observations: Suspicious of strangers
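
Wiring this into a prompt build is a single call; the component-ID assignment below is illustrative, with the ID 1500 taken from the docstring above:

from evennia.contrib.base_systems.ai.helpers import format_entity_context_for_prompt

# Produces the [ACTIVE ENTITIES] block shown above, capped to the most
# relevant entities and their most recent observations.
entity_block = format_entity_context_for_prompt(character, max_entities=5, max_observations=3)
if entity_block:
    prompt_components = {1500: entity_block}  # COMPONENT_ENTITY_CONTEXT (ID 1500)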

7. Episodic Memory Index (episodic_index.py)

Hybrid-scored search across journal entries (Generative Agents style).

Scoring Formula

score = (alpha_recency * recency_score
         + alpha_importance * importance_score
         + alpha_relevance * relevance_score)

  • recency_score: Time decay (configurable decay days)
  • importance_score: Normalized 1-10 importance
  • relevance_score: Keyword overlap with query
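
A minimal sketch of the combined score, assuming exponential time decay for recency and keyword-overlap relevance (the exact curves inside score_episodic_memory() may differ, and the entry keys used below are assumptions):

import math
from datetime import datetime

from evennia.contrib.base_systems.ai.helpers import extract_keywords_from_text

def illustrative_episodic_score(entry: dict, query_keywords: set, now: datetime,
                                decay_days: float = 7.0, alpha_recency: float = 1.0,
                                alpha_importance: float = 1.0, alpha_relevance: float = 1.0) -> float:
    # Recency: exponential decay over the configured window (assumed curve).
    age_days = (now - datetime.fromisoformat(entry["timestamp"])).total_seconds() / 86400
    recency_score = math.exp(-age_days / decay_days)

    # Importance: normalize the 1-10 journal importance into 0-1.
    importance_score = (entry.get("importance", 1) - 1) / 9

    # Relevance: fraction of query keywords present in the entry text.
    entry_keywords = extract_keywords_from_text(entry["content"])
    relevance_score = (len(query_keywords & entry_keywords) / len(query_keywords)
                       if query_keywords else 0.0)

    return (alpha_recency * recency_score
            + alpha_importance * importance_score
            + alpha_relevance * relevance_score)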

Key Functions

STOPWORDS = {...}  # Common words to exclude from keyword extraction

def extract_keywords_from_text(text: str) -> set[str]:
    """Extract keywords excluding stopwords."""

def build_keyword_index(entries: list) -> dict:
    """Build inverted index for fast keyword lookup."""

def score_episodic_memory(entry, query, now, decay_days, alphas) -> float:
    """Score single entry against query."""

def search_episodic_memory(
    entries: list,
    query: str,
    days_back: int = 30,
    min_importance: int = 1,
    alpha_recency: float = 1.0,
    alpha_importance: float = 1.0,
    alpha_relevance: float = 1.0,
    top_k: int = 5,
) -> list[dict]:
    """Hybrid-scored journal search. Returns entries sorted by combined score."""

def prune_low_importance_entries(
    character,
    importance_threshold=3,
    age_days=30,
    max_prune=5,
    preserve_consolidated=True,
) -> int:
    """Remove old low-importance entries from journal. Returns count pruned."""

8. Service Health (service_health.py)

Circuit breaker health monitoring.

def get_service_health(script) -> dict:
    """
    Get health status of external services.

    Returns dict with circuit breaker states for:
    - LLM provider
    - Memory service (Mem0/Qdrant)
    - External APIs
    """

9. Error Utilities (error_utils.py)

Error handling and eventsourcing integration.

from evennia.utils import logger

def sanitize_error_for_response(error: Exception) -> str:
    """
    Sanitize error message for safe display.

    Removes sensitive information like file paths, credentials.
    """

def record_eventsourcing_event(script, event_type, method, **kwargs):
    """Wrapper for recording events to eventsourcing system."""

Cross-Layer Usage

Commands

from evennia.contrib.base_systems.ai.helpers import (
    get_assistant_script,
    get_execution_config,
)

script = get_assistant_script("mybot")
config = get_execution_config(script)

API Views

from evennia.contrib.base_systems.ai.helpers import (
    get_validated_assistant,
    format_context_signals,
)

script, character, validation = get_validated_assistant(key)

Core Engine

from .helpers import (
    get_ndb_client,
    run_entity_consolidation_batch,
    search_episodic_memory,
)

client = get_ndb_client(script, "memory_client")
results = yield run_entity_consolidation_batch(script, character)

Key Functions Reference

Module | Function | Purpose
lookup | get_assistant_script() | Tag-based script lookup
lookup | get_assistant_character() | Character from script
lookup | get_ndb_client() | Defensive NDB access
execution | get_execution_config() | Config with defaults
execution | get_effective_execution_pattern() | 3-layer composition
execution | count_conversation_tokens() | Token counting for compaction
delegation | get_delegation_status_for_command() | Delegation info
delegation | format_context_signals() | API response formatting
entity_profiles | create_entity_profile() | O-Mem profile creation
entity_profiles | add_entity_observation() | Add to Pf
entity_profiles | update_relationship_metric() | Adjust favorability
working_memory | start_conversation() | Working memory init
working_memory | add_pending_action() | Queue action
entity_context | format_entity_context_for_prompt() | Prompt formatting
entity_context | consolidate_entity_observations() | Pf -> Pa synthesis
episodic_index | search_episodic_memory() | Hybrid journal search
episodic_index | prune_low_importance_entries() | Journal cleanup
service_health | get_service_health() | Circuit breaker status
error_utils | sanitize_error_for_response() | Safe error messages

See also: Architecture-Overview | Architecture-Memory-and-Sleep | Architecture-Commands-and-API