# Architecture: Generative Reflection

Layer 2 - Stanford Generative Agents Reflection Process
## Overview

Implements the three-step reflection process from Park et al. (2023):

1. **Question generation** - derive high-level questions from recent memories
2. **Evidence retrieval** - find supporting memories for each question
3. **Insight generation** - synthesize insights with citations

The layer also includes persona protection to prevent self-referential content.
## 1. Reflection Trigger

### Cumulative Importance Threshold

Reflection triggers when the cumulative importance of unreflected entries reaches a threshold (`generative_reflection.py:123-138`):

```python
from evennia.contrib.base_systems.ai.generative_reflection import (
    check_reflection_trigger,
    run_reflection,
)

if check_reflection_trigger(script):
    result = yield run_reflection(script, character)
```

State is tracked in `script.db.reflection_state`:

```python
{
    "threshold": 150,                # importance threshold
    "cumulative_importance": 0,      # running total
    "entries_since_reflection": [],  # entry IDs to reflect on
    "last_reflection": <timestamp>,
}
```
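The trigger logic above can be sketched as follows. This is a minimal standalone sketch assuming the `reflection_state` layout shown; the real `check_reflection_trigger()` reads the state from the Evennia script's `script.db`, and `record_entry()` is a hypothetical helper illustrating how importance might accumulate.

```python
def check_reflection_trigger(reflection_state):
    """Return True once cumulative importance reaches the threshold."""
    return reflection_state["cumulative_importance"] >= reflection_state["threshold"]


def record_entry(reflection_state, entry_id, importance):
    """Accumulate importance and track entry IDs as journal entries arrive."""
    reflection_state["cumulative_importance"] += importance
    reflection_state["entries_since_reflection"].append(entry_id)
```

After a reflection runs, the state is reset (see `reset_reflection_state()` in the pipeline below), so importance accumulates again from zero.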
## 2. Question Generation (Step 1)

### `generate_reflection_questions()`

Generates three high-level questions from recent entries (`generative_reflection.py:145-200`):

```python
from evennia.contrib.base_systems.ai.generative_reflection import (
    generate_reflection_questions,
)

questions = yield generate_reflection_questions(script, recent_entries, num_questions=3)
# ["What patterns exist in market activity?", "How do NPCs respond to...", ...]
```
### Prompt Focus

The generated questions target:
- World state and environment patterns
- NPC and player behavior patterns
- Event outcomes and implications
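Turning the LLM reply back into a question list might look like the sketch below. The actual prompt and response handling in `generative_reflection.py` may differ; `parse_questions` is a hypothetical name, assuming the LLM is asked for a numbered list of questions.

```python
import re


def parse_questions(llm_response, num_questions=3):
    """Extract up to num_questions question lines from a numbered LLM reply."""
    questions = []
    for line in llm_response.splitlines():
        # Strip leading "1.", "2)", or "-" style list markers.
        text = re.sub(r"^\s*(?:\d+[\.\)]|-)\s*", "", line).strip()
        # Keep only lines that actually read as questions.
        if text.endswith("?"):
            questions.append(text)
    return questions[:num_questions]
```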
## 3. Evidence Retrieval (Step 2)

### `retrieve_evidence_for_question()`

Searches Mem0 for memories that support each question (`generative_reflection.py:221-268`):

```python
from evennia.contrib.base_systems.ai.generative_reflection import (
    retrieve_evidence_for_question,
)

evidence = yield retrieve_evidence_for_question(script, question, limit=10)
# [{"id": "mem_123", "content": "...", "score": 0.85}, ...]
```
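Post-processing of the raw search hits might look like the sketch below, assuming the `{"id", "content", "score"}` result shape shown above. The `min_score` cutoff and deduplication are illustrative assumptions, not confirmed behavior of `retrieve_evidence_for_question()`.

```python
def top_evidence(search_results, limit=10, min_score=0.5):
    """Keep the strongest hits, deduplicated by memory id."""
    seen = set()
    evidence = []
    # Walk hits from highest to lowest relevance score.
    for hit in sorted(search_results, key=lambda h: h["score"], reverse=True):
        if hit["id"] in seen or hit["score"] < min_score:
            continue
        seen.add(hit["id"])
        evidence.append(hit)
        if len(evidence) >= limit:
            break
    return evidence
```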
## 4. Insight Generation (Step 3)

### `generate_insights()`

Synthesizes insights with evidence citations (`generative_reflection.py:275-346`):

```python
from evennia.contrib.base_systems.ai.generative_reflection import generate_insights

insights = yield generate_insights(
    script,
    questions,
    evidence_per_question,  # Dict[question -> List[evidence]]
    num_insights=5,
)
# [{"content": "Insight text", "evidence_ids": ["mem_1", "mem_2"]}, ...]
```
### Citation Format

The LLM responds in the format `insight text (because of evidence 1, 5, 3)`, which is parsed into structured output with evidence ID references.
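A minimal parser for that citation format is sketched below. It assumes the cited numbers are 1-based indices into the evidence list for the question and that out-of-range indices are dropped; the exact parsing in `generative_reflection.py` may differ.

```python
import re

# Matches a trailing "(because of evidence 1, 5, 3)" suffix.
CITATION_RE = re.compile(r"\(because of evidence ([\d,\s]+)\)\s*$")


def parse_insight(line, evidence):
    """Split an insight line into text and the cited evidence memory ids."""
    match = CITATION_RE.search(line)
    if not match:
        return {"content": line.strip(), "evidence_ids": []}
    indices = [int(n) for n in match.group(1).split(",")]
    # Map 1-based citation numbers onto the evidence list, skipping bad indices.
    ids = [evidence[i - 1]["id"] for i in indices if 0 < i <= len(evidence)]
    return {"content": line[:match.start()].strip(), "evidence_ids": ids}
```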
## 5. Persona Protection

### Self-Reference Filtering

Insights that reference the assistant itself are filtered out (`generative_reflection.py:83-116`):

```python
SELF_REFERENCE_PATTERNS = [
    r"^I\s+(should|am|tend|need|must|will|have|was|can|could)",
    r"(?i)(the assistant|my behavior|my approach|myself|my identity)",
    r"(?i)(i learned|i realized|i discovered|i noticed|i found)",
    r"(?i)(i will|i need to|i should|i must)",
]
```
Filtered insights are logged but not stored.
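The filtering step can be sketched as below, reusing the patterns above; the `print` call stands in for whatever logging the real module uses, and the function name follows the pipeline diagram later in this page.

```python
import re

SELF_REFERENCE_PATTERNS = [
    r"^I\s+(should|am|tend|need|must|will|have|was|can|could)",
    r"(?i)(the assistant|my behavior|my approach|myself|my identity)",
    r"(?i)(i learned|i realized|i discovered|i noticed|i found)",
    r"(?i)(i will|i need to|i should|i must)",
]


def filter_self_referential_insights(insights):
    """Drop insights matching any self-reference pattern; return the rest."""
    kept = []
    for insight in insights:
        if any(re.search(p, insight["content"]) for p in SELF_REFERENCE_PATTERNS):
            # Logged but not stored, per the behavior described above.
            print(f"Filtered self-referential insight: {insight['content']!r}")
            continue
        kept.append(insight)
    return kept
```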
### Prompt Constraints

The insight generation prompt explicitly forbids self-reference:

```text
IMPORTANT CONSTRAINTS:
- Focus on WORLD STATE and PATTERNS, not assistant behavior
- Do NOT generate insights about your own identity, capabilities, or behavior
- Do NOT use first-person ("I should...", "I noticed...")
- Frame insights as objective observations about the world
```
## 6. Full Reflection Pipeline

### `run_reflection()`

Orchestrates the complete process (`generative_reflection.py:385-478`):

```python
from evennia.contrib.base_systems.ai.generative_reflection import run_reflection

result = yield run_reflection(script, character)
# {
#     "success": True,
#     "questions": [...],
#     "insights": [...],
#     "entries_reflected": 15,
#     "stored_entries": [42],
#     "error": None,
# }
```
### Pipeline Flow

```text
check_reflection_trigger()
        │
        ▼
run_reflection(script, character)
        │
        ├── Get entries from reflection_state.entries_since_reflection
        │
        ├── Step 1: generate_reflection_questions()
        │
        ├── Step 2: retrieve_evidence_for_question()  (for each question)
        │
        ├── Step 3: generate_insights()
        │
        ├── filter_self_referential_insights()
        │
        ├── store_reflection_as_journal()
        │
        └── reset_reflection_state()
```
## 7. Journal Storage

### `store_reflection_as_journal()`

Stores insights as `[SYNTHESIS]` journal entries (`generative_reflection.py:485-548`):

```python
entry = {
    "id": entry_id,
    "timestamp": timezone.now(),
    "content": "[SYNTHESIS] Reflection on: question summary\n\n- insight 1\n- insight 2...",
    "tags": ["synthesis", "meta_learning", "reflection"],
    "source_type": "inference",
    "source_trust": 0.7,  # synthetic observations have moderate trust
    "source_entity": "self_reflection",
    "importance": 8,  # synthesis entries are generally important
    "importance_method": "manual",
}
```
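Flattening insights into the `[SYNTHESIS]` content string might look like the sketch below; `format_synthesis_content` is a hypothetical helper, and the exact question summary text is an assumption.

```python
def format_synthesis_content(question_summary, insights):
    """Build the journal entry body from a summary line and insight dicts."""
    lines = [f"[SYNTHESIS] Reflection on: {question_summary}", ""]
    lines += [f"- {insight['content']}" for insight in insights]
    return "\n".join(lines)
```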
## 8. Integration with Sleep

Reflection runs during the dreaming phase, in `run_sleep_tick` in `rag_memory.py`:

```python
from evennia.contrib.base_systems.ai.generative_reflection import (
    check_reflection_trigger,
    run_reflection,
)

if check_reflection_trigger(script):
    result = yield run_reflection(script, character)
```
## Key Files

| File | Lines | Purpose |
|---|---|---|
| `generative_reflection.py` | 43-56 | Question generation prompt |
| `generative_reflection.py` | 58-77 | Insight generation prompt |
| `generative_reflection.py` | 83-116 | Persona protection patterns |
| `generative_reflection.py` | 123-138 | `check_reflection_trigger()` |
| `generative_reflection.py` | 145-200 | `generate_reflection_questions()` |
| `generative_reflection.py` | 221-268 | `retrieve_evidence_for_question()` |
| `generative_reflection.py` | 275-346 | `generate_insights()` |
| `generative_reflection.py` | 385-478 | `run_reflection()` |
| `generative_reflection.py` | 485-548 | `store_reflection_as_journal()` |
See also: Architecture-Journal-System | Architecture-Memory-and-Sleep | Research-Foundations