Add Render Preview API endpoint for prompt assembly visibility #25

Open
opened 2025-12-14 06:23:07 +00:00 by blightbow · 0 comments

## Summary

Add a `/workbench/preview/` endpoint that renders a sample prompt output, showing users exactly how their component configuration will be assembled into an LLM prompt.

## Background

This is Phase 2 of the terminal component separation work from #16. With terminal components now handled implicitly by runtime state, users need visibility into how their prompt will actually render.

## Requirements

### New Endpoint

- `GET /api/ai-assistants/{key}/workbench/preview/`
- Query params: `context_type` (required)
- Returns a rendered messages array with component attribution
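Request validation for the new endpoint could look roughly like the sketch below. This is framework-agnostic on purpose; the set of valid context types and the error shape are assumptions for illustration, not part of the spec above.

```python
# Hypothetical sketch: validating the preview request before rendering.
# VALID_CONTEXT_TYPES and the error payload shape are assumptions.
VALID_CONTEXT_TYPES = {"tick_event", "tool_result", "conversation_history"}

def validate_preview_params(query_params: dict):
    """Return (params, error); exactly one of the two is None."""
    context_type = query_params.get("context_type")
    if context_type is None:
        return None, {"error": "context_type is required"}
    if context_type not in VALID_CONTEXT_TYPES:
        return None, {"error": f"unknown context_type: {context_type}"}
    return {"context_type": context_type}, None
```

In a Django/DRF view this would run first, returning a 400 response when `error` is non-None before any prompt assembly happens.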

### Response Format

```json
{
  "context_type": "tick_event",
  "messages": [
    {
      "role": "system",
      "content": "...",
      "component": "system_prompt",
      "tokens": 245
    },
    {
      "role": "user",
      "content": "...",
      "component": "pending_event",
      "tokens": 32,
      "is_terminal": true
    }
  ],
  "total_tokens": 1847,
  "model": "gpt-4o-mini"
}
```
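Assembling that envelope from already-rendered messages is mechanical; a minimal sketch, assuming each message dict already carries its per-message `tokens` count:

```python
# Hypothetical sketch: building the preview response envelope.
# Assumes each message dict already includes a "tokens" count.
def assemble_preview_response(context_type: str, messages: list, model: str) -> dict:
    return {
        "context_type": context_type,
        "messages": messages,
        "total_tokens": sum(m["tokens"] for m in messages),
        "model": model,
    }
```

Deriving `total_tokens` from the per-message counts keeps the two fields from drifting apart.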

### Mock Data

Use representative mock data for runtime components:

- `pending_event`: sample game event
- `tool_result`: sample tool response
- `conversation_history`: sample dialogue
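One simple shape for this is a module-level registry keyed by component name; the contents below are invented placeholders, not the project's real mock data:

```python
# Hypothetical sketch: mock runtime inputs for the preview endpoint.
# All sample content here is invented for illustration.
MOCK_RUNTIME_INPUTS = {
    "pending_event": {"role": "user", "content": "A goblin enters the room."},
    "tool_result": {"role": "tool", "content": '{"hp": 12, "status": "ok"}'},
    "conversation_history": [
        {"role": "user", "content": "Hello there."},
        {"role": "assistant", "content": "Well met, traveler."},
    ],
}
```

The preview path would substitute these wherever the real pipeline would consume live runtime state.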

## Implementation Notes

- Leverage existing `build_llm_messages()` logic with mock inputs
- Include terminal components in the preview (they're runtime-determined, but we can show placeholders)
- Token counts should use the actual tokenizer

## Related

- Closes a requirement from the #16 implementation plan
- Enables the frontend "Preview" button in the workbench UI
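The "actual tokenizer" note above could be satisfied with something like the following; `tiktoken` is an assumption (it is the usual choice for OpenAI models such as `gpt-4o-mini`), and the whitespace fallback is only a rough estimate for environments without it:

```python
# Hypothetical sketch: per-message token counting for the preview.
# Uses tiktoken when available; falls back to a rough word count otherwise.
def count_tokens(text: str, model: str = "gpt-4o-mini") -> int:
    try:
        import tiktoken
        enc = tiktoken.encoding_for_model(model)
        return len(enc.encode(text))
    except Exception:
        return max(1, len(text.split()))
```

Keeping the fallback explicit makes the preview degrade gracefully in dev environments while still reporting real counts in production.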
Reference
blightbow/evennia_ai#25