Best Practices
Optimize your LLM's performance with injected context.
1. System Prompt Positioning
Always place the Clawmato context block at the very end of your system prompt. LLMs tend to pay more attention to the most recent instructions (recency bias).
You are a helpful assistant...
[Other Instructions]
--- BEGIN REAL-TIME CONTEXT ---
[Clawmato Data]
--- END REAL-TIME CONTEXT ---
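The layout above can be assembled programmatically. A minimal sketch in Python, assuming `clawmato_data` is the string payload you already fetched from the Clawmato API (the helper name `build_system_prompt` is illustrative, not part of any SDK):

```python
def build_system_prompt(base_instructions: str, clawmato_data: str) -> str:
    """Append the real-time context block AFTER all other instructions,
    so it sits at the end of the system prompt (recency bias)."""
    return (
        f"{base_instructions}\n"
        "--- BEGIN REAL-TIME CONTEXT ---\n"
        f"{clawmato_data}\n"
        "--- END REAL-TIME CONTEXT ---"
    )

prompt = build_system_prompt(
    "You are a helpful assistant...",
    '{"weather": "rainy", "updated_at": "2024-06-01T12:00:00Z"}',
)
```

Keeping the delimiters fixed also makes it easy to strip or refresh the block between turns without touching the rest of the prompt.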
2. Handle Conflicting Data
Real-time data can conflict with the model's training data (e.g., an outdated officeholder versus the current one). Explicitly instruct the model to "Prioritize the provided Real-Time Context over internal knowledge."
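One way to apply this consistently is to append the priority instruction whenever the context block is present. A sketch (the constant and helper names are illustrative):

```python
# Instruction that tells the model to trust injected context over its
# (potentially stale) training data.
PRIORITY_INSTRUCTION = (
    "Prioritize the provided Real-Time Context over internal knowledge. "
    "If they conflict, treat the Real-Time Context as authoritative."
)

def with_conflict_guard(system_prompt: str) -> str:
    """Append the priority instruction only when a context block exists."""
    if "--- BEGIN REAL-TIME CONTEXT ---" in system_prompt:
        return f"{system_prompt}\n\n{PRIORITY_INSTRUCTION}"
    return system_prompt
```

Guarding on the delimiter avoids instructing the model about context it was never given.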
3. Latency Optimization
For chat applications, call the Clawmato API in parallel with your other backend logic rather than sequentially. If you can predict the user's intent, start the fetch before they finish typing.
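The parallel pattern can be sketched with `asyncio.gather`. The two coroutines below are stand-ins (simulated with `asyncio.sleep`); substitute your real Clawmato client call and backend work:

```python
import asyncio

async def fetch_clawmato_context(query: str) -> str:
    # Placeholder for the Clawmato API call; replace with your client.
    await asyncio.sleep(0.05)  # simulated network latency
    return f"context for {query!r}"

async def run_backend_logic(query: str) -> str:
    # Placeholder for your other per-request work (auth, retrieval, etc.).
    await asyncio.sleep(0.05)
    return f"backend result for {query!r}"

async def handle_request(query: str) -> list[str]:
    # Launch both concurrently: total latency approaches the slower of
    # the two calls instead of their sum.
    return await asyncio.gather(
        fetch_clawmato_context(query),
        run_backend_logic(query),
    )

context, backend = asyncio.run(handle_request("weather in Paris"))
```

With sequential awaits the two 50 ms calls would take ~100 ms; gathered, the request completes in roughly the time of the slower call.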