Playground and training
Playground
Experiment safely, understand training states, and fine-tune model behavior before releasing updates.

Playground workflow
Use the Playground as a safe testing area before exposing changes to users. Compare models, adjust prompts, and verify tone without touching the live agent.
Testing checklist
- Set the agent to Private so experiments do not impact production embeds.
- Use built-in or custom prompts to probe tricky scenarios.
- Review AI responses, tweak settings, and save once satisfied.
- Remember: Playground chats still consume message credits.
Training status states
- Training pending – the system is still processing new data.
- Training completed – the agent is refreshed and ready for questions.
- Training failed – something went wrong during processing; resolve the underlying issue and retry.
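The three states above map naturally to a simple wait-and-retry loop. This is an illustrative sketch, not a platform API: `poll_status` stands in for however your platform reports training status, and the state strings are assumptions.

```python
import time

# Hypothetical status values mirroring the dashboard states above.
PENDING, COMPLETED, FAILED = "pending", "completed", "failed"

def wait_for_training(poll_status, interval=0.0, max_polls=20):
    """Poll a status callable until training completes or fails.

    `poll_status` is a stand-in for whatever status check your
    platform exposes; it returns one of the states above.
    """
    for _ in range(max_polls):
        status = poll_status()
        if status == COMPLETED:
            return True          # agent is refreshed and ready
        if status == FAILED:
            return False         # fix the issue, then retry training
        time.sleep(interval)     # still pending; check again
    raise TimeoutError("training did not finish in time")

# Simulated run: two pending polls, then completion.
statuses = iter([PENDING, PENDING, COMPLETED])
print(wait_for_training(lambda: next(statuses)))  # True
```

The key point is that "Training pending" is a transient state worth waiting out, while "Training failed" needs human attention before retrying.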
AI models
Each model balances speed, precision, and cost differently. Choose the one that matches the response quality and latency expectations for your use case.
- Speed-focused models reply quickly with concise answers.
- Accuracy-focused models spend more credits but provide richer detail.
- Budget-friendly models prioritize volume over depth.
Temperature control
Temperature ranges from 0 to 1 and controls how creative responses are. Lower values prioritize deterministic, factual answers; higher values invite more expressive language but increase variance.
- 0.0–0.3: Reliable and safe responses.
- 0.4–0.7: Balanced tone for most assistants.
- 0.8–1.0: Exploratory or marketing-friendly replies.
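The bands above can be expressed as a small helper for validating a temperature setting before saving it. The boundaries follow this doc's guidance; the exact behavior at any value still depends on the underlying model.

```python
def temperature_band(temp: float) -> str:
    """Map a temperature setting to the guidance bands above."""
    if not 0.0 <= temp <= 1.0:
        raise ValueError("temperature must be between 0 and 1")
    if temp <= 0.3:
        return "reliable"     # deterministic, factual answers
    if temp <= 0.7:
        return "balanced"     # good default for most assistants
    return "exploratory"      # expressive, higher-variance replies

print(temperature_band(0.2))  # reliable
```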

System prompt
Define the agent’s persona, voice, and guardrails. The system prompt keeps messaging on brand, limits scope, and makes sure the AI focuses on the right audience.
- Locks tone to match your brand voice.
- Keeps answers consistent across channels.
- Reminds the model which topics to prioritize.
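A system prompt that locks tone, scope, and audience can be assembled from a few brand settings. This is a minimal sketch; the field names (persona, allowed_topics, audience) and the example values are illustrative, not a platform API.

```python
def build_system_prompt(persona, allowed_topics, audience):
    """Compose a system prompt covering tone, scope, and audience."""
    topics = ", ".join(allowed_topics)
    return (
        f"You are {persona}. "
        f"Only answer questions about: {topics}. "
        f"Write for {audience}; politely decline anything off-topic."
    )

# Hypothetical brand settings for an example company.
prompt = build_system_prompt(
    persona="a friendly support agent for Acme",
    allowed_topics=["billing", "onboarding"],
    audience="non-technical customers",
)
print(prompt)
```

Keeping the prompt generated from one place is what makes answers consistent across channels: every channel gets the same persona, scope, and audience instructions.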

AI actions
AI Actions are currently in development and will roll out soon; the placeholder in this section is a reminder that deeper integrations are on the roadmap.
Next up
Operations