Product tour
Detect and Prevent Adversarial Attacks on LLM Applications
Start the self-guided tour to learn how to:
- Detect hallucinations causing changes in an LLM chatbot's behavior
- Discover attempted prompt injection attacks
- Visualize prompt and response clusters in a 3D UMAP plot (see the sketch after this list)
- Perform root cause analysis to identify malicious users and prompts
- Create a custom metric and alert to immediately identify future attacks
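To give a feel for the clustering step, here is a minimal sketch of projecting prompt/response embeddings into 3D with the open-source umap-learn library. The random vectors, the 768-dimension size, and the cosine metric are illustrative assumptions, not the product's actual pipeline; in practice the inputs would be embeddings of real prompts and responses.

```python
# Illustrative sketch only: project placeholder "prompt embeddings" into 3D
# with UMAP, the same kind of reduction used for 3D cluster visualizations.
import numpy as np
import umap  # pip install umap-learn

rng = np.random.default_rng(42)
# Placeholder data standing in for real 768-d prompt/response embeddings.
prompt_embeddings = rng.normal(size=(500, 768))

# Reduce to 3 components so each prompt becomes a point in 3D space.
reducer = umap.UMAP(n_components=3, metric="cosine", random_state=42)
coords_3d = reducer.fit_transform(prompt_embeddings)

print(coords_3d.shape)  # (500, 3): one 3D coordinate per prompt
```

Points that land close together in the 3D projection correspond to semantically similar prompts, which is what makes clusters of suspicious or malicious prompts stand out visually.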