🛡️ Safeguards
Safeguards allow you to block AI responses that contain specific keywords. They are perfect for preventing harmful, unethical, and dangerous responses from Await Cortex, and they also help protect against malicious prompt injections and jailbreak attempts.
Instructions
Fallback and Warning Example
A Fallback makes the AI reply with a canned response when one of your keyword flags is triggered.
A Warning lets the AI generate its response but attaches a disclaimer to its answer (see the sketch below).
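The matching logic lives inside Await Cortex and is configured through its interface, but conceptually a keyword safeguard behaves something like this minimal Python sketch. The keyword lists, messages, and the apply_safeguards function are hypothetical illustrations, not part of the product:

```python
# Illustrative sketch only -- Await Cortex configures safeguards in its UI;
# the keyword lists, messages, and function below are hypothetical.

FALLBACK_KEYWORDS = {"build a weapon", "bypass security"}   # example flags
WARNING_KEYWORDS = {"diagnosis", "dosage", "treatment"}     # example flags

FALLBACK_MESSAGE = "Sorry, I can't help with that request."
WARNING_MESSAGE = "Note: this is general information, not professional advice."


def apply_safeguards(ai_response: str) -> str:
    """Return the response unchanged, a canned fallback, or the response with a disclaimer."""
    lowered = ai_response.lower()

    # Fallback: replace the entire response with a canned message.
    if any(keyword in lowered for keyword in FALLBACK_KEYWORDS):
        return FALLBACK_MESSAGE

    # Warning: keep the generated response but prepend a disclaimer.
    if any(keyword in lowered for keyword in WARNING_KEYWORDS):
        return f"{WARNING_MESSAGE}\n\n{ai_response}"

    return ai_response
```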
Safeguard Example: Medical Disclaimer
What it looks like in the agent:
[Agent screenshots: Fallback, Disclaimer, and Replace responses]
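As a rough sketch of what the medical-disclaimer safeguard above amounts to, it can be thought of as a small configuration object like the one below. The field names and values here are hypothetical; the actual safeguard is set up through the Await Cortex interface:

```python
# Hypothetical configuration for the medical-disclaimer safeguard shown above.
# Field names are illustrative only.
medical_disclaimer_safeguard = {
    "keywords": ["diagnosis", "symptom", "medication", "dosage"],  # example flags
    "action": "warning",  # "fallback" would block the answer entirely
    "message": (
        "This response is for general information only and is not "
        "medical advice. Please consult a licensed healthcare provider."
    ),
}
```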