Safeguards allow you to block AI outputs based on specific keywords in responses. They are perfect for preventing harmful, unethical, and dangerous responses from Await Cortex. They also help protect against malicious prompt injections and jailbreaking.
Instructions
Fallback and Warning Example
A Fallback makes the AI respond with a canned response based on your keyword flags.
A Warning allows the AI to generate a response but adds a disclaimer to its answer.
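The two behaviors can be sketched as a simple post-processing step. This is a minimal illustration of the idea, not the actual Await Cortex API: the keyword lists, messages, and `apply_safeguards` function are all hypothetical names chosen for this example.

```python
# Hypothetical sketch of keyword-flag safeguards (not the real Await Cortex API).
FALLBACK_KEYWORDS = {"exploit", "jailbreak"}   # flags that block the response entirely
WARNING_KEYWORDS = {"medical", "legal"}        # flags that only trigger a disclaimer

FALLBACK_MESSAGE = "Sorry, I can't help with that request."
DISCLAIMER = "Note: this answer is informational only; verify with a professional."

def apply_safeguards(response: str) -> str:
    lowered = response.lower()
    if any(kw in lowered for kw in FALLBACK_KEYWORDS):
        # Fallback: replace the AI output with a canned response.
        return FALLBACK_MESSAGE
    if any(kw in lowered for kw in WARNING_KEYWORDS):
        # Warning: keep the AI output but prepend a disclaimer.
        return f"{DISCLAIMER}\n\n{response}"
    return response
```

A response containing a fallback keyword is replaced outright, while one containing only a warning keyword still reaches the user with the disclaimer attached.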