Guardrails
Category
• AI Ethics
Definition
Guardrails are predefined constraints, rules, or policies applied to AI systems to ensure safe, ethical, and compliant outputs. They can restrict certain topics, enforce accuracy checks, or prevent harmful behaviors.
tl;dr
Predefined rules that limit AI actions to ensure safety, ethics, and compliance.
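One common form of guardrail is a pre-check applied to input or output before it reaches the user. The sketch below is a minimal, hypothetical keyword-based topic filter to illustrate the idea; the `BLOCKED_TOPICS` list and `apply_guardrail` function are illustrative assumptions, and production systems typically rely on trained classifiers or dedicated moderation APIs rather than keyword matching.

```python
# Minimal, hypothetical guardrail: block requests that touch disallowed topics.
# Real guardrail systems use classifiers/moderation services, not keyword lists.
BLOCKED_TOPICS = {"weapons", "self-harm"}  # illustrative policy list

def apply_guardrail(user_input: str) -> str:
    """Return a refusal if the input mentions a blocked topic, else pass it through."""
    lowered = user_input.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that topic."
    return user_input  # allowed: would be forwarded to the model

print(apply_guardrail("Tell me about weapons"))   # refused
print(apply_guardrail("Explain photosynthesis"))  # passes through
```

The same pattern can be applied on the output side, checking a model's response against policy before it is shown to the user.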