
Summary
This detection rule monitors AWS Bedrock model invocations to identify instances where guardrail features have intervened, indicating potential attempts to prompt the models into generating inappropriate or harmful content. When guardrails block an invocation, a notification is generated that includes the nature of the request, the user who attempted it, and the specific reason for the intervention. This information helps security teams assess the risks associated with AI model use and detect manipulative attacks such as prompt injection or attempts at model poisoning. The rule emphasizes closely analyzing user interactions that trigger guardrail intervention in order to prevent misuse of AI technologies. An illustrative sketch of how a guardrail intervention surfaces through the Bedrock runtime API is shown below.
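The following is a minimal sketch, not the rule's own detection logic, showing how a guardrail intervention can be observed with the boto3 Bedrock runtime ApplyGuardrail API. The guardrail identifier, version, region, and prompt text are placeholders you would replace with values from your environment.

```python
# Minimal sketch: observing a Bedrock guardrail intervention with boto3.
# Guardrail identifier/version, region, and prompt text are placeholders.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.apply_guardrail(
    guardrailIdentifier="my-guardrail-id",  # placeholder guardrail ID
    guardrailVersion="1",                   # placeholder guardrail version
    source="INPUT",                         # evaluate the user prompt
    content=[{"text": {"text": "Example user prompt to evaluate"}}],
)

# "GUARDRAIL_INTERVENED" means the content was blocked or filtered; the
# assessments describe which policy triggered, which is the kind of detail
# this rule surfaces from model invocation activity.
if response["action"] == "GUARDRAIL_INTERVENED":
    for assessment in response.get("assessments", []):
        print(assessment)
```

In production, the rule relies on Bedrock's own invocation records rather than calling the API directly; the sketch is only meant to show what an intervention event contains.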
Categories
- Cloud
- Application
- Identity Management
Data Sources
- Cloud Service
- Application Log
ATT&CK Techniques
- T0018.000
Created: 2025-07-15