
Summary
This detection rule identifies unusual modifications to the configuration files of Generative AI (GenAI) tools, which may indicate malicious activity such as the injection of unauthorized server configurations by an adversary. Such modifications can allow attackers to hijack AI agents for persistent access, command and control (C2), or data exfiltration. The rule covers several attack vectors, including malware altering configuration files directly, supply chain attacks through compromised dependencies, and prompt injection attacks that abuse the GenAI tool's own capabilities. Investigation involves analyzing the process responsible for the modification, the context of the change, and the contents of the modified files. False positives can arise from legitimate user changes or GenAI tool updates, so the modification context should be reviewed carefully.
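
As a rough illustration of the detection idea only (not the rule's actual logic, which relies on endpoint file and process telemetry), the sketch below polls a few hypothetical GenAI configuration file locations and reports any content change. The watched paths and polling interval are assumptions for illustration.

```python
"""Minimal sketch: flag modifications to GenAI tool configuration files.

Paths and interval are illustrative assumptions; a production rule would
consume file/process events from an endpoint agent instead of polling.
"""

import hashlib
import time
from pathlib import Path
from typing import Optional

# Hypothetical GenAI tool configuration files to watch (assumed locations).
WATCHED_CONFIGS = [
    Path.home() / ".claude.json",
    Path.home() / ".cursor" / "mcp.json",
    Path.home() / ".codeium" / "windsurf" / "mcp_config.json",
]

POLL_INTERVAL_SECONDS = 5  # Assumption; real agents receive events, not polls.


def fingerprint(path: Path) -> Optional[str]:
    """Return a SHA-256 digest of the file, or None if it does not exist."""
    try:
        return hashlib.sha256(path.read_bytes()).hexdigest()
    except FileNotFoundError:
        return None


def monitor() -> None:
    """Poll the watched configs and report any content change."""
    baseline = {path: fingerprint(path) for path in WATCHED_CONFIGS}
    while True:
        time.sleep(POLL_INTERVAL_SECONDS)
        for path in WATCHED_CONFIGS:
            current = fingerprint(path)
            if current != baseline[path]:
                # During triage you would also capture the modifying process,
                # its parent, and diff the file contents (e.g. new server
                # entries pointing at attacker-controlled infrastructure).
                print(f"[ALERT] GenAI configuration modified: {path}")
                baseline[path] = current


if __name__ == "__main__":
    monitor()
```

In practice, change detection alone is not enough: correlating the modification with the responsible process and reviewing the new configuration entries is what separates an adversary-injected server from a legitimate tool update.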
Categories
- Endpoint
- Cloud
- Application
Data Sources
- File
- Process
ATT&CK Techniques
- T1556
- T1554
Created: 2025-12-04