azure-ai-anomalydetector-java
Audited by Gen Agent Trust Hub on Feb 13, 2026
The skill consists of three Markdown files: SKILL.md, references/acceptance-criteria.md, and references/examples.md. These files primarily provide documentation, code snippets, and best practices for integrating the Azure AI Anomaly Detector SDK into Java applications.
- Prompt Injection: No patterns indicative of prompt injection were found in any of the files. The language is instructional and technical.
- Data Exfiltration: The skill demonstrates retrieving the Azure endpoint and API key from environment variables (System.getenv), which is a secure practice. It also shows how to reference data in Azure Blob Storage (https://storage.blob.core.windows.net/) for model training and inference. This is data ingestion into a trusted Microsoft service, not exfiltration. No sensitive file paths are accessed or exfiltrated.
- Obfuscation: No obfuscation techniques (Base64, zero-width characters, homoglyphs, URL/hex/HTML encoding) were detected in any of the files.
- Unverifiable Dependencies: The skill specifies Maven dependencies for com.azure:azure-ai-anomalydetector and com.azure:azure-identity. These belong to the azure organization, which is listed as a trusted GitHub organization. While these are external dependencies, they come from a trusted source, and the skill itself does not execute any package installation commands; it merely provides instructions for a developer.
- Privilege Escalation: No commands or instructions for privilege escalation (e.g., sudo, chmod 777, service installation) were found.
- Persistence Mechanisms: No patterns for establishing persistence (e.g., modifying shell profiles, creating cron jobs) were detected.
- Metadata Poisoning: The skill's metadata (name, description) is benign and accurately reflects its purpose. No malicious instructions were hidden in metadata fields.
- Indirect Prompt Injection: The skill describes processing external data from Azure Blob Storage. While any system processing external data carries an inherent, general risk of indirect prompt injection if that data were to contain malicious instructions for an LLM, this skill's function is anomaly detection, not LLM processing of the data. Therefore, the risk to the AI agent itself is minimal in this context.
- Time-Delayed / Conditional Attacks: No conditional logic based on dates, times, usage counters, or environment variables that would trigger malicious behavior was found.
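The secure credential practice noted under Data Exfiltration can be sketched as follows. This is a minimal illustration of the System.getenv pattern the audit describes, not code taken from the skill; the variable names and the fail-fast helper are assumptions.

```java
// Sketch of the credential-retrieval pattern the audit describes:
// reading the Anomaly Detector endpoint and key from environment
// variables instead of hard-coding secrets in source. Variable names
// are illustrative, not taken verbatim from the skill.
public class CredentialConfig {

    /** Reads a required setting from the environment, failing fast if absent. */
    static String requireEnv(String name) {
        String value = System.getenv(name);
        if (value == null || value.isBlank()) {
            throw new IllegalStateException(
                    "Missing required environment variable: " + name);
        }
        return value;
    }

    public static void main(String[] args) {
        // In a real application these values would feed the Anomaly Detector
        // client builder; here we only show retrieval and validation.
        try {
            String endpoint = requireEnv("ANOMALY_DETECTOR_ENDPOINT");
            String key = requireEnv("ANOMALY_DETECTOR_KEY");
            System.out.println("Endpoint configured: " + endpoint);
        } catch (IllegalStateException e) {
            System.out.println("Configuration error: " + e.getMessage());
        }
    }
}
```

Failing fast on a missing variable avoids silently constructing a client with a null key, which would otherwise surface later as an opaque authentication error.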
Adversarial Reasoning: The skill is purely informational and instructional for a human developer. It does not contain any executable scripts or commands for the AI agent. The code examples are standard usage of a well-known SDK. There are no hidden elements or suspicious behaviors. The stated purpose (Azure AI Anomaly Detector SDK for Java) perfectly matches the content. The use of trusted Azure SDKs and secure credential practices further reinforces its safety.
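For context, the dependency coordinates named in the findings above would typically be declared in a project's pom.xml along the lines of the fragment below. The group and artifact IDs come from the audit; the version numbers are placeholders, not values taken from the skill.

```xml
<dependencies>
  <!-- Azure AI Anomaly Detector client (coordinates from the audit; version is a placeholder) -->
  <dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-ai-anomalydetector</artifactId>
    <version>REPLACE_WITH_CURRENT_VERSION</version>
  </dependency>
  <!-- Azure Identity library for credential handling -->
  <dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-identity</artifactId>
    <version>REPLACE_WITH_CURRENT_VERSION</version>
  </dependency>
</dependencies>
```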
Conclusion: The skill is safe as it provides documentation and code examples for a trusted SDK and does not contain any executable components for the AI agent.