alternative-agent-frameworks

Fail

Audited by Gen Agent Trust Hub on Feb 16, 2026

Risk Level: HIGH
Tag: EXTERNAL_DOWNLOADS
Full Analysis
  • Metadata Poisoning (HIGH): The skill repeatedly references non-existent models (GPT-5.2, GPT-5.2-Codex) and future-dated release timelines (Microsoft Agent Framework in Q1 2026). This deceptive metadata undermines the skill's integrity and suggests it was generated with malicious intent or extreme hallucination.
  • Unverifiable Dependencies (HIGH): The 'OpenAI Agents SDK' code examples use the import from agents import Agent. This is not an official OpenAI library. An 'agents' package exists on PyPI but is unrelated to OpenAI; directing users to such an import can enable 'AI Package Hallucination' attacks, in which attackers register malicious packages under hallucinated names and users install them.
  • Credential Access (LOW): The Microsoft Agent Framework example correctly uses os.environ["OPENAI_API_KEY"]. While this is standard practice, the inclusion of these patterns alongside fake framework data increases the risk that users might provide credentials to untrusted code environments.
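The two Python-level risks above (hallucinated package names and credential exposure) can both be reduced with a short pre-flight check before running skill code. The allowlist contents and function names below are hypothetical, a minimal sketch rather than a vetted tool:

```python
import os

# Hypothetical allowlist: only packages the team has actually vetted.
TRUSTED_PACKAGES = {"openai", "requests"}

def vet_dependencies(requested):
    """Split package names found in skill code into approved and flagged lists."""
    approved = [name for name in requested if name in TRUSTED_PACKAGES]
    flagged = [name for name in requested if name not in TRUSTED_PACKAGES]
    return approved, flagged

def load_api_key():
    """Read the API key from the environment; refuse to run without it,
    and never pass it into code whose dependencies have not been vetted."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set; refusing to continue")
    return key

approved, flagged = vet_dependencies(["openai", "agents"])
# 'agents' lands in `flagged`: it exists on PyPI but is unrelated to OpenAI.
```

Flagging unknown names before `pip install` is the simplest defense against hallucinated-package attacks; the credential guard ensures the key comes only from the environment, matching the pattern the audit notes as standard practice.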
Recommendations
  • Do not install or run this skill: automated analysis detected serious security threats (metadata poisoning and unverifiable dependencies).
Audit Metadata
Risk Level
HIGH
Analyzed
Feb 16, 2026, 01:27 AM