linkedin-export
Fail
Audited by Gen Agent Trust Hub on Feb 21, 2026
Risk Level: HIGH (EXTERNAL_DOWNLOADS, COMMAND_EXECUTION)
Full Analysis
- [EXTERNAL_DOWNLOADS] (HIGH): The skill instructions and the source code in li_ingest.py recommend installing an external tool via 'go install github.com/dontizi/rlama@latest'. The repository is not on the trusted organizations list, and the floating @latest suffix fetches whatever version is current rather than a pinned release, posing a significant risk of executing malicious or unvetted code during installation.
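One mitigation sketch for this finding: pin the tool to a specific release and verify its checksum before first use. The helper below is illustrative only; the digest shown is a placeholder, not a real rlama checksum.

```python
import hashlib
from pathlib import Path

# Hypothetical known-good digest for a pinned release.
# NOT a real rlama checksum -- obtain one from a trusted release manifest.
EXPECTED_SHA256 = "0" * 64


def binary_is_trusted(path: str, expected_sha256: str) -> bool:
    """Return True only if the file's SHA-256 matches the pinned digest."""
    data = Path(path).read_bytes()
    return hashlib.sha256(data).hexdigest() == expected_sha256
```

Combined with `go install github.com/dontizi/rlama@vX.Y.Z` (a pinned tag instead of @latest), this at least makes the installed artifact reproducible and auditable.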
- [COMMAND_EXECUTION] (MEDIUM): The script li_ingest.py uses 'subprocess.run' to execute external binaries, including 'rlama' and 'ollama'. While the script passes argument lists to prevent shell injection, the reliance on an external binary from an untrusted source remains a security risk.
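The argument-list pattern the finding refers to looks roughly like this; the rlama subcommand names in the comment are illustrative, not taken from li_ingest.py:

```python
import subprocess


def run_tool(argv: list[str]) -> str:
    """Run an external tool with an argument list -- never a shell string.

    Each element of `argv` reaches the child process as one argv entry, so
    attacker-controlled text cannot inject extra shell commands. Note this
    only prevents shell injection; the binary itself (e.g. rlama, ollama)
    must still be trusted separately.
    """
    result = subprocess.run(argv, capture_output=True, text=True, check=True)
    return result.stdout

# Illustrative li_ingest.py-style call (subcommand names assumed):
# answer = run_tool(["rlama", "run", "linkedin-rag", user_question])
```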
- [INDIRECT_PROMPT_INJECTION] (LOW): The skill provides an attack surface by indexing untrusted message content for retrieval by an AI agent.
  1. Ingestion points: LinkedIn message content and profile data parsed from CSV files within a user-provided ZIP.
  2. Boundary markers: none identified in the provided code; content is processed directly into Markdown documents for indexing.
  3. Capability inventory: execution of RAG queries via subprocess calls and manipulation of local files.
  4. Sanitization: no sanitization or filtering of message content is performed before ingestion, which could allow malicious instructions in messages to influence the agent's behavior at runtime.
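A minimal sketch of the missing boundary markers and sanitization (points 2 and 4), assuming the ingest step builds Markdown documents from message text; the marker strings and helper name are hypothetical:

```python
import re

# Hypothetical boundary markers; the agent's system prompt would instruct it
# to treat everything between them as data, never as instructions.
UNTRUSTED_BEGIN = "<<<UNTRUSTED_MESSAGE_CONTENT"
UNTRUSTED_END = "UNTRUSTED_MESSAGE_CONTENT>>>"


def wrap_untrusted(text: str) -> str:
    """Fence untrusted message text before it is written to a Markdown doc."""
    # Strip any attacker-supplied copies of the markers so the fence cannot
    # be closed early from inside the message.
    cleaned = text.replace(UNTRUSTED_BEGIN, "").replace(UNTRUSTED_END, "")
    # Drop control characters (except tab/newline) that could smuggle
    # terminal or formatting tricks into the indexed document.
    cleaned = re.sub(r"[\x00-\x08\x0b-\x1f]", "", cleaned)
    return f"{UNTRUSTED_BEGIN}\n{cleaned}\n{UNTRUSTED_END}"
```

This does not make injected instructions harmless by itself, but it gives the retrieval-time agent an unambiguous data/instruction boundary to enforce.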
Recommendations
- Automated analysis detected serious security threats; review and remediate the findings above before installing or running this skill.