
letta-api-client

Result: Fail

Audited by Socket on Feb 15, 2026

5 alerts found:

Obfuscated File ×2, Security ×3

Obfuscated File — HIGH
examples/python/04_custom_tool_secrets.py

No explicit malicious payload or obfuscated backdoor is present in the analyzed snippet. The main security concerns are: (1) credential exposure — passing a secrets dict or relying on env vars that may be uploaded/stored by Letta will expose API keys to a third party; (2) remote execution/trust boundary — registering functions as tools may upload code or enable remote execution on Letta infrastructure; and (3) data leakage via logs and remote model endpoints (conversation context and tool inputs). Recommendation: never hardcode or upload secrets; prefer runtime secret injection (vaults) or scoped ephemeral credentials; confirm Letta's secrets handling and execution model before registering tools; sanitize and limit data sent to remote services and avoid printing sensitive identifiers. If you cannot trust Letta's backend for secret storage or execution, do not pass secrets or register sensitive tools.

Confidence: 98%
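The recommendation above — resolve credentials at call time rather than embedding them in code or payloads that may be uploaded to a third party — can be sketched as follows. This is an illustrative pattern only: `resolve_secret` and `SECRET_ENV_MAP` are hypothetical names, not part of the audited example or of Letta's API, and it assumes the code runs on a host you control so the environment variables stay local.

```python
import os

# Hypothetical mapping from a logical secret name to a local env var.
# Nothing here is uploaded: the registered tool source would call
# resolve_secret("weather_api") and the value is fetched at run time.
SECRET_ENV_MAP = {"weather_api": "WEATHER_API_KEY"}

def resolve_secret(name):
    """Fetch a secret from the local environment at call time.

    Failing fast on a missing variable is preferable to silently
    sending an empty or hardcoded credential to a remote service.
    """
    env_var = SECRET_ENV_MAP[name]
    value = os.environ.get(env_var)
    if value is None:
        raise RuntimeError(f"secret {name!r} not set (expected ${env_var})")
    return value
```

The same shape works with a vault client in place of `os.environ`; the point is that the secret value never appears in source code, a secrets dict, or any artifact handed to the remote service.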
Security — MEDIUM
examples/python/13_client_side_tools.py

The code provides a mechanism for executing arbitrary shell commands on the local host based on instructions originating from a remote Letta agent, and returns command output back to that remote agent. This creates a high-risk remote-to-local execution and data-exfiltration capability. While not clearly 'malware' in intent (it appears to implement an intended capability: local execution of approved agent tool calls), it is dangerous: a compromised or malicious agent or upstream service could use this to execute destructive commands or steal sensitive files. Recommend not running this script unless you fully trust the remote agent/service, add strict allowlists or explicit human approval prompts, avoid shell=True, and sandbox/limit what can be executed and what outputs are returned.

Confidence: 90%
Severity: 80%

Security — MEDIUM
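The mitigations recommended above — a strict allowlist, explicit human approval, no `shell=True`, and capped output — can be sketched roughly as follows. All names (`ALLOWED_COMMANDS`, `is_allowed`, `run_tool_command`) are illustrative, not part of the audited script:

```python
import shlex
import subprocess

# Illustrative allowlist: only these executables may be requested
# by the remote agent. Everything else is rejected outright.
ALLOWED_COMMANDS = {"ls", "cat", "echo", "uname"}

def is_allowed(argv):
    """Permit only commands whose executable is on the allowlist."""
    return bool(argv) and argv[0] in ALLOWED_COMMANDS

def run_tool_command(command_line, approve=lambda argv: False):
    """Execute an agent-requested command only if allowlisted AND approved.

    `approve` is a human-in-the-loop callback (e.g. a terminal prompt);
    it defaults to denying, so nothing runs without an explicit opt-in.
    """
    argv = shlex.split(command_line)  # argv list, never shell=True
    if not is_allowed(argv):
        return {"status": "rejected", "reason": "command not on allowlist"}
    if not approve(argv):
        return {"status": "rejected", "reason": "operator denied request"}
    result = subprocess.run(argv, capture_output=True, text=True, timeout=10)
    # Cap the output returned to the agent to limit data exfiltration.
    return {"status": "ok", "stdout": result.stdout[:4096]}
```

Running the allowed process in a container or as an unprivileged user, as the finding suggests, would layer on top of this; the sketch only covers the allowlist, approval, and output-capping steps.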
examples/typescript/13_client_side_tools.ts

This module directly executes arbitrary shell commands provided by remote agent messages and returns the results to the remote service. That design enables remote code execution and data exfiltration with the privileges of the running process. In practical terms this is high risk: do not run this code on sensitive or production hosts. If local command execution is required, implement strict mitigations: explicit human-in-the-loop approval, a strict allowlist of safe commands and arguments, sandboxed execution (container or unprivileged user), output redaction, and restricting network/external access. Treat this code as a severe security hazard until rearchitected with those protections.

Confidence: 80%
Severity: 90%

Security — MEDIUM
client-side-tools.md

The code is an explicit example of running arbitrary local commands on user machines in response to agent tool calls. It is not hidden malware, but it enables high-risk behavior: remote-controlled execution and local data exfiltration, gated only by user approval. With careless or automated approval, a malicious agent could execute destructive or data-stealing commands. Use only with strict validation, allowlists, least privilege, and a secure UX for approvals.

Confidence: 90%
Severity: 80%

Obfuscated File — HIGH
client-injection.md

The snippet is benign documentation showing powerful cloud-injected capabilities. There is no direct evidence of embedded malware or obfuscation. However, the documented APIs create straightforward exfiltration and persistence vectors (reading secrets via os.getenv, persisting them to memory, sending them to external HTTP endpoints, or forwarding to other agents). Treat any third-party tool that runs with the injected client and environment access as high-risk: apply least privilege, enforce auditing, and review tool code before deployment.

Confidence: 98%
Audit Metadata
Analyzed At: Feb 15, 2026, 09:15 PM
Package URL: pkg:socket/skills-sh/letta-ai%2Fskills%2Fletta-api-client%2F@cc4b6f6c16c49c84ba0ab7b25d092699f59652a4