# gh-aw-safe-outputs

GitHub Agentic Workflows - Safe Outputs Skill
## Purpose
Master the safe outputs pattern in GitHub Agentic Workflows - the foundational security mechanism that enables AI agents to perform write operations safely through explicit, human-approved outputs. This skill provides comprehensive expertise in designing, implementing, and operating safe output patterns for controlled AI automation.
## Core Concept

### What Are Safe Outputs?
Safe outputs are the only way AI agents can perform write operations (create/update files, issues, PRs) in GitHub Agentic Workflows. Unlike direct tool access, safe outputs require explicit approval and sanitization before execution.
**Key Principles:**

- **Write Isolation**: All write operations go through safe outputs
- **Explicit Approval**: Outputs must be explicitly declared in the workflow
- **Automatic Sanitization**: All outputs are sanitized before execution
- **Auditable**: All actions are logged and traceable
- **No Direct Writes**: The AI cannot write files, issues, or PRs directly
**Security Model:**

```text
┌──────────────┐      ┌───────────────┐      ┌──────────────────┐
│              │      │               │      │                  │
│   AI Agent   │─────▶│ Safe Outputs  │─────▶│  Sanitization    │
│ (Read-Only)  │      │   (Declare)   │      │   & Execution    │
│              │      │               │      │                  │
└──────────────┘      └───────────────┘      └──────────────────┘
                              │
                              ▼
                      ┌───────────────┐
                      │   Workflow    │
                      │ Configuration │
                      │  (Allowlist)  │
                      └───────────────┘
```
## Safe Output Types
### 1. `safeoutputs___issue`

Create or update GitHub issues with AI-generated content.

**Configuration:**

```yaml
tools:
  safeoutputs___issue:
    # No additional config required
```

**Usage Pattern:**

```markdown
# Workflow markdown body
Analyze this codebase and create issues for improvement opportunities.
For each issue, use safeoutputs___issue:
- title: Brief description
- body: Detailed explanation with code examples
- labels: ["enhancement", "ai-generated"]
```

**AI Output Format:**

```json
{
  "tool": "safeoutputs___issue",
  "title": "Improve error handling in authentication module",
  "body": "## Problem\n\nThe authentication module lacks comprehensive error handling...\n\n## Proposed Solution\n\n1. Add try-catch blocks\n2. Implement error logging\n3. Return user-friendly messages",
  "labels": ["enhancement", "security"]
}
```

**Sanitization Applied:**

- ✅ Title limited to 256 characters
- ✅ Body sanitized for XSS (HTML tags stripped)
- ✅ Labels validated against repository labels
- ✅ Malicious URLs removed
- ✅ Code injection patterns blocked

**Execution:**

- Creates a new issue if none exists
- Updates an existing issue if specified
- Returns the issue URL
- Logs all actions
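The title, body, and label rules listed above can be sketched as a small validation function. This is an illustrative sketch, not gh-aw's actual sanitizer: the `sanitizeIssueOutput` name, the `IssueOutput` shape, and the regex-based tag stripping are assumptions for demonstration only.

```typescript
// Hypothetical sketch of the documented issue-output rules.
interface IssueOutput {
  tool: string;
  title: string;
  body: string;
  labels: string[];
}

const TITLE_LIMIT = 256; // documented title cap

function sanitizeIssueOutput(out: IssueOutput, repoLabels: string[]): IssueOutput {
  return {
    tool: out.tool,
    // Title limited to 256 characters
    title: out.title.slice(0, TITLE_LIMIT),
    // Crude XSS guard: strip anything that looks like an HTML tag
    body: out.body.replace(/<[^>]*>/g, ""),
    // Keep only labels that exist in the repository
    labels: out.labels.filter((l) => repoLabels.includes(l)),
  };
}
```

A real sanitizer would also handle URL filtering and injection patterns; this sketch covers only the three mechanical rules.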
### 2. `safeoutputs___pull_request`

Create pull requests with AI-generated code changes.

**Configuration:**

```yaml
tools:
  safeoutputs___pull_request:
    base_branch: main   # Optional: default base branch
    auto_merge: false   # Optional: enable auto-merge
```

**Usage Pattern:**

```markdown
Review this codebase and propose improvements.
Create a pull request using safeoutputs___pull_request:
- branch: "ai/improve-error-handling"
- title: "Improve error handling"
- body: Description of changes
- files: List of file changes
```

**AI Output Format:**

```json
{
  "tool": "safeoutputs___pull_request",
  "branch": "ai/improve-error-handling",
  "title": "Improve error handling in authentication module",
  "body": "## Changes\n\n- Added try-catch blocks\n- Implemented error logging\n- Added tests",
  "files": [
    {
      "path": "src/auth.ts",
      "content": "// New content...",
      "encoding": "utf-8"
    }
  ]
}
```

**Sanitization Applied:**

- ✅ Branch name validated (alphanumeric, hyphens, slashes only)
- ✅ File paths validated (no directory traversal)
- ✅ File content scanned for secrets
- ✅ File size limits enforced (max 1 MB per file)
- ✅ Binary files rejected
- ✅ Suspicious patterns blocked

**Execution:**

- Creates a new branch from the base
- Commits files with an AI signature
- Creates the pull request
- Returns the PR URL
- Triggers CI/CD
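The branch-name rule above (alphanumeric characters, hyphens, and slashes only) can be expressed as a single pattern check. A hedged sketch: `isValidBranch` and its exact regex are illustrative, not the actual gh-aw validator.

```typescript
// Illustrative branch-name check per the documented rule:
// first character alphanumeric, then alphanumerics, hyphens, or slashes.
function isValidBranch(name: string): boolean {
  return /^[A-Za-z0-9][A-Za-z0-9\/-]*$/.test(name);
}
```

Because `.` is excluded from the allowed characters, traversal-style names such as `../../main` are rejected as a side effect.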
### 3. `safeoutputs___comment`

Add comments to issues or pull requests.

**Configuration:**

```yaml
tools:
  safeoutputs___comment:
    # No additional config required
```

**Usage Pattern:**

```markdown
Provide helpful feedback on this pull request.
Add a comment using safeoutputs___comment:
- issue_number: 123  # or pr_number
- body: Your feedback
```

**AI Output Format:**

```json
{
  "tool": "safeoutputs___comment",
  "issue_number": 123,
  "body": "## Code Review\n\nI noticed a few areas for improvement:\n\n1. **Error Handling**: Consider adding...\n2. **Performance**: The loop in line 45 could be optimized..."
}
```

**Sanitization Applied:**

- ✅ Body sanitized for XSS
- ✅ Malicious URLs removed
- ✅ Code injection blocked
- ✅ Length limits enforced
### 4. `safeoutputs___file`

Create or update files in the repository.

**Configuration:**

```yaml
tools:
  safeoutputs___file:
    allowed_paths:
      - "docs/**"
      - "*.md"
      - "src/**/*.ts"
    max_file_size: 1048576  # 1 MB
```

**Usage Pattern:**

```markdown
Update documentation to reflect recent changes.
Create/update files using safeoutputs___file:
- path: docs/api.md
- content: Updated documentation
- commit_message: Update API documentation
```

**AI Output Format:**

```json
{
  "tool": "safeoutputs___file",
  "path": "docs/api.md",
  "content": "# API Documentation\n\n## Authentication\n\n...",
  "commit_message": "Update API documentation with new endpoints",
  "encoding": "utf-8"
}
```

**Sanitization Applied:**

- ✅ Path validated against allowed_paths
- ✅ No directory traversal (../)
- ✅ File size limits enforced
- ✅ Secret scanning
- ✅ Binary file detection
- ✅ Malicious content removed
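The path checks above - traversal rejection followed by an `allowed_paths` glob match - can be sketched as follows. The glob-to-regex translation is deliberately simplified (only `**` and `*` are supported), and both function names are hypothetical, not gh-aw API.

```typescript
// Translate a simplified glob into an anchored RegExp:
// `*` matches within one path segment, `**` matches across segments.
function globToRegExp(glob: string): RegExp {
  const re = glob
    .replace(/[.+^${}()|[\]\\]/g, "\\$&") // escape regex metacharacters
    .replace(/\*\*/g, "\u0000")           // placeholder so * and ** don't collide
    .replace(/\*/g, "[^/]*")              // single-segment wildcard
    .replace(/\u0000/g, ".*");            // multi-segment wildcard
  return new RegExp(`^${re}$`);
}

function isPathAllowed(path: string, allowed: string[]): boolean {
  // Reject traversal and absolute paths outright
  if (path.includes("..") || path.startsWith("/")) return false;
  // Then require a match against at least one allowed pattern
  return allowed.some((g) => globToRegExp(g).test(path));
}
```

Note the ordering: traversal is rejected before glob matching, so even `allowed_paths: ["**"]` cannot be escaped with `../`.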
### 5. `safeoutputs___label`

Add or remove labels from issues or pull requests.

**Configuration:**

```yaml
tools:
  safeoutputs___label:
    # No additional config required
```

**Usage Pattern:**

```markdown
Triage this issue and apply appropriate labels.
Use safeoutputs___label:
- issue_number: 123
- add: ["bug", "high-priority"]
- remove: ["needs-triage"]
```

**AI Output Format:**

```json
{
  "tool": "safeoutputs___label",
  "issue_number": 123,
  "add": ["bug", "high-priority"],
  "remove": ["needs-triage"]
}
```

**Sanitization Applied:**

- ✅ Labels validated against repository labels
- ✅ Label names sanitized
- ✅ Protected labels blocked (e.g., "security-approved")
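The label rules above reduce to set filtering: keep only labels that exist in the repository and are not on the protected list. A minimal sketch with an illustrative function name:

```typescript
// Hypothetical label filter per the documented rules: a requested label
// survives only if it exists in the repository and is not protected.
function filterLabels(
  requested: string[],
  repoLabels: string[],
  protectedLabels: string[]
): string[] {
  return requested.filter(
    (l) => repoLabels.includes(l) && !protectedLabels.includes(l)
  );
}
```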
### 6. `safeoutputs___noop`

No operation - the AI provides information without taking action.

**Configuration:**

```yaml
tools:
  safeoutputs___noop:
    # Always available, no config
```

**Usage Pattern:**

```markdown
Analyze this issue and provide recommendations without taking action.
Use safeoutputs___noop to report findings.
```

**AI Output Format:**

```json
{
  "tool": "safeoutputs___noop",
  "message": "## Analysis\n\nThis issue appears to be a duplicate of #456.\n\n## Recommendation\n\nClose this issue and direct the reporter to #456."
}
```

**Use Cases:**

- ✅ Read-only analysis
- ✅ Recommendations without action
- ✅ Dry-run scenarios
- ✅ Human-in-the-loop workflows
## Security Architecture

### Defense-in-Depth Layers

**Layer 1: Compile-Time Validation**

```yaml
# Workflow declares allowed tools
tools:
  safeoutputs___issue:
  safeoutputs___file:
    allowed_paths: ["docs/**"]
```

- Tool allowlist validated at compile time
- Configuration schema validated
- Path patterns validated

**Layer 2: Runtime Isolation**

AI Agent Environment:

- Read-only filesystem
- No network access (except MCP)
- Limited memory/CPU
- Sandboxed container
**Layer 3: Output Sanitization**

```typescript
// Sanitization pipeline (simplified)
function sanitize(output: SafeOutput): SanitizedOutput {
  // 1. Schema validation
  validateSchema(output);
  // 2. Content sanitization
  output.body = sanitizeHTML(output.body);
  output.body = sanitizeURLs(output.body);
  // 3. Secret scanning (rejects the output on a hit)
  detectSecrets(output);
  // 4. Path validation
  validatePaths(output.files);
  // 5. Size limits
  enforceSize(output);
  // Every check passed: the output may now be executed
  return output as SanitizedOutput;
}
```
**Layer 4: Execution Control**

Approved outputs only:

- Tool in workflow allowlist? ✅
- Path in allowed_paths? ✅
- Size under limit? ✅
- No secrets detected? ✅
- Sanitization passed? ✅

→ Execute action
→ Log to audit trail
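Taken together, the checklist above amounts to a boolean gate evaluated before any action runs. A minimal sketch, assuming earlier layers have already computed the size and secret-scan results; `mayExecute` and the `PendingOutput` shape are illustrative, not gh-aw internals.

```typescript
// Hypothetical pre-execution gate combining the documented checks.
interface PendingOutput {
  tool: string;       // e.g. "safeoutputs___comment"
  sizeBytes: number;  // total size of the output
  hasSecrets: boolean; // result of the secret scan
}

function mayExecute(
  out: PendingOutput,
  allowlist: string[],
  maxSize: number
): boolean {
  return (
    allowlist.includes(out.tool) && // tool declared in the workflow
    out.sizeBytes <= maxSize &&     // under the size limit
    !out.hasSecrets                 // secret scan came back clean
  );
}
```

Any single failed check blocks execution, which is what makes the layers cumulative rather than alternative.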
### Threat Model

**Threats Mitigated:**
| Threat | Mitigation | Layer |
|---|---|---|
| Path Traversal | Path validation, allowlist | Layer 3 |
| Secret Leakage | Secret scanning | Layer 3 |
| XSS Injection | HTML sanitization | Layer 3 |
| Code Injection | Pattern blocking | Layer 3 |
| Unauthorized Files | allowed_paths | Layer 1 |
| Excessive Size | Size limits | Layer 3 |
| Binary Uploads | Binary detection | Layer 3 |
| Protected Labels | Label validation | Layer 3 |
| Branch Hijacking | Branch validation | Layer 3 |
## Configuration Patterns

### Minimal (High Security)

```yaml
on: issues
permissions: read-all
tools:
  github:
    toolsets: [issues]
  safeoutputs___comment:  # Comment only
```

**Use Case:** Issue triage, read-only analysis

### Standard (Balanced)

```yaml
on: issues
permissions: read-all
tools:
  github:
    toolsets: [issues, repos]
  safeoutputs___issue:
  safeoutputs___comment:
  safeoutputs___label:
  safeoutputs___file:
    allowed_paths:
      - "docs/**/*.md"
```

**Use Case:** Documentation updates, issue management

### Advanced (Controlled Automation)

```yaml
on: pull_request
permissions: read-all
tools:
  github:
    toolsets: [issues, repos, pull_requests]
  bash:
    allowed-commands: [npm, git]
  safeoutputs___pull_request:
    base_branch: main
  safeoutputs___comment:
  safeoutputs___file:
    allowed_paths:
      - "src/**/*.ts"
      - "tests/**/*.test.ts"
      - "docs/**"
    max_file_size: 1048576
```

**Use Case:** Code generation, automated PRs
## Best Practices

### 1. Principle of Least Privilege

```yaml
# ❌ DON'T: Grant all tools
tools:
  safeoutputs___*:

# ✅ DO: Grant only the tools you need
tools:
  safeoutputs___issue:
  safeoutputs___comment:
```

### 2. Restrict File Paths

```yaml
# ❌ DON'T: Allow all paths
tools:
  safeoutputs___file:
    allowed_paths: ["**"]

# ✅ DO: Allowlist specific paths
tools:
  safeoutputs___file:
    allowed_paths:
      - "docs/**/*.md"
      - "README.md"
```

### 3. Use Human-in-the-Loop for Critical Operations

```yaml
# For critical operations, use noop + manual approval
tools:
  safeoutputs___noop:  # AI provides a recommendation
# A human reviews the recommendation
# A human manually executes it if appropriate
```

### 4. Audit Trail

Audit logging is always enabled automatically. Review the logs regularly:

- /tmp/gh-aw/audit.log
- GitHub Actions logs

### 5. Test in Sandbox First

```yaml
# Test the workflow with noop only
tools:
  safeoutputs___noop:
# Review the AI outputs, then enable actual tools gradually
```
## Workflow Examples

### Example 1: Issue Triage

```markdown
---
on: issues
tools:
  github:
    toolsets: [issues]
  safeoutputs___label:
  safeoutputs___comment:
---

Analyze this issue and provide triage:

1. Determine whether it is a bug, feature request, or question
2. Apply appropriate labels using safeoutputs___label
3. Add a helpful comment using safeoutputs___comment
4. Suggest an assignee if applicable
```

### Example 2: Documentation Updates

```markdown
---
on: push
tools:
  github:
    toolsets: [repos]
  bash:
    allowed-commands: [git]
  safeoutputs___file:
    allowed_paths: ["docs/**"]
---

Review recent code changes and update the documentation:

1. Identify changed files
2. Review the related documentation
3. Update the docs using safeoutputs___file
4. Ensure examples are current
```

### Example 3: Code Review

```markdown
---
on: pull_request
tools:
  github:
    toolsets: [pull_requests]
  safeoutputs___comment:
---

Review this pull request and provide feedback:

1. Check code quality
2. Identify potential issues
3. Add a review comment using safeoutputs___comment
4. Suggest improvements
```
## Common Pitfalls

### Pitfall 1: Overly Permissive Paths

```yaml
# ❌ BAD
allowed_paths: ["**"]  # Allows all files

# ✅ GOOD
allowed_paths: ["docs/**/*.md"]  # Specific patterns
```

### Pitfall 2: Assuming Direct Write Access

```markdown
# ❌ BAD: Trying to write files directly
Write to src/config.ts

# ✅ GOOD: Using safe outputs
Use safeoutputs___file to update src/config.ts
```

### Pitfall 3: Not Testing Sanitization

Test with malicious inputs:

- Path traversal: `../../../etc/passwd`
- XSS: `<script>alert('xss')</script>`
- Secrets: `API_KEY=abc123def456`
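A probe-style check for the secret case can be sketched with a few regular expressions. These patterns are assumptions for illustration only - real secret scanners (including whatever gh-aw uses) rely on far larger rule sets.

```typescript
// Illustrative secret detector; the pattern list is a tiny sample.
const SECRET_PATTERNS: RegExp[] = [
  /API_KEY\s*=\s*\S+/,                      // env-style API key assignment
  /-----BEGIN [A-Z ]*PRIVATE KEY-----/,     // PEM private key header
  /ghp_[A-Za-z0-9]{36}/,                    // GitHub personal access token shape
];

function containsSecret(text: string): boolean {
  return SECRET_PATTERNS.some((p) => p.test(text));
}
```

Running probes like these against a sanitizer before enabling write tools gives early warning that a rule is missing.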
### Pitfall 4: Ignoring Size Limits

```yaml
# Configure limits appropriate to the content type
max_file_size: 1048576   # 1 MB for code
max_file_size: 10485760  # 10 MB for docs
```
## Monitoring & Observability

### Log Analysis

```shell
# Review safe outputs logs
tail -f /tmp/gh-aw/safe-outputs.log

# Filter by tool
grep "safeoutputs___file" /tmp/gh-aw/safe-outputs.log

# Count usage
grep -c "safeoutputs___" /tmp/gh-aw/safe-outputs.log
```
### Metrics to Track

- Total safe outputs executed
- Outputs by type
- Sanitization blocks
- Execution failures
- Average output size
- Most used tools
### Security Alerts

```yaml
# Alert on suspicious patterns
alerts:
  - pattern: "Path traversal blocked"
    severity: high
  - pattern: "Secret detected"
    severity: critical
  - pattern: "XSS blocked"
    severity: high
```
## Related Skills
- gh-aw-security-architecture - Overall security model
- gh-aw-mcp-gateway - MCP protocol integration
- gh-aw-workflow-authoring - Workflow creation
- gh-aw-logging-monitoring - Observability
## References
- GitHub Agentic Workflows Documentation
- Safe Outputs Specification
- Security Architecture
- Guardrails Overview
## Remember

- ✅ Safe outputs are the ONLY way AI writes
- ✅ All outputs are sanitized automatically
- ✅ Use least privilege (minimal tools)
- ✅ Restrict paths with allowed_paths
- ✅ Test with malicious inputs
- ✅ Monitor audit logs regularly
- ✅ Use noop for recommendations
- ✅ Require human approval for critical operations
- ✅ Size limits prevent resource exhaustion
- ✅ Secret scanning prevents leaks
Version: 1.0.0
Last Updated: 2026-02-16
Maintained by: Hack23 AB