# Install Script Generator
Generate robust, cross-platform installation scripts with automatic environment detection, verification, and documentation.
## Repo Sync Before Edits (mandatory)

Before generating any output files, sync with the remote to avoid conflicts:

```bash
branch="$(git rev-parse --abbrev-ref HEAD)"
git fetch origin
git pull --rebase origin "$branch"
```

If the working tree is dirty, stash first, sync, then pop. If origin is missing or conflicts occur, stop and ask the user before continuing.
## Workflow

### Phase 1: Environment Exploration

Gather comprehensive system information:

```bash
# Run the environment explorer script
python3 {SKILL_DIR}/scripts/env_explorer.py
```
Use sub-agents for parallel discovery. Launch multiple Agent tool calls concurrently to keep the main context clean:
- **Agent 1 — System detection:** Run `env_explorer.py` and parse the JSON output. Detect OS, version, CPU architecture, and user permissions (admin/sudo availability). Return a structured summary.
- **Agent 2 — Package manager inventory:** Identify all available package managers (apt, yum, brew, choco, winget) and their versions. Check the shell environment (bash, zsh, powershell, cmd). Return a capability list.
- **Agent 3 — Existing dependencies:** Scan for already-installed dependencies and their versions relevant to the target software. Return a dependency status report.
Collect the results from all three agents before proceeding.
The script detects:
- Operating system (Windows/Linux/macOS) and version
- CPU architecture (x86_64, ARM64, etc.)
- Package managers available (apt, yum, brew, choco, winget)
- Shell environment (bash, zsh, powershell, cmd)
- Existing dependencies and versions
- User permissions (admin/sudo availability)
Output: JSON summary of system capabilities and constraints.
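The actual `env_explorer.py` is not reproduced here, but the kind of detection it performs can be sketched with only the standard library. The field names below are illustrative, not the script's real schema:

```python
import json
import platform
import shutil

def detect_environment():
    """Minimal sketch of the kind of detection env_explorer.py performs."""
    # Probe PATH for each supported package manager.
    managers = [m for m in ("apt", "yum", "brew", "choco", "winget")
                if shutil.which(m)]
    return {
        "os": platform.system(),           # "Linux", "Darwin", or "Windows"
        "os_version": platform.release(),
        "arch": platform.machine(),        # e.g. "x86_64", "arm64"
        "package_managers": managers,
        "has_sudo": shutil.which("sudo") is not None,
    }

print(json.dumps(detect_environment(), indent=2))
```

`shutil.which` only proves a manager is on `PATH`; the real explorer also records versions and shell details.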
### Phase 2: Installation Planning
Based on the environment analysis and target software:
- **Identify dependencies** - List all required packages/libraries
- **Check existing installations** - Avoid reinstalling what exists
- **Order operations** - Resolve dependency graph
- **Add verification steps** - Each step must be verifiable
- **Plan rollback** - Define cleanup on failure
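Resolving the dependency graph amounts to a topological sort: every step runs only after the steps it depends on. A sketch using Python's standard `graphlib` (the step names are hypothetical):

```python
from graphlib import TopologicalSorter

# Hypothetical step graph: each step maps to the steps it depends on.
deps = {
    "configure system": {"install node"},
    "install node": {"install curl"},
    "install curl": set(),
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # dependencies come before their dependents
```

`TopologicalSorter` also raises `CycleError` on circular dependencies, which is exactly the failure a planner should surface before execution.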
Create the plan using:

```bash
python3 {SKILL_DIR}/scripts/plan_generator.py --target "<software_name>" --env-file env_info.json
```

Plan structure:

```yaml
target: "<software_name>"
platform: "detected_os"
steps:
  - name: "Install dependency X"
    command: "..."
    verify: "command to verify success"
    rollback: "cleanup command if failed"
  - name: "Configure system"
    command: "..."
    verify: "..."
```
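In this structure, `rollback` is the only optional field. A plan loader could enforce that invariant up front; the real checks inside `plan_generator.py` are not shown, so this is a hypothetical sketch:

```python
def validate_plan(plan):
    """Reject plans whose steps lack the fields the executor needs (sketch)."""
    problems = []
    for i, step in enumerate(plan.get("steps", [])):
        for key in ("name", "command", "verify"):  # rollback stays optional
            if key not in step:
                problems.append(f"step {i} is missing {key!r}")
    return problems

plan = {"target": "node", "steps": [{"name": "Install dependency X"}]}
print(validate_plan(plan))  # ["step 0 is missing 'command'", "step 0 is missing 'verify'"]
```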
### Phase 3: Execution

Execute the plan with real-time verification:

```bash
python3 {SKILL_DIR}/scripts/executor.py --plan installation_plan.yaml
```
Execution behavior:
- Run each step sequentially
- Verify success after each step
- On failure: execute rollback, report error, stop
- Log all output for debugging
- Generate installation report
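`executor.py` itself is not shown; the behavior above can be sketched as follows, assuming steps shaped like the plan structure (using `shell=True` for brevity, and omitting the logging and reporting a real executor would do):

```python
import subprocess

def run_step(cmd):
    """Run a shell command; True on exit code 0."""
    return subprocess.run(cmd, shell=True).returncode == 0

def execute_plan(steps):
    """Sequential execution with per-step verification and rollback (sketch)."""
    completed = []
    for step in steps:
        if run_step(step["command"]) and run_step(step["verify"]):
            completed.append(step)
            continue
        # Failure: roll back this step and everything completed before it, then stop.
        for prev in reversed(completed + [step]):
            if prev.get("rollback"):
                run_step(prev["rollback"])
        return False
    return True
```

Rolling back in reverse order undoes later steps before the steps they depended on.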
### Phase 4: Documentation Generation

After successful installation, generate usage documentation:

```bash
python3 {SKILL_DIR}/scripts/doc_generator.py --target "<software_name>" --plan installation_plan.yaml
```
Use sub-agents for parallel documentation. The documentation sections are independent of each other. Dispatch them concurrently using the Agent tool, then collect results:
- **Agent A — Installation report:** Generate `install_report.md` with the execution log, step-by-step status, and any warnings or errors encountered during installation.
- **Agent B — Usage guide:** Generate `USAGE_GUIDE.md` with a quick start guide, common commands/usage examples, and troubleshooting tips based on the installed software.
- **Agent C — Uninstall & maintenance:** Generate the uninstallation instructions and maintenance notes (upgrade paths, configuration locations, log file paths).
Each agent should return the path(s) of files it created or updated.
Output includes:
- Installation summary (what was installed, where)
- Quick start guide
- Common commands/usage examples
- Troubleshooting tips
- Uninstallation instructions
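`doc_generator.py`'s internals are not shown; assembling these sections into `USAGE_GUIDE.md` might be as simple as the sketch below (the section text is invented for illustration, for a Node.js target):

```python
from pathlib import Path

# Hypothetical section content for a Node.js install.
sections = {
    "Quick start": "node --version   # confirm the installation",
    "Common commands": "npm install <package>",
    "Troubleshooting": "Check install_report.md for warnings.",
    "Uninstallation": "Remove the packages listed in the installation summary.",
}

guide = "# Usage Guide\n\n" + "\n\n".join(
    f"## {title}\n\n{body}" for title, body in sections.items()
)
Path("USAGE_GUIDE.md").write_text(guide, encoding="utf-8")
print(guide.splitlines()[0])  # "# Usage Guide"
```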
## Output Files

The skill generates these files in the current directory:

| File | Description |
|---|---|
| `env_info.json` | System environment analysis |
| `installation_plan.yaml` | Detailed installation steps |
| `install_report.md` | Execution log and status |
| `USAGE_GUIDE.md` | User documentation |
## Platform-Specific Notes

### Windows

- Prefer `winget` over `choco` when available
- Use PowerShell for script execution
- Handle UAC elevation requirements
### Linux
- Detect distro family (Debian/RedHat/Arch)
- Use appropriate package manager
- Handle sudo requirements gracefully
### macOS
- Use Homebrew as primary package manager
- Handle Apple Silicon vs Intel differences
- Respect Gatekeeper and notarization
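The platform preferences above reduce to a small lookup. A sketch, using OS names as `platform.system()` reports them (the Linux fallbacks assume `apt`, `yum`, and `pacman` for the Debian, RedHat, and Arch families):

```python
def pick_manager(os_name, available):
    """Choose a package manager per the platform preferences above (sketch)."""
    prefs = {
        "Windows": ["winget", "choco"],     # prefer winget over choco
        "Darwin": ["brew"],                 # Homebrew on macOS
        "Linux": ["apt", "yum", "pacman"],  # Debian / RedHat / Arch families
    }
    for manager in prefs.get(os_name, []):
        if manager in available:
            return manager
    return None  # nothing usable detected; plan must install one or fail early

print(pick_manager("Windows", ["choco", "winget"]))  # winget
```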
## Example Usage

**User request:** "Create an installation script for Node.js"

- Run `env_explorer.py` to detect the system
- Generate a plan with Node.js as the target
- Execute the plan (installs Node.js + npm)
- Generate `USAGE_GUIDE.md` with npm commands
## Error Handling
- All scripts exit with non-zero codes on failure
- Verification failures trigger rollback
- Detailed error messages include remediation hints
- Partial installations are cleaned up automatically
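The error-reporting contract (non-zero exit code plus a remediation hint) could take a shape like this; the class and message format are hypothetical, not the scripts' actual API:

```python
import sys

class StepFailure(Exception):
    """Hypothetical error type carrying a remediation hint."""
    def __init__(self, message, hint):
        super().__init__(message)
        self.hint = hint

def report_and_exit(err):
    """Print the error with its hint, then exit non-zero so callers can detect failure."""
    print(f"error: {err}", file=sys.stderr)
    print(f"hint: {err.hint}", file=sys.stderr)
    sys.exit(1)
```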