dotnet-ci-benchmarking
Pass
Audited by Gen Agent Trust Hub on Mar 7, 2026
Risk Level: SAFE
Findings: PROMPT_INJECTION, COMMAND_EXECUTION
Full Analysis
- [PROMPT_INJECTION]: The skill describes an automated workflow that processes benchmark results and generates a Markdown report for GitHub Pull Request comments, creating a surface for indirect prompt injection.
  - Ingestion points: The `scripts/compare-benchmarks.py` script (provided as a template) reads and parses JSON files exported by BenchmarkDotNet from directories such as `./baseline-results` and `${{ env.RESULTS_DIR }}`.
  - Boundary markers: There are no boundary markers or instructions to ignore potential commands within the data when generating the `benchmark-comparison.md` report.
  - Capability inventory: The workflow uses the `actions/github-script@v7` action to read the generated Markdown and execute `github.rest.issues.createComment`, posting the content directly to the GitHub PR.
  - Sanitization: Benchmark names (`r['name']`) and statistics are interpolated directly into the Markdown table. If an attacker can control benchmark names (e.g., through a malicious pull request), they could inject arbitrary Markdown, links, or deceptive text into the PR comment stream.
- [COMMAND_EXECUTION]: The skill provides templates for executing system-level commands within a CI environment.
  - The GitHub Actions workflows explicitly execute `dotnet run` for benchmarking and `python3` for result comparison.
  - These commands are well-documented and consistent with the primary purpose of the skill.
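The sanitization gap noted above could be narrowed by escaping Markdown control characters in attacker-influenced fields before interpolating them into the report table. A minimal sketch (the `escape_md` helper is hypothetical, not part of the audited skill):

```python
import re

def escape_md(text: str) -> str:
    """Escape Markdown control characters so untrusted benchmark
    names render as literal text rather than links or markup."""
    # Backslash-escape characters with Markdown meaning; this also
    # neutralizes pipes, which would otherwise break table cells.
    return re.sub(r'([\\`*_{}\[\]()#+\-.!|<>])', r'\\\1', text)

# Example: an attacker-controlled benchmark name containing a link
name = "Sort [click me](https://evil.example)"
row = f"| {escape_md(name)} | 1.23 ms |"
```

This does not prevent deceptive plain text in the comment, but it stops injected links, images, and table-structure breakage.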
Audit Metadata