dotnet-ci-benchmarking

Pass

Audited by Gen Agent Trust Hub on Mar 7, 2026

Risk Level: SAFE
Findings: PROMPT_INJECTION, COMMAND_EXECUTION
Full Analysis
  • [PROMPT_INJECTION]: The skill describes an automated workflow that processes benchmark results and generates a Markdown report for GitHub Pull Request comments, creating a surface for indirect prompt injection.
  • Ingestion points: The scripts/compare-benchmarks.py script (provided as a template) reads and parses JSON files exported by BenchmarkDotNet from directories such as ./baseline-results and ${{ env.RESULTS_DIR }}.
  • Boundary markers: The workflow places no boundary markers around the ingested data and includes no instruction to ignore commands embedded in it when generating the benchmark-comparison.md report.
  • Capability inventory: The workflow utilizes the actions/github-script@v7 action to read the generated Markdown and execute github.rest.issues.createComment to post the content directly to the GitHub PR.
  • Sanitization: Benchmark names (r['name']) and statistics are interpolated directly into the Markdown table. If an attacker can control benchmark names (e.g., through a malicious pull request), they could inject arbitrary Markdown, links, or deceptive text into the PR comment stream.
  • [COMMAND_EXECUTION]: The skill provides templates for executing system-level commands within a CI environment.
  • The GitHub Actions workflows explicitly execute dotnet run for benchmarking and python3 for result comparison.
  • These commands are well-documented and consistent with the primary purpose of the skill.
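A mitigation for the sanitization finding above could look like the following sketch. It assumes the comparison script builds its table rows in Python with f-strings; the result keys (name, baseline_ns, current_ns) and the helper names are hypothetical, not taken from the skill's actual template:

```python
import re

def escape_md(value: str) -> str:
    """Escape Markdown control characters so an untrusted benchmark
    name cannot alter table structure or inject links/formatting."""
    # Escape backslashes first so later escapes are not doubled.
    value = value.replace("\\", "\\\\")
    # Escape pipes (table delimiters) and common Markdown metacharacters.
    value = re.sub(r"([|`*_\[\]<>#!])", r"\\\1", value)
    # Newlines would terminate the table row; collapse them.
    return value.replace("\r", " ").replace("\n", " ")

def table_row(result: dict) -> str:
    """Build one comparison-table row from a parsed benchmark result."""
    name = escape_md(result["name"])
    return f"| {name} | {result['baseline_ns']:.1f} ns | {result['current_ns']:.1f} ns |"

# A hostile benchmark name is rendered inert instead of becoming
# extra table cells and a live link in the PR comment.
row = table_row({"name": "Sort | [click](http://evil)",
                 "baseline_ns": 12.3, "current_ns": 11.9})
print(row)
```

Escaping at the point of interpolation keeps the fix local to the report generator, so the rest of the workflow (including the actions/github-script comment step) can treat the Markdown as already safe.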
Audit Metadata
Risk Level: SAFE
Analyzed: Mar 7, 2026, 03:43 PM