bilibili-toolkit
Fail
Audited by Gen Agent Trust Hub on Mar 1, 2026
Risk Level: HIGHCREDENTIALS_UNSAFEPROMPT_INJECTIONCOMMAND_EXECUTIONEXTERNAL_DOWNLOADS
Full Analysis
- [CREDENTIALS_UNSAFE]: Multiple scripts, including bili_collect_and_export.py, bili_kb_llama.py, and bili_search_llama.py, contain a hardcoded default database password '15671040800q' for PostgreSQL connections.
- [PROMPT_INJECTION]: The skill is vulnerable to indirect prompt injection because it processes untrusted Bilibili video transcripts for use in LLM-based summarization and QA without adequate safety measures.
- Ingestion points: Untrusted transcripts are fetched from Bilibili via the bili_collect_and_export.py script and stored in a database.
- Boundary markers: There are no explicit delimiters or boundary markers used when transcript text is interpolated into LLM prompts in scripts like bili_up_summarizer.py or bili_search_llama.py.
- Capability inventory: The skill possesses database read/write permissions and communicates with external AI service APIs.
- Sanitization: No sanitization or validation logic is applied to the video transcript content before it is passed to the LLM.
- [COMMAND_EXECUTION]: The script bili_video.py uses subprocess.run to invoke the ffmpeg utility. It relies on a hardcoded absolute path ('D:\Program Files\ffmpeg-7.0.2-essentials_build\bin\ffmpeg.exe') and manipulates local files on the D: drive based on external video metadata.
- [EXTERNAL_DOWNLOADS]: The skill performs multiple network operations to download media from Bilibili and interacts with external API providers including SiliconFlow (for ASR and embeddings) and LongMao (for LLM analysis).
Recommendations
- AI detected serious security threats
Audit Metadata