Creative QA
Follow the shared release-shell rules in:
postplus-shared release-shell rules
Use this skill after a human has actually reviewed an asset.
This skill is for:
- recording human judgments about generated assets
- preserving why something was approved, revised, or rejected
- pointing feedback back to the stage that should change
- turning review notes into structured data for later analysis
This skill is not for autonomous AI approval.
Core Idea
The first version should assume:
- humans decide quality
- AI may help summarize or prefill observations
- only human-confirmed feedback becomes the durable QA record
If there is no human feedback yet, there may be no QA record yet. That is acceptable.
Human Rule
Do not invent a final verdict on behalf of a human reviewer.
Allowed:
- turn human notes into structured fields
- normalize categories
- suggest likely blame stages
Not allowed:
- silently approving an asset
- replacing a human verdict with an AI guess
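The allowed/not-allowed split above can be made concrete in code. The sketch below is a minimal, hypothetical illustration (the `DraftQaReport` class and its field names are not part of this skill's specification): AI may prefill observations, but the record cannot be finalized until a human supplies the verdict explicitly.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DraftQaReport:
    # Hypothetical draft object: AI may prefill notes, never the verdict.
    target_object_id: str
    ai_prefilled_notes: List[str] = field(default_factory=list)
    verdict: Optional[str] = None  # stays None until a human decides

    def confirm(self, reviewer: str, verdict: str) -> dict:
        # Refuse to finalize without an explicit, recognized human verdict.
        if verdict not in ("approved", "revised", "rejected"):
            raise ValueError(f"unknown verdict: {verdict}")
        return {
            "targetObjectId": self.target_object_id,
            "reviewer": reviewer,
            "verdict": verdict,
            "notes": self.ai_prefilled_notes,
        }

draft = DraftQaReport(
    "video_render_001",
    ai_prefilled_notes=["lip sync drifts after 0:12"],
)
record = draft.confirm(reviewer="alice", verdict="revised")
```

Note that `confirm` takes the verdict as an argument rather than reading it from the draft: the human decision enters at the call site, so there is no code path where an AI-filled default becomes the durable record.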
Objects
1. QA Report
One review record tied to one asset version.
Should include:
- qaReportId
- targetObjectType
- targetObjectId
- targetVersion
- reviewer
- verdict
- goodReasons
- badReasons
- issueCategories
- blameStage
- proposedAction
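A filled-in QA Report might look like the following. The values here are illustrative only, assuming JSON persistence; the field names are the ones listed above.

```python
import json

# Hypothetical QA report for a reviewed video render; all values are examples.
qa_report = {
    "qaReportId": "qa-2024-0001",
    "targetObjectType": "video_render",
    "targetObjectId": "render-42",
    "targetVersion": "v1",
    "reviewer": "alice",
    "verdict": "revised",
    "goodReasons": ["hook lands in the first two seconds"],
    "badReasons": ["lip sync drifts after 0:12"],
    "issueCategories": ["lip_sync"],
    "blameStage": "render",
    "proposedAction": "rerun the render stage with the same script and voice take",
}
print(json.dumps(qa_report, indent=2))
```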
2. Feedback Record
An optional follow-up object that can be consumed by rerun workflows.
Should include:
- feedbackId
- qaReportId
- feedbackCategory
- feedbackText
- dependencyImpact
- rerunTarget
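A Feedback Record points back at its QA Report via qaReportId and names the stage a rerun workflow should target. The example below is hypothetical and assumes the QA Report id shown earlier in this section.

```python
# Illustrative feedback record; values are examples, not specification.
feedback = {
    "feedbackId": "fb-0001",
    "qaReportId": "qa-2024-0001",  # links back to the originating QA report
    "feedbackCategory": "lip_sync",
    "feedbackText": "Mouth movement lags the audio after 0:12.",
    "dependencyImpact": "subtitle timing is unaffected by a render-only rerun",
    "rerunTarget": "render",  # the stage a rerun workflow should re-execute
}
```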
Scope
This skill should support human review records for:
- image
- voice_take
- video_render
- final_export
Local Persistence Convention
Store QA next to the asset being reviewed.
In the released product shell:
- keep draft QA notes or intermediate review payloads under <work-folder>/.postplus/creative-qa/
- keep the final confirmed QA record next to the reviewed asset or final deliverable
One possible project-local layout is:
<work-folder>/.postplus/creative-qa/video-render/
  qa-v1.json
or:
reviews/voice-take-1.review.json
The key requirement is stable linkage back to the reviewed object.
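A minimal sketch of the first layout, assuming JSON files and a hypothetical `write_qa_record` helper (the function name and signature are illustrative, not part of the skill):

```python
import json
import tempfile
from pathlib import Path

def write_qa_record(work_folder: Path, asset_kind: str, record: dict) -> Path:
    # One possible layout from above:
    #   <work-folder>/.postplus/creative-qa/<asset-kind>/qa-<version>.json
    qa_dir = work_folder / ".postplus" / "creative-qa" / asset_kind
    qa_dir.mkdir(parents=True, exist_ok=True)
    path = qa_dir / f"qa-{record['targetVersion']}.json"
    path.write_text(json.dumps(record, indent=2))
    return path

# Usage against a throwaway folder; real callers pass the project work folder.
with tempfile.TemporaryDirectory() as tmp:
    saved = write_qa_record(
        Path(tmp), "video-render",
        {"targetVersion": "v1", "verdict": "approved"},
    )
    reloaded = json.loads(saved.read_text())
```

The version in the filename plus the asset-kind directory gives the stable linkage the section asks for: given a reviewed object and its version, the path of its QA record is deterministic.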
Review Categories
Common issue categories:
- lip_sync
- persona_drift
- audio_style
- audio_pacing
- hook_weak
- ad_like
- ugc_native_feel
- visual_realism
- subtitle_accuracy
- mixed
Common blame stages:
- image
- script
- voice
- render
- subtitle
- mixed
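Normalizing human notes into these closed vocabularies is one of the allowed AI tasks, so it is worth rejecting values outside them early. A small sketch, assuming a hypothetical `validate_review` helper over the two sets above:

```python
# The two closed vocabularies from this section.
ISSUE_CATEGORIES = {
    "lip_sync", "persona_drift", "audio_style", "audio_pacing", "hook_weak",
    "ad_like", "ugc_native_feel", "visual_realism", "subtitle_accuracy", "mixed",
}
BLAME_STAGES = {"image", "script", "voice", "render", "subtitle", "mixed"}

def validate_review(issue_categories, blame_stage):
    # Reject any category or stage outside the known vocabularies.
    unknown = set(issue_categories) - ISSUE_CATEGORIES
    if unknown:
        raise ValueError(f"unknown issue categories: {sorted(unknown)}")
    if blame_stage not in BLAME_STAGES:
        raise ValueError(f"unknown blame stage: {blame_stage}")
    return True
```

Running this check before a record is persisted keeps later analysis simple: every stored issueCategories and blameStage value is guaranteed to come from a known set.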
Output Rule
The QA layer should answer:
- what was reviewed
- who reviewed it
- what they decided
- why they decided it
- what should happen next
Read references/qa-schema.md when creating or updating these files.