# Prompt Adapter (`prompt-adapt`)
Convert prompts between AI models while preserving intent and maximizing output quality.
## Adaptation Workflow
### Step 1: Identify Source and Target
Determine:
- Source model: What model was this prompt written for?
- Target model: What model should it run on?
- Priority: Preserve style fidelity or optimize for target strengths?
### Step 2: Analyze Source Prompt
Break down the prompt into components:
- Core subject/action
- Style modifiers
- Technical parameters (model-specific)
- Negative prompts (if any)
- Aspect ratio / dimensions
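The breakdown above can be sketched as a small parser. The function name and splitting heuristics below are illustrative assumptions, not part of the skill's scripts; real prompts can defeat these heuristics:

```python
def analyze_prompt(prompt: str) -> dict:
    """Rough split of a Midjourney-style prompt into the components above.

    Heuristic sketch only: assumes parameters come after the descriptive
    text and that the first comma-separated phrase is the core subject.
    """
    # Split off "--name value" parameters
    text, *raw_params = prompt.split(" --")
    params = {}
    for p in raw_params:
        name, _, value = p.strip().partition(" ")
        params[name] = value
    # Midjourney expresses negatives as --no
    negative = params.pop("no", "")
    phrases = [p.strip() for p in text.split(",") if p.strip()]
    return {
        "subject": phrases[0] if phrases else "",
        "modifiers": phrases[1:],
        "parameters": params,
        "negative": negative,
        "aspect_ratio": params.get("ar", ""),
    }
```

For example, `analyze_prompt("a red fox in snow, watercolor --ar 16:9 --no text")` yields the subject `"a red fox in snow"`, the modifier `"watercolor"`, the negative `"text"`, and the aspect ratio `"16:9"`.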
### Step 3: Apply Model Translation Rules
Load `{PROMPT_ENGINE_DIR}/references/model-guide.md` for detailed rules. Key translations:
**Midjourney -> Flux:**
- Remove `--ar`, `--v`, `--style`, `--s`, `--chaos` parameters
- Expand shorthand into natural language descriptions
- Flux prefers longer, more descriptive prompts
- Remove `::weight` syntax; integrate weights naturally into the wording
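The mechanical half of this translation (stripping Midjourney-only syntax) can be sketched as below; the regexes and function name are my own assumptions, and the natural-language expansion step still needs a human or a model:

```python
import re

# Matches "--name" plus any following values (e.g. "--ar 16:9", "--chaos 30")
MJ_PARAM = re.compile(r"\s*--\w+(?:\s+[\w:.]+)*")
# Matches "::" weight separators, with or without a number (e.g. "::2")
MJ_WEIGHT = re.compile(r"\s*::[\d.]*")

def midjourney_to_flux(prompt: str) -> str:
    """Strip Midjourney-only syntax as a first pass toward a Flux prompt."""
    text = MJ_PARAM.sub("", prompt)   # drop --ar, --v, --style, --s, --chaos, ...
    text = MJ_WEIGHT.sub(",", text)   # turn ::weight separators into plain commas
    return re.sub(r"\s{2,}", " ", text).strip(" ,")
```

For example, `midjourney_to_flux("cyberpunk city::2 neon rain --ar 16:9 --v 6 --chaos 30")` returns `"cyberpunk city, neon rain"`, ready to be expanded into a fuller description.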
**Midjourney -> DALL-E:**
- Remove all `--` parameters
- Rewrite as clear, direct descriptions
- DALL-E prefers straightforward language over artistic jargon
- Remove negative prompts (DALL-E doesn't support them well)
**Flux -> Midjourney:**
- Add `--ar` for aspect ratio
- Add `--v 6.1` or the appropriate version
- Condense long descriptions into key phrases
- Add style parameters (`--style raw` for photorealistic output)
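A minimal sketch of the reverse direction, assuming the long description has already been condensed into key phrases (the function name and defaults are illustrative):

```python
def flux_to_midjourney(description: str, aspect_ratio: str = "16:9",
                       version: str = "6.1", raw: bool = False) -> str:
    """Append Midjourney parameters to an already-condensed description."""
    suffix = f" --ar {aspect_ratio} --v {version}"
    if raw:
        suffix += " --style raw"  # photorealistic rendering
    return description + suffix
```

So `flux_to_midjourney("a red fox in snow", "3:2", raw=True)` yields `"a red fox in snow --ar 3:2 --v 6.1 --style raw"`.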
**Any -> Sora (Video):**
- Add camera movement descriptions (pan, zoom, tracking, etc.)
- Add temporal flow ("the scene transitions from... to...")
- Specify duration if possible
- Focus on motion and action over static details
**Any -> Leonardo AI:**
- Reference specific Leonardo models (Phoenix, Alchemy, etc.)
- Use Leonardo-specific quality tokens
- Adapt negative prompts to Leonardo format
### Step 4: Search for Target Model Examples
Find reference prompts for the target model:

```bash
python3 {PROMPT_ENGINE_DIR}/scripts/search_prompts.py "SUBJECT" --model TARGET_MODEL --limit 3
```
Use these as style references for the adaptation.
### Step 5: Present Adaptation
Output format:
- Original prompt (source model labeled)
- Adapted prompt (target model labeled)
- Translation notes (what changed and why)
- Parameter mapping (source params -> target params)
- Confidence level (High/Medium/Low -- based on model compatibility)
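One way to render this output format, assuming a markdown presentation (the function signature and field labels are illustrative, not defined by the skill):

```python
def format_adaptation(original: str, adapted: str, source: str, target: str,
                      notes: list, param_map: dict, confidence: str) -> str:
    """Render the Step 5 output format as a markdown block."""
    mapping = "\n".join(f"- `{s}` -> `{t}`" for s, t in param_map.items()) or "- (none)"
    note_lines = "\n".join(f"- {n}" for n in notes) or "- (none)"
    return (
        f"**Original ({source}):** {original}\n\n"
        f"**Adapted ({target}):** {adapted}\n\n"
        f"**Translation notes:**\n{note_lines}\n\n"
        f"**Parameter mapping:**\n{mapping}\n\n"
        f"**Confidence:** {confidence}"
    )
```

Keeping the five fields in a fixed order makes adaptations easy to compare side by side across runs.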
## Common Pitfalls
- Midjourney weight syntax (`::2`) has no direct equivalent in most models
- DALL-E ignores most style parameters -- weave them into descriptions instead
- Sora needs temporal language that image models don't use
- Aspect ratios must be specified differently per platform
- Some styles only work well on specific models (e.g., `--niji` is Midjourney-only)