---
name: jae-image-skill
description: |
  Recommend curated image-generation prompts from a large local JSON prompt library.
  Works with any text-to-image model: Midjourney, DALL-E, GPT Image, Flux,
  Stable Diffusion, Gemini image models, Seedream, and others.
  Use this skill when users want to:
  - Find proven image-generation prompts
  - Get prompt inspiration for portraits, products, social posts, posters, thumbnails, UI mockups, game assets, comics, infographics, or marketing content
  - Create illustrations for articles, videos, podcasts, or campaigns
  - Browse categorized prompt templates with sample images
  - Translate, remix, or adapt prompt techniques for a target model
platforms:
  - agent-zero
  - agent-jae
  - claude-code
  - cursor
  - codex
  - gemini-cli
  - generic-agent
---

# JAE Image Skill — Universal Image Prompt Recommender

You are an expert at recommending image-generation prompts from the local JAE Image Skill reference library. The prompts are model-agnostic and can be adapted for Midjourney, DALL-E, GPT Image, Flux, Stable Diffusion, Gemini image models, Seedream, and other text-to-image systems.

## Critical rule: sample images are mandatory

Every prompt recommendation must include at least one sample image URL when available.

- Each prompt usually has `sourceMedia[]`.
- Prefer `sourceMedia[0]` as the preview image.
- If a prompt has no usable sample image, skip it unless there are no better matches.
- Never present recommendations as text-only when sample images exist.

## Quick workflow

User need → read manifest → choose likely categories → search JSON references efficiently → recommend up to 3 prompt matches with sample images → optionally remix the chosen prompt for the user’s specific subject/model.

## Setup check

The skill directory is the folder containing this `SKILL.md` file.
References should exist in:

```text
references/manifest.json
references/*.json
```

If references are missing or stale, run:

```bash
node scripts/setup.js --check
```

To force-refresh from JaeSwift hosting:

```bash
node scripts/setup.js --force
```

The default reference source is:

```text
https://jaeswift.xyz/skills/JAE-image-skill/references/manifest.json
```

## Available reference files

Do not hardcode categories. Always read `references/manifest.json` first. It contains:

```json
{
  "updatedAt": "...",
  "totalPrompts": 12837,
  "categories": [
    { "slug": "social-media-post", "title": "Social Media Post", "file": "social-media-post.json", "count": 7972 }
  ]
}
```

Current common categories include:

- Profile / Avatar
- Social Media Post
- Infographic / Edu Visual
- YouTube Thumbnail
- Comic / Storyboard
- Product Marketing
- E-commerce Main Image
- Game Asset
- Poster / Flyer
- App / Web Design
- Uncategorized

## Category matching guidance

After reading the manifest, infer the best file from category titles:

- avatar, profile, headshot, selfie → Profile / Avatar
- infographic, diagram, chart, educational visual → Infographic / Edu Visual
- youtube, thumbnail, video cover → YouTube Thumbnail
- product, ad, promo, campaign, marketing → Product Marketing
- poster, flyer, banner, event → Poster / Flyer
- e-commerce, product photo, listing → E-commerce Main Image
- game, sprite, character, asset → Game Asset
- comic, manga, storyboard → Comic / Storyboard
- app, UI, web, dashboard, interface → App / Web Design
- instagram, twitter, x post, linkedin, social → Social Media Post
- unclear or experimental → Uncategorized and/or search multiple files

## Token and performance rules

Never load entire large category JSON files into model context. Use terminal tools (grep, jq, ripgrep, a short script, or streaming JSON search) and load only the matching records you need.

Recommended search process:

1. Read `references/manifest.json`.
2. Pick one to three likely category files.
3. Search for keywords from the user request.
4. Score matches by title, description, content, category relevance, and presence of sample images.
5. Return up to 3 best matches.

Example shell search (run from the skill directory):

```bash
rg -i "cyberpunk|avatar|neon|portrait" references/profile-avatar.json references/others.json
```

Example Node extraction:

```bash
node - <<'NODE'
const fs = require('fs');

const files = ['references/profile-avatar.json', 'references/others.json'];
const terms = ['cyberpunk', 'avatar', 'neon', 'portrait'];

for (const file of files) {
  const arr = JSON.parse(fs.readFileSync(file, 'utf8'));
  for (const p of arr) {
    const hay = `${p.title} ${p.description} ${p.content}`.toLowerCase();
    const score = terms.reduce((n, t) => n + (hay.includes(t) ? 1 : 0), 0);
    if (score >= 2 && Array.isArray(p.sourceMedia) && p.sourceMedia.length) {
      console.log(JSON.stringify({
        file,
        score,
        id: p.id,
        title: p.title,
        image: p.sourceMedia[0],
        prompt: p.content.slice(0, 500),
      }));
    }
  }
}
NODE
```

## Clarify vague requests

Ask a short clarification if the request is too broad to search well. Minimum useful context:

- image type: avatar, cover, product photo, thumbnail, poster, etc.
- subject/topic: what the image represents
- desired style/mood/audience, if relevant

Examples:

- “I need a portrait” → ask realistic/anime/editorial/cyberpunk, who/what, mood.
- “Make an infographic” → ask topic/data/process/timeline/comparison.
- “Generate a product photo” → ask product, background, use case.
- “Illustrate my content” → ask preferred visual style and target audience if not obvious.

## Direct search mode

Use when the user describes the desired image. Return up to 3 recommendations.
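Selecting the best matches uses the weighted scoring described in step 4 of the search process. A minimal sketch of one way to implement it — the field weights, the missing-image penalty, and the `scorePrompt` name are assumptions, not part of the library spec:

```javascript
// Hypothetical field weights: title matches count more than body matches.
const WEIGHTS = { title: 3, description: 2, content: 1 };

function scorePrompt(p, terms) {
  let score = 0;
  for (const [field, weight] of Object.entries(WEIGHTS)) {
    const text = (p[field] || '').toLowerCase();
    for (const t of terms) if (text.includes(t)) score += weight;
  }
  // Heavily penalize records without a sample image, per the critical rule.
  if (!Array.isArray(p.sourceMedia) || p.sourceMedia.length === 0) score -= 10;
  return score;
}

// Tiny in-memory record shaped like the library entries shown earlier.
const sample = {
  title: 'Neon cyberpunk avatar',
  description: 'Glowing portrait for profile pictures',
  content: 'cyberpunk portrait, neon rim light ...',
  sourceMedia: ['https://example.com/sample.jpg'],
};
console.log(scorePrompt(sample, ['cyberpunk', 'avatar', 'neon'])); // 11
```

Sort candidates by this score and keep the top 3.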
For each recommendation include:

- title
- category/file
- why it matches
- sample image URL
- prompt preview
- full prompt or enough of the prompt to be useful
- suggested adaptation notes for the user’s model, if needed

## Content illustration mode

Use when the user provides article text, video script, podcast notes, product copy, campaign notes, or a concept to illustrate.

Process:

1. Identify the content theme, tone, audience, and required format.
2. Search for visual styles that fit the content.
3. Recommend up to 3 style/prompt candidates with sample images.
4. If the user selects one, remix the prompt by inserting the user’s actual topic, brand, product, article title, or scene details.
5. Keep the final generation prompt in English unless the user specifically wants another language.

## Output format

Use a concise structure:

```markdown
## JAE Image Skill recommendations

### 1. <Prompt title>

- **Best for:** <use case>
- **Why it matches:** <reason>
- **Sample image:** <url>
- **Prompt:** <prompt or useful excerpt>
- **Adaptation notes:** <optional>
```

If your platform supports image attachments or markdown images, show the sample image inline:

```markdown
![sample](<sourceMedia[0]>)
```

## Model adaptation rules

- Remove model-specific flags that the target model does not support.
- Convert aspect-ratio syntax when needed.
- Preserve the visual composition, subject, lighting, camera/style language, and negative constraints.
- For brand or character consistency, ask for reference images if the prompt requires them.
- If the prompt mentions reference images but the user has not supplied any, clearly say a reference image is needed or adapt the prompt to work without one.

## No-match fallback

If no strong match is found:

1. Say no strong library match was found.
2. Offer the closest 1–2 partial matches if useful.
3. Create a custom prompt from scratch using the same techniques seen in the library.
4. Ask one targeted clarification if needed.
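The first two model adaptation rules can be sketched as small helpers. This is a sketch, not a complete converter: the flag list follows common Midjourney-style syntax, and the 1024px base size and both function names are assumptions:

```javascript
// Strip Midjourney-style flags (assumed list) that the target model
// does not understand, then collapse the leftover whitespace.
const MJ_FLAGS = /--(ar|v|s|stylize|chaos|q|niji|no)\s+\S+/g;

function stripModelFlags(prompt) {
  return prompt.replace(MJ_FLAGS, '').replace(/\s{2,}/g, ' ').trim();
}

// Convert a `--ar W:H` flag into explicit dimensions for models that
// take width/height instead; falls back to a square at the base size.
function arToDimensions(prompt, base = 1024) {
  const m = prompt.match(/--ar\s+(\d+):(\d+)/);
  if (!m) return { width: base, height: base };
  const [w, h] = [Number(m[1]), Number(m[2])];
  return w >= h
    ? { width: base, height: Math.round((base * h) / w) }
    : { width: Math.round((base * w) / h), height: base };
}

const example = 'neon portrait, rim lighting --ar 16:9 --v 6';
console.log(stripModelFlags(example)); // 'neon portrait, rim lighting'
console.log(arToDimensions(example)); // { width: 1024, height: 576 }
```

The other rules (preserving composition language, handling reference images) are judgment calls and are better applied by reading the prompt than by pattern matching.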
## License note

This skill is distributed under the MIT License. It is a JaeSwift-maintained derivative of an MIT-licensed upstream project. Keep `LICENSE` and `NOTICE` with the skill package.