# Prompt API

## Procedures

### Step 1: Identify the integration surface
- Inspect the workspace for browser entry points, UI handlers, and any existing AI abstraction layer.
- Execute `node scripts/find-frontend-targets.mjs .` to inventory likely frontend files and existing Prompt API usage when a Node runtime is available.
- If a Node runtime is unavailable, inspect the nearest `package.json`, HTML entry point, and framework entry files manually to identify the browser app boundary.
- If the workspace contains multiple frontend apps, prefer the app that contains the active route, component, or user-requested feature surface.
- If the inventory still leaves multiple plausible frontend targets, stop and ask the user which app should receive the Prompt API integration.
- If the project is not a browser web app, stop and explain that this skill does not apply.
### Step 2: Confirm Prompt API viability
- Read `references/prompt-api-reference.md` before writing code.
- Read `references/examples.md` when the feature needs a spec-valid message shape for text, multimodal, prefix, or tool-enabled sessions.
- Read `references/compatibility.md` when the feature must support multiple browser generations or decide between native support and polyfills.
- Read `references/polyfills.md` when the feature needs concrete package installation or backend configuration examples for Prompt API or Task API polyfills.
- Verify that the feature runs in a secure window context and that the `language-model` permissions-policy allows access from the current frame.
- If the integration must run in a Web Worker or other non-window context, stop and explain the platform limitation.
- Choose the session shape the feature needs: `prompt()`, `promptStreaming()`, `initialPrompts`, `append()`, `measureContextUsage()`, `tools`, or `responseConstraint`.
- If the project uses TypeScript, add or preserve typings that cover the Prompt API surface used by the project.
### Step 3: Implement a guarded session wrapper
- Read `assets/language-model-service.template.ts` and adapt it to the framework, state model, and file layout in the workspace.
- Gate session creation behind `LanguageModel.availability()` using the same creation options that the feature will use at runtime, including expected modalities and tools.
- Create sessions only after user activation when model download or instantiation may begin.
- Use `AbortController` for cancelable prompts and call `destroy()` when the session is no longer needed.
- If the feature runs in a cross-origin iframe, require `allow="language-model"` on the embedding iframe.
- Do not depend on `params()`, `topK`, or `temperature`; the spec marks them EXPERIMENTAL and extension-only, so portable web page integrations must not require them.
- Treat `availability()` as a passive capability check: if it reports `downloading` before user activation, do not assume the current page initiated that download or lock the UI into an app-started busy state.
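The gating rules above can be sketched as follows. This is a minimal sketch assuming the `LanguageModel` global described in the reference material; `createGuardedSession` and `askOnce` are illustrative names, and both degrade to `null` so callers can take the non-AI fallback.

```typescript
// Minimal type surface for the parts of the Prompt API this sketch uses.
interface LMSession {
  prompt(input: string, options?: { signal?: AbortSignal }): Promise<string>;
  destroy(): void;
}
interface LMGlobal {
  availability(options?: object): Promise<string>;
  create(options?: object): Promise<LMSession>;
}

async function createGuardedSession(options: object = {}): Promise<LMSession | null> {
  const LM = (globalThis as { LanguageModel?: LMGlobal }).LanguageModel;
  if (!LM) return null; // unsupported browser: caller takes the non-AI fallback
  // Gate on availability() with the same options create() will receive.
  if ((await LM.availability(options)) === "unavailable") return null;
  // create() may start a model download, so reach this only after user activation.
  return LM.create(options);
}

// Usage: one cancelable prompt, then explicit cleanup.
async function askOnce(question: string): Promise<string | null> {
  const session = await createGuardedSession();
  if (session === null) return null;
  const controller = new AbortController(); // wire controller.abort() to a Cancel button
  try {
    return await session.prompt(question, { signal: controller.signal });
  } finally {
    session.destroy(); // never reuse a destroyed session
  }
}
```

Keeping the availability check and `create()` on the same options object avoids the mismatch where a feature passes `availability()` but later fails at creation time.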
### Step 4: Wire UX and fallback behavior
- Surface distinct states for unavailable devices, model download, ready sessions, and in-flight prompts.
- If download progress matters to the feature, attach a `monitor` listener during `LanguageModel.create()` and render progress in the UI.
- Keep a non-AI fallback for unsupported browsers, unsupported devices, or blocked iframe contexts.
- If the feature needs structured output, pass a JSON Schema through `responseConstraint`, use `omitResponseConstraintInput` only when the prompt already carries the required format instructions, and parse the returned string before using it.
- Respect prompt-shape validation rules: `system` messages belong in `initialPrompts`, `prefix: true` applies only to the final `assistant` message, and `assistant` message content must remain text-only.
- If `availability()` reports `downloading` before the app has called `create()`, present that as informational browser state rather than a page-owned active download, and keep controls usable unless the app itself is busy.
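A sketch combining a `monitor` listener with `responseConstraint`, assuming the `downloadprogress` event carries a 0-to-1 `loaded` fraction as in the current spec; `promptForJson` and `renderProgress` are illustrative names, not part of the API.

```typescript
async function promptForJson(
  input: string,
  schema: object,
  renderProgress: (fraction: number) => void,
): Promise<unknown> {
  const LM = (globalThis as any).LanguageModel;
  if (!LM) return null; // unsupported or blocked context: use the non-AI fallback
  const session = await LM.create({
    // monitor fires downloadprogress events while the model downloads.
    monitor(m: EventTarget) {
      m.addEventListener("downloadprogress", (e: Event) => {
        renderProgress((e as unknown as { loaded: number }).loaded); // 0..1 fraction
      });
    },
  });
  try {
    // The constrained response is still a string; parse it before use.
    const raw: string = await session.prompt(input, { responseConstraint: schema });
    return JSON.parse(raw);
  } finally {
    session.destroy();
  }
}
```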
### Step 5: Validate behavior
- Test short responses with `prompt()` and long responses with `promptStreaming()` when applicable.
- Verify that repeated prompts reuse context intentionally, that destroyed sessions are not reused, and that the app uses compatibility checks for context measurement and overflow handling across browser versions.
- Read `references/troubleshooting.md` if the integration throws `NotSupportedError` or behaves differently across frames or execution contexts.
- Run the workspace build, typecheck, or tests after editing.
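Exercising the streaming path can look like this sketch, which assumes `promptStreaming()` returns an async-iterable stream of string chunks as in current Chromium builds; `streamAnswer` and `onChunk` are illustrative names.

```typescript
async function streamAnswer(
  question: string,
  onChunk: (text: string) => void,
): Promise<boolean> {
  const LM = (globalThis as any).LanguageModel;
  if (!LM) return false; // nothing to validate in unsupported browsers
  const session = await LM.create();
  try {
    // Render each chunk incrementally instead of waiting for the full reply.
    for await (const chunk of session.promptStreaming(question)) {
      onChunk(chunk);
    }
    return true;
  } finally {
    session.destroy();
  }
}
```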
## Error Handling
- If `LanguageModel` is missing, prefer progressive enhancement with a maintained Prompt API polyfill or a non-AI fallback instead of inventing a custom compatibility layer.
- If `availability()` returns `downloading` before the app has called `create()`, treat it as passive browser state. Only surface live progress and block prompt submission when the app itself has started `LanguageModel.create()`.
- If `availability()` or `prompt()` throws `NotSupportedError`, align the creation and prompt options with the actual modalities, languages, message roles, and tools used by the feature.
- If the feature must run in Web Workers, redirect the integration to a window context because the Prompt API is not available in workers.
- If the feature lives in a cross-origin iframe, require `allow="language-model"` from the embedding page before continuing.
- If `node scripts/find-frontend-targets.mjs .` cannot run, identify the browser app boundary manually and continue only after a single target app is clear.
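The `NotSupportedError` handling above can be centralized in a small helper. This sketch assumes the error surfaces as a `DOMException` with that name; `safePrompt` is an illustrative wrapper, not part of the API.

```typescript
interface PromptLike {
  prompt(input: string): Promise<string>;
}

async function safePrompt(session: PromptLike, input: string): Promise<string | null> {
  try {
    return await session.prompt(input);
  } catch (err) {
    if (err instanceof DOMException && err.name === "NotSupportedError") {
      // The options asked for modalities, languages, roles, or tools the
      // implementation does not support: take the fallback instead of crashing.
      return null;
    }
    throw err; // anything else is a real bug and should surface
  }
}
```

Returning `null` here keeps the fallback decision in one place, while rethrowing unrelated errors preserves normal debugging.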