generative-ui AI Agent Skill

View source: b-open-io/prompts

Installation:

npx skills add b-open-io/prompts --skill generative-ui
Generative UI
Produce JSON specs constrained to a catalog of predefined components. Never write arbitrary JSX — generate structured JSON that a renderer turns into platform-specific UI.
For conceptual background, decision criteria, and common patterns, see
README.md.
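The core idea — structured JSON constrained to a known catalog instead of arbitrary JSX — can be sketched in a few lines. This is a dependency-free illustration, not the actual json-render API: the `ComponentSpec` shape and catalog names are hypothetical.

```typescript
// Hypothetical spec shape: a tree of typed nodes, never raw JSX.
type ComponentSpec = {
  type: string;
  props?: Record<string, unknown>;
  children?: ComponentSpec[];
};

// The catalog is the whitelist of component types the model may emit.
const CATALOG = new Set(["Card", "Text", "Button", "Stack"]);

// Walk the spec and reject any node whose type is outside the catalog.
function validate(spec: ComponentSpec): string[] {
  const errors: string[] = [];
  const walk = (node: ComponentSpec): void => {
    if (!CATALOG.has(node.type)) errors.push(`Unknown component: ${node.type}`);
    node.children?.forEach(walk);
  };
  walk(spec);
  return errors;
}

// A spec the model might generate; "Marquee" is not in the catalog.
const spec: ComponentSpec = {
  type: "Card",
  children: [
    { type: "Text", props: { value: "Hello" } },
    { type: "Marquee" },
  ],
};

console.log(validate(spec)); // -> ["Unknown component: Marquee"]
```

A real renderer additionally validates props against per-component schemas; the point here is that unknown types are structurally impossible to render.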
Renderer Selection
| Need | Package | Skill |
|---|---|---|
| Web app UI | @json-render/react | json-render-react |
| shadcn/ui components | @json-render/shadcn | json-render-shadcn |
| Mobile native | @json-render/react-native | json-render-react-native |
| Video compositions | @json-render/remotion | json-render-remotion |
| HTML email | @json-render/react-email | json-render-react-email |
| OG/social images | @json-render/image | json-render-image |
| Vue web apps | @json-render/vue | (no skill yet) |
| PDF documents | @json-render/react-pdf | (no skill yet) |
Always invoke the renderer-specific skill for implementation details. This skill covers when and why; the renderer skills cover how.
Catalog Design Principles
- Pick, don't spread — Explicitly select components from shadcnComponentDefinitions. Never spread all 36 into your catalog.
- Minimal catalog — Start with 5-8 components. Add more only when the AI needs them.
- Custom components — Define with Zod schemas. Use slots for children, actions for interactivity.
- Two entry points — @json-render/shadcn/catalog (server-safe schemas) and @json-render/shadcn (React implementations).
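A catalog entry following these principles might look like the sketch below. In practice you would use Zod schemas as described above; to stay dependency-free this sketch stands in a plain predicate for the schema, and the entry shape (`slots`, `actions`) is illustrative rather than the actual @json-render API.

```typescript
// Stand-in for a Zod schema: a predicate over the props object.
type PropCheck = (props: Record<string, unknown>) => boolean;

// Hypothetical catalog-entry shape: props validation, named children
// slots, and declared actions for interactivity.
interface CatalogEntry {
  props: PropCheck;
  slots?: string[];
  actions?: string[];
}

// Pick, don't spread: list only the components this UI actually needs.
const catalog: Record<string, CatalogEntry> = {
  Card: {
    props: (p) => typeof p.title === "string",
    slots: ["body", "footer"], // children go into named slots
  },
  Button: {
    props: (p) => typeof p.label === "string",
    actions: ["onPress"], // interactivity is declared, never coded inline
  },
};

console.log(Object.keys(catalog)); // -> ["Card", "Button"]
```

Keeping the catalog this small makes the model's output space easy to validate and easy for the renderer to guarantee.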
GemSkills Integration
Generate visual assets within generative UI workflows:
| Asset Type | Skill | Use Case |
|---|---|---|
| Hero images, backgrounds | generate-image | Dashboard headers, card backgrounds |
| Logos, vector graphics | generate-svg | Brand elements within generated UI |
| App icons | generate-icon | Platform-specific icon sets |
| Post-processing | edit-image | Crop, resize, style-transfer on generated images |
| Video backgrounds | generate-video | Remotion compositions with AI video |
| Style exploration | browsing-styles | Browse 169 visual styles before generating |
Pipeline: browsing-styles (pick style) -> generate-image (create) -> edit-image (refine) -> optimize-images (compress)
MCP Apps Delivery
Generative UI specs can be delivered directly inside chat hosts (Claude, ChatGPT, VS Code Copilot) via MCP Apps. The json-render React renderer runs inside a Vite-bundled single-file HTML served as a ui:// resource.
Delivery path:
- AI generates a json-render spec (JSON)
- MCP tool returns the spec as structuredContent (a structured JSON response the host renders in the UI, separate from the text the model sees)
- The MCP App View (sandboxed iframe) receives it via ontoolresult
- View's embedded <Renderer> component renders the spec as interactive UI
- User interacts — View calls server tools for fresh data, re-renders
This combines generative UI's guardrailed output with MCP Apps' context preservation and bidirectional data flow. No tab switching, no separate web app.
AI generates spec → MCP tool returns structuredContent
→ Host renders ui:// resource in iframe
→ View renders spec with json-render <Renderer>
→ User interacts → View calls tools → fresh spec

For building MCP Apps that deliver generative UI, use Skill(bopen-tools:mcp-apps).
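The tool-side half of this flow can be sketched as a plain function returning the MCP `CallToolResult` shape: `content` carries text for the model, `structuredContent` carries the spec for the View. The `dashboardTool` name and the spec payload are illustrative, not part of any real server.

```typescript
// Minimal mirror of MCP's CallToolResult: text for the model,
// structured JSON for the sandboxed View.
interface CallToolResult {
  content: { type: "text"; text: string }[];
  structuredContent?: Record<string, unknown>;
}

// Hypothetical tool: returns a json-render spec alongside a text summary.
function dashboardTool(): CallToolResult {
  const spec = {
    type: "Stack",
    children: [{ type: "Text", props: { value: "Revenue: $42k" } }],
  };
  return {
    content: [{ type: "text", text: "Rendered a revenue dashboard." }],
    structuredContent: { spec }, // the View picks this up via ontoolresult
  };
}

const result = dashboardTool();
console.log(result.content[0].text); // -> "Rendered a revenue dashboard."
```

The model only ever sees the text summary; the interactive UI lives entirely in the View, which re-invokes tools like this one when the user asks for fresh data.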
Delivery Channels
| Renderer | Package | Delivery Channel |
|---|---|---|
| Web | @json-render/react | Web app or MCP App (ui:// resource) |
| shadcn/ui | @json-render/shadcn | Web app or MCP App (ui:// resource) |
| Mobile | @json-render/react-native | React Native app |
| Video | @json-render/remotion | Video file |
| Email | @json-render/react-email | Email (HTML) |
| Images | @json-render/image | Image file (PNG/SVG) |
MCP Apps delivery is available for any renderer that targets the browser (React, shadcn). Bundle the renderer + catalog + registry into a single HTML file with Vite + vite-plugin-singlefile, serve it as a ui:// resource.
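The single-file bundling step is a small Vite config. This is a sketch assuming `vite` and `vite-plugin-singlefile` are installed; your entry point and build options will differ.

```typescript
// vite.config.ts — inline all JS/CSS into one index.html for ui:// serving
import { defineConfig } from "vite";
import { viteSingleFile } from "vite-plugin-singlefile";

export default defineConfig({
  plugins: [viteSingleFile()], // inlines every asset into the HTML output
  build: { target: "esnext" },
});
```

The resulting `dist/index.html` is self-contained and can be registered as a ui:// resource by the MCP server.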
Reference Files
- references/renderer-guide.md — Deep dive on each renderer's API and patterns
- references/component-libraries.md — Available components and custom component patterns
How to use this skill

Install generative-ui by running npx skills add b-open-io/prompts --skill generative-ui in your project directory. The skill file is downloaded from GitHub and placed in your project.

No configuration required. Your AI agent (Claude Code, Cursor, Windsurf, etc.) detects installed skills automatically and uses them as context during code generation.

The skill improves your agent's understanding of generative-ui, helping it follow established patterns, avoid common mistakes, and produce production-ready code.

What you get

Skills are plain-text instruction files, not executable code. They encode expert knowledge about frameworks, languages, or tools that your AI agent reads to improve its output. That means zero runtime overhead, no dependency conflicts, and full transparency: you can read and review every instruction before installing.

Compatibility

This skill works with any AI coding agent that supports the skills.sh format, including Claude Code (Anthropic), Cursor, Windsurf, Cline, Aider, and other tools that read project-scoped context files. Skills are framework-agnostic at the transport level; the content determines which language or framework they apply to.