charting — AI Agent Skill

Global Rank: #601 of 601 skills · 9 installs · Security audit: Safe

View Source: b-open-io/prompts

Installation

npx skills add b-open-io/prompts --skill charting

Charting Intelligence & Data-to-Viz Pipeline Engineering

Enable any agent to (1) instantly identify the exact chart needed from raw data, (2) generate the precise path of queries/transforms to materialize that chart, and (3) evaluate and choose the optimal charting library/stack based on performance, scale, and interactivity requirements.

This is not "just call a library" — it is full-stack visualization strategy.

1. Core Decision Framework — Choosing the Chart That Fits the Data AND the Story

Before any code runs, answer these questions in order:

What is the goal of the viewer?

| Goal | Chart Type |
|---|---|
| Compare values | Bar/Column (grouped or stacked) |
| Show trend over time | Line or Area |
| Show distribution / spread | Histogram, Box Plot, Violin |
| Show relationship / correlation | Scatter, Bubble, Heatmap |
| Show composition / parts-of-whole | Stacked Bar or Area (never Pie if >5 slices) |
| Show hierarchy / flow | Treemap, Sunburst, Sankey |
| Show geographic pattern | Choropleth or Symbol Map |

How many variables and what types?

| Variables | Chart |
|---|---|
| 1 numeric, unordered | Histogram / Density |
| 1 numeric + time | Line |
| 1 categorical + 1 numeric | Bar |
| 2 numeric | Scatter |
| 1 categorical + time series | Grouped or Stacked Line/Area |
| Many-to-many relationships | Heatmap or Parallel Coordinates |

Audience & Context Check

| Audience | Approach |
|---|---|
| Executive dashboard | Big numbers + simple bars/lines, zero clutter |
| Analyst/explorer | Interactive tooltips, zoom, hover details, multiple linked views |
| Mobile | Horizontal bars, large text, minimal colors |
| Accessibility | High contrast, patterns instead of color-only, alt-text descriptions |

Rule of Thumb Table

| Data Situation | Best Chart (first choice) | Avoid |
|---|---|---|
| >5 categories | Bar (horizontal) | Pie |
| Time series >20 points | Line | Column |
| Correlation between 2 measures | Scatter | Line (unless ordered) |
| Parts of whole, >5 slices | Stacked Bar or Treemap | Pie/Donut |
| Outliers or distribution shape | Box + Violin | Bar |
| Flow between stages | Sankey | Anything else |
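The decision tables above can be collapsed into a small helper. This is a sketch: the function name, parameters, and threshold values are illustrative and not part of the skill itself.

```python
def recommend_chart(n_categories=0, n_numeric=0, has_time=False,
                    parts_of_whole=False):
    """Map the rule-of-thumb tables to a first-choice chart.

    A deliberately simplified decision ladder; real data still needs
    the audience/context check from the tables above.
    """
    if parts_of_whole:
        # >5 slices: never pie — stacked bar or treemap instead
        return "Stacked Bar" if n_categories > 5 else "Pie"
    if has_time and n_numeric >= 1:
        return "Line"                      # trend over time
    if n_numeric == 2:
        return "Scatter"                   # correlation between 2 measures
    if n_categories >= 1 and n_numeric == 1:
        # >5 categories: horizontal bars stay readable
        return "Bar (horizontal)" if n_categories > 5 else "Bar"
    if n_numeric == 1:
        return "Histogram"                 # single unordered numeric
    return "Table"                         # fall back to raw values
```

A real implementation would also take point count and interactivity needs as inputs, feeding the library matrix in section 3.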

2. The Data Pipeline Engine

Most databases do NOT have the exact aggregation ready. Auto-generate the full pipeline:

Step A — Inventory

  • Scan schema or sample 100 rows — detect column types, null rates, cardinality
  • Flag missing aggregations (e.g., "no daily_sales_by_region view exists")
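A minimal sketch of the inventory step, assuming rows arrive as a list of dicts (e.g. from a DB-API `fetchmany` call); the function name and report keys are invented for illustration.

```python
def inventory(rows, sample_size=100):
    """Profile a sample of rows: inferred type, null rate, cardinality."""
    sample = rows[:sample_size]
    report = {}
    columns = sample[0].keys() if sample else []
    for col in columns:
        values = [r.get(col) for r in sample]
        non_null = [v for v in values if v is not None]
        inferred = type(non_null[0]).__name__ if non_null else "unknown"
        report[col] = {
            "type": inferred,
            "null_rate": 1 - len(non_null) / len(values),
            "cardinality": len(set(non_null)),
        }
    return report
```

High-cardinality columns flagged here rule out categorical charts; high null rates trigger the imputation step in Step B.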

Step B — Required Transformations

Auto-generate SQL or pandas code for:

  • Joins needed?
  • GROUP BY + SUM/AVG/COUNT?
  • Window functions for running totals or YoY?
  • Binning (e.g., age into decades)?
  • Pivot/unpivot?
  • Outlier flagging or imputation?
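One way the GROUP BY generation might look, sketched under assumptions: `build_aggregation_sql`, its parameters, and the Postgres/DuckDB-style `date_trunc` call are illustrative, not a fixed API.

```python
def build_aggregation_sql(table, group_by, measure, agg="SUM",
                          date_col=None, grain="month"):
    """Emit the aggregation query for a requested chart.

    Table and column names are caller-supplied; date truncation uses
    Postgres/DuckDB-style date_trunc (swap for your dialect).
    """
    dims = list(group_by)
    if date_col:
        dims.insert(0, f"date_trunc('{grain}', {date_col}) AS {grain}")
    select = ", ".join(dims + [f"{agg}({measure}) AS {measure}_{agg.lower()}"])
    # GROUP BY ordinal positions so aliased expressions work everywhere
    keys = ", ".join(str(i + 1) for i in range(len(dims)))
    return f"SELECT {select} FROM {table} GROUP BY {keys} ORDER BY {keys}"
```

Window functions, binning, and pivots would be further branches of the same generator.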

Step C — Materialization Strategy

| Scale | Strategy |
|---|---|
| One-off (<10k rows) | Run the query on the fly |
| Medium | Create a materialized view or cached table |
| Large/real-time | Pre-aggregate in Spark/DuckDB, incremental refresh |
| Extreme | Stream + windowed aggregates (Flink/Kafka) |
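The tier table above as a picker sketch; the row-count thresholds are illustrative, not prescribed by the skill.

```python
def materialization_strategy(row_estimate, realtime=False):
    """Pick a materialization tier by estimated row count (thresholds illustrative)."""
    if realtime and row_estimate > 10_000_000:
        return "stream + windowed aggregates (Flink/Kafka)"
    if row_estimate < 10_000:
        return "run query on-the-fly"
    if row_estimate < 1_000_000:
        return "materialized view / cached table"
    return "pre-aggregate (Spark/DuckDB) with incremental refresh"
```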

Step D — Validation

  • Run a tiny sample query first — confirm the shape matches the chosen chart type
  • If not, loop back and adjust aggregation
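The shape check in Step D might look like this sketch; the `SHAPE` table and chart names are invented for illustration.

```python
# Minimum column shape each chart type needs (illustrative)
SHAPE = {
    "line": {"dims": 1, "measures": 1},          # time + value
    "stacked_area": {"dims": 2, "measures": 1},  # time + category + value
    "scatter": {"dims": 0, "measures": 2},       # x measure + y measure
}

def validate_shape(chart, dim_cols, measure_cols):
    """Confirm a sample result has enough columns for the chosen chart."""
    need = SHAPE[chart]
    ok = len(dim_cols) >= need["dims"] and len(measure_cols) >= need["measures"]
    return ok, None if ok else (
        f"{chart} needs {need}, got dims={dim_cols}, measures={measure_cols}")
```

A failed check is the signal to loop back and regenerate the aggregation.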

Example

User says "show monthly revenue by product category":

"I need: LEFT JOIN orders -> products -> categories; GROUP BY month, category; SUM(revenue). No view exists -> I will create temp table or run inline. Chart type: Stacked Area. Library recommendation below."
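The same pipeline run end to end against a toy SQLite schema. All table and column names here are invented for the sketch, and SQLite lacks `date_trunc`, so `strftime` stands in for the month truncation.

```python
import sqlite3

# Toy schema for the "monthly revenue by product category" request
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE categories (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE products   (id INTEGER PRIMARY KEY, category_id INTEGER);
    CREATE TABLE orders     (product_id INTEGER, order_date TEXT, revenue REAL);
    INSERT INTO categories VALUES (1, 'Books'), (2, 'Games');
    INSERT INTO products   VALUES (10, 1), (20, 2);
    INSERT INTO orders     VALUES
        (10, '2024-01-05', 100.0), (10, '2024-01-20', 50.0),
        (20, '2024-01-11',  80.0), (10, '2024-02-02', 30.0);
""")

# The generated pipeline: joins, month truncation, GROUP BY, SUM
rows = con.execute("""
    SELECT strftime('%Y-%m', o.order_date) AS month,
           c.name                          AS category,
           SUM(o.revenue)                  AS revenue
    FROM orders o
    LEFT JOIN products   p ON p.id = o.product_id
    LEFT JOIN categories c ON c.id = p.category_id
    GROUP BY 1, 2
    ORDER BY 1, 2
""").fetchall()
# rows is now in the long (month, category, revenue) format
# that a stacked-area chart expects
```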

3. Library Selection Matrix

Always output the performance trade-off and recommended stack.

| Scale / Requirement | Recommended Library | Why | Fallback |
|---|---|---|---|
| <10k points, simple web dashboard | Chart.js or Recharts | <10 ms render, ~60 KB bundle | N/A |
| 10k–500k points, interactive | Apache ECharts or Plotly.js | Canvas + WebGL, 60 fps on 100k points | D3 (slower) |
| 500k–10M+ points, real-time | LightningChart or Highcharts Stock + WebGL | GPU-accelerated, <50 ms at 5M points | SVG-based options fail at this scale |
| Python backend + web | Plotly Dash or Bokeh | Server-side render + client streaming | Matplotlib (static only) |
| Python notebook exploration | Seaborn + Plotly | Instant, beautiful defaults | N/A |
| Extremely large / streaming | DuckDB + Observable Plot or Perspective | In-memory columnar, sub-second on billions of rows | N/A |
| No JavaScript (PDF reports) | Matplotlib + WeasyPrint or ReportLab | Pure Python, vector output | N/A |
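The matrix reduces to a picker function; the tier boundaries mirror the table and are approximate.

```python
def pick_library(points, realtime=False, python_backend=False,
                 static_export=False):
    """Select a charting stack by performance tier (thresholds from the matrix)."""
    if static_export:
        return "Matplotlib (+ WeasyPrint/ReportLab for PDF)"
    if python_backend:
        return "Plotly Dash or Bokeh"
    if points < 10_000:
        return "Chart.js or Recharts"
    if points < 500_000:
        return "Apache ECharts or Plotly.js"
    if realtime:
        return "LightningChart or Highcharts Stock (WebGL)"
    return "DuckDB + Observable Plot or Perspective"
```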

Optimization Rules (apply automatically)

  • Downsample for overview, show full detail on zoom (ECharts built-in)
  • Use Canvas instead of SVG above ~5k elements
  • Pre-aggregate at DB level whenever possible (biggest single win)
  • Lazy load charts below the fold
  • Bundle size: tree-shake everything except the one chart type you need
  • GPU vs CPU: if >100k points and user needs pan/zoom, force WebGL path
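The first rule can be implemented as min/max bucket downsampling, a simple scheme that keeps spikes visible in the overview (ECharts ships built-in `sampling` options such as `'lttb'`; this standalone sketch just shows the idea).

```python
def minmax_downsample(values, target):
    """Downsample to roughly `target` points, keeping each bucket's
    min and max so spikes survive; show the full series on zoom."""
    if len(values) <= target:
        return list(values)
    bucket = max(1, (2 * len(values)) // target)  # 2 points kept per bucket
    out = []
    for i in range(0, len(values), bucket):
        chunk = values[i:i + bucket]
        out.extend(sorted((min(chunk), max(chunk))))
    return out
```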

4. Full Workflow

  1. Parse intent — identify required chart type from user request
  2. Schema scan — detect column types, cardinality, row estimates
  3. Decision framework — output chart recommendation + rationale
  4. Generate transforms — exact SQL/pandas/transform code needed
  5. Choose library — select by performance tier based on row estimate
  6. Emit deliverables:
    • Chart spec (JSON for the library or React component)
    • SQL/transform script
    • Performance warning or confirmation
    • Accessibility note + alt-text template
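The chart-spec deliverable from step 6, sketched as an ECharts-flavoured option object plus an alt-text template. The spec keys follow ECharts conventions loosely; treat this as a template, not the library's exact schema.

```python
import json

def emit_spec(chart, title, dims, measure, rows):
    """Build a stacked-area spec and alt-text from long-format rows
    of (period, category, value) tuples."""
    categories = sorted({r[1] for r in rows})
    spec = {
        "title": {"text": title},
        "xAxis": {"type": "time" if dims[0] == "month" else "category"},
        "yAxis": {"type": "value", "name": measure},
        "series": [
            {"name": c, "type": "line", "areaStyle": {}, "stack": "total",
             "data": [[r[0], r[2]] for r in rows if r[1] == c]}
            for c in categories
        ],
    }
    alt_text = (f"{chart} chart of {measure} by {' and '.join(dims)}; "
                f"{len(categories)} series, {len(rows)} data points.")
    return json.dumps(spec), alt_text
```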

5. Advanced Capabilities

  • "Show me what I should be charting but am not" — auto-correlation scan + suggested visuals
  • "Optimize this dashboard for 10x speed" — rewrite query + switch library
  • "Make this mobile-first" — auto-switch to horizontal bars + simplify
  • Color-blind & accessibility mode — toggle patterns, high contrast
  • Export — SVG/PNG/PDF with embedded data table


Security Audit

  • ath: Safe
  • socket: Safe (Alerts: 0, Score: 90)
  • snyk: Low

How to use this skill

  1. Install charting by running npx skills add b-open-io/prompts --skill charting in your project directory. The skill file will be downloaded from GitHub and placed in your project.
  2. No configuration is needed. Your AI agent (Claude Code, Cursor, Windsurf, etc.) automatically detects installed skills and uses them as context when generating code.
  3. The skill enhances your agent's understanding of charting, helping it follow established patterns, avoid common mistakes, and produce production-ready output.

What you get

Skills are plain-text instruction files — not executable code. They encode expert knowledge about frameworks, languages, or tools that your AI agent reads to improve its output. This means zero runtime overhead, no dependency conflicts, and full transparency: you can read and review every instruction before installing.

Compatibility

This skill works with any AI coding agent that supports the skills.sh format, including Claude Code (Anthropic), Cursor, Windsurf, Cline, Aider, and other tools that read project-level context files. Skills are framework-agnostic at the transport level — the content inside determines which language or framework it applies to.

Data sourced from the skills.sh registry and GitHub. Install counts and security audits are updated regularly.
