Installation
```shell
npx skills add b-open-io/prompts --skill wait-for-ci
```

Wait for CI
Wait for CI/CD pipelines to finish and get actionable results — without burning context on polling logic. Everything deterministic is handled by scripts. Your job is to spawn them and act on what comes back.
Why Scripts Handle the Waiting
Polling a CI system is pure mechanics — no reasoning needed. The wait-ci.sh script handles all the waiting, retrying, and status parsing. Running it as a background task means you can do other work while CI runs, and you get notified with structured JSON when it finishes.
Quick Start
Two steps: detect, then wait.
Step 1: Detect CI System
```shell
bash <skill-path>/scripts/detect-ci.sh /path/to/project
```

Returns JSON like:

```json
{
  "ci": "github-actions",
  "deploy": "vercel",
  "config_file": ".github/workflows/ci.yml",
  "workflow_count": 3,
  "repo": "owner/repo",
  "branch": "feature-x",
  "sha": "abc1234",
  "tools": { "gh": true, "glab": false, "vercel": true }
}
```

If `ci` is "unknown", tell the user no CI configuration was found and ask what they use.
If the required CLI tool is missing (`tools.gh`, `tools.glab`, or `tools.vercel` is false), tell the user to install it before proceeding.
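The detection output can be consumed directly in shell. A minimal dispatch sketch (the JSON literal below is a stand-in for a real `detect-ci.sh` run, and the `grep`/`cut` field extraction is an illustrative jq-free assumption, not part of the skill):

```shell
# Stand-in for: bash <skill-path>/scripts/detect-ci.sh /path/to/project
detect_json='{"ci":"github-actions","tools":{"gh":true}}'

# Extract the "ci" field (a JSON tool like jq would be more robust)
ci=$(printf '%s' "$detect_json" | grep -o '"ci":"[^"]*"' | cut -d'"' -f4)

case "$ci" in
  unknown) echo "No CI configuration found -- ask the user which system they use." ;;
  *)       echo "Detected CI system: $ci" ;;
esac
```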
Step 2: Wait as a Background Task
Run the wait script with run_in_background: true so you stay unblocked:
```shell
bash <skill-path>/scripts/wait-ci.sh github-actions \
  --repo owner/repo \
  --branch feature-x \
  --sha abc1234 \
  --timeout 600
```

The script blocks until CI finishes (or times out), then outputs JSON:
```json
{
  "ci": "github-actions",
  "repo": "owner/repo",
  "branch": "feature-x",
  "sha": "abc1234",
  "status": "completed",
  "conclusion": "failure",
  "elapsed_seconds": 142,
  "details": "Failed: lint: failure; | Logs: error: unused variable...",
  "url": "https://github.com/owner/repo/actions/runs/12345"
}
```

Step 3: Act on Results
When the background task completes, you get the JSON. Use it:
| conclusion | What to do |
|---|---|
| `success` | Report success. Proceed with next steps (merge, deploy, etc.) |
| `failure` | Read the `details` field for failed jobs and log excerpts. Fix the issue and push again. |
| `cancelled` | Tell the user CI was cancelled — they may need to re-trigger. |
| `timeout` | CI took too long. Link the user to the run URL so they can check manually. |
| `missing_tool` | Tell the user which CLI tool to install. |
| `no_run_found` | Push may not have triggered CI. Check if workflows exist for this branch. |
When CI fails, the details field contains the failed job names and a log excerpt (last 30 lines of the failed step). This is usually enough to identify the issue without manually opening the CI dashboard.
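The table above can be sketched as a shell dispatch. The JSON literal stands in for real `wait-ci.sh` output, and the `grep`/`cut` extraction is an illustrative assumption:

```shell
# Stand-in for the JSON emitted by wait-ci.sh
result='{"conclusion":"failure","details":"Failed: lint: failure","url":"https://example.com/run/1"}'

conclusion=$(printf '%s' "$result" | grep -o '"conclusion":"[^"]*"' | cut -d'"' -f4)
details=$(printf '%s' "$result" | grep -o '"details":"[^"]*"' | cut -d'"' -f4)

case "$conclusion" in
  success)      echo "CI passed -- safe to merge or deploy." ;;
  failure)      echo "CI failed: $details" ;;
  cancelled)    echo "CI was cancelled; the user may need to re-trigger." ;;
  timeout)      echo "CI took too long; check the run URL manually." ;;
  missing_tool) echo "Install the required CLI tool first." ;;
  no_run_found) echo "No run found; verify workflows exist for this branch." ;;
esac
```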
Monitoring Both CI and Deployment
Some projects have both CI (GitHub Actions) and deployment (Vercel). You can run both in parallel as separate background tasks:
```shell
# Background task 1: CI
bash <skill-path>/scripts/wait-ci.sh github-actions --repo owner/repo --branch main

# Background task 2: Deployment
bash <skill-path>/scripts/wait-ci.sh vercel --branch main
```

Each will notify you independently when done.
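In plain shell, the same parallelism can be sketched with background jobs and `wait`. Here `wait_ci` is a stand-in function for invoking `wait-ci.sh`, so the example is self-contained:

```shell
# wait_ci is a stand-in for: bash <skill-path>/scripts/wait-ci.sh <system> ...
wait_ci() { sleep 1; printf '{"ci":"%s","conclusion":"success"}\n' "$1"; }

# Launch both monitors as background jobs, capturing each result to a file
wait_ci github-actions > ci.json &
ci_pid=$!
wait_ci vercel > deploy.json &
deploy_pid=$!

wait "$ci_pid" "$deploy_pid"   # block until both background jobs finish
cat ci.json deploy.json
```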
Typical Agent Workflows
Push-and-Verify
After pushing a fix or feature:
- Run `detect-ci.sh` to get the CI config
- Spawn `wait-ci.sh` as a background task
- Continue working on other tasks (or tell the user you're waiting)
- When notified, report the result or fix failures
Iterative Fix Loop
When CI fails:
- Read the failure details from the JSON output
- Fix the issue in code
- Commit and push
- Spawn a new `wait-ci.sh` background task for the new commit
- Repeat until green
Pre-Merge Gate
Before merging a PR:
- Spawn `wait-ci.sh` for the PR branch
- Only proceed with merge when `conclusion` is `success`
- If it fails, fix and re-push instead of merging broken code
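The gate above reduces to a single conditional on the `conclusion` field. A minimal sketch, where `run_wait` stands in for spawning `wait-ci.sh` on the PR branch:

```shell
# run_wait is a stand-in for: bash <skill-path>/scripts/wait-ci.sh ... on the PR branch
run_wait() { printf '{"conclusion":"success"}'; }

conclusion=$(run_wait | grep -o '"conclusion":"[^"]*"' | cut -d'"' -f4)

if [ "$conclusion" = "success" ]; then
  echo "Gate passed -- proceeding with merge."
else
  echo "Gate failed ($conclusion) -- fix and re-push before merging."
fi
```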
Supported CI Systems
| System | CLI Required | Detection | Wait Method |
|---|---|---|---|
| GitHub Actions | `gh` | `.github/workflows/*.yml` | `gh run watch` (blocking, efficient) |
| GitLab CI | `glab` | `.gitlab-ci.yml` | `glab ci status` (polling) |
| Vercel | `vercel` | `vercel.json` or `.vercel/` | `vercel inspect` (polling) |
GitHub Actions uses gh run watch which is a native blocking wait — no polling overhead. GitLab and Vercel use periodic polling at 15-second intervals.
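The polling strategy for GitLab and Vercel amounts to a check-sleep loop bounded by the timeout. A sketch under stated assumptions (`check_status` is a placeholder for the real CLI call, and the short interval/timeout values are for illustration only; the real defaults are 15 and 600 seconds):

```shell
POLL=1; TIMEOUT=5; elapsed=0

# Placeholder for `glab ci status` / `vercel inspect`; reports "success"
# once at least 2 seconds have elapsed, so the loop has something to poll.
check_status() { [ "$elapsed" -ge 2 ] && echo "success" || echo "running"; }

status=running
while [ "$status" = "running" ] && [ "$elapsed" -lt "$TIMEOUT" ]; do
  sleep "$POLL"
  elapsed=$((elapsed + POLL))
  status=$(check_status)
done
echo "status=$status elapsed=${elapsed}s"
```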
Options
| Flag | Default | Description |
|---|---|---|
| `--repo` | From git remote | Repository in `owner/repo` format |
| `--branch` | Current branch | Branch to monitor |
| `--sha` | Current HEAD | Commit SHA to match |
| `--timeout` | 600 (10 min) | Max seconds to wait before timing out |
| `--poll` | 15 | Seconds between status checks (GitLab/Vercel only) |
How to use this skill
Install wait-for-ci by running `npx skills add b-open-io/prompts --skill wait-for-ci` in your project directory. The skill file will be downloaded from GitHub and placed in your project.
No configuration needed. Your AI agent (Claude Code, Cursor, Windsurf, etc.) automatically detects installed skills and uses them as context when generating code.
The skill enhances your agent's understanding of wait-for-ci, helping it follow established patterns, avoid common mistakes, and produce production-ready output.
What you get
Skills are plain-text instruction files — not executable code. They encode expert knowledge about frameworks, languages, or tools that your AI agent reads to improve its output. This means zero runtime overhead, no dependency conflicts, and full transparency: you can read and review every instruction before installing.
Compatibility
This skill works with any AI coding agent that supports the skills.sh format, including Claude Code (Anthropic), Cursor, Windsurf, Cline, Aider, and other tools that read project-level context files. Skills are framework-agnostic at the transport level — the content inside determines which language or framework it applies to.