#601

Global ranking · of 601 skills

readiness-report AI Agent Skill

View source: dirnbauer/webconsulting-skills

Safe

Installation

npx skills add dirnbauer/webconsulting-skills --skill readiness-report

28

Installations

Agent Readiness Report

Evaluate how well a repository supports autonomous AI development by analyzing it across nine technical pillars and five maturity levels.

Overview

Agent Readiness measures how prepared a codebase is for AI-assisted development. Poor feedback loops, missing documentation, or lack of tooling cause agents to waste cycles on preventable errors. This skill identifies those gaps and prioritizes fixes.

Quick Start

The user will run /readiness-report to evaluate the current repository. The agent will then:

  1. Scan the repository structure, CI configs, and tooling
  2. Evaluate 81 criteria across 9 technical pillars
  3. Determine the maturity level (L1-L5) based on an 80% pass threshold per level
  4. Provide prioritized recommendations

Workflow

Step 1: Run Repository Analysis

Execute the analysis script to gather signals from the repository:

python scripts/analyze_repo.py --repo-path .

This script checks for:

  • Configuration files (.eslintrc, pyproject.toml, etc.)
  • CI/CD workflows (.github/workflows/, .gitlab-ci.yml)
  • Documentation (README, AGENTS.md, CONTRIBUTING.md)
  • Test infrastructure (test directories, coverage configs)
  • Security configurations (CODEOWNERS, .gitignore, secrets management)
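The checks above boil down to file-presence signals. A minimal sketch of that kind of detection follows; the actual logic lives in scripts/analyze_repo.py, and the signal names and path lists here are illustrative assumptions, not the script's real criteria:

```python
from pathlib import Path

# Illustrative signal groups; the real criteria are defined in
# scripts/analyze_repo.py and references/criteria.md.
SIGNALS = {
    "linter_config": [".eslintrc", ".eslintrc.json", "pyproject.toml"],
    "ci_workflows": [".github/workflows", ".gitlab-ci.yml"],
    "agent_docs": ["AGENTS.md", "README.md", "CONTRIBUTING.md"],
    "codeowners": ["CODEOWNERS", ".github/CODEOWNERS"],
}

def gather_signals(repo_path: str) -> dict[str, bool]:
    """Return which signal groups have at least one matching file."""
    root = Path(repo_path)
    return {
        name: any((root / p).exists() for p in paths)
        for name, paths in SIGNALS.items()
    }
```

Each group passes if any of its candidate paths exists, so a repo using .gitlab-ci.yml scores the same CI signal as one using GitHub Actions.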

Step 2: Generate Report

After analysis, generate the formatted report:

python scripts/generate_report.py --analysis-file /tmp/readiness_analysis.json
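The report generator consumes the analysis JSON produced in Step 1. A sketch of that hand-off follows; the JSON shape shown is a hypothetical example, as the actual schema is defined by scripts/analyze_repo.py and may differ:

```python
# Hypothetical shape of /tmp/readiness_analysis.json; the real schema
# is whatever scripts/analyze_repo.py emits.
example = {
    "criteria": [
        {"id": "style.linter", "level": 1, "status": "pass"},
        {"id": "testing.coverage", "level": 3, "status": "fail"},
        {"id": "security.codeowners", "level": 2, "status": "skip"},
    ]
}

def summarize(analysis: dict) -> dict:
    """Tally criterion statuses, as a report generator might."""
    counts = {"pass": 0, "fail": 0, "skip": 0}
    for criterion in analysis["criteria"]:
        counts[criterion["status"]] += 1
    return counts
```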

Step 3: Present Results

The report includes:

  1. Overall Score: Pass rate percentage and maturity level achieved
  2. Level Progress: Bar showing L1-L5 completion percentages
  3. Strengths: Top-performing pillars with passing criteria
  4. Opportunities: Prioritized list of improvements to implement
  5. Detailed Criteria: Full breakdown by pillar showing each criterion status

Nine Technical Pillars

Each pillar addresses specific failure modes in AI-assisted development:

Pillar | Purpose | Key Signals
Style & Validation | Catch bugs instantly | Linters, formatters, type checkers
Build System | Fast, reliable builds | Build docs, CI speed, automation
Testing | Verify correctness | Unit/integration tests, coverage
Documentation | Guide the agent | AGENTS.md, README, architecture docs
Dev Environment | Reproducible setup | Devcontainer, env templates
Debugging & Observability | Diagnose issues | Logging, tracing, metrics
Security | Protect the codebase | CODEOWNERS, secrets management
Task Discovery | Find work to do | Issue templates, PR templates
Product & Analytics | Error-to-insight loop | Error tracking, product analytics

See references/criteria.md for the complete list of all 81 criteria, organized by pillar.

Five Maturity Levels

Level | Name | Description | Agent Capability
L1 | Initial | Basic version control | Manual assistance only
L2 | Managed | Basic CI/CD and testing | Simple, well-defined tasks
L3 | Standardized | Production-ready for agents | Routine maintenance
L4 | Measured | Comprehensive automation | Complex features
L5 | Optimized | Full autonomous capability | End-to-end development

Level Progression: To unlock a level, pass ≥80% of criteria at that level AND all previous levels.
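The cumulative unlock rule can be sketched as follows (a hypothetical helper written for illustration; the real computation lives in the skill's scripts):

```python
def achieved_level(pass_rates: dict[int, float], threshold: float = 0.80) -> int:
    """Highest level L such that every level 1..L meets the threshold.

    pass_rates maps level number (1-5) to the fraction of that level's
    criteria passed. Returns 0 if even L1 is below the threshold.
    """
    achieved = 0
    for level in sorted(pass_rates):
        if pass_rates[level] >= threshold:
            achieved = level
        else:
            break  # a failed level blocks everything above it
    return achieved
```

Note the cumulative effect: a repo passing 90% of L4 criteria but only 60% of L3 criteria still sits at L2, because each level gates all levels above it.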

See references/maturity-levels.md for detailed level requirements.

Interpreting Results

Pass vs Fail vs Skip

  • Pass: Criterion met (contributes to score)
  • Fail: Criterion not met (opportunity for improvement)
  • Skip: Not applicable to this repository type (excluded from score)
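Because skips are excluded, the score is a pass rate over applicable criteria only. A sketch of that arithmetic (an illustrative helper, not the skill's actual scoring code):

```python
def pass_rate(statuses: list[str]) -> float:
    """Pass rate over applicable criteria: skipped criteria are
    excluded from both numerator and denominator."""
    applicable = [s for s in statuses if s != "skip"]
    if not applicable:
        return 0.0
    return applicable.count("pass") / len(applicable)
```

So ["pass", "fail", "skip", "pass"] scores 2/3, not 2/4: the skip neither helps nor hurts.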

Priority Order

Fix gaps in this order:

  1. L1-L2 failures: Foundation issues blocking basic agent operation
  2. L3 failures: Production readiness gaps
  3. High-impact L4+ failures: Optimization opportunities

Common Quick Wins

  1. Add AGENTS.md: Document commands, architecture, and workflows for AI agents
  2. Configure pre-commit hooks: Catch style issues before CI
  3. Add PR/issue templates: Structure task discovery
  4. Document single-command setup: Enable fast environment provisioning
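The first quick win can be bootstrapped with a few lines. This is a hypothetical starter, and the template content is an assumption about what a useful AGENTS.md covers, not a format this skill prescribes:

```python
from pathlib import Path

# Illustrative skeleton; fill in real commands for your repository.
AGENTS_TEMPLATE = """\
# AGENTS.md

## Setup
<single command to provision the environment>

## Commands
- Build: <build command>
- Test: <test command>
- Lint: <lint command>

## Architecture
<one-paragraph overview of the codebase layout>
"""

def write_agents_md(repo_path: str = ".") -> Path:
    """Create a starter AGENTS.md if one does not already exist."""
    target = Path(repo_path) / "AGENTS.md"
    if not target.exists():
        target.write_text(AGENTS_TEMPLATE)
    return target
```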

Resources

  • scripts/analyze_repo.py - Repository analysis script
  • scripts/generate_report.py - Report generation and formatting
  • references/criteria.md - Complete criteria definitions by pillar
  • references/maturity-levels.md - Detailed level requirements

Automated Remediation

After reviewing the report, common fixes can be automated:

  • Generate AGENTS.md from repository structure
  • Add missing issue/PR templates
  • Configure standard linters and formatters
  • Set up pre-commit hooks

Ask to "fix readiness gaps" to begin automated remediation of failing criteria.

Adapted from OpenHands.
Thanks to Netresearch DTT GmbH for their contributions to the TYPO3 community.


Security Audit

ath Safe
socket Safe
Warnings: 0 · Rating: 90
snyk Low

How to Use This Skill

1

Install readiness-report by running npx skills add dirnbauer/webconsulting-skills --skill readiness-report in your project directory. The skill file is downloaded from GitHub and placed in your project.

2

No configuration required. Your AI agent (Claude Code, Cursor, Windsurf, etc.) detects installed skills automatically and uses them as context during code generation.

3

The skill improves your agent's understanding of readiness-report, helping it follow established patterns, avoid common mistakes, and produce production-ready code.

What You Get

Skills are plain-text instruction files, not executable code. They encode expert knowledge about frameworks, languages, or tools, which your AI agent reads to improve its output. That means zero runtime overhead, no dependency conflicts, and full transparency: you can read and review every instruction before installing.

Compatibility

This skill works with any AI coding agent that supports the skills.sh format, including Claude Code (Anthropic), Cursor, Windsurf, Cline, Aider, and other tools that read project-level context files. Skills are framework-agnostic at the transport level: the content determines which language or framework they apply to.

Data sourced from the skills.sh registry and GitHub. Install counts and security audits are updated regularly.
