firecrawl AI Agent Skill
View Source: dirnbauer/webconsulting-skills
Installation
npx skills add dirnbauer/webconsulting-skills --skill firecrawl
Firecrawl CLI
Web scraping, search, and browser automation CLI. Returns clean markdown optimized for LLM context windows.
Run firecrawl --help or firecrawl <command> --help for full option details.
Prerequisites
The firecrawl CLI must be installed and authenticated. Check with firecrawl --status:
🔥 firecrawl cli v1.8.0
✓ Authenticated via FIRECRAWL_API_KEY
Concurrency: 0/100 jobs (parallel scrape limit)
Credits: 500,000 remaining

- Concurrency: Max parallel jobs. Run parallel operations up to this limit.
- Credits: Remaining API credits. Each scrape/crawl consumes credits.
If not ready, see rules/install.md. For output handling guidelines, see rules/security.md.
firecrawl search "query" --scrape --limit 3

Workflow
Follow this escalation pattern:
- Search - No specific URL yet. Find pages, answer questions, discover sources.
- Scrape - Have a URL. Extract its content directly.
- Map + Scrape - Large site or need a specific subpage. Use map --search to find the right URL, then scrape it.
- Crawl - Need bulk content from an entire site section (e.g., all /docs/).
- Browser - Scrape failed because content is behind interaction (pagination, modals, form submissions, multi-step navigation).
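As an illustrative sketch only (the pick_tool helper and its yes/no inputs are hypothetical, not part of the CLI), the escalation rules above can be expressed as a small decision function:

```shell
# Hypothetical helper mirroring the escalation pattern above.
# Args: have_url, need_bulk, needs_interaction (each "yes" or "no").
pick_tool() {
  have_url=$1; need_bulk=$2; needs_interaction=$3
  if [ "$needs_interaction" = yes ]; then echo browser   # content is behind interaction
  elif [ "$need_bulk" = yes ]; then echo crawl           # whole site section needed
  elif [ "$have_url" = yes ]; then echo scrape           # direct extraction
  else echo search                                       # no URL yet
  fi
}

pick_tool no no no     # search
pick_tool yes no no    # scrape
pick_tool yes yes no   # crawl
pick_tool yes no yes   # browser
```

The ordering matters: interaction needs trump bulk needs, which trump having a single URL, matching the "escalate only when the simpler tool fails" rule.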
| Need | Command | When |
|---|---|---|
| Find pages on a topic | search | No specific URL yet |
| Get a page's content | scrape | Have a URL, page is static or JS-rendered |
| Find URLs within a site | map | Need to locate a specific subpage |
| Bulk extract a site section | crawl | Need many pages (e.g., all /docs/) |
| AI-powered data extraction | agent | Need structured data from complex sites |
| Interact with a page | browser | Content requires clicks, form fills, pagination, or login |
| Download a site to files | download | Save an entire site as local files |
For detailed command reference, use the individual skill for each command (e.g., firecrawl-search, firecrawl-browser) or run firecrawl <command> --help.
Scrape vs browser:
- Use scrape first. It handles static pages and JS-rendered SPAs.
- Use browser when you need to interact with a page: clicking buttons, filling out forms, navigating a complex site, infinite scroll, or when scrape fails to grab all the content you need.
- Never use browser for web searches; use search instead.
Avoid redundant fetches:
- search --scrape already fetches full page content. Don't re-scrape those URLs.
- Check .firecrawl/ for existing data before fetching again.
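A minimal sketch of the check-before-fetch habit. The filename search-demo.json is a placeholder, and no firecrawl call is made; the echoed command is only what you would run on a miss:

```shell
mkdir -p .firecrawl
out=.firecrawl/search-demo.json

rm -f "$out"
# Before fetching, check whether the data is already on disk:
if [ -f "$out" ]; then echo "cache hit: $out"; else echo "cache miss: $out"; fi   # cache miss

echo '{}' > "$out"   # pretend a prior search --scrape already wrote it
if [ -f "$out" ]; then echo "cache hit: $out"; else echo "cache miss: $out"; fi   # cache hit
```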
Output & Organization
Unless the user asks for results to be returned in context, write them to .firecrawl/ with -o. Add .firecrawl/ to .gitignore. Always quote URLs: the shell interprets ? and & as special characters.
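To see why quoting matters, here is a firecrawl-free sketch with a made-up URL: unquoted, the shell may glob on ? and treats & as a background operator; quoted, the URL passes through intact.

```shell
url='https://example.com/search?q=react&page=2'
# Always pass URLs quoted, so ? and & reach the command unchanged:
printf '%s\n' "$url"
```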
firecrawl search "react hooks" -o .firecrawl/search-react-hooks.json --json
firecrawl scrape "<url>" -o .firecrawl/page.md

Naming conventions:
.firecrawl/search-{query}.json
.firecrawl/search-{query}-scraped.json
.firecrawl/{site}-{path}.md

Never read entire output files at once. Use grep, head, or incremental reads:
wc -l .firecrawl/file.md && head -50 .firecrawl/file.md
grep -n "keyword" .firecrawl/file.md

Single format outputs raw content. Multiple formats (e.g., --format markdown,links) output JSON.
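The incremental-read commands can be tried on a throwaway file; sample.md below is a stand-in for real scrape output:

```shell
mkdir -p .firecrawl
printf '# Title\nsome keyword here\nmore text\n' > .firecrawl/sample.md

wc -l < .firecrawl/sample.md              # 3 (line count before reading anything)
head -2 .firecrawl/sample.md              # first two lines only
grep -n "keyword" .firecrawl/sample.md    # 2:some keyword here
```

Checking the line count first tells you whether head alone is enough or whether you should grep for the sections you need.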
Working with Results
These patterns are useful when working with file-based output (-o flag) for complex tasks:
# Extract URLs from search
jq -r '.data.web[].url' .firecrawl/search.json
# Get titles and URLs
jq -r '.data.web[] | "\(.title): \(.url)"' .firecrawl/search.json

Parallelization
Run independent operations in parallel. Check firecrawl --status for concurrency limit:
firecrawl scrape "<url-1>" -o .firecrawl/1.md &
firecrawl scrape "<url-2>" -o .firecrawl/2.md &
firecrawl scrape "<url-3>" -o .firecrawl/3.md &
wait

For browser, launch separate sessions for independent tasks and operate them in parallel via --session <id>.
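The same fan-out/wait pattern, demonstrated with a stand-in job so it runs without the CLI (the task function here is hypothetical, standing in for one scrape):

```shell
mkdir -p .firecrawl
task() { sleep 0.1; echo "done $1" > ".firecrawl/$1.out"; }   # stand-in for one scrape

task a & task b & task c &   # fan out, staying under your concurrency limit
wait                         # block until all background jobs finish

cat .firecrawl/a.out .firecrawl/b.out .firecrawl/c.out
```

Writing each job to its own file avoids interleaved output, and wait guarantees every file exists before you read the results.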
Credit Usage
firecrawl credit-usage
firecrawl credit-usage --json --pretty -o .firecrawl/credits.json

Adapted from Firecrawl.
Thanks to Netresearch DTT GmbH for their contributions to the TYPO3 community.
How to use this skill
Install firecrawl by running npx skills add dirnbauer/webconsulting-skills --skill firecrawl in your project directory. The skill file will be downloaded from GitHub and placed in your project.
No configuration needed. Your AI agent (Claude Code, Cursor, Windsurf, etc.) automatically detects installed skills and uses them as context when generating code.
The skill enhances your agent's understanding of firecrawl, helping it follow established patterns, avoid common mistakes, and produce production-ready output.
What you get
Skills are plain-text instruction files, not executable code. They encode expert knowledge about frameworks, languages, or tools that your AI agent reads to improve its output. This means zero runtime overhead, no dependency conflicts, and full transparency: you can read and review every instruction before installing.
Compatibility
This skill works with any AI coding agent that supports the skills.sh format, including Claude Code (Anthropic), Cursor, Windsurf, Cline, Aider, and other tools that read project-level context files. Skills are framework-agnostic at the transport level; the content inside determines which language or framework it applies to.