Global Rank: #423 of 600 Skills

tavily-crawl AI Agent Skill

View Source: tavily-ai/skills


Installation

npx skills add tavily-ai/skills --skill tavily-crawl

Installs: 4.0K

tavily crawl

Crawl a website and extract content from multiple pages. Supports saving each page as a local markdown file.

Before running any command

If tvly is not found on PATH, install it first:

curl -fsSL https://cli.tavily.com/install.sh | bash && tvly login

Do not skip this step or fall back to other tools.

See tavily-cli for alternative install methods and auth options.
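The check above can be sketched as a small pre-flight function (a hypothetical helper, not part of the skill itself) that runs the installer only when tvly is missing from PATH:

```shell
# Hypothetical pre-flight helper: install tvly only if it is not on PATH.
ensure_tvly() {
  if command -v tvly >/dev/null 2>&1; then
    echo "tvly found: $(command -v tvly)"
  else
    echo "tvly not on PATH, installing..." >&2
    curl -fsSL https://cli.tavily.com/install.sh | bash && tvly login
  fi
}
```

Calling ensure_tvly at the top of any script that wraps tvly crawl makes the "do not skip this step" rule mechanical rather than a convention.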

When to use

  • You need content from many pages on a site (e.g., all /docs/)
  • You want to download documentation for offline use
  • Step 4 in the workflow: search → extract → map → crawl → research

Quick start

# Basic crawl
tvly crawl "https://docs.example.com" --json

# Save each page as a markdown file
tvly crawl "https://docs.example.com" --output-dir ./docs/

# Deeper crawl with limits
tvly crawl "https://docs.example.com" --max-depth 2 --limit 50 --json

# Filter to specific paths
tvly crawl "https://example.com" --select-paths "/api/.*,/guides/.*" --exclude-paths "/blog/.*" --json

# Semantic focus (returns relevant chunks, not full pages)
tvly crawl "https://docs.example.com" --instructions "Find authentication docs" --chunks-per-source 3 --json

Options

Option                            Description
--max-depth                       Levels deep (1-5, default: 1)
--max-breadth                     Links per page (default: 20)
--limit                           Total pages cap (default: 50)
--instructions                    Natural-language guidance for semantic focus
--chunks-per-source               Chunks per page (1-5, requires --instructions)
--extract-depth                   basic (default) or advanced
--format                          markdown (default) or text
--select-paths                    Comma-separated regex patterns to include
--exclude-paths                   Comma-separated regex patterns to exclude
--select-domains                  Comma-separated regex for domains to include
--exclude-domains                 Comma-separated regex for domains to exclude
--allow-external / --no-external  Include external links (default: allow)
--include-images                  Include images
--timeout                         Max wait (10-150 seconds)
-o, --output                      Save JSON output to file
--output-dir                      Save each page as a .md file in directory
--json                            Structured JSON output

Crawl for context vs. data collection

For agentic use (feeding results to an LLM):

Always use --instructions together with --chunks-per-source. This returns only the relevant chunks instead of full pages, preventing context explosion.

tvly crawl "https://docs.example.com" --instructions "API authentication" --chunks-per-source 3 --json
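A typical follow-up is to pull just the chunk text out of the --json response before handing it to an LLM. The response schema below is assumed for illustration only (field names like "results" and "chunks" are hypothetical, not documented above), so adjust the accessors to the real output:

```shell
# Hypothetical post-processing: print each chunk from an assumed schema.
extract_chunks() {
  python3 -c '
import json, sys
for result in json.load(sys.stdin).get("results", []):
    for chunk in result.get("chunks", []):
        print(chunk)
'
}

# Example with a synthetic response:
sample='{"results":[{"url":"https://docs.example.com/auth","chunks":["Use Bearer tokens."]}]}'
printf '%s' "$sample" | extract_chunks  # prints "Use Bearer tokens."
```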

For data collection (saving to files):

Use --output-dir without --chunks-per-source to get full pages as markdown files.

tvly crawl "https://docs.example.com" --max-depth 2 --output-dir ./docs/
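After an --output-dir run, a quick sanity check (an assumed follow-up step, not a tvly feature) confirms how many markdown files were actually written:

```shell
# Hypothetical helper: count the .md files a crawl saved to a directory.
count_saved_pages() {
  find "$1" -type f -name '*.md' | wc -l | tr -d ' '
}

# Usage after a crawl: count_saved_pages ./docs/
```

Comparing the count against your --limit is a cheap way to notice a crawl that stopped early.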

Tips

  • Start conservative (--max-depth 1, --limit 20) and scale up.
  • Use --select-paths to focus on the section you need.
  • Use map first to understand site structure before a full crawl.
  • Always set --limit to prevent runaway crawls.
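The tips above can be rolled into a small wrapper, sketched here with the conservative defaults suggested in this section (the function and its behavior are hypothetical, not part of the CLI). It prints the command rather than running it, so you can review the crawl before launching it:

```shell
# Hypothetical dry-run wrapper: build a capped, optionally path-scoped
# crawl command from a URL and an optional regex of paths to include.
safe_crawl_cmd() {
  url="$1"
  paths="${2:-}"
  set -- tvly crawl "$url" --max-depth 1 --limit 20 --json
  if [ -n "$paths" ]; then
    set -- "$@" --select-paths "$paths"
  fi
  printf '%s\n' "$*"
}

safe_crawl_cmd "https://docs.example.com" "/api/.*"
# prints: tvly crawl https://docs.example.com --max-depth 1 --limit 20 --json --select-paths /api/.*
```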


Security Audit

  • ath: High
  • socket: Safe (Alerts: 0, Score: 90)
  • snyk: Critical
  • zeroleaks: Low (Score: 82)

How to use this skill

1

Install tavily-crawl by running npx skills add tavily-ai/skills --skill tavily-crawl in your project directory. The skill file will be downloaded from GitHub and placed in your project.

2

No configuration needed. Your AI agent (Claude Code, Cursor, Windsurf, etc.) automatically detects installed skills and uses them as context when generating code.

3

The skill enhances your agent's understanding of tavily-crawl, helping it follow established patterns, avoid common mistakes, and produce production-ready output.

What you get

Skills are plain-text instruction files — not executable code. They encode expert knowledge about frameworks, languages, or tools that your AI agent reads to improve its output. This means zero runtime overhead, no dependency conflicts, and full transparency: you can read and review every instruction before installing.

Compatibility

This skill works with any AI coding agent that supports the skills.sh format, including Claude Code (Anthropic), Cursor, Windsurf, Cline, Aider, and other tools that read project-level context files. Skills are framework-agnostic at the transport level — the content inside determines which language or framework it applies to.

Data sourced from the skills.sh registry and GitHub. Install counts and security audits are updated regularly.
