byted-seedance-video-generate AI Agent Skill
View source: bytedance/agentkit-samples
Installation
npx skills add bytedance/agentkit-samples --skill byted-seedance-video-generate
Video Generate Skill
This skill generates videos using Doubao Seedance 1.0/1.5 models.
Trigger Conditions
- User wants to generate videos from text descriptions
- User wants to create videos based on images (first/last frame)
- User wants to create videos with reference materials (images, videos, audio)
- User asks for video generation capabilities
Usage
Environment Variables
Before using this skill, ensure the following environment variables are set:
- ARK_API_KEY or MODEL_VIDEO_API_KEY or MODEL_AGENT_API_KEY: API key for the video generation service
- MODEL_VIDEO_API_BASE: API base URL (optional, has default)
- MODEL_VIDEO_NAME: Model name (optional, has default)
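The three-way key lookup described above can be sketched as follows. The priority order is an assumption for illustration; consult scripts/video_generate.py for the authoritative behavior:

```python
import os

def resolve_api_key() -> str:
    """Return the first API key found among the documented variables.

    Checking order is assumed, not confirmed by the shipped script.
    """
    for name in ("ARK_API_KEY", "MODEL_VIDEO_API_KEY", "MODEL_AGENT_API_KEY"):
        value = os.environ.get(name)
        if value:
            return value
    # Mirrors the error message documented under Error Handling below.
    raise PermissionError(
        "ARK_API_KEY or MODEL_VIDEO_API_KEY or MODEL_AGENT_API_KEY "
        "not found in environment variables"
    )
```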
Function Signature
async def video_generate(
    params: list,
    batch_size: int = 10,
    max_wait_seconds: int = 1200,
    model_name: str = None,
) -> Dict:

Parameters
params (list[dict])
A list of video generation requests. Each item is a dict with the following fields:
Required per item:
- video_name (str): Name/identifier of the output video file
- prompt (str): Text describing the video to generate. Supports Chinese and English.
Optional per item - Input Materials:
- first_frame (str): URL for the first frame image
- last_frame (str): URL for the last frame image
- reference_images (list[str]): 1-4 reference image URLs for style/content guidance
- reference_videos (list[str]): 0-3 reference video URLs (mp4/mov, 2-15s each, total ≤15s)
- reference_audios (list[str]): 0-3 reference audio URLs (mp3/wav, 2-15s each, total ≤15s)
Optional per item - Video Output Parameters:
- ratio (str): Aspect ratio. Options: "16:9" (default), "9:16", "4:3", "3:4", "1:1", "2:1", "21:9", "adaptive"
- duration (int): Video length in seconds. Range: 2-12s depending on model
- resolution (str): Video resolution. Options: "480p", "720p", "1080p"
- frames (int): Total frame count. Must be in [29, 289] and follow the format 25 + 4n
- camera_fixed (bool): Lock camera movement. Default: false
- seed (int): Random seed for reproducibility. Range: [-1, 2^32-1]
- watermark (bool): Whether to add a watermark. Default: false
- generate_audio (bool): Whether to generate audio. Only Seedance 1.5 supports this
- tools (list[dict]): Tool configuration, e.g., [{"type": "web_search"}]
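The frames constraint (25 + 4n, within [29, 289]) is easy to get wrong; a minimal validity check looks like this:

```python
def frames_valid(frames: int) -> bool:
    """True if frames has the form 25 + 4n and lies in [29, 289]."""
    return 29 <= frames <= 289 and (frames - 25) % 4 == 0
```

For example, 29 (n=1) and 289 (n=66) are valid, while 30 or 293 are not.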
Input Modes
- Text-to-Video: Only provide prompt, no images/videos
- First Frame Guidance: Provide first_frame for starting image
- First + Last Frame Guidance: Provide both for transition video
- Reference Images: Provide reference_images for style/content guidance
- Multimodal Reference: Combine reference_images, reference_videos, reference_audios
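Putting the modes above together, a params list passed to video_generate might look like the sketch below. All URLs, names, and prompts are placeholders, not real endpoints:

```python
# Each dict is one generation request; only video_name and prompt are required.
params = [
    # Text-to-Video: prompt only
    {"video_name": "cat_park",
     "prompt": "A cat skateboarding through a park",
     "ratio": "16:9", "duration": 5, "resolution": "720p"},
    # First Frame Guidance: starting image plus prompt
    {"video_name": "cat_jump",
     "prompt": "The cat jumps up",
     "first_frame": "https://example.com/cat.png",
     "ratio": "adaptive"},
    # Reference Images: 1-4 URLs for style/content guidance
    {"video_name": "styled",
     "prompt": "A boy with glasses and a corgi sitting on a lawn",
     "reference_images": ["https://example.com/boy.png",
                          "https://example.com/dog.png"]},
]
```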
Return Value
Script Return Info
The video_generate.py script returns a dict with the following fields:
{
"status": "success" | "partial_success" | "error",
"success_list": [{"video_name": "video_url"}],
"error_list": ["video_name"],
"error_details": [{"video_name": "...", "error": {...}}],
"pending_list": [{"video_name": "...", "task_id": "cgt-xxx", ...}]
}

Based on the script return info, the final response returned to the user consists of a description of the video generation task and the video URL(s). You may download the video from the URL, but the URL should still be provided to the user for viewing and downloading.
Note: the URL is the 'url' field in the success_list of the script return info.
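Extracting the user-facing URLs from the script output can be sketched as below. Because the docs show success_list items both as {"video_name": "video_url"} mappings and as records with a 'url' field, this sketch handles both shapes; the sample payload is illustrative:

```python
def collect_video_urls(result: dict) -> list:
    """Collect video URLs from the success_list of the script output.

    Handles both shapes seen in the docs:
    - {"video_name": "...", "url": "https://..."}  (record with url field)
    - {"clip_name": "https://..."}                 (name-to-url mapping)
    """
    urls = []
    for item in result.get("success_list", []):
        if "url" in item:
            urls.append(item["url"])
        else:
            urls.extend(item.values())
    return urls
```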
The URL must be returned in one of two ways:
Final Return Info
For OpenClaw
First, save/download the generated video to an allowed directory:
- Recommended: ~/.openclaw/workspace/
Then use the message tool to send the video with these parameters:
{ "action": "send", "message": "Optional text description", "media": "/root/.openclaw/workspace/generated-video.mp4" }
- The tool will automatically read the local file, upload it to the channel server as a web URL, and send it as a video message.
- Verify success: check that the tool returns {"ok": true} to confirm the video was sent successfully.
If you are not using the message tool: after generation, show the list of videos in Markdown format, for example:
<video src="https://example.com/video1.mp4" width="640" controls>video-1</video>
Very important: the video URL must be an original online resource link starting with https://, not a local video address you have downloaded.
For Normal
You should return three types of information:
- File: return the video file itself (if you have some other method to send the video file) together with its local path, for example:
  local_path: /root/.openclaw/workspace/skills/video-generate/xxx.mp4
- After generation, show the list of videos in Markdown format, for example:
  <video src="https://example.com/video1.mp4" width="640" controls>video-1</video>
  <video src="https://example.com/video2.mp4" width="640" controls>video-2</video>

Code Implementation
See scripts/video_generate.py for the full implementation.
Example Usage
# Text-to-Video
python scripts/video_generate.py -p "小猫骑着滑板穿过公园" -n cat_park -r 16:9 -d 5 --resolution 720p
# First Frame Guidance
python scripts/video_generate.py -p "小猫跳起来" -n cat_jump -f "https://example.com/cat.png" -r adaptive -d 5
# First + Last Frame Guidance
python scripts/video_generate.py -p "平滑过渡动画" -n transition \
-f "https://example.com/start.png" \
-l "https://example.com/end.png" \
-d 6
# Reference Images (style/content guidance)
python scripts/video_generate.py -p "[图1]戴着眼镜的男生和[图2]柯基小狗坐在草坪上" -n styled \
--ref-images "https://example.com/boy.png" "https://example.com/dog.png" \
-r 16:9 -d 5
# Multimodal Reference (video + audio)
python scripts/video_generate.py -p "将视频中的人物换成[图1]中的男孩" -n multimodal \
--ref-images "https://example.com/boy.png" \
--ref-videos "https://example.com/source.mp4" \
--ref-audios "https://example.com/voice.wav" \
-d 5
# With Audio Generation (Seedance 1.5 only)
python scripts/video_generate.py -p "女孩抱着狐狸,可以听到风声和树叶沙沙声" -n with_audio \
-f "https://example.com/girl_fox.png" \
--generate-audio \
-m doubao-seedance-1-5-pro-251215 \
-d 6 --resolution 1080p
# Query task status
python scripts/video_generate.py -q "cgt-20260222165751-wsnw8"
# Use specific model
python scripts/video_generate.py -p "A futuristic city" -m doubao-seedance-1-5-pro-251215
# No watermark
python scripts/video_generate.py -p "A beautiful landscape" --no-watermark

Command Line Options
| Option | Short | Description |
|---|---|---|
| --prompt | -p | Text description of the video (required) |
| --name | -n | Video name identifier (default: video) |
| --model | -m | Model name (default: doubao-seedance-1-0-pro-250528) |
| --ratio | -r | Aspect ratio (default: 16:9) |
| --duration | -d | Video duration in seconds (2-12) |
| --resolution | | Video resolution: 480p, 720p, 1080p |
| --first-frame | -f | First frame image URL |
| --last-frame | -l | Last frame image URL |
| --ref-images | | Reference image URLs (space-separated, 1-4 images) |
| --ref-videos | | Reference video URLs (space-separated, 0-3 videos) |
| --ref-audios | | Reference audio URLs (space-separated, 0-3 audios) |
| --generate-audio | | Generate audio (Seedance 1.5 only) |
| --seed | | Random seed for reproducibility |
| --no-watermark | | Disable watermark |
| --timeout | -t | Max wait time in seconds (default: 1200) |
| --query-task | -q | Query task status by task_id |
Model Fallback
If you encounter a model-related error (like ModelNotOpen), fall back to one of these models:
- doubao-seedance-1-5-pro-251215
- doubao-seedance-1-0-pro-250528
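A fallback loop over the models above can be sketched as follows. The generate callable and its (prompt, model) signature are placeholders, not the shipped API:

```python
def generate_with_fallback(generate, prompt: str,
                           models=("doubao-seedance-1-5-pro-251215",
                                   "doubao-seedance-1-0-pro-250528")):
    """Try each model in order, falling back on model-related errors.

    `generate` is any callable taking (prompt, model_name); this wrapper
    is an illustration, not part of scripts/video_generate.py.
    """
    last_error = None
    for model in models:
        try:
            return generate(prompt, model)
        except RuntimeError as exc:  # e.g. a ModelNotOpen-style failure
            last_error = exc
    raise last_error
```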
Error Handling
- If the script raises the error "PermissionError: ARK_API_KEY or MODEL_VIDEO_API_KEY or MODEL_AGENT_API_KEY not found in environment variables", inform the user that they need to provide the ARK_API_KEY, MODEL_VIDEO_API_KEY, or MODEL_AGENT_API_KEY environment variable. Write it to the environment variable file in the workspace; if the file already exists, append it to the end. Ensure the environment variable format is correct, make it take effect, and retry the video generation task that just failed.
Notes
- Keep prompt concise (recommended ≤ 500 characters)
- For first/last frame, ensure aspect ratios match your chosen ratio
- Reference images: 1-4 images, formats: jpeg/png/webp/bmp/tiff/gif
- Reference videos: 0-3 videos, formats: mp4/mov, total duration ≤ 15s
- Reference audios: 0-3 audios, formats: mp3/wav, total duration ≤ 15s
- Multimodal requires at least one image or video (audio-only not supported)
- Audio generation is only supported by Seedance 1.5 pro
- If polling times out, use --query-task with the returned task_id
How to use this skill
Install byted-seedance-video-generate by running npx skills add bytedance/agentkit-samples --skill byted-seedance-video-generate in your project directory. The skill file is downloaded from GitHub and placed in your project.
No configuration is required. Your AI agent (Claude Code, Cursor, Windsurf, etc.) automatically detects installed skills and uses them as context during code generation.
The skill improves your agent's understanding of byted-seedance-video-generate, helping it follow established patterns, avoid common mistakes, and produce production-ready code.
What you get
Skills are plain-text instruction files, not executable code. They encode expert knowledge about frameworks, languages, or tools that your AI agent reads to improve its output. This means zero runtime overhead, no dependency conflicts, and full transparency: you can read and review every instruction before installing.
Compatibility
This skill works with any AI coding agent that supports the skills.sh format, including Claude Code (Anthropic), Cursor, Windsurf, Cline, Aider, and other tools that read project-scoped context files. Skills are framework-agnostic at the transport level; the content determines which language or framework they apply to.