Global Rank #601 of 601 Skills

byted-text-to-speech AI Agent Skill

View Source: bytedance/agentkit-samples


Installation

npx skills add bytedance/agentkit-samples --skill byted-text-to-speech

Installs: 68

Byted-Text-to-Speech Skill

Converts text to speech using Volcano Engine's Doubao speech synthesis (HTTP chunked/SSE one-way streaming, V3) and saves the result as an audio file.

When to Use

Prefer this skill when the user needs any of the following:

  • Converting a passage of text into speech or read-aloud audio
  • Generating voice-overs, narration, announcements, or audiobook clips
  • Turning code comments, documentation, articles, and similar content into audio for listening
  • Generating multilingual speech (Chinese, English, etc.)
  • The user mentions "text to speech", "TTS", "speech synthesis", "read aloud", "voice-over", "read it out", or "read it to me"
  • The user does not explicitly say "speech synthesis", but the task fundamentally requires turning text into playable audio

Pre-flight Checks

First check whether the following credential is already configured:

  • MODEL_SPEECH_API_KEY

If the credential is missing, open references/setup-guide.md for how to enable the service, apply for the key, and configure it, and advise the user on enabling it.
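
As a minimal sketch of this pre-flight check (the helper function is hypothetical; only the environment-variable name comes from this skill's docs):

```python
import os

def check_credentials() -> bool:
    """Return True when the credential required by this skill is configured."""
    return bool(os.environ.get("MODEL_SPEECH_API_KEY"))
```

If this returns False, fall back to references/setup-guide.md as described above.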

Script Parameters

| Parameter | Short | Required | Description |
| --- | --- | --- | --- |
| --text | -t | Yes | Text to synthesize |
| --output | -o | No | Output audio file path (auto-generated by default) |
| --speaker | -s | No | Voice ID; default zh_female_vv_uranus_bigtts (see the voice list) |
| --format | | No | Audio format: mp3 (default), pcm, ogg_opus |
| --sample-rate | | No | Sample rate, e.g. 16000 or 24000 (default 24000) |
| --speech-rate | | No | Speech rate in [-50, 100]; 100 means 2.0x speed, -50 means 0.5x; default 0 |
| --pitch-rate | | No | Pitch in [-12, 12]; default 0 |
| --loudness-rate | | No | Loudness in [-50, 100]; 100 means 2.0x volume, -50 means 0.5x; default 0 |
| --bit-rate | | No | Bit rate; applies to mp3 and ogg_opus (e.g. 64000, 128000); default 64000 |
| --filter-markdown | | No | Strip Markdown syntax (e.g. **你好** is read as "你好"); off by default |
| --enable-latex | | No | Enable LaTeX formula narration (uses latex_parser v2 and automatically enables Markdown filtering); off by default |
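
The --speech-rate anchor points (2.0x speed is 100, 1.0x is 0, 0.5x is -50) fit a linear mapping; a hypothetical helper for converting a playback-speed multiplier into this scale, assuming the scale is linear between the documented anchors:

```python
def speed_to_speech_rate(speed: float) -> int:
    """Convert a playback-speed multiplier into the --speech-rate scale.

    The documented anchors (2.0x -> 100, 1.0x -> 0, 0.5x -> -50) fit
    rate = (speed - 1) * 100; linearity in between is an assumption.
    The result is clamped to the documented range [-50, 100].
    """
    rate = round((speed - 1.0) * 100)
    return max(-50, min(100, rate))
```

For example, a requested 1.1x speed maps to --speech-rate 10.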

Return Value

The script prints JSON containing:

  • status: "success" or "error"
  • local_path: local path of the audio file
  • format: audio format
  • error: error message on failure

Return local_path (or an accessible audio URL) to the user so they can play or download the result.
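
A sketch of how a caller might consume this JSON (the field names come from the list above; the helper function itself is hypothetical):

```python
import json

def parse_tts_output(stdout: str) -> str:
    """Parse the script's JSON output; return local_path, or raise on error."""
    result = json.loads(stdout)
    if result.get("status") != "success":
        raise RuntimeError(result.get("error", "unknown TTS error"))
    return result["local_path"]
```

This keeps the success/error branching in one place, so an agent can surface the error field directly to the user on failure.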

Error Handling

  • If the error is PermissionError: MODEL_SPEECH_API_KEY ... must be set in the environment: prompt the user to obtain MODEL_SPEECH_API_KEY from the API Key console, write it to the environment-variable file under the workspace, and retry.
  • On 4xx/5xx responses or business error codes: based on the error message, prompt the user to check the text content, the voice ID, and whether the account has the Doubao speech service enabled.

Troubleshooting

  • Missing credentials: open references/setup-guide.md
  • Looking up API parameters, fields, or error codes: open references/docs-index.md
  • If the script returns a permission error, first check whether the service is enabled and the credential is valid, then give the user clear, actionable instructions

References

Open the following files on demand; do not load them all by default:

  • references/setup-guide.md: service activation, credential application, environment-variable configuration
  • references/docs-index.md: API documentation index, parameter reference, voice list, error-code quick lookup

Examples

# Basic usage
python scripts/text_to_speech.py -t "欢迎使用火山引擎语音合成服务。"

# Specify the voice and output format
python scripts/text_to_speech.py -t "这是一段测试语音。" -s zh_female_vv_uranus_bigtts -o output.mp3 --format mp3

# Specify the speech rate and sample rate
python scripts/text_to_speech.py -t "语速和音调可调。" --speech-rate 10 --sample-rate 16000


Security Audit

  • ath: Safe
  • socket: Safe (Alerts: 0, Score: 90)
  • snyk: Low

How to use this skill

1. Install byted-text-to-speech by running npx skills add bytedance/agentkit-samples --skill byted-text-to-speech in your project directory. The skill file will be downloaded from GitHub and placed in your project.

2. No configuration is needed. Your AI agent (Claude Code, Cursor, Windsurf, etc.) automatically detects installed skills and uses them as context when generating code.

3. The skill enhances your agent's understanding of byted-text-to-speech, helping it follow established patterns, avoid common mistakes, and produce production-ready output.

What you get

Skills are plain-text instruction files — not executable code. They encode expert knowledge about frameworks, languages, or tools that your AI agent reads to improve its output. This means zero runtime overhead, no dependency conflicts, and full transparency: you can read and review every instruction before installing.

Compatibility

This skill works with any AI coding agent that supports the skills.sh format, including Claude Code (Anthropic), Cursor, Windsurf, Cline, Aider, and other tools that read project-level context files. Skills are framework-agnostic at the transport level — the content inside determines which language or framework it applies to.

Data sourced from the skills.sh registry and GitHub. Install counts and security audits are updated regularly.
