<div align="center">

## 🎙️ VibeVoice: A Frontier Open-Source Voice AI
[](https://microsoft.github.io/VibeVoice)
[](https://huggingface.co/collections/microsoft/vibevoice-68a2ef24a875c44be47b034f)
[](https://arxiv.org/pdf/2508.19205)
</picture>
</div>

<div align="left">

<h3>📰 News</h3>

<img src="https://img.shields.io/badge/Status-New-brightgreen?style=flat" alt="New" />
<img src="https://img.shields.io/badge/Feature-Realtime_TTS-blue?style=flat&logo=soundcharts" alt="Realtime TTS" />

<strong>2025-12-03: 📣 We open-sourced VibeVoice‑Realtime‑0.5B, a real-time text-to-speech model that supports streaming text input.</strong>
<br>
<a href="https://github.com/user-attachments/assets/c4fb9be1-e721-41c7-9260-5890b49c1a19" target="_blank">▶️ Watch demo video</a>
 • 
<a href="https://github.com/user-attachments/assets/9aa8ab3c-681d-4a02-b9ea-3f54ffd180b2" target="_blank">🎧 Listen to generated example</a>

</div>

2025-09-05: VibeVoice is an open-source research framework intended to advance collaboration in the speech synthesis community. After release, we discovered instances where the tool was used in ways inconsistent with the stated intent. Since responsible use of AI is one of Microsoft’s guiding principles, we have disabled this repo until we are confident that out-of-scope use is no longer possible.

### Overview

VibeVoice is a novel framework for generating **expressive**, **long-form**, **multi-speaker** conversational audio, such as podcasts, from text. It addresses key challenges in traditional Text-to-Speech (TTS) systems, particularly scalability, speaker consistency, and natural turn-taking.

VibeVoice currently includes two model variants:

- **Long-form multi-speaker model**: Synthesizes conversational or single-speaker speech up to **90 minutes** long with up to **4 distinct speakers**, surpassing the typical 1–2 speaker limits of many prior models.
- **Realtime streaming TTS model**: Produces initial audible speech in roughly **300 ms** and accepts **streaming text input** for low-latency, single-speaker **realtime** speech generation.

A core innovation of VibeVoice is its use of continuous speech tokenizers (Acoustic and Semantic) operating at an ultra-low frame rate of 7.5 Hz. These tokenizers preserve audio fidelity while significantly boosting computational efficiency for long sequences. VibeVoice employs a [next-token diffusion](https://arxiv.org/abs/2412.08635) framework, leveraging a Large Language Model (LLM) to understand textual context and dialogue flow, and a diffusion head to generate high-fidelity acoustic details.

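To see why the 7.5 Hz frame rate matters for 90-minute synthesis, a quick back-of-the-envelope calculation (a sketch based only on the numbers stated above; the 50 Hz comparison rate is an assumption chosen purely for scale, not a claim about any specific codec):

```python
# Sequence length implied by VibeVoice's 7.5 Hz tokenizer frame rate.
frame_rate_hz = 7.5            # continuous tokenizer frame rate (from the text)
duration_s = 90 * 60           # 90-minute long-form target (from the text)

frames = int(frame_rate_hz * duration_s)
print(frames)                  # 40500 frames for a full 90-minute session

# Hypothetical comparison: a codec running at 50 Hz (assumed, for scale only)
# would need far more frames for the same audio, inflating LLM context length.
frames_50hz = int(50 * duration_s)
print(frames_50hz)             # 270000 frames
```

At 7.5 Hz, an entire 90-minute session fits in roughly 40k frames, a sequence length comfortably within modern LLM context windows.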
<p align="left">