READ: Real-time and Efficient Asynchronous Diffusion for Audio-driven Talking Head Generation

Haotian Wang^1, Yuzhe Weng^1, Jun Du^1, Haoran Xu^2, Xiaoyan Wu^2, Shan He^2, Bing Yin^2, Cong Liu^2, Jianqing Gao^2, Qingfeng Liu^{1,2}
^1 NERCSLIP, University of Science and Technology of China    ^2 iFLYTEK

Abstract

The introduction of diffusion models has brought significant advances to audio-driven talking head generation. However, extremely slow inference severely limits the practical deployment of diffusion-based talking head models. In this study, we propose READ, the first real-time diffusion-transformer-based talking head generation framework. Our approach first learns a highly compressed spatiotemporal video latent space via a temporal VAE, significantly reducing the token count to accelerate generation. To achieve better audio-visual alignment within this compressed latent space, we propose a pre-trained Speech Autoencoder (SpeechAE) that generates temporally compressed speech latent codes matched to the video latent space. These latent representations are then modeled by a carefully designed Audio-to-Video Diffusion Transformer (A2V-DiT) backbone for efficient talking head synthesis. Furthermore, to ensure temporal consistency and accelerate inference in extended generation, we propose a novel Asynchronous Noise Scheduler (ANS) for both the training and inference processes of our framework. ANS applies asynchronous noise addition and asynchronous motion-guided generation in the latent space, ensuring consistency across generated video clips. Experimental results demonstrate that READ outperforms state-of-the-art methods, generating competitive talking head videos with significantly reduced runtime, achieving an optimal balance between quality and speed while maintaining stable metrics in long-duration generation.
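To make the asynchronous add-noise idea concrete, below is a minimal PyTorch sketch assuming a standard DDPM-style forward process. The helper names (`make_async_timesteps`, `async_add_noise`), the linear beta schedule, and the fixed per-chunk timestep offset are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def make_async_timesteps(num_chunks: int, t_base: int, offset: int,
                         t_max: int = 999) -> torch.Tensor:
    # Hypothetical schedule: each later chunk gets a larger timestep
    # (i.e., more noise), so earlier, cleaner chunks can anchor the
    # motion of later ones. The paper's exact offsets are not shown here.
    t = t_base + offset * torch.arange(num_chunks)
    return t.clamp(max=t_max)

def async_add_noise(latents: torch.Tensor,
                    alphas_cumprod: torch.Tensor,
                    timesteps: torch.Tensor) -> torch.Tensor:
    # Standard DDPM forward process q(x_t | x_0), applied with a
    # *per-chunk* timestep instead of a single shared one.
    # latents: (num_chunks, C, T, H, W) clean video latents.
    noise = torch.randn_like(latents)
    a = alphas_cumprod[timesteps].view(-1, 1, 1, 1, 1)  # broadcast per chunk
    return a.sqrt() * latents + (1.0 - a).sqrt() * noise

# Toy usage with a linear beta schedule (an assumption, for illustration).
betas = torch.linspace(1e-4, 0.02, 1000)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
latents = torch.randn(4, 8, 5, 16, 16)              # 4 chunks of video latents
t = make_async_timesteps(num_chunks=4, t_base=500, offset=100)
noisy = async_add_noise(latents, alphas_cumprod, t)
```

The point is only that chunks later in the sequence carry more noise than earlier ones, which is what lets the cleaner chunks act as motion anchors during the reverse process.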

Method

Figure 1: The framework of READ. During training, we first pre-train the SpeechAE for temporal compression of speech features, as shown in (b). We then train the full framework using the asynchronous add-noise forward process, as shown in (c). During inference, ANS performs the asynchronous motion-guided reverse process, also shown in (c).

In this research, we introduce READ, the first real-time diffusion-transformer-based talking head generation framework. Our framework incorporates a pre-trained temporal VAE with a high spatiotemporal compression ratio of 32×32×8 pixels per token. To achieve better audio-visual alignment in the compressed latent space, we pre-train a Speech Autoencoder (SpeechAE) in a self-supervised manner to generate temporally compressed speech latent codes that preserve the essential acoustic information corresponding to the compressed video latent space. An Audio-to-Video Diffusion Transformer (A2V-DiT) is then designed to efficiently generate video latents conditioned on the speech latents. The entire training and inference procedure is governed by the proposed Asynchronous Noise Scheduler (ANS), which implements an asynchronous add-noise forward process and an asynchronous motion-guided reverse process to generate long videos effectively.
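For intuition on the compression ratio: at 32×32×8 pixels per token, a 512×512 clip of 64 frames would map to (512/32)×(512/32)×(64/8) = 2048 tokens. The wiring of the components at inference time can be sketched as follows; this is a hedged illustration assuming duck-typed interfaces (`speech_ae.encode`, `a2v_dit(x, t, cond)`, `scheduler.step`, `video_vae.decode`) and a hypothetical (B, C, T, H, W) latent layout, not the authors' actual API.

```python
import torch

@torch.no_grad()
def generate_clip(speech_ae, a2v_dit, video_vae, scheduler,
                  speech_feats, latent_shape, prev_latents=None):
    """Generate one talking-head clip; all interfaces here are assumptions.

    speech_feats: raw speech features for this clip.
    latent_shape: (B, C, T, H, W) shape of the compressed video latents.
    prev_latents: clean latents of the previous clip, used as motion guidance.
    """
    # 1) Compress speech features so they align with the video latent timeline.
    speech_latents = speech_ae.encode(speech_feats)

    # 2) Start from Gaussian noise in the compressed video latent space.
    x = torch.randn(latent_shape)

    # 3) Motion guidance: prepend the previous clip's clean latents along the
    #    latent time axis so the new clip stays temporally consistent with it.
    #    (In the asynchronous scheme, this prefix would carry a lower noise
    #    level than the new frames; that bookkeeping is omitted here.)
    n_guide = 0
    if prev_latents is not None:
        n_guide = prev_latents.shape[2]
        x = torch.cat([prev_latents, x], dim=2)

    # 4) Reverse diffusion, conditioned on the compressed speech latents.
    for t in scheduler.timesteps:
        eps = a2v_dit(x, t, speech_latents)   # predicted noise
        x = scheduler.step(eps, t, x)         # one denoising update

    # 5) Drop the guidance prefix and decode back to video frames.
    x = x[:, :, n_guide:]
    return video_vae.decode(x)
```

Successive clips would then be produced by feeding each clip's final latents back in as `prev_latents`, which is what keeps the generated video consistent over long durations.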

Real-time Audio-Driven Talking Head Generation


Expressive Talking Head Generation

Vocal Source: TED Speech, Female, Shouting
Vocal Source: TED Speech, Male, Speaking
Vocal Source: TED Speech, Female, Talking

Multi-Style Generation

Shouting: TED Speech
Speaking: TED Speech
Talking: Talk on Higher Education

Cross-Actor Generation

Effectiveness of Asynchronous Noise Scheduler (ANS)

Reference Image for ANS Comparison
Without ANS (Temporal Inconsistency)
With ANS (Temporal Consistency)

Long-Duration Generation Performance

Long-Duration Generation Results

Performance Across Different Languages (e.g., Chinese, French, and Portuguese)

Vocal Source: Chinese, Talk
Vocal Source: French, Speech
Vocal Source: Portuguese, Speech

Comparison with SOTA Methods


Overall Comparison

Dataset  Method       Runtime (s)  FID (↓)  FVD (↓)  Sync-C (↑)  Sync-D (↓)  E-FID (↓)
HDTF     Hallo        212.002      15.929   315.904  6.995        7.819      0.931
         EchoMimic    124.105      18.384   557.809  5.852        9.052      0.927
         Sonic         83.584      16.894   245.416  8.525        6.576      0.932
         AniPortrait   76.778      17.603   503.622  3.555       10.830      2.323
         AniTalker     13.577      39.155   514.388  5.838        8.736      1.523
         Ours           4.421      15.073   235.319  8.658        6.890      0.955
MEAD     Hallo        212.002      52.300   292.983  6.014        8.822      1.171
         EchoMimic    124.105      65.771   667.999  5.482        9.128      1.448
         Sonic         83.854      47.070   218.308  7.501        7.831      1.434
         AniPortrait   76.778      54.621   531.663  1.189       13.013      1.669
         AniTalker     13.577      95.131   621.528  6.638        8.184      1.553
         Ours           4.421      46.444   224.738  7.672        8.080      1.043

Case Study on Audio-Driven Generation