Fine-grained Noise Control for Multispeaker Speech Synthesis

Karolos Nikitaras, Georgios Vamvoukakis, Nikolaos Ellinas, Konstantinos Klapsas, Konstantinos Markopoulos, Spyros Raptis, June Sig Sung, Gunu Jho, Aimilios Chalamandaris and Pirros Tsiakoulis

Abstract: A text-to-speech (TTS) model typically factorizes speech attributes such as content, speaker and prosody into disentangled representations. Recent works additionally aim to model the acoustic conditions explicitly, in order to disentangle the primary speech factors, i.e. linguistic content, prosody and timbre, from residual factors such as recording conditions and background noise. This paper proposes unsupervised, interpretable and fine-grained noise and prosody modeling. We incorporate adversarial training, a representation bottleneck and utterance-to-frame modeling in order to learn frame-level noise representations. To the same end, we perform fine-grained prosody modeling via a Fully Hierarchical Variational AutoEncoder (FVAE), which additionally results in more expressive speech synthesis. Experimental results support our claims, and ablation studies verify the importance of each proposed component. Audio samples are available on our demo page.
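The fine-grained modeling described above relies on a VAE-style bottleneck, where each frame- or phoneme-level latent is a diagonal Gaussian regularized toward a standard normal prior. A minimal, illustrative sketch of the two core operations (the reparameterization trick and the per-unit KL term) is shown below; function names and the pure-Python formulation are our own, not from the paper:

```python
import math
import random

def reparameterize(mu, logvar, rng=None):
    """Sample z = mu + sigma * eps with eps ~ N(0, I).

    Writing the sample this way keeps it differentiable w.r.t.
    mu and logvar, which is what lets a VAE bottleneck be trained
    end-to-end. `mu` and `logvar` are per-dimension lists.
    """
    rng = rng or random.Random(0)
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, logvar)]

def kl_to_standard_normal(mu, logvar):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dims.

    This is the regularizer that squeezes the latent toward the prior;
    its weight controls how narrow the representation bottleneck is.
    """
    return 0.5 * sum(m * m + math.exp(lv) - 1.0 - lv
                     for m, lv in zip(mu, logvar))
```

For example, a posterior exactly matching the prior (`mu=0`, `logvar=0`) gives zero KL, while any mean shift or variance mismatch is penalized; in a fine-grained setup this term is computed per frame or per phoneme and summed over the utterance.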

1) Frame-level Noise Control

2) Phoneme-level Prosody Control