Paper: arXiv

Abstract

Recently, phonetic posteriorgram (PPG)-based methods have become quite popular in non-parallel singing voice conversion systems. However, because PPGs lack acoustic information, the style and naturalness of the converted singing voices are still limited. To address these problems, in this paper we use an acoustic reference encoder to implicitly model singing characteristics. We experiment with different auxiliary features as the input of the reference encoder, including mel spectrograms, HuBERT features, and the middle hidden feature (PPG-Mid) of a pretrained automatic speech recognition (ASR) model, and find that the HuBERT feature is the best choice. In addition, we use a contrastive predictive coding (CPC) module to further smooth the voices by predicting future observations in the latent space. Experiments show that, compared with the baseline models, our proposed model significantly improves the naturalness of the converted singing voices and the similarity to the target singer. Moreover, our proposed model can also make speakers with only speech data sing.
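The CPC idea mentioned above, predicting future latent observations and scoring them against negatives, is usually trained with the InfoNCE objective. Below is a minimal NumPy sketch of that objective; the dimensions, the random features standing in for encoder outputs, and the per-step linear predictors are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes: T frames, D-dim latents, predict K steps ahead
T, D, K = 20, 8, 3

z = rng.standard_normal((T, D))     # latent features z_t (e.g. from the reference encoder)
c = rng.standard_normal((T, D))     # context vectors c_t (e.g. from an autoregressive net)
W = rng.standard_normal((K, D, D))  # one linear predictor W_k per future step k

def info_nce_loss(z, c, W):
    """InfoNCE: for each step k, W_k @ c_t should score the true future
    latent z_{t+k} higher than negatives drawn from other time steps."""
    losses = []
    for k in range(1, len(W) + 1):
        pred = c[:-k] @ W[k - 1].T      # (T-k, D) predictions of z_{t+k}
        scores = pred @ z[k:].T         # (T-k, T-k); diagonal entries are positives
        # row-wise log-softmax; maximize log-probability of the positive pair
        log_softmax = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
        losses.append(-np.mean(np.diag(log_softmax)))
    return float(np.mean(losses))

loss = info_nce_loss(z, c, W)
print(f"InfoNCE loss: {loss:.3f}")
```

Minimizing this loss encourages the latents to be predictable over time, which is the smoothing effect the abstract attributes to the CPC module.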

Audio Samples

Source Samples

Two female and two male singing clips are presented as the source samples, each from a different singer.

  Samples
F1
F2
M1
M2

In Domain Singing Voice Conversion

Target Singers

The following audio samples are from the target female and male singers in the NUS-48E singing corpus.

Target Samples
Female
Male

Converted Samples

Target Female

  F1 F2 M1 M2
Baseline1
Baseline2
Prop_mel
Prop_ppg-mid
Prop_hub
Prop_hub_cpc

Target Male

  F1 F2 M1 M2
Baseline1
Baseline2
Prop_mel
Prop_ppg-mid
Prop_hub
Prop_hub_cpc

Cross Domain Singing Voice Conversion

Target Speakers

The following audio samples are from the target female and male speakers in the VCTK speech corpus.

Target Samples
Female
Male

Converted Samples

Target Female

  F1 F2 M1 M2
Baseline1
Baseline2
Prop_hub_cpc

Target Male

  F1 F2 M1 M2
Baseline1
Baseline2
Prop_hub_cpc