Wavehax

Aliasing-Free Neural Waveform Synthesis Based on 2D Convolution and Harmonic Prior for Reliable Complex Spectrogram Estimation

Reo Yoneyama¹, Atsushi Miyashita¹, Ryuichi Yamamoto¹˒², Tomoki Toda¹

¹Nagoya University, Japan, ²LY Corporation, Japan

Submitted to IEEE/ACM Transactions on Audio, Speech, and Language Processing

Abstract

Neural vocoders often struggle with aliasing in latent feature spaces, caused by time-domain nonlinear operations and resampling layers. Aliasing folds high-frequency components into the low-frequency range, making aliased and original frequency components indistinguishable and introducing two practical issues. First, aliasing complicates the waveform generation process, as subsequent layers must compensate for these aliasing effects, increasing computational complexity. Second, it limits extrapolation performance, particularly for high fundamental frequencies, which degrades the perceptual quality of generated speech waveforms. This paper demonstrates that 1) time-domain nonlinear operations inevitably introduce aliasing but provide a strong inductive bias for harmonic generation, and 2) time-frequency-domain processing can achieve aliasing-free waveform synthesis but lacks the inductive bias for effective harmonic generation. Building on this insight, we propose Wavehax, an aliasing-free neural WAVEform generator that integrates 2D convolution and a HArmonic prior for reliable Complex Spectrogram estimation. Experimental results show that Wavehax achieves speech quality comparable to existing high-fidelity neural vocoders and exhibits exceptional robustness in scenarios requiring high fundamental frequency extrapolation, where aliasing effects typically become severe. Moreover, Wavehax requires less than 5% of the multiply-accumulate operations and model parameters of HiFi-GAN V1, while achieving over four times faster CPU inference.
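The aliasing mechanism behind the first claim can be reproduced in a few lines of NumPy. The sketch below (an illustrative example, not the paper's code; the 24 kHz sample rate and 7 kHz sine are assumed values) applies a ReLU to a sine wave and inspects the spectrum:

    import numpy as np

    # Illustrative sketch: a pointwise time-domain nonlinearity generates
    # harmonics, and any harmonic above the Nyquist frequency folds back
    # (aliases) into the audible band. Sample rate and F0 are assumptions.
    sr = 24000                      # sample rate in Hz
    f0 = 7000                       # input sine frequency in Hz
    t = np.arange(sr) / sr          # one second of sample times
    x = np.sin(2 * np.pi * f0 * t)

    y = np.maximum(x, 0.0)          # ReLU, a typical vocoder nonlinearity

    # Locate the spectral peaks of the rectified sine.
    mag = np.abs(np.fft.rfft(y)) / len(y)
    freqs = np.fft.rfftfreq(len(y), d=1.0 / sr)
    print(freqs[mag > 1e-3])
    # The 2nd harmonic (14 kHz) exceeds Nyquist (12 kHz) and reappears at
    # 24000 - 14000 = 10000 Hz, a component absent from the true signal.

Running this prints peaks such as 10 kHz that are not harmonics of the 7 kHz input; these are exactly the components that subsequent layers of a time-domain vocoder must learn to undo.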

Method

This diagram provides an overview of Wavehax. The kernel width of the 1D convolution is set to 7, while the kernel size of the depthwise convolution is set to 7 × 7. The numbers of hidden channels, denoted as C and C', are set to 32 and 64, respectively. The number of frequency bins, F, is set to 241, calculated as half of the discrete Fourier transform points plus one. T and N represent the number of time steps in the waveforms and time frames in the features, respectively.
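As a concrete shape check of the quantities named in the caption, the PyTorch sketch below instantiates the 1D convolution (kernel width 7) and the depthwise 7 × 7 2D convolution with C = 32 and C' = 64 channels; the mel-channel count D and the overall wiring are illustrative assumptions, not the paper's implementation:

    import torch
    import torch.nn as nn

    # Shape check only: reproduces the kernel sizes and channel counts named
    # in the caption; the wiring and the mel-channel count D are assumptions.
    n_fft = 480
    F = n_fft // 2 + 1              # = 241 frequency bins, as stated above
    C, C_prime, N = 32, 64, 100     # N (number of frames) is arbitrary here
    D = 80                          # input mel channels (assumed)

    # 1D convolution with kernel width 7 along the time-frame axis.
    conv1d = nn.Conv1d(D, D, kernel_size=7, padding=3)
    print(conv1d(torch.randn(1, D, N)).shape)    # torch.Size([1, 80, 100])

    # Depthwise 7x7 2D convolution over the (frequency, frame) plane, then a
    # 1x1 convolution mixing channels from C to C'. Neither operation
    # resamples the signal or applies a time-domain nonlinearity.
    depthwise = nn.Conv2d(C, C, kernel_size=7, padding=3, groups=C)
    pointwise = nn.Conv2d(C, C_prime, kernel_size=1)
    h = torch.randn(1, C, F, N)                  # (batch, C, F, N)
    print(pointwise(depthwise(h)).shape)         # torch.Size([1, 64, 241, 100])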

Audio samples

Audio samples from the speech reconstruction experiment with a limited training F0 range on the JVS corpus [1]. The models are the same as those in Table III of the paper, and reconstruction is conditioned on the mel-spectrogram (and, where indicated, prior signals derived from F0); a sketch of how such prior signals can be constructed follows the table. Models marked with an asterisk (*) are equipped with anti-aliased nonlinear operations [2].

Each model row provides one audio sample per utterance (audio players are not reproduced in this text version). The four test utterances (table columns) are: 003_parallel100_VOICEACTRESS100_050, 041_falset10_VOICEACTRESS100_005, 010_parallel100_VOICEACTRESS100_077, and 065_whisper10_VOICEACTRESS100_003.

Models (table rows):
  • Natural
  • PWG [3] with noise, sine, and harmonic priors
  • PWG* with noise, sine, and harmonic priors
  • HiFi-GAN [4], and HiFi-GAN with sine and harmonic priors
  • HiFi-GAN*, and HiFi-GAN* with sine and harmonic priors
  • Vocos [5], and Vocos with sine and harmonic priors
  • Wavehax with noise, sine, and harmonic priors
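The sketch below constructs the three kinds of prior signal compared above in NumPy. The constant F0, unit-variance noise, and equal-amplitude harmonics up to Nyquist are illustrative assumptions; the paper's exact recipe (voiced/unvoiced handling, amplitudes, phase) may differ:

    import numpy as np

    def make_prior(kind, f0, sr=24000, duration=0.5, seed=0):
        """Builds one of the three F0-conditioned prior signals.

        Illustrative assumptions: constant F0, unit-variance noise, and
        equal-amplitude harmonics up to Nyquist.
        """
        t = np.arange(int(sr * duration)) / sr
        if kind == "noise":
            return np.random.default_rng(seed).standard_normal(len(t))
        if kind == "sine":
            return np.sin(2 * np.pi * f0 * t)
        if kind == "harmonic":
            n_harm = int(sr / 2 // f0)           # all partials below Nyquist
            k = np.arange(1, n_harm + 1)[:, None]
            return np.sin(2 * np.pi * f0 * k * t[None, :]).sum(0) / n_harm
        raise ValueError(kind)

    for kind in ("noise", "sine", "harmonic"):
        print(kind, make_prior(kind, f0=200.0).shape)

In line with the abstract's observation that time-frequency-domain processing lacks a harmonic inductive bias, the harmonic prior supplies full harmonic structure at any F0 externally, which is what the harmonic-prior rows above evaluate.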

Citation

The paper is currently under submission; citation information will be added upon publication.

References

  • [1] S. Takamichi, K. Mitsui, Y. Saito, T. Koriyama, N. Tanji, and H. Saruwatari, JVS corpus: Free Japanese multi-speaker voice corpus, arXiv preprint arXiv:1908.06248, Aug. 2019.
  • [2] S.-g. Lee, W. Ping, B. Ginsburg, B. Catanzaro, and S. Yoon, BigVGAN: A Universal Neural Vocoder with Large-Scale Training, in Proc. ICLR, 2023.
  • [3] R. Yamamoto, E. Song, and J.-M. Kim, Parallel WaveGAN: A Fast Waveform Generation Model Based on Generative Adversarial Networks with Multi-Resolution Spectrogram, in Proc. ICASSP, 2020, pp. 6199-6203.
  • [4] J. Kong, J. Kim, and J. Bae, HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis, in Proc. NeurIPS, vol. 33, 2020, pp. 17022-17033.
  • [5] H. Siuzdak, Vocos: Closing the gap between time-domain and Fourier-based neural vocoders for high-quality audio synthesis, in Proc. ICLR, 2024.