
LINE's AI Research Papers Shine at ICASSP 2023 Conference

by Philip Lee
Source: LINE Corporation

LINE Corporation has had eight research papers selected for presentation at the prestigious International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2023.

The conference, hosted by the IEEE Signal Processing Society, will be held June 4-10 in Rhodes, Greece.

Of the eight papers, six were lead-authored by LINE, a significant increase from the three papers presented last year.

The remaining two papers were co-authored in collaboration with other research groups.

The papers present innovative methods in areas such as emotional speech synthesis for generating natural-sounding speech, speech separation, and more.

One of the papers presents an end-to-end emotional text-to-speech (TTS) system that uses pitch information from human speech.

When applied to expressive emotional speech synthesis, traditional end-to-end approaches often suffer from quality degradation.

The proposed method explicitly models pitch information, enabling stable generation of natural-sounding speech even for speakers with very high or low pitch, which was previously difficult to model.
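As an illustration of the general idea of explicit pitch conditioning, here is a minimal PyTorch sketch (the module names and dimensions are hypothetical, not LINE's actual architecture): the decoder receives a per-frame log-F0 track alongside the text-derived features, so the target pitch is supplied directly rather than left for the network to infer.

```python
# Minimal sketch, assuming a frame-level acoustic decoder; this is NOT
# LINE's published model, just an illustration of explicit pitch conditioning.
import torch
import torch.nn as nn

class PitchConditionedDecoder(nn.Module):
    def __init__(self, text_dim=256, pitch_dim=32, mel_dim=80):
        super().__init__()
        # Embed each frame's scalar log-F0 value into a small vector.
        self.pitch_proj = nn.Linear(1, pitch_dim)
        self.gru = nn.GRU(text_dim + pitch_dim, 512, batch_first=True)
        self.out = nn.Linear(512, mel_dim)

    def forward(self, text_feats, log_f0):
        # text_feats: (batch, frames, text_dim) upsampled phoneme features
        # log_f0:     (batch, frames) explicit pitch track (log Hz, 0 = unvoiced)
        pitch_emb = self.pitch_proj(log_f0.unsqueeze(-1))
        h, _ = self.gru(torch.cat([text_feats, pitch_emb], dim=-1))
        return self.out(h)  # predicted mel-spectrogram frames

# Toy usage with random tensors, just to show the shapes involved.
decoder = PitchConditionedDecoder()
mel = decoder(torch.randn(2, 100, 256), torch.rand(2, 100) * 6.0)
print(mel.shape)  # torch.Size([2, 100, 80])
```

Because the pitch contour is an explicit input, extreme F0 values simply shift the conditioning signal instead of pushing the network outside the range it learned implicitly.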

Another paper addresses the challenge of recovering individual speech segments from overlapping speakers, using diffusion models as a solution.

Unlike conventional deep learning-based speech separation techniques, the proposed method produces more natural-sounding and pleasant speech by relying on a generative model.

The diffusion model-based speech separation method outperforms conventional approaches on DNSMOS, a non-intrusive speech quality metric.
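For a rough sense of how a diffusion-based separator produces its output, here is a conceptual sketch of standard DDPM-style reverse sampling (not the paper's actual system; `score_model` is a hypothetical network trained to predict the noise added at each step): the process starts from pure noise and iteratively refines the source estimates while conditioning on the observed mixture.

```python
# Conceptual sketch of diffusion-based source separation via DDPM ancestral
# sampling; a hypothetical illustration, not the implementation from the paper.
import torch

@torch.no_grad()
def separate(score_model, mixture, num_steps=50, num_sources=2):
    # mixture: (batch, samples); returns estimates of shape (batch, num_sources, samples)
    x = torch.randn(mixture.size(0), num_sources, mixture.size(1))
    betas = torch.linspace(1e-4, 0.02, num_steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    for t in reversed(range(num_steps)):
        eps = score_model(x, mixture, t)                # predicted noise at step t
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / torch.sqrt(alphas[t])    # denoise one step
        if t > 0:                                       # re-inject noise except at the final step
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x

# Toy usage: a dummy "model" that predicts zero noise, just to show the shapes.
dummy_model = lambda x, mix, t: torch.zeros_like(x)
estimates = separate(dummy_model, torch.randn(1, 16000))
print(estimates.shape)  # torch.Size([1, 2, 16000])
```

Because the separated signals are sampled from a generative model rather than regressed with a point-wise loss, they tend to sound more natural, which is exactly what a reference-free quality estimator like DNSMOS rewards.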

As LINE continues to develop AI-driven services, it has also focused on AI research and development.

The company has presented influential speech recognition and synthesis research at top speech processing conferences.

For example, LINE researchers have developed Parallel WaveGAN, capable of rapidly producing high-quality speech, and Self-Conditioned CTC, which is the most accurate among non-autoregressive automatic speech recognition models.

In addition, LINE researchers won first place in the international DCASE2020 competition in environmental audio analysis.

Going forward, LINE aims to improve its existing services and create new features by proactively advancing fundamental AI research, benefiting both users and investors across the technology and business sectors.
