PLEASE NOTE: On-Demand sessions will not take place live at the date and time listed below. Access will be sent out the Monday before the conference starts, and the sessions will be available to watch on your own time for a month after the conference.

**This Agenda is Subject to Change**
Thursday, February 10 • 12:30pm - 2:30pm
Workshop 3: Speech communication and situational awareness in loud environments – How deep learning could improve face-to-face communication while wearing hearing protection


Many hearing protection devices (HPDs) offer ‘hear-through’ for natural communication. This can be useful when the environment alternates between loud noise and communication situations, and it potentially increases acceptance for wearing HPDs. However, the benefit of hear-through is still very limited in noisy conditions because both speech and noise are transmitted. With the advent of machine learning technologies, such problems could be overcome. This contribution introduces advanced algorithms to segregate speech and noise for applications in HPDs. These algorithms use deep neural networks to perform so-called “blind source separation”, outputting an estimated version of the speech and of the noise. We will illustrate the potential of such algorithms with systematic analyses and audio examples for several industrial noises. In some applications, only the estimated speech could be transmitted to maximize intelligibility. In other applications, it is desirable not to remove the noise entirely so as to maintain situational awareness. In such cases, the estimated speech and noise can be “re-mixed” at more favorable speech-to-noise ratios. The processing can be implemented to maintain natural interaural level differences, which are critical for acoustic localization. The importance of these interaural parameters, and exemplary limitations of current HPDs in providing them, will be discussed.
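The re-mixing idea mentioned in the abstract amounts to recombining the separated speech and noise estimates with a noise gain chosen to reach a more favorable speech-to-noise ratio, rather than removing the noise entirely. The Python sketch below illustrates this step only; the function name, the use of broadband RMS levels, and the single-channel formulation are illustrative assumptions, not the presenters' actual implementation.

```python
import numpy as np

def remix_at_target_snr(speech_est, noise_est, target_snr_db):
    """Re-mix separated speech and noise estimates at a chosen
    speech-to-noise ratio (in dB), keeping some noise audible
    so that situational awareness is preserved.

    speech_est, noise_est: 1-D arrays from a source-separation
    front end (hypothetical here), sampled at the same rate.
    """
    # Broadband RMS levels of the separated components
    speech_rms = np.sqrt(np.mean(speech_est ** 2))
    noise_rms = np.sqrt(np.mean(noise_est ** 2))

    # SNR of the estimates as delivered by the separator
    current_snr_db = 20.0 * np.log10(speech_rms / (noise_rms + 1e-12))

    # Gain applied to the noise estimate so the mixture reaches
    # the requested speech-to-noise ratio
    noise_gain = 10.0 ** ((current_snr_db - target_snr_db) / 20.0)

    return speech_est + noise_gain * noise_est
```

For a binaural HPD, one plausible way to preserve natural interaural level differences would be to derive a single noise gain and apply it identically to the left and right ear signals; this is an assumption about how such processing could be arranged, not a description of the system presented in the workshop.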

Speakers

Jan Rennies-Hochmuth, Dr.rer.nat.

Head of Group Personalized Hearing Systems, Fraunhofer Institute for Digital Media Technologies IDMT



Thursday February 10, 2022 12:30pm - 2:30pm CST
Virtual