Assessment of auditory function and abilities in life-like listening scenarios
Project C5 in the Collaborative Research Center SFB 1330 “Hearing Acoustics: Perceptive Principles, Algorithms and Applications” (HAPPAA)
Aim
We study auditory mechanisms for communication in life-like listening scenarios. Toward this aim, the project addresses the following:
- Technology: Improve real-time room acoustic simulation and auralization tools (rtSOFE) and verify them for hearing research
- Content: Develop life-like listening scenarios with multiple sound sources, verify them against the acoustics of the simulated real space, and develop ways to share the scenes with the community to foster reproducible research
- Experimental methods: Interaction in life-like listening situations differs strongly from established psychophysical paradigms. We therefore develop novel test methods to assess hearing and communication ability in ongoing, interactive situations, combining interactive reporting, end-of-trial reporting, head and body motion analysis, facial analysis, and interaction analysis (a minimal motion-analysis sketch follows this list).
- Auditory function: We study the impact of dynamic changes in the spatial configuration of sound sources on the detection (unmasking) of tones and speech, and the contribution of visual, facial, and gestural information.
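As an illustration of the motion-analysis methods mentioned above, the sketch below extracts head yaw from tracked orientation quaternions and derives the angular velocity. It is a minimal example, not project code: the (w, x, y, z) component order, the y-up coordinate frame, and the 100 Hz sampling rate are assumptions, and the function names are ours.

```python
import numpy as np

def head_yaw_deg(q):
    """Yaw (horizontal head orientation) in degrees from unit quaternions.

    q : (N, 4) array in (w, x, y, z) order, assuming a y-up coordinate
    frame as used by many VR trackers (an assumption, not necessarily
    the project's convention).
    """
    w, x, y, z = q[:, 0], q[:, 1], q[:, 2], q[:, 3]
    # Rotation about the vertical (y) axis; pitch and roll are ignored.
    return np.degrees(np.arctan2(2.0 * (w * y + x * z),
                                 1.0 - 2.0 * (y * y + z * z)))

def yaw_velocity_deg_s(yaw_deg, fs=100.0):
    """Head angular velocity in deg/s; unwrapping avoids +/-180 deg jumps."""
    yaw_cont = np.degrees(np.unwrap(np.radians(yaw_deg)))
    return np.gradient(yaw_cont) * fs
```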
Demonstration of our interactive audio-visual VR and our underground scene
Key findings
1. Listener movement strongly influences speech perception: speech intelligibility can decrease during self-motion, an effect that had not been reported previously (Hládek & Seeber, under review, 2021; Hládek & Seeber, 2019).
2. The rtSOFE room acoustics simulation and auralization software was extended to multi-source auralization and binaural rendering, verified, and published; a rendering sketch follows this list.
3. Life-like acoustic and visual models of the underground station “Theresienstraße” in Munich were created. Auralization with rtSOFE was verified acoustically and in a speech test with listeners against the real space. The models were published open-source, along with extensive in-situ acoustic measurements, background sound recordings, and documentation for rendering the complete audiovisual scene (Hládek, Ewert & Seeber, 2021).
4. We created a novel ‘in-movement’ speech perception test based on individualized movement patterns of participants to study how speech perception evolves during movement and to probe vestibular/proprioceptive influences (see finding 1 and Hládek & Seeber, Forum Acusticum 2020).
5. We measured and modelled spatial unmasking of moving tonal sources in the free field: even slow movement of 30°/s reduces unmasking. Dynamic masking effects in rooms were modelled with a fast binaural processing stage followed by temporal integration (Kolotzek, Aublin & Seeber, 2021); a model sketch follows this list.
6. A control approach for room acoustic auralizations from the Unreal Engine was developed: the room acoustic simulation software rtSOFE can now be used seamlessly with the Unreal Engine to create professional, interactive visualizations with accurate real-time acoustic rendering over a loudspeaker system or a VR headset (Enghofer, Hládek & Seeber, 2021); a control-message sketch follows this list.
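To make the multi-source binaural auralization of finding 2 concrete, the sketch below renders a scene offline by convolving each dry source signal with its binaural room impulse response (BRIR) and summing the ear signals. This is a minimal, non-real-time illustration of the principle, not rtSOFE's implementation, which renders in real time; the function name and data layout are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(sources, brirs):
    """Sum BRIR-convolved sources into left/right ear signals (offline).

    sources : list of 1-D arrays, dry source signals (common sample rate)
    brirs   : list of (L, 2) arrays, one BRIR per source (left/right columns)
    Returns an (N, 2) array of summed ear signals.
    """
    n_out = max(len(s) + b.shape[0] - 1 for s, b in zip(sources, brirs))
    out = np.zeros((n_out, 2))
    for s, b in zip(sources, brirs):
        for ear in (0, 1):
            y = fftconvolve(s, b[:, ear])  # source through room + ear filter
            out[:len(y), ear] += y         # mix into the common ear channel
    return out
```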
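The two-stage model of finding 5 (a fast binaural stage followed by integration) can be sketched as a frame-wise interaural cue followed by a leaky integrator. The zero-lag normalized correlation cue, the 10 ms window, and the 200 ms time constant below are illustrative assumptions, not the parameters published in Kolotzek, Aublin & Seeber (2021).

```python
import numpy as np

def short_time_correlation(left, right, fs, win_ms=10.0):
    """'Fast' binaural stage: frame-wise normalized interaural correlation."""
    n = int(fs * win_ms / 1000.0)
    n_frames = min(len(left), len(right)) // n
    c = np.empty(n_frames)
    for k in range(n_frames):
        l = left[k * n:(k + 1) * n]
        r = right[k * n:(k + 1) * n]
        denom = np.sqrt(np.sum(l * l) * np.sum(r * r)) + 1e-12
        c[k] = np.sum(l * r) / denom  # zero-lag normalized correlation
    return c

def leaky_integration(cue, frame_rate, tau_ms=200.0):
    """'Slow' stage: first-order leaky integrator over the frame-wise cue.

    frame_rate : frames per second of the cue, i.e. fs / window length.
    """
    alpha = np.exp(-1.0 / (frame_rate * tau_ms / 1000.0))
    out = np.empty_like(cue)
    acc = cue[0]
    for k, v in enumerate(cue):
        acc = alpha * acc + (1.0 - alpha) * v
        out[k] = acc
    return out
```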
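Controlling an external renderer from a game engine, as in finding 6, typically amounts to streaming scene updates over the network. The sketch below encodes a source-position message in OSC format and sends it via UDP; the address pattern /source/1/pos, the port, and the use of OSC itself are assumptions for illustration, not rtSOFE's documented interface.

```python
import socket
import struct

def osc_message(address, *floats):
    """Encode a minimal OSC message with 32-bit float arguments."""
    def pad(b):  # OSC strings are null-terminated and padded to 4 bytes
        return b + b"\x00" * (4 - len(b) % 4)
    msg = pad(address.encode()) + pad(("," + "f" * len(floats)).encode())
    for v in floats:
        msg += struct.pack(">f", v)  # OSC floats are big-endian
    return msg

# Hypothetical address pattern and port; not rtSOFE's documented API.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/source/1/pos", 1.5, 0.0, 2.0), ("127.0.0.1", 9000))
```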
Team members involved
Ľuboš Hládek, PhD
Carlos Eduardo Landeau Bobadilla, M.Eng.
Funding
Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – project number 352015383 – SFB 1330 C5.
Publications
van de Par, Steven; Ewert, Stephan D.; Hládek, Ľuboš; Kirsch, Christoph; Schütze, Julia; Llorca-Bofí, Josep; Grimm, Giso; Hendrikse, Maartje M. E.; Kollmeier, Birger; Seeber, Bernhard U.: Auditory-visual scenes for hearing research. under review, 2021
Hládek, Ľuboš; Ewert, Stephan D.; Seeber, Bernhard U.: Communication Conditions in Virtual Acoustic Scenes in an Underground Station. 2021 Immersive and 3D Audio: from Architecture to Automotive (I3DA), IEEE, 2021 [Full text (DOI)]
Hládek, Ľ.; Seeber, B.U.: Speech intelligibility in reverberation is reduced during self-rotation. under review, 2021 [Full text (DOI)]
Pulella, Paola; Hládek, Ľuboš; Croce, Paolo; Seeber, Bernhard U.: Auralization of acoustic design in primary school classrooms. 2021 IEEE International Conference on Environment and Electrical Engineering and 2021 IEEE Industrial and Commercial Power Systems Europe (EEEIC / I&CPS Europe), IEEE, 2021 [Full text (DOI)]
Enghofer, F.; Hládek, Ľ.; Seeber, B.U.: An 'Unreal' Framework for Creating and Controlling Audio-Visual Scenes for the rtSOFE. Fortschritte der Akustik - DAGA '21, 2021 [Full text (mediaTUM)]
Hládek, Ľ.; Seeber, B.U.: Self-rotation behavior during a spatialized speech test in reverberation. Fortschritte der Akustik - DAGA '21, 2021 [Full text (mediaTUM)]
Kolotzek, N.; Aublin, P.G.; Seeber, B.U.: The effect of early and late reflections on binaural unmasking. Fortschritte der Akustik - DAGA '21, 2021 [Full text (mediaTUM)]
Kolotzek, N.; Aublin, P.G.; Seeber, B.U.: Fast processing explains the effect of sound reflection on binaural unmasking. Professur für Audio-Signalverarbeitung, 2021
Hládek, Ľ.; Seeber, B.U.: The effect of self-motion cues on speech perception in an acoustically complex scene. Forum Acusticum, 2020
Enghofer, F.; Hládek, Ľ.; Seeber, B.U.: An 'Unreal' Framework for Creating and Controlling Audio-Visual Scenes for the rtSOFE. Fortschritte der Akustik - DAGA '20, 2020, 257
Ewert, S.D.; Hládek, Ľ.; Enghofer, F.; Schutte, M.; Fichna, S.; Seeber, B.U.: Description and implementation of audiovisual scenes for hearing research and beyond. Fortschritte der Akustik - DAGA '20, 2020, 364-365
Hládek, Ľ.; Seeber, B.U.: Speech perception during self-rotation. Joint Conference on Binaural and Spatial Hearing, 2020
Hládek, Ľ.; Seeber, B.U.: The effect of self-orienting on speech perception in an acoustically complex audiovisual scene. Fortschritte der Akustik - DAGA '20, 2020, 91-94 [Full text (mediaTUM)]
Kolotzek, N.; Seeber, B.U.: Localizing the end position of a circular moving sound source near masked threshold. Forum Acusticum, 2020
Kolotzek, N.; Seeber, B.U.: Localization of circular moving sound sources near masked threshold. 2020
Hládek, Ľ.; Seeber, B.U.: Behavior and Speech Intelligibility in a Changing Multi-talker Environment. Proc. 23rd International Congress on Acoustics, ICA 2019, 2019, 7640-7645 [Full text (mediaTUM)]
Kolotzek, N.; Seeber, B.U.: Spatial unmasking of circular moving sound sources in the free field. Proc. 23rd International Congress on Acoustics, integrating 4th EAA Euroregio 2019, 2019, 7640-7645 [Full text (mediaTUM)]