Why Virtual Acoustics?
Most psychoacoustic and audiological testing to date has been done with simple stimuli such as tones and noises, or with speech, played via headphones or a few loudspeakers in a test booth. While these approaches have led to significant insights, more research is needed to understand and alleviate the problems hearing-impaired persons face in everyday listening situations, particularly when hearing aids or cochlear implants are used. A good way to study this is to test situations that are problematic for patients, for instance when multiple sounds are present simultaneously (the so-called “cocktail party effect”) or when reverberation smears the sound.
We have developed a system to create such difficult everyday listening situations in the laboratory, thereby allowing full control over the sound. The system makes a wide array of tests of spatial hearing and audio-visual interaction possible. Because participants sit in the free field of an anechoic chamber, they listen unencumbered by earphones, and the sound field is identical regardless of whether they listen with their own ears or with hearing devices. This makes accurate comparisons between normal-hearing and hearing-impaired listeners possible and enables testing of hearing devices in realistic situations. Since the system simulates rooms with all of their reflections, it can also be used to study the impact of reverberation on normal-hearing listeners, for example in concert halls or classrooms.
The Simulated Open Field Environment (SOFE)
The Simulated Open Field Environment (SOFE) is the virtual acoustics facility developed and used by the Audio Information Processing group at TUM. The SOFE consists of software for simulating room acoustics, equalizing loudspeakers, convolving sound with room impulse responses and playing the sounds, plus the hardware for presenting auralizations to listeners (Seeber et al., 2010).
Software: Simulation and Rendering
Our room simulation software uses the image source method. Originally developed by Allen and Berkley for rectangular rooms (JASA, 1979), the method was extended by Borish for rooms of arbitrary geometry (JASA, 1984). In this method, the geometries of sound reflections are calculated by repeatedly mirroring the sound source location across the boundaries of the room.
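As a rough illustration, the sketch below enumerates image sources for a rectangular ("shoebox") room following Allen and Berkley's mirroring rule and accumulates them into a simple impulse response. The room size, source and receiver positions, wall reflection coefficient and sampling rate are illustrative assumptions; the SOFE software itself handles rooms of arbitrary geometry following Borish's extension.

```python
# Minimal sketch of the image source method for a rectangular ("shoebox") room.
# All numbers below are illustrative assumptions, not SOFE parameters.
import numpy as np
from itertools import product

def image_sources(src, room, max_order):
    """Return (position, order) for all image sources up to max_order reflections."""
    images = []
    n_range = range(-max_order, max_order + 1)
    for nx, ny, nz in product(n_range, repeat=3):
        order = abs(nx) + abs(ny) + abs(nz)
        if order > max_order:
            continue
        # Along each axis, mirroring across the walls at 0 and L gives the image
        # coordinates {2mL + s, 2mL - s}; the integer n encodes both m and the parity.
        pos = np.array([n * L + (s if n % 2 == 0 else L - s)
                        for n, L, s in zip((nx, ny, nz), room, src)])
        images.append((pos, order))
    return images

def impulse_response(src, rec, room, max_order, fs=48000, c=343.0, beta=0.8):
    """Build a crude impulse response: one scaled impulse per image source."""
    rec = np.array(rec)
    h = np.zeros(int(0.5 * fs))                        # 0.5 s of impulse response
    for pos, order in image_sources(src, room, max_order):
        dist = np.linalg.norm(pos - rec)
        delay = int(round(dist / c * fs))              # propagation delay in samples
        if delay < len(h):
            h[delay] += beta**order / max(dist, 1e-3)  # wall loss and 1/r spreading
    return h

# Example: 8 m x 6 m x 3 m room, source and receiver on the horizontal plane.
rir = impulse_response(src=(2.0, 3.0, 1.5), rec=(6.0, 2.0, 1.5),
                       room=(8.0, 6.0, 3.0), max_order=3)
```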
The simulation software has been developed completely in-house, allowing fine control over the simulation algorithms and settings. A high-performance version suitable for real-time simulations has also been developed, in which source and receiver positions can be manipulated by the user while the simulation is running. The system updates with very low latency, giving the listener a realistic impression of movement.
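For intuition, the sketch below shows one common way to structure such processing: the dry signal is convolved with the simulated impulse response block by block (overlap-add), so that an updated impulse response can take effect at the next block boundary. The block size and the toy impulse response are assumptions for illustration; this is not the rtSOFE implementation.

```python
# Sketch of block-wise convolution with overlap-add, one common way to auralize a
# dry signal with a room impulse response that may be exchanged while running.
import numpy as np

def stream_auralize(dry, rir, block_size=256):
    """Convolve a signal with an impulse response block by block (overlap-add)."""
    out = np.zeros(len(dry) + len(rir) - 1)
    for start in range(0, len(dry), block_size):
        # In a real-time system, the most recent simulated impulse response would be
        # fetched here, so that simulation updates take effect at the next block.
        block = dry[start:start + block_size]
        seg = np.convolve(block, rir)          # in practice an FFT-based partition
        out[start:start + len(seg)] += seg     # overlap-add into the output
    return out

# Example: auralize 1 s of noise with a toy exponentially decaying impulse response.
fs = 48000
rir = np.random.randn(fs // 2) * np.exp(-np.arange(fs // 2) / (0.05 * fs))
wet = stream_auralize(np.random.randn(fs), rir)
```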
The software component of SOFE also includes rendering to a given loudspeaker array, calculating the signals to be played by each loudspeaker channel. This offers the flexibility to use a variety of rendering methods, including Nearest Loudspeaker, Vector-Based Amplitude Panning, Wave Field Synthesis, Higher-Order Ambisonics, and hybrid methods.
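As an example of one of these methods, the sketch below computes 2-D Vector-Based Amplitude Panning gains for a horizontal ring in the spirit of Pulkki's formulation: the source direction is expressed as a positive combination of the directions of its two nearest loudspeakers. The ring geometry and the constant-power normalization are generic assumptions, not the SOFE rendering code.

```python
# Minimal 2-D Vector-Based Amplitude Panning sketch for a horizontal loudspeaker ring.
import numpy as np

def vbap_2d_gains(source_az_deg, speaker_az_deg):
    """Return per-loudspeaker gains panning a source between its two nearest speakers."""
    az = np.radians(np.asarray(speaker_az_deg, dtype=float))
    src = np.radians(source_az_deg)
    p = np.array([np.cos(src), np.sin(src)])              # source direction (unit vector)
    gains = np.zeros(len(az))

    order = np.argsort(az)                                 # walk around the ring
    for i in range(len(order)):
        a, b = order[i], order[(i + 1) % len(order)]
        L = np.column_stack([[np.cos(az[a]), np.sin(az[a])],
                             [np.cos(az[b]), np.sin(az[b])]])
        g = np.linalg.solve(L, p)                          # solve p = g_a*u_a + g_b*u_b
        if np.all(g >= -1e-9):                             # source lies between this pair
            g = np.clip(g, 0.0, None)
            gains[[a, b]] = g / np.linalg.norm(g)          # constant-power normalization
            break
    return gains

# Example: pan a source at 10 degrees on a ring with one speaker every 3.75 degrees.
g = vbap_2d_gains(10.0, np.arange(0.0, 360.0, 3.75))
```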
Hardware: Presentation
The first incarnation of SOFE was created by Prof. Seeber in the Auditory Perception Lab at UC Berkeley (SOFE v1), and the second was installed at the MRC Institute of Hearing Research in Nottingham (v2). The horizontal setup at TUM (v3) consists of a ring of 96 loudspeakers, spaced only 3.75 degrees apart. Automatic calibration generates equalization filters via a recursive measurement procedure, matching levels across loudspeakers to within 0.3 dB and keeping frequency response deviations below 1.5 dB from 200 Hz to 12 kHz.
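As a simplified illustration of what such equalization filters do, the sketch below designs an FIR correction filter by regularized frequency-domain inversion of a measured loudspeaker response. The single-pass inversion, regularization constant and band limits are assumptions for illustration; the SOFE calibration itself relies on the recursive measurement procedure described above.

```python
# Simplified sketch of designing a loudspeaker equalization filter by regularized
# frequency-domain inversion of a measured impulse response (illustrative only).
import numpy as np

def design_eq_filter(measured_ir, fs=48000, n_fft=4096, f_lo=200.0, f_hi=12000.0, reg=1e-2):
    """Design an FIR filter that flattens the measured response within [f_lo, f_hi]."""
    H = np.fft.rfft(measured_ir, n_fft)
    freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)

    # Regularized inversion: close to 1/H where the response is strong, limited elsewhere.
    H_inv = np.conj(H) / (np.abs(H)**2 + reg * np.max(np.abs(H))**2)

    # Equalize only inside the target band; attenuate outside it.
    band = (freqs >= f_lo) & (freqs <= f_hi)
    H_eq = np.where(band, H_inv, 0.0)

    # Back to the time domain; a circular shift makes the filter causal.
    eq = np.fft.irfft(H_eq, n_fft)
    return np.roll(eq, n_fft // 2)

# Example with a toy "measured" impulse response (a slightly smeared click).
ir = np.zeros(1024)
ir[0], ir[1] = 1.0, 0.4
eq = design_eq_filter(ir)
```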
For localization studies, users can employ our ProDePo method (Seeber, 2002). In this method, subjects use a trackball to steer a laser pointer projected onto a curtain or a paper ring just above the loudspeakers (SOFE v3) to indicate the perceived location of a sound source. The lab also includes a touch-screen interface for interacting with a testing GUI, and the capability to process microphone signals in real time using Simulink, which is used in studies involving hearing aids.
A new laboratory with a fully anechoic chamber has been constructed at TUM (SOFE v4), hosting the real-time Simulated Open Field Environment (rtSOFE) for audiovisual interaction and hearing research. This anechoic facility minimizes the influence of the room on acoustic playback and measurement and contains 61 loudspeakers, including elevated channels above and below the listener, to reproduce sound sources from all directions in three-dimensional space more accurately. It also contains projection screens for 3D video projection and a motion tracking system.
Research Projects
Perceptual Effects of Update Delay in Room Auralization
Real-time room auralization requires frequent room simulation updates, a computationally intensive process. When simulating room acoustics with the image source method, the number of reflections increases geometrically with increasing order (the order being the number of surfaces a sound encounters on its path from source to receiver). This project examines how sound localization is affected when a source moves to a new location and the simulation is updated only up to a finite order, leaving higher-order reflections at the old position. The goal is to quantify the simulation accuracy needed for real-time room acoustic simulation. The study also aims to shed light on how later reflections influence localization in rooms, and thus how the precedence effect functions in reverberant environments.
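To see how quickly the workload grows, the short calculation below counts candidate image sources per order for a room with six planar surfaces: each image can be mirrored across any of the remaining walls, so order k contributes at most W(W-1)^(k-1) candidates. The six-wall case and this upper bound are illustrative assumptions.

```python
# Back-of-the-envelope count of candidate image sources for a room with n_walls
# planar surfaces (six walls assumed here for illustration).
def cumulative_image_sources(n_walls, max_order):
    total, rows = 0, []
    for k in range(1, max_order + 1):
        count = n_walls * (n_walls - 1) ** (k - 1)   # new candidates at order k
        total += count
        rows.append((k, count, total))
    return rows

for k, count, total in cumulative_image_sources(n_walls=6, max_order=6):
    print(f"order {k}: {count:6d} new candidates, {total:7d} cumulative")
# Order 6 alone contributes 6 * 5**5 = 18750 candidates, which is why real-time
# updates are restricted to the low orders.
```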
Funded by the Bernstein Center for Computational Neuroscience, Munich.
The anechoic measurement facility is co-funded by the German Research Foundation.
Staff
Current:
Norbert F. Bischof
Matthieu Kuntz
Past:
Clara Hollomey
Samuel Clapp
Gabriel Gomez
Selected Publications
Kuntz, M.; Bischof, N.F.; Seeber, B.U.: "Sound field synthesis for psychoacoustic research: In situ evaluation of auralized sound pressure level", The Journal of the Acoustical Society of America, 154(3), 2023.
Seeber, B.U.; Kerber, S.; Hafter, E.R.: "A System to Simulate and Reproduce Audio-Visual Environments for Spatial Hearing Research", Hearing Research, 260(1-2), 1–10, 2010.
Seeber, B.: "A New Method for Localization Studies", Acta Acustica United with Acustica, 88(3), 446–450, 2002.
Hafter, E.; Seeber, B.: "The Simulated Open Field Environment for auditory localization research", Proc. ICA 2004, 18th Int. Congress on Acoustics, Kyoto, Japan, 4–9 April 2004, Vol. V, Int. Commission on Acoustics, pp. 3751–3754, 2004.