HRTF magnitude spectra for a typical subject at elevations 0° and 30°. X-axis: frequency in kHz; y-axis: normalized magnitude in dB. Source: Acoustic Lab, Physics Dept., School of Science, South China University of Technology, Guangzhou.
[ CHRONICLE ]
Close your eyes! Wait for a sound and try to locate its source. Most of the time you can do it without trouble; this small miracle is possible because you have two ears and a brain with a talent for maths. The ears, one on each side of your head, both capture the sound signal, but the distance between them gives rise to differences in what each one receives: a tiny gap in amplitude and in arrival time. The ear closer to the source hears the sound a little earlier and a little louder. The brain exploits these minute differences, analysing them to locate the source of the sound.
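The time cue can be estimated with a back-of-the-envelope model. Below is a minimal sketch (not taken from the column) using Woodworth's classic spherical-head formula for the interaural time difference; the head radius is an assumed average, and real values vary with individual anatomy:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, in air at about 20 °C
HEAD_RADIUS = 0.0875     # m, a commonly used average head radius (assumption)

def interaural_time_difference(azimuth_deg: float) -> float:
    """Woodworth's spherical-head approximation of the ITD, in seconds.

    azimuth_deg: angle of the source from straight ahead, between 0 and 90.
    """
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source directly ahead produces no time difference...
print(interaural_time_difference(0.0))   # 0.0
# ...while a source at 90 degrees reaches the nearer ear roughly 0.66 ms earlier.
print(interaural_time_difference(90.0))
```

Sub-millisecond differences of this order are exactly what the brain resolves when localizing a source to the left or right.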
A bit of elementary geometry is enough to see that the above can be called into question: if the source of the sound is the same distance from both ears, they will capture exactly the same signal (same amplitude, no time lag). In this case, i.e. when the source lies in the sagittal plane - the vertical plane through the bridge of the nose - our auditory system operates differently. To understand how, note that the ear is not a single point but a pinna, a relief made of cartilage. The sound signal bounces off this relief, differently according to direction and frequency, before being captured and analysed. During our development, the brain learned to process these deformed signals and deduce where the sound is coming from. It can be hard to believe, so I would encourage you to take a look at the Scilabus website and the experiment conducted by Viviane Lalande, in which she tests our ability to locate sounds after changing the shape of the ears with modelling clay.
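The geometric claim at the start of this argument is easy to check in a couple of lines. A tiny sketch, with illustrative ear positions (the coordinates are assumptions, not measurements):

```python
import math

# Ears placed symmetrically about the sagittal plane (x = 0);
# the 8.75 cm half-spacing is an illustrative average, not a measurement.
left_ear = (-0.0875, 0.0, 0.0)
right_ear = (0.0875, 0.0, 0.0)

def distance(p, q):
    """Euclidean distance between two 3-D points."""
    return math.dist(p, q)

# Any source with x = 0 lies in the sagittal plane and is therefore
# equidistant from both ears: no interaural time or level difference.
source = (0.0, 2.0, 0.5)
print(distance(source, left_ear) == distance(source, right_ear))  # True
```

With both interaural cues gone, only the spectral filtering by the pinna remains to disambiguate such positions.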
Now imagine that you want to locate the source of a sound heard through headphones; in other words, you want to experience 3D sound. The shape of the signal to be fed into the headphones must be calculated so that your brain interprets it like a real signal in space. This means reconstituting the “harmonic filter” (also known as the head-related transfer function, or HRTF) that your ear applies naturally. This is where the mathematics comes in! Knowing the shape of the ears (measured on the individual concerned) and the characteristics of the sound to broadcast, you just have to solve the equations governing the diffraction of sound waves to find the signal that should hit your ear drums, and broadcast it directly into the headphones.
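Once the HRTF is known, the rendering step itself reduces to filtering: the mono source is convolved with a head-related impulse response (HRIR) for each ear. A minimal sketch with toy impulse responses (real HRIRs come from measurements or from the simulations described below):

```python
import numpy as np

def binauralize(mono: np.ndarray, hrir_left: np.ndarray, hrir_right: np.ndarray):
    """Render a mono signal binaurally by convolving it with per-ear HRIRs."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return left, right

# Toy impulse responses (an assumption, for illustration only): the right ear
# hears the source one sample later and slightly attenuated, mimicking a
# source located to the listener's left.
hrir_l = np.array([1.0, 0.0, 0.0])
hrir_r = np.array([0.0, 0.8, 0.0])

signal = np.array([1.0, -0.5, 0.25])
left, right = binauralize(signal, hrir_l, hrir_r)
```

Feeding `left` and `right` to the two earpieces reproduces, in this toy setup, the interaural time and level differences the brain expects.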
Unfortunately, these equations, first studied by Jean Le Rond d'Alembert (1717-1783), are not exactly easy to solve. Numerical methods are required to obtain a sufficiently accurate approximation. For a listener whose ears have been measured, a simulation of the HRTF filter can be reproduced, though only with great effort. The computational demands are huge and call for very elaborate mathematics (and computing): the calculation must be made in real time so that the playback adapts to the position and orientation of the listener. Any additional practical constraint (taking into account the reverberation off the walls of the 3D space to be simulated, for example) requires even further mathematical sophistication!
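To give a flavour of these numerical methods, here is a minimal sketch that solves d'Alembert's wave equation in one dimension with an explicit finite-difference scheme. Real HRTF solvers work in three dimensions on a mesh of the head and pinna, typically with far more sophisticated techniques, so this is only an illustration of the principle:

```python
import numpy as np

# d'Alembert's 1-D wave equation u_tt = c^2 u_xx on a string of length L,
# discretized with the classic explicit leapfrog scheme.
c = 343.0          # speed of sound in air, m/s
L = 1.0            # domain length, m
nx = 201
dx = L / (nx - 1)
dt = 0.9 * dx / c  # time step chosen to satisfy the CFL stability condition

x = np.linspace(0.0, L, nx)
u_prev = np.exp(-((x - 0.5) / 0.05) ** 2)  # initial Gaussian pulse, at rest
u = u_prev.copy()

r2 = (c * dt / dx) ** 2
for _ in range(200):
    u_next = np.zeros_like(u)
    # Interior update: second-order centred differences in time and space.
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next  # the ends stay at 0: rigid (reflecting) boundaries
```

The initial pulse splits into two travelling waves that reflect off the boundaries, the 1-D analogue of sound scattering off the relief of the pinna.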
If you visit the site of the X-Audio research team at the Centre for Applied Mathematics at the École Polytechnique, you will quickly spot the leisure applications (cinema, video games, etc.). Stéphane Lesueur's experiment, however, is even more impressive. This rollerblading champion is blind, but with a prototype 3D audio headset and an algorithm developed by the team, a sound can be broadcast to him indicating the direction of the track, allowing him to skate without a guide. Magic, once again thanks to mathematics!
Roger Mansuy teaches at the Lycée Louis-le-Grand in Paris and is a member of the French Commission for Mathematics Teaching (CFEM).