Research
  Jonas Braasch - Architectural Acoustics Rensselaer

Visit the webpage of the Communication Acoustics and Aural Architecture Research Lab for further information.

Sound-Source Tracking Device to Track Multiple Talkers from Microphone Array and Lavalier Microphone Data

Jonas Braasch, Nicholas Tranby

Many algorithms have been developed to localize sound sources from the differences between signals arriving at spatially separated microphones in one or more arrays. While many of these systems perform well with a single sound source, tracking multiple sound sources in parallel remains a real challenge. In the project presented here, the task was to localize talkers and then reproduce their voices – which were recorded at close distance with lavalier microphones – at the correct spatial positions using a loudspeaker rendering system. Localization was based on time-delay differences between the channels of a small-aperture pyramidal five-microphone array. In addition to this common practice, the information gained from the talker-worn microphones was used to estimate the signal-to-noise ratio (SNR) between each talker and the concurrent talkers. An algorithm was designed to select time-frequency bins with a high SNR for robust localization of the individual talkers and to identify which talker belonged to each localized source. Correlating the talker-worn microphones with the microphone array was found to yield greater localization accuracy and precision than using the microphone array alone.
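
The abstract describes the approach but not its implementation. As a rough illustration, the Python sketch below estimates a per-bin SNR mask from the talker-worn (lavalier) signals and then restricts a GCC-PHAT time-delay estimate between two array channels to the high-SNR bins. The function names, the threshold, the STFT parameters, and the choice of GCC-PHAT are assumptions made for this example and are not taken from the paper; it also assumes all signals are time-aligned and of equal length.

    import numpy as np
    from scipy.signal import stft

    def snr_mask(lavalier, fs, talker_idx, nperseg=1024, thresh_db=0.0):
        """Boolean time-frequency mask of bins in which one talker dominates.

        lavalier: array of shape (n_talkers, n_samples) with the close-miked signals.
        Bins are kept where the chosen talker exceeds the sum of the concurrent
        talkers by at least `thresh_db`.
        """
        specs = np.array([np.abs(stft(x, fs=fs, nperseg=nperseg)[2]) ** 2
                          for x in lavalier])            # per-talker power spectrograms
        target = specs[talker_idx]
        others = specs.sum(axis=0) - target + 1e-12      # power of the concurrent talkers
        snr_db = 10.0 * np.log10(target / others + 1e-12)
        return snr_db > thresh_db

    def gcc_phat_delay(x, y, fs, mask=None, nperseg=1024, interp=4):
        """GCC-PHAT delay estimate of y relative to x, optionally restricted
        to the time-frequency bins selected by `mask`."""
        _, _, X = stft(x, fs=fs, nperseg=nperseg)
        _, _, Y = stft(y, fs=fs, nperseg=nperseg)
        cross = X * np.conj(Y)
        if mask is not None:
            cross = cross * mask                         # keep only high-SNR bins
        cross /= np.abs(cross) + 1e-12                   # PHAT weighting
        n = interp * nperseg                             # zero-pad for finer lag resolution
        cc = np.fft.irfft(cross.mean(axis=1), n=n)
        lag = np.argmax(np.abs(cc))
        if lag > n // 2:                                 # wrap negative lags
            lag -= n
        return lag / (interp * fs)                       # delay in seconds

In a complete system, each pair of array microphones would yield one such delay per talker, and the set of delays would then be converted into a direction estimate and a talker identity for each localized source; the actual algorithm in the paper may differ in all of these details.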

J. Braasch, N. Tranby: A sound-source tracking device to track multiple talkers from microphone array and lavalier microphone data, 19th International Congress on Acoustics, Sept. 2-7, 2007, Madrid, Spain, paper ELE-03-009 (6 pages).

Auditory Environments based on Virtual Microphone Control (ViMiC)

In auditory virtual environments, it is often necessary to position an anechoic point source in three-dimensional space. When such sources are rendered over multichannel loudspeaker systems, the processing is typically based on simple amplitude-panning laws. This paper describes an alternative approach based on an array of virtual microphones. In the newly designed environment, the virtual microphones, with adjustable directivity patterns and axis orientations, can be placed anywhere in space. The system architecture was designed to meet the expectations of audio engineers and to create sound imagery similar to that associated with standard sound-recording practice.
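
At the core of this approach, each loudspeaker feed is derived from a virtual microphone with its own position, aim direction, and directivity pattern, rather than from a panning law. The short Python sketch below, a simplified illustration rather than the published ViMiC implementation, computes the gain and propagation delay that one such virtual microphone with a first-order directivity pattern would apply to a point source; the array geometry, pattern value, and 1/r distance law are assumptions made for the example.

    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s

    def virtual_mic_gain_delay(src_pos, mic_pos, mic_axis, pattern=0.5):
        """Gain and propagation delay of a point source for one virtual microphone.

        `pattern` interpolates a first-order directivity pattern:
        1.0 = omnidirectional, 0.5 = cardioid, 0.0 = figure-of-eight.
        Positions and the aim axis are 3-vectors (metres / unit direction).
        """
        src_pos, mic_pos, mic_axis = map(np.asarray, (src_pos, mic_pos, mic_axis))
        to_src = src_pos - mic_pos
        dist = np.linalg.norm(to_src)
        cos_theta = np.dot(to_src / dist, mic_axis / np.linalg.norm(mic_axis))
        directivity = pattern + (1.0 - pattern) * cos_theta   # first-order pattern
        gain = directivity / max(dist, 0.1)                   # 1/r distance law
        delay = dist / SPEED_OF_SOUND                         # propagation delay in s
        return gain, delay

    # Example: five cardioid virtual microphones, each feeding one loudspeaker.
    source = [2.0, 1.0, 0.0]
    mics = {"L":  ([-1.0,  1.0, 0.0], [-0.5,  1.0, 0.0]),
            "C":  ([ 0.0,  1.0, 0.0], [ 0.0,  1.0, 0.0]),
            "R":  ([ 1.0,  1.0, 0.0], [ 0.5,  1.0, 0.0]),
            "Ls": ([-1.0, -1.0, 0.0], [-0.5, -1.0, 0.0]),
            "Rs": ([ 1.0, -1.0, 0.0], [ 0.5, -1.0, 0.0])}
    for name, (pos, axis) in mics.items():
        g, d = virtual_mic_gain_delay(source, pos, axis)
        print(f"{name}: gain = {g:.3f}, delay = {d * 1000:.2f} ms")

Because each virtual microphone contributes both a level and a time difference, the resulting loudspeaker signals behave more like a spaced microphone recording than like amplitude panning, which is the sound imagery the system aims to recreate.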

J. Braasch, A loudspeaker-based 3D sound projection using Virtual Microphone Control (ViMiC), 118th Convention of the Audio Eng. Soc., May 2005, Preprint 6430.

J. Braasch, W. Woszczyk, A “Tonmeister” approach to the positioning of sound sources in a multichannel audio system, in: 2005 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA 05), Mohonk Mountain House, New Paltz, New York, October 16–19, 2005.

J. Braasch, W. Woszczyk, An immersive audio environment with source positioning based on virtual microphone control, 119th AES Convention, Oct. 7–10, 2005, New York, NY, USA, Preprint 6546.



Sharing Acoustic Spaces over Telepresence

This paper describes a system used to project musicians at two or more co-located venues into a shared virtual acoustic space. The sound of the musicians is captured with spot microphones and then projected at the remote end using spatialization software based on Virtual Microphone Control (ViMiC) and an array of loudspeakers. To simulate the same virtual room at all co-located sites, the ViMiC systems exchange room parameters and the room coordinates of the musicians via the OpenSound Control (OSC) protocol.
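
As a rough sketch of that exchange, the example below sends a musician's room coordinates and the shared room dimensions to a remote ViMiC instance over OSC, using the python-osc package. The OSC address patterns, host name, and port are hypothetical choices made for this illustration; they are not the message format of the actual ViMiC systems.

    from pythonosc.udp_client import SimpleUDPClient
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    # Sending side: announce a source position and the room size to the remote site.
    client = SimpleUDPClient("remote-venue.example.org", 9000)    # hypothetical host and port
    client.send_message("/vimic/source/1/xyz", [2.0, 1.0, 0.0])   # source 1 position in metres
    client.send_message("/vimic/room/size", [8.0, 6.0, 3.5])      # shared room dimensions

    # Receiving side: update the local renderer whenever coordinates arrive.
    def on_source_xyz(address, x, y, z):
        print(f"{address}: source moved to ({x}, {y}, {z})")      # hand off to the local ViMiC renderer

    dispatcher = Dispatcher()
    dispatcher.map("/vimic/source/1/xyz", on_source_xyz)          # one handler per tracked source
    server = BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher)
    # server.serve_forever()                                      # blocking; each venue runs its own receiver

Because every site applies the same room parameters and source coordinates to its own ViMiC renderer, the musicians at all connected sites hear one another within the same virtual room.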

J. Braasch, D. Valente, N. Peters, Sharing Acoustic Spaces over Telepresence using Virtual Microphone Control, Convention of the Audio Eng. Soc., October 5-8, 2007, New York, NY, USA.

Copyright © 2008 Jonas Braasch