and Aural Architecture Research Laboratory
Robust Distributed Intelligent System for Telematic Applications
Complex communication between co-located performers in telepresence applications across networks is still impaired compared to that between performers sharing one physical location. This impairment must be reduced significantly before the broader community can participate in complex communication scenarios. To achieve this goal, an avatar in the form of a musical conductor, equipped with forms of artificial intelligence, will coordinate the co-located musicians. Improvised contemporary live music played by a larger ensemble, serving as a test bed, is arguably one of the most complex scenarios one could think of, because it requires engaged communication between individuals within a multiple-source sound field that also has to be considered as a whole. The results are expected to inspire solutions for other communication tasks.
The avatar system will actively coordinate co-located improvisation ensembles in a creative way. To achieve this goal, Computational Auditory Scene Analysis (CASA) systems, which allow robust feature recognition, will be combined with evolutionary algorithms, which provide the creative component, to form the first model of its kind. The research results are expected to be significant in themselves and are not bound to telematic applications. With regard to the latter, the proposed system will have a clear advantage over a human musician/conductor, whereas intelligent algorithms clearly lag behind human performance in most other applications, especially when it comes to creativity.
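As an illustration of the evolutionary component, a minimal genetic algorithm can evolve a candidate musical parameter vector toward a fitness goal. The sketch below is purely illustrative and is not the system described above: the binary rhythm representation and the toy density-based fitness function are assumptions, not part of the actual model.

```python
import random

def fitness(pattern, target_density=0.4):
    # Toy fitness: how close the rhythm's note density is to a target value.
    density = sum(pattern) / len(pattern)
    return -abs(density - target_density)

def evolve(pop_size=30, length=16, generations=60, seed=1):
    rng = random.Random(seed)
    # Random initial population of binary rhythm patterns.
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)        # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                # occasional point mutation
                i = rng.randrange(length)
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

In a musical setting the fitness function would of course not be a fixed target but would respond to the analyzed ensemble input; the selection/crossover/mutation loop itself is the generic part.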
J. Braasch (2009) The Telematic Music System: Affordances for a New Instrument to Shape the Music of Tomorrow, Contemporary Music Review, 28(4): 421-432
J. Braasch, C. Chafe, P. Oliveros, D. Van Nort (2009) Mixing Console Design Considerations for Telematic Music Applications, Proc. 127th Audio Engineering Society Convention, Preprint 7942
J. Braasch (2009) Importance of visual cues in networked music performances, J. Acoust. Soc. Am. (conference abstract), 125, p. 2516
D. Van Nort, J. Braasch, P. Oliveros (2009) A system for musical improvisation combining sonic gesture recognition and genetic algorithms, in: Proceedings of the SMC 2009 - 6th Sound and Music Computing Conference, 23-25 July 2009, Porto, Portugal, 131-136.
Link to project webpage
This project is currently funded by the National Science Foundation.
Sensory Substitution Algorithms for Tactile Music Communication in the Web and Physical Spaces
The goal of this project is to design a sensory substitution system that allows severely hearing-impaired and deaf people to listen to and perform music.
Link to project webpage
D. Egloff, J. Braasch, P. Robinson, D. Van Nort, T. Krueger (2011) A vibrotactile music system based on sensory substitution (A), J. Acoust. Soc. Am. 129, 2582
D. Van Nort, J. Braasch, P. Oliveros (2010) Sound texture analysis based on a dynamical systems model and empirical mode decomposition, Proc. of the Convention of the Audio Eng. Soc., Vol. 129, San Francisco, CA, Paper 8251
3D Sound Projection based on Virtual Microphone Control (ViMiC)
In auditory virtual environments, it is often necessary to position an anechoic point source in three-dimensional space. When sources in such applications are to be displayed using multichannel loudspeaker reproduction systems, the processing is typically based on simple amplitude-panning laws. This project describes an alternative approach based on an array of virtual microphones. In the newly designed environment, the microphones, with adjustable directivity patterns and axis orientations, can be placed spatially as desired. The system architecture was designed to comply with the expectations of audio engineers and to create sound imagery similar to that associated with standard sound recording practice.
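The core idea of a virtual microphone can be sketched as follows: each virtual microphone contributes a gain that depends on its first-order directivity pattern and its orientation relative to the source, plus a distance-dependent attenuation and propagation delay. The sketch below is a minimal 2D illustration under those textbook assumptions; the function names and the pattern coefficient convention (1.0 = omni, 0.5 = cardioid, 0.0 = figure-of-eight) are ours, not the ViMiC implementation itself.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def virtual_mic_gain_delay(src, mic_pos, mic_axis, a=0.5):
    """Gain and delay of a point source as picked up by one virtual microphone.

    src, mic_pos: (x, y) positions in metres; mic_axis: unit vector of the
    microphone's look direction; a: first-order pattern coefficient
    (1.0 = omni, 0.5 = cardioid, 0.0 = figure-of-eight).
    """
    dx, dy = src[0] - mic_pos[0], src[1] - mic_pos[1]
    r = math.hypot(dx, dy)
    cos_theta = (dx * mic_axis[0] + dy * mic_axis[1]) / r
    directivity = a + (1.0 - a) * cos_theta   # first-order directivity pattern
    gain = directivity / max(r, 0.1)          # 1/r distance attenuation
    delay = r / SPEED_OF_SOUND                # propagation delay in seconds
    return gain, delay

# Source 2 m in front of a cardioid microphone pointing along +x:
g_front, d = virtual_mic_gain_delay((2.0, 0.0), (0.0, 0.0), (1.0, 0.0))
# Same distance behind the microphone: a cardioid rejects the rear direction.
g_rear, _ = virtual_mic_gain_delay((-2.0, 0.0), (0.0, 0.0), (1.0, 0.0))
```

Feeding the per-microphone gains and delays to the corresponding loudspeaker channels is what distinguishes this approach from plain amplitude panning, which uses gains only.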
J. Braasch (2005), A loudspeaker-based 3D sound projection using Virtual Microphone Control (ViMiC), Convention of the Audio Eng. Soc. 118, May 2005, Preprint 6430.
J. Braasch, W. Woszczyk, A “Tonmeister” approach to the positioning of sound sources in a multichannel audio system, in: 2005 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA 05), Mohonk Mountain House, New Paltz, New York, October 16–19, 2005.
J. Braasch, W. Woszczyk, An immersive audio environment with source positioning based on virtual microphone control, 119th AES Convention, Oct. 7–10, 2005, New York, NY, USA, Preprint 6546.
Sharing Acoustic Spaces over Telepresence
This paper describes a system which is used to project musicians in two or more co-located venues into a shared virtual acoustic space. The sound of the musicians is captured using spot mics. Afterwards, it is projected at the remote end using spatialization software based on virtual microphone control (ViMiC) and an array of loudspeakers. In order to simulate the same virtual room at all co-located sites, the ViMiC systems communicate using the OpenSound Control protocol to exchange room parameters and the room coordinates of the musicians.
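Since the sites exchange room parameters and source coordinates as OpenSound Control messages, the wire format itself is simple: a NUL-padded address string, a type-tag string, and big-endian float32 arguments (per the OSC 1.0 specification). The encoder below is a stdlib-only sketch of that format; the address `/vimic/source/1/xyz` is a hypothetical example, not the documented ViMiC address space.

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """NUL-terminate and pad a byte string to a multiple of four, per OSC 1.0."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode an OSC message whose arguments are all float32 (big-endian)."""
    msg = osc_pad(address.encode("ascii"))
    msg += osc_pad(("," + "f" * len(floats)).encode("ascii"))  # type tags
    for value in floats:
        msg += struct.pack(">f", value)
    return msg

# Hypothetical address: send one source's room coordinates to the remote site.
packet = osc_message("/vimic/source/1/xyz", 2.5, 0.0, 1.2)
```

In practice such packets would be sent over UDP to each co-located site, so that all ViMiC instances render the musicians at the same positions in the same virtual room.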
J. Braasch, D. Valente, N. Peters (2007) Sharing Acoustic Spaces over Telepresence using Virtual Microphone Control, Convention of the Audio Eng. Soc., October 5-8, 2007, New York
Sound-Source Tracking Device to Track Multiple Talkers from Microphone Array and Lavalier Microphone Data
Jonas Braasch, Nicholas Tranby
Many algorithms have been developed to localize sound sources based on the differences in the sound as it arrives at spatially separated microphones in a larger array or arrays of microphones. While many of these systems perform well with one sound source, tracking multiple sound sources in parallel remains a real challenge. In the project presented here, the task was to localize talkers and then reproduce their voices – which were recorded at close distance with lavalier microphones – with spatially correct placement using a loudspeaker rendering system. The localization process was based on time-delay differences between the various channels of a small-aperture, pyramidal five-microphone array. In addition to this common practice, the information gained from the talker-worn microphones was used to estimate the signal-to-noise ratio (SNR) between each talker and the concurrent talkers. An algorithm was designed to select time–frequency bins that showed a high SNR for robust localization of the various talkers and to identify the talkers of the localized sources. It was found that correlating the talker-worn microphones with the microphone array allows for greater accuracy and precision of localization than the microphone array alone.
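The time-delay estimation step described above can be illustrated with a broadband cross-correlation: the lag of the correlation peak between two microphone channels gives the time difference of arrival (TDOA), which maps to a far-field azimuth via the mic spacing. This is a generic sketch of that common practice, not the paper's algorithm — in particular, it omits the SNR-based time–frequency bin selection, and the function names are ours.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def estimate_tdoa(sig_a, sig_b):
    """TDOA (in samples) of sig_a relative to sig_b, taken from the
    peak of the full cross-correlation."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    return int(np.argmax(corr)) - (len(sig_b) - 1)

def tdoa_to_azimuth(delay_samples, fs, mic_spacing):
    """Far-field azimuth (radians) from a TDOA between two microphones."""
    tau = delay_samples / fs
    s = np.clip(SPEED_OF_SOUND * tau / mic_spacing, -1.0, 1.0)
    return float(np.arcsin(s))

# Simulate a noise burst that arrives 5 samples later at microphone A.
rng = np.random.default_rng(0)
burst = rng.standard_normal(512)
mic_b = burst
mic_a = np.concatenate([np.zeros(5), burst[:-5]])
delay = estimate_tdoa(mic_a, mic_b)
```

With several concurrent talkers, running this on all time–frequency bins smears the estimates together, which is exactly why the paper restricts the correlation to bins with a high per-talker SNR, as judged from the lavalier microphones.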
J. Braasch, N. Tranby: A sound-source tracking device to track multiple talkers from microphone array and lavalier microphone data, 19th International Congress on Acoustics, Sept. 2-7, 2007, Madrid, Spain, paper: ELE-03-009