Recently, Virginia Tech and international engineering consultants ARUP partnered with HARMAN to create the world’s first full-scale big data exploration facility, The Cube. This four-story, $15 million theater and high-tech laboratory is a highly adaptable space for research and experimentation in big data exploration, immersive environments, multimedia performances, audio and visual installations, and experiential investigations of all types. The environment can be used by scientists, engineers, composers, and artists: anyone who wishes to explore the creative discipline of spatial sound. Conceived and designed by ARUP, much of the Cube’s acoustic, audiovisual, and 3D audio capability is made possible by technologies from HARMAN’s JBL Professional and BSS Audio.
The audio system in The Cube goes a long way toward supporting virtual reality experiences by allowing experimentation with wave field synthesis, ambisonics, and vector-based amplitude panning (VBAP). In essence, sound can be placed anywhere in the space, pushing the limits of what has traditionally been possible with 3D audio.
VECTOR-BASED AMPLITUDE PANNING (VBAP)
VBAP is a vector-based reformulation of amplitude panning that yields simple, computationally efficient equations for positioning virtual sound sources. With this method it is possible to create two- or three-dimensional sound fields in which any number of loudspeakers can be placed arbitrarily. That gives artists a way to dynamically move sounds around a multimedia space with whatever speakers are available, and several of the new 3D cinema sound formats use it to add realism to their film sound designs.
In practice, VBAP pans a source across the two or three nearest speakers at a time, rather than routing it one-to-one to a single output.
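To make the idea concrete, here is a minimal sketch of the two-dimensional case: the source direction is expressed as a combination of the two adjacent speaker direction vectors, and the resulting gains are power-normalized. This is an illustrative example, not code from The Cube; the function name and degree-based interface are assumptions.

```python
import math

def vbap_2d_gains(source_deg, spk1_deg, spk2_deg):
    """Pairwise 2D VBAP: solve g1*l1 + g2*l2 = p for the gains,
    then normalize so g1^2 + g2^2 = 1 (constant power)."""
    def unit(deg):
        r = math.radians(deg)
        return (math.cos(r), math.sin(r))

    p = unit(source_deg)            # desired source direction
    l1, l2 = unit(spk1_deg), unit(spk2_deg)  # speaker directions

    # Invert the 2x2 matrix whose rows are the speaker unit vectors.
    det = l1[0] * l2[1] - l1[1] * l2[0]
    g1 = (p[0] * l2[1] - p[1] * l2[0]) / det
    g2 = (p[1] * l1[0] - p[0] * l1[1]) / det

    norm = math.hypot(g1, g2)
    return g1 / norm, g2 / norm
```

A source centered between two speakers at plus and minus 45 degrees comes out with equal gains of about 0.707 on each, while a source aimed straight at one speaker gets a gain of 1 on that speaker and 0 on the other.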
AMBISONICS
Ambisonics is a full-sphere sound technique. Unlike traditional sound techniques, which cover only the horizontal plane, ambisonics covers sound sources above and below the listener. Its transmission channels contain a speaker-independent representation of a sound field called B-format, which is then decoded to the listener’s speaker setup. This extra step allows the producer to think in terms of source directions rather than loudspeaker positions, and offers the listener a considerable degree of flexibility as to the layout and number of speakers used for playback.
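The speaker-independent B-format mentioned above can be sketched in a few lines: a mono signal is encoded into four channels, W (omnidirectional pressure) plus X, Y, and Z (figure-of-eight components along each axis). This is a generic first-order encoder for illustration; the function name and list-based signal format are assumptions, not part of The Cube's actual system.

```python
import math

def encode_bformat(signal, azimuth_deg, elevation_deg):
    """First-order ambisonic (B-format) encoding of a mono signal.
    W carries the pressure component (scaled by 1/sqrt(2), per the
    traditional B-format convention); X, Y, Z carry the directional
    components along the front, left, and up axes."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = [s / math.sqrt(2) for s in signal]
    x = [s * math.cos(az) * math.cos(el) for s in signal]
    y = [s * math.sin(az) * math.cos(el) for s in signal]
    z = [s * math.sin(el) for s in signal]
    return w, x, y, z
```

Because the four channels describe the sound field itself rather than any speaker feed, the same encoded material can later be decoded to whatever loudspeaker layout the venue happens to have.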
WAVE FIELD SYNTHESIS
Wave field synthesis is a spatial audio rendering technique that produces “artificial” wave fronts. Unlike traditional spatialization, in which the perceived image changes with the position of the listener, sound rendered with wave field synthesis appears to originate from a fixed virtual point in space, regardless of where the listener stands.
HARMAN’s Paul Chavez explained, “If you have a helicopter that was 20 feet outside the room, the speakers in the room represent the sound as though it had traveled from that helicopter and through the walls by reproducing the expanding sound waves at the speaker array boundary.”
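The helicopter example can be sketched numerically: each speaker in the array re-emits the virtual source's wavefront with a delay equal to the travel time from the source to that speaker, and an amplitude weight that falls off with distance. This is a simplified illustration of the principle, not The Cube's actual rendering algorithm; the 1/sqrt(r) weighting and the function interface are assumptions.

```python
import math

SPEED_OF_SOUND = 343.0  # meters per second, at room temperature

def wfs_driving_params(source_xy, speaker_positions):
    """For a virtual source outside the array, compute a per-speaker
    (delay_seconds, amplitude_weight) pair so that each speaker
    re-emits the expanding wavefront at the moment it would arrive."""
    params = []
    for sx, sy in speaker_positions:
        r = math.hypot(sx - source_xy[0], sy - source_xy[1])
        delay = r / SPEED_OF_SOUND        # propagation time to this speaker
        weight = 1.0 / math.sqrt(r)       # simple distance attenuation
        params.append((delay, weight))
    return params
```

A speaker farther from the virtual helicopter gets a longer delay and a smaller weight, so the array boundary collectively reconstructs the curved wavefront that would have passed through it.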
All of this is made possible by 124 JBL SCS-8 two-way 8-inch coaxial full-range surround speakers: 64 are suspended on the walls at regular intervals below the first-story catwalk, with another 60 above the ring. Three Dante-compliant BLU-806 signal processors from BSS Audio provide the proper calibration of the speaker system. There are also 10 JBL LSR6328P loudspeakers on stage, at ear height, for performance-type events. HARMAN’s director of system applications, Paul Chavez, said, “Spatial sound is a compelling frontier for audio innovation and is likely to positively impact how and what we listen to in the car, the home and in large venues. The Cube is among the elite facilities for exploration and learning, and we are extremely pleased to be involved.”