It might be difficult to perceive spatial music if you’ve never heard it, but the easiest way to describe this three-dimensional sonic technology is this: it sounds like real life. Or at least close to it.
When you listen to a recorded piece of music – whether it’s mono, stereo or 5.1 – it’s static. Even if you’re surrounded by speakers, you’re getting the sound from a single perspective. But if you’re at a concert in the first row to the left, then move up to the balcony or jump on stage, you’ll hear things differently – maybe you’re further away from the drummer, or you’re close enough that you can hear something otherwise buried in a mix. The music will sound unique based on where you’re standing.
In a collaborative effort between the Schulich School of Music’s Graduate Program in Sound Recording and Eidos-Montreal, the video game studio best known for the Deus Ex series and Shadow of the Tomb Raider, Schulich professors Wieslaw Woszczyk and Richard King are bringing their considerable recording experience to implementing ambisonics – a full-sphere method of capturing spatial sound – in video games.
“With a video game, you have the freedom of interacting,” says Woszczyk, who among many career highlights worked with Brian Eno, Philip Glass and Harry Belafonte. “This is what we hope to do: We want the player to have many more possibilities to engage as a listener, using video game tools and technology and not being fixed in your position like in a movie. The video game audience is more prepared to accept virtual reality, and we can bring music into that concept beyond what was available before.”
Woszczyk and King are working specifically on the music side, but a practical example of how spatial recording could revolutionize gaming is how it could make the simple sound of a machine gun – an integral component of popular shooting games – much more immersive. Right now, game sound lives mostly in the horizontal plane; with spatial audio capturing the vertical dimension as well, you could hear bullets whizzing past above you.
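To give a flavour of how ambisonics represents that vertical dimension, here is a minimal Python sketch of first-order B-format encoding, which pans a mono sample into four channels (W, X, Y, Z) based on direction. This is an illustrative simplification, not the researchers' actual pipeline; real systems also choose among conventions such as AmbiX (ACN/SN3D) that differ in channel ordering and scaling.

```python
import math

def encode_first_order(sample, azimuth_deg, elevation_deg):
    """Encode a mono sample into first-order ambisonic B-format (W, X, Y, Z).

    Uses the traditional B-format convention where W is attenuated by
    1/sqrt(2). Azimuth 0 = straight ahead (positive = left); elevation
    positive = up. Z is the height channel that conventional horizontal
    surround formats lack.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample / math.sqrt(2)                 # omnidirectional component
    x = sample * math.cos(az) * math.cos(el)  # front-back
    y = sample * math.sin(az) * math.cos(el)  # left-right
    z = sample * math.sin(el)                 # up-down (height)
    return w, x, y, z

# A bullet passing directly overhead lands almost entirely in Z:
w, x, y, z = encode_first_order(1.0, azimuth_deg=0, elevation_deg=90)
```

Because direction is baked into the channel gains rather than into fixed speaker feeds, the same encoded scene can later be decoded for any speaker layout, or rotated as the player turns.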
“A machine gun firing in games always sounds the same,” says Woszczyk. “There’s rarely any variance in it. It only changes a little bit based on environment, but it’s not very convincing, and very often it’s an added effect. It’s not being there. We want to reconstruct, as much as possible, the sensation of being there.”
For this project, the pair will work with a composer who will write a piece of dynamic spatial music for a video game environment. Eventually there will be a stage where Woszczyk and King will play the game and test the audio, which is something they’re anticipating with curiosity since neither has much experience with modern gaming.
Facilities at the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) and other parts of Schulich are also ready to embrace this developing technology.
“Soon after 2000 we were designing the new building facilities, and we had to decide on our biggest lab and control rooms – what’s the shape and size?” says Woszczyk. “The standard in the industry was a low-ceiling, surround approach. But we built all the rooms, including CIRMMT labs, with high ceilings because we felt there would be elevation speakers, a height dimension and 3D sound, and we wanted to be ready for it.”
“We have a room that’s two floors underground, and the ceiling is over 45 feet high, so a lot of real estate to work with,” adds King.
King, a multiple Grammy Award winner for his music engineering work, started in surround sound in the early nineties and has spent the last decade adding height channels and 3D sound. He is excited about finally letting the listener control the space, though music will be the hardest test in evolving spatial audio beyond a technologically impressive gimmick.
“One of the easiest things to create artificially in an immersive manner is ambience – applause, for instance,” King says. “Then from there it’s spoken word, because the ear is very sensitive to spoken word; the artifacts are so clear because we can understand natural voice. But the toughest thing is music, with its complex overtones. That will be the trickiest thing for us to figure out: how to crossfade between a closer and a more distant perspective without introducing any phasing problems in the sound.”
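One standard tool for the perspective-blending problem King describes is an equal-power crossfade. The Python sketch below is a generic illustration of that technique, not King's actual method: sine/cosine gains keep the summed power constant as the listener moves, avoiding the mid-fade level dip of a plain linear blend. (Note that equal-power gains alone do not cure comb-filter "phasing" between correlated microphone feeds; that also requires time-aligning the two perspectives.)

```python
import math

def equal_power_crossfade(close, distant, t):
    """Blend two perspectives of the same source at constant total power.

    t = 0.0 -> fully the 'close' perspective, t = 1.0 -> fully 'distant'.
    The gains satisfy g_close**2 + g_distant**2 == 1, so loudness stays
    steady across the fade (for uncorrelated signals); a linear crossfade
    instead dips about 3 dB at the midpoint.
    """
    g_close = math.cos(t * math.pi / 2)
    g_distant = math.sin(t * math.pi / 2)
    return [g_close * c + g_distant * d for c, d in zip(close, distant)]
```

In a game, `t` would be driven by the player's distance from the performers, so walking from the balcony down to the stage smoothly morphs one recorded perspective into the other.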
As the technology improves and more artists get their hands on it, Woszczyk thinks it could fully add another dimension to the art form of sound altogether.
“For composers, performers, directors and storytellers, that spatial architecture and dynamic exploration of music may be a new language for storytelling. I have a feeling composers like Strauss and Shostakovich had that spatial thinking in their minds. The music is so rich in movement and changes in orchestration from one group to another, but it’s all been frozen in some way, it has not been fully exploited. It’s just melodies, rhythms and harmonies, but what about spatial movement? What about dynamic presence of music around you? Being thrown into a river or ocean, you swim and you’re fully immersed, you feel the currents – we’re getting to that level of music engagement.”