24/05/17 – Initial Final Project Notes

First Attempt

The original plan for my final project was to use audio recordings of my grandfather, made during the later stages of his life, in which he describes his life story. The plan was to generate an invisible architecture on which to mount this material, making this large and unwieldy piece of media (fourteen hours in total) more easily navigable and digestible. However, after working with the recordings for some time I came to the very difficult realisation that although I really liked the material, I was not enjoying the work at all. On a number of occasions Martin said, “If you don’t enjoy what you’re working on, we won’t enjoy what you submit.” This set alarm bells ringing in my head, and I began to realise that the prospect of spending another 3+ months on this particular project filled me with a kind of dread, and so I made the difficult decision to change my project.

Setting out in a New Direction

After making the decision to abandon the previous direction of the project, I knew I had to get off the ground quickly so as not to be daunted by the prospect of starting from scratch. I decided I wanted to work on some sort of game environment, as I have enjoyed projects like this before and am keen to pursue a career in this area, so I figured this could be my chance to develop an impressive portfolio piece. Having worked on a number of game projects before (see Game Work), I had my reservations about the FMOD audio middleware. While it is a really great system for beginners getting their heads around dynamic audio and the possibilities it affords, I had always felt it required some degree of voodoo to work. That being said, I have implemented some complex and dynamic sound environments within it, though I always had little frustrations along the way with how its various components behaved.

I decided, therefore, that rather than diving headfirst into deciding the content and shape of this new project, I would begin learning a new piece of software and see what ideas that generated. Given my reservations about using FMOD again, I decided to attempt to learn Wwise. This decision was also informed by the fact that most game audio positions require familiarity with both pieces of middleware. So, I began by working through the online Wwise 101 certification course, which I found incredibly helpful. My initial interactions with Wwise were very satisfying and well explained, and I found myself experimenting beyond the boundaries of the certification course very quickly.

Initial Experimentation

While working through the Wwise 101 course I found myself keen to experiment with the Cube project it came with (seemingly an open-source Quake clone), but also with Wwise’s integration into Unity, something not covered in the certification course.

I noticed while browsing through the available effect plugins in Wwise that it came with a time stretching utility. This interested me, as it is something which is (to my knowledge at the time of writing) lacking in FMOD. Additionally, I had used Paul Stretch in a previous project and had been particularly pleased with the results (see Decompositions of the World at War).

Early on I created a Unity scene in order to test out Wwise integration as I was learning. I decided to use some of the audio I had generated from my previous attempt at this project as stand-in material to play around with. I created a scene with 31 red and blue spheres and made them orbit around a series of points in order to create some movement in the scene. At the centre of each sphere I placed a Wwise event (essentially a sound emitter), each loaded with a different story from my grandpa’s life, and placed a listener at the centre of the scene.

Initially, I struggled to even make these events play. Because of the number and size of the files, the default Wwise settings within Unity were unable to render all the sounds. It was only later, at the end of the certification course, that I learned how to optimise a project: reducing the size of each file and then increasing the memory allocated to Wwise within Unity, as projected by the Profiler Layout in Wwise. (I find this feature vastly superior to FMOD’s equivalent. The team can tell the sound designer how much memory is available for audio, and the designer can then pass this information to Wwise in the SoundBank Layout, letting them know exactly how much wiggle room they have on the project.)
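For reference, a minimal sketch of how one of these orbiting emitters could be set up in Unity is below. It assumes the standard Wwise Unity integration (an AkGameObj component on the sphere and a loaded SoundBank); the event name, centre point, and orbit speed are all illustrative rather than the actual values I used.

```csharp
using UnityEngine;

// Hypothetical sketch of one orbiting story sphere. The event name and
// orbit values are placeholders; a real scene would assign them per sphere.
public class OrbitingStory : MonoBehaviour
{
    public Transform centre;                    // point this sphere orbits around
    public float degreesPerSecond = 10f;        // orbital speed
    public string storyEvent = "Play_Story_01"; // hypothetical Wwise event name

    void Start()
    {
        // Post the event on this GameObject so the sound is positioned
        // at the sphere itself (requires an AkGameObj component).
        AkSoundEngine.PostEvent(storyEvent, gameObject);
    }

    void Update()
    {
        // Rotate around the centre point to create movement in the scene.
        transform.RotateAround(centre.position, Vector3.up,
                               degreesPerSecond * Time.deltaTime);
    }
}
```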

As I progressed through the certification I learned about RTPCs (Real-Time Parameter Controls, similar to parameters in FMOD), and so I began experimenting with them in my Unity session. I gave the Actor-Mixer object (which contained each sound emitter) both time stretch and pitch shift plugins, with RTPCs set to near-linearly increase the downward pitch shifting and time stretching as a source moved further from the listener. These were set over a range of 150 units (Wwise distance units here corresponding directly to Unity’s default unit of measurement). I also created a distance attenuation curve (a ShareSet) for every event to use, which attenuated the volume and applied a low-pass filter linearly over the same 150-unit range.
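Wwise can bind an RTPC to its built-in Distance parameter, but the same behaviour can also be driven manually from the Unity side. A hedged sketch of that manual approach is below; the Game Parameter name is hypothetical and would need to match one authored in the Wwise project.

```csharp
using UnityEngine;

// Hypothetical sketch: pushes the emitter-listener distance into a Game
// Parameter each frame, which the time stretch and pitch shift RTPC curves
// then read. "Distance_To_Listener" is an illustrative name.
public class DistanceRtpc : MonoBehaviour
{
    public Transform listener; // the scene's AkAudioListener

    void Update()
    {
        float distance = Vector3.Distance(transform.position, listener.position);

        // Clamp to the 0-150 unit range the RTPC curves were authored over.
        distance = Mathf.Clamp(distance, 0f, 150f);

        // Scope the value to this emitter so each sphere is stretched
        // independently of the others.
        AkSoundEngine.SetRTPCValue("Distance_To_Listener", distance, gameObject);
    }
}
```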

Once I had optimised the project and ensured it wasn’t running too intensively, I ran it successfully and found I had created an interesting scene. Certainly there was little to take forward from this particular experiment once it was complete, but I learned a lot of useful things about the relationship between Unity and Wwise, and found the live profiling of a Unity project through Wwise particularly useful. Below is a video demonstrating the Unity scene.

Continued Experimentation

As I mentioned above, while I worked through the Wwise 101 course I found myself experimenting with the example project beyond the boundaries of the course, attempting to use the skills I was learning in new and interesting ways. One thing which delighted me about Wwise was the ability to use control surfaces in conjunction with Soundcaster and Mixer sessions to audition and live-mix sounds in game. I decided to combine this functionality with the time stretching and pitch shifting from my test Unity scene, this time within Cube.

I tried a few different approaches for this new test. My initial hope of placing the effects as close as possible to the top of the Master-Mixer hierarchy was dashed when I discovered that, for some reason (I’m sure there is a very good reason for this, but it was not immediately apparent to me), the Master Audio Bus (at the very top of the hierarchy – Wwise is very much hierarchy based) could have a pitch shift plugin assigned to it, but not a time stretch. Instead I had to go lower down the hierarchy and affix these effects to the four main Actor-Mixers for Cube (Items, Main Character, Monsters, Weapons). These effects all shared a Game Parameter called Time, which worked on the X axis of the effects’ RTPCs. I mapped a dial on my control surface to the Simulation Value of Time (in this instance using a maximum of 100 rather than 150), opened Cube, connected it to the Wwise session, and was able to control the playback rate and pitch of all sounds in the game using the dial. This is demonstrated in the video below.

I realised after I had uploaded this video that there is one sound which is unaffected by the time stretch and pitch shift: the ‘BodyHit’ sound, heard when the player is hit by an enemy’s weapon. This is because that particular SFX Sound (akin to an FMOD Single Sound) uses its own effects, which override the parent effects on the Main Character Actor-Mixer.
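Although in this experiment the Time parameter was driven from a control surface inside the Wwise authoring tool, the same parameter could equally be driven from game code. As a rough illustration only (in Unity rather than Cube, with an input axis standing in for the dial):

```csharp
using UnityEngine;

// Hypothetical sketch: sweeps the shared "Time" Game Parameter with an
// input axis, standing in for the control surface dial used in the video.
public class TimeDial : MonoBehaviour
{
    [Range(0f, 100f)]
    public float timeValue = 0f;   // matches the 0-100 range of this test
    public float sweepSpeed = 25f; // units per second

    void Update()
    {
        // Nudge the value up or down with the horizontal input axis.
        timeValue = Mathf.Clamp(
            timeValue + Input.GetAxis("Horizontal") * sweepSpeed * Time.deltaTime,
            0f, 100f);

        // No GameObject argument: the value is set globally, so every RTPC
        // curve reading "Time" (across all four Actor-Mixers) responds at once.
        AkSoundEngine.SetRTPCValue("Time", timeValue);
    }
}
```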

To increase the level of complexity and control in this experiment, I could create four separate parameters, one for each of the Actor-Mixers, in order to change their playback rates and pitches independently. However, I have not done this yet, as even playing the game and experimenting with all sounds mapped to one dial requires one more hand than I have.
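If I do extend the experiment this way, a minimal sketch of the four-parameter version might look like the following; the parameter names are hypothetical and would each need a matching Game Parameter in the Wwise project, mapped to its own dial or slider.

```csharp
using UnityEngine;

// Hypothetical sketch: one Game Parameter per Actor-Mixer so each category's
// playback rate and pitch can be shaped independently. Names are illustrative.
public class PerMixerTime : MonoBehaviour
{
    static readonly string[] parameters =
    {
        "Time_Items", "Time_MainCharacter", "Time_Monsters", "Time_Weapons"
    };

    [Range(0f, 100f)]
    public float[] values = new float[4]; // one slider per Actor-Mixer

    void Update()
    {
        // Push each slider's value to its own globally scoped parameter.
        for (int i = 0; i < parameters.Length && i < values.Length; i++)
            AkSoundEngine.SetRTPCValue(parameters[i], values[i]);
    }
}
```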

Approaching Purpose

Through working on these small experiments I started thinking about other games which involve time manipulation, and the effect this has on the audio world of the game. I came to realise that this was the direction I wanted to move in with this project, and so felt I was beginning to narrow my focus.

I thought about the ways these time stretching experiments distorted meaning, and was reminded of the film Once Upon a Time in the West (Leone, 1968). In particular it made me think of the abstract recurring flashbacks which appear throughout the film, and which eventually coalesce into pivotal revelations about the relationship between the protagonist and antagonist. I thought this was something that could be explored through time stretching: a message obscured by temporal distortions until, through proximity (or perhaps some other variable), its meaning is eventually revealed. With this in mind, and after encouragement from Owen, I went away and crafted a five-minute radio drama around this theme, using the idea of black holes and time dilation as a way to explain these temporal interventions. This piece can be heard below.

I wasn’t totally pleased with this piece, as I rushed it, but there are moments of design in it which I think are interesting.

A Working Title

I met with Owen and part of the cohort to go through our project ideas. I outlined where I had got to so far, detailing the above experiments. He was struck by my interest in the flashback scenes from Once Upon a Time in the West, and spoke about how this was a stretched-out equivalent of sound advance, where a sound is heard before its on-screen source is identified. He suggested that an exploration of the limits of this technique using time stretching could be an interesting direction, and while I definitely agree, I also want to explore time manipulation in the hands of the user as well as the narrative. As such we sketched out a working title for my project, as follows:

Elastic time in ludic dreams: augmenting the possibilities of sound advance using temporal intervention on sound environments in interactive scenes.

Work Continues

I intend to continue working with Wwise and, under Owen’s direction, to produce smaller sound sketches (~30 seconds) which outline different styles of temporal intervention. I also have an idea for one of the opening chapters or subheadings of my report, in which I will examine notable examples of time manipulation in video games in order to better understand how I wish to use it in my project. This will come under the title A Brief History of Ludic Time, and I expect it to form the basis of my next blog post. I will additionally begin to examine sound advance and its limits, although my thoughts on this are less defined than those on the more ludic applications of these temporal interventions.