What’s been going on
It’s been a while since I updated my progress here, and a lot has changed.
I made one final prototype developing the ideas I’d been exploring, using a piece of music and bizarre time stretch behaviours. This piece maps the unpredictable audio behaviour onto visual behaviour, and uses two worlds, a ‘normal’ one and a ‘chaotic’ one, to heighten the distinction between them. I particularly enjoy the impact these visual and auditory behaviours have: I’ve shown it to several of my peers on the course, and it usually elicits feedback such as ‘trippy’ or ‘what the fuck?’ I definitely want to maintain that level of weirdness in the interactions with the sonic and visual elements.
Here’s a video of this prototype:
I showed this build to Owen and we talked at length about ways to proceed. He gleaned from it that I am making some kind of weird sampler, something which straddles the lines between game, instrument and composition. His first suggestion for enhancing it was to reduce the sound elements down to a more atomic level, from musical phrases to individual percussion hits, and see how I felt about it. He also advised implementing more high-level control, because as it stands it is more the world which changes by itself than the user effecting meaningful change upon it. Finally, he suggested that if I was interested in having sounds play backwards (which I am), I should explore Heavy Compiler and Pure Data integration. I was a little daunted at first, and set myself one day to experiment with it; if it didn’t work out I would move on.
Well, one day turned into three, and sure enough I eventually managed to create something close to what I’d set out to do. Heavy Compiler can be used in Wwise as either a sound source generator plugin or an effect plugin. My initial plan had been to create a patch bundled with the desired reversible audio file, which loaded that file into a buffer and had a controllable read speed. However, Heavy Compiler does not support the full range of Pure Data objects, omitting incredibly useful ones such as soundfiler and readsf~, so this plan of building a sound source generator from a patch bundled with an audio file seemed unfeasible. Furthermore, the maximum upload size for any patch is 1MB, which would have severely limited the audio files that could be bundled with it anyway.
So, I decided to attempt the second method and create an effect plugin instead. The idea was to put the desired reversible sound into Wwise, and create an effect which would receive that audio, record it into a buffer, and then loop playback from the buffer, with adjustable play speed and direction. It took me a few days and a lot of help from my colleagues, but we finally got the patch up and running. Here’s an image of the patch as it stands:
Audio is brought into the patch through the adc~ object in the top left corner and directed into the tabwrite~ objects. As the audio plays in Wwise (and therefore pipes into the patch), the ‘r onoff @hv_param 0 1 0’ object is sent a 1, which starts the tabwrite~ objects writing the received audio to their tables and sends a bang to the delay object. The delay time is the length of the sample (specified for each individual track using this effect) divided by the sample rate (typically 44100 Hz here), multiplied by 1000 to give the sample length in milliseconds. The delay therefore bangs once the sample has finished recording, sending a ‘stop’ message to the tabwrite~ objects so they write nothing further into the buffer. Once the delay has banged, a phasor~ object reads through the tables the tabwrite~ objects have written, via the tabread4~ objects, with its read speed and direction determined by the playSpeed @hv_param object. The sound is then passed through a high-pass filter to remove any unwanted low-frequency lumps and bumps, and finally back out through the dac~ object and on through Wwise’s signal chain.
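Pd objects aside, the core record-then-loop logic the patch implements can be sketched in Python. This is only a sketch, under assumptions of mine: the function and variable names are made up, and I use linear interpolation where tabread4~ actually does 4-point interpolation.

```python
import numpy as np

def record_then_loop(input_audio, play_speed, out_len, sr=44100):
    """Record input into a buffer, then loop playback at variable speed.

    A negative play_speed reads the buffer backwards -- the whole point
    of the patch. (tabread4~ uses 4-point interpolation; linear
    interpolation keeps this sketch short.)
    """
    buf = np.asarray(input_audio, dtype=float)   # the tabwrite~ stage
    n = len(buf)
    # The delay time the patch computes: recording length in milliseconds.
    delay_ms = n / sr * 1000

    out = np.empty(out_len)
    phase = 0.0                                  # the phasor~ position, in samples
    for i in range(out_len):
        idx = int(phase)
        frac = phase - idx
        out[i] = buf[idx] * (1 - frac) + buf[(idx + 1) % n] * frac
        phase = (phase + play_speed) % n         # wraps in both directions
    return delay_ms, out
```

So, for example, `record_then_loop(signal, -0.5, 44100)` would loop the recorded buffer backwards at half speed.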
This patch has its upsides and its downsides. As Owen pointed out it is essentially a loop pedal. However, I have had a surprisingly large amount of fun with it, as will be evidenced below.
Atoms for the piece
Continuing to take Owen’s advice, I created a new Wwise project and used more atomic sounds instead of musical phrases. That’s mostly true. I did incorporate, as a melodic dimension, a number of tape loops I had made at a workshop with Yann Seznec a few weeks ago, although not all of these are musical. In terms of atoms, rather than adding traditional percussion, I decided to add recordings I had recently made of several clocks. These included a chime from a large mantelpiece clock, and ticks from the clock in Old College. I then gave each the playback speed control patch. I did the same for the tape loops, but I also split them between two blend containers (four in each) with crossfades at the edges, and then put those two blend containers within a third. The idea being: each of the first two containers could be searched through, like a radio, to select a desired sample, while the third container let you play with the balance of the two samples, or even exclude one entirely:
I’m not entirely sure this is the way I want the user to access these sounds, but for now it’s a workable enough structure.
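The outer container’s balance behaviour amounts to a crossfade between the two sub-containers. A rough sketch of that idea, assuming an equal-power curve (Wwise’s actual crossfade curves are configurable, so this is an assumption, and the names are mine):

```python
import math

def blend(a, b, balance):
    """Mix the outputs of two sub-containers with an equal-power crossfade.

    balance = 0.0 plays only a, 1.0 plays only b, and values in between
    weight the two -- mirroring the outer blend container's behaviour.
    """
    gain_a = math.cos(balance * math.pi / 2)
    gain_b = math.sin(balance * math.pi / 2)
    return [x * gain_a + y * gain_b for x, y in zip(a, b)]
```

The equal-power curve keeps the perceived loudness roughly constant as the balance knob sweeps from one sample to the other.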
High level control
Once I had both play speed and a temporary structure in place, I decided to implement more high-level, user-defined control, as Owen had recommended. The Wwise project had 18 separate RTPCs in use. Some were not for user control, such as the one which sends onoff @hv_param a 1 on playback, but over ten of them were. For testing I decided to use two MIDI controllers, an Akai LPD8 and a homemade cigar-box controller, as that gave me a combined total of 14 rotary knobs. Still convinced Unity would be my platform of delivery, I scanned the net and found a custom Unity package which would allow me to use MIDI data in the engine (I was surprised it wasn’t natively supported). I used Keijiro’s MidiJack package, which I got up and running fairly quickly, and then tested the project out.
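The mapping itself is simple: each knob sends a 7-bit CC value, which gets normalised into the relevant RTPC’s range. A Python sketch of that step (the knob numbers and RTPC names here are purely illustrative; in the actual project this happens in C# via MidiJack and Wwise):

```python
def cc_to_rtpc(cc_value, rtpc_min=0.0, rtpc_max=1.0):
    """Scale a 7-bit MIDI CC value (0-127) into an RTPC's range."""
    return rtpc_min + (cc_value / 127.0) * (rtpc_max - rtpc_min)

# A hypothetical knob-to-parameter map; the real project maps 14 knobs
# across the two controllers to the user-facing RTPCs.
knob_map = {1: "playSpeed", 2: "tapeBalance"}

def on_knob_turned(cc_number, cc_value):
    """Return the (RTPC name, scaled value) pair for a knob event, if mapped."""
    name = knob_map.get(cc_number)
    if name is None:
        return None
    return name, cc_to_rtpc(cc_value)
```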
I first of all performed this piece:
I showed it to a friend, who asked for it to be turned off halfway through. Fair enough. It’s a little intense. So, I decided to take another stab at it and perform a more measured, ambient piece:
Thoughts on further work
I showed this piece to the same friend and he liked it. I was also pleased with it, but I quickly realised there needed to be more variety in the sounds within (there are only 11 sounds in total at the moment, so not bad going), for example a bird call, a human voice, and differently textured percussion, as well as possibly more tape loops. I am also very keen to begin mapping this to visual behaviour, and am interested in using Unity 5.6’s newly added Video Player component (although apparently it doesn’t play video in reverse – story of my life), perhaps in conjunction with something like the prototype at the top of this post, making some kind of insane zoetrope or something like that. I am also keen to add some kind of gameplay element, or in-engine control, as it strikes me that everything I have done so far could have been done much more easily in Max/MSP… Therefore I really have to make Unity shine. First, though, I need to add new sounds, potentially tweak the reversing patch, and generally expand upon what I’ve made. Another thing I’m very keen to do is get more of my peers to play with it, noting their reactions and feedback, although perhaps I will wait until there are more sounds and visual behaviour.