14/07/2017 – A laying out of thoughts on the current work

Language

The term ‘player’ is used to denote the user/manager of the system. This is intended to combine (perhaps facilely) associations of instrument and game, as well as notions of playfulness.

The ‘system’: that which comprises the sum of decisions and rules created in the Wwise, Pure Data, and Unity components. It could be considered the artist’s composition. The system replaces/becomes the instrument.

The ‘media’: that which is acted upon by the player through the system.

The ‘performance’: the result of the player acting upon the media through the system. The player’s composition is their interpretation of my composition.

Radio and Tuning – audio components of the system and methods of use (currently direct, though ideally more indirect and elusive). Tuning implies both tuning the radios and tuning the station on the radio.

Zoetrope and Flipbook – visual components of the system.

Intention

I began the experiments for this current creation with only vague intentions. From the outset, visible in my first, barebones prototype (orbs with Grandpa’s voice and time stretches), I knew I wanted to work with time manipulation. Experiments with Max/MSP’s groove~ and degrade~ objects, Paulstretch, and my fondness for bit-crushed sounds have cemented an affinity for resampling, time-stretching, and aliasing. Temporal interventions. I experimented with non-linear deconstruction of a pre-existing composition in real time (first prototype proper), and then moved on to stripping the media back to atomic components and using temporal intervention to create a collage of new rhythms, frequencies, timbres, etc. These were then pushed up against visuals which were undergoing spatial, temporal, and stylistic (through image effects) interventions, mapped to the sonic interventions. This is all retrospect, of course. At the time it was more automatic, or rather, guided by the instinct of ‘if it’s pleasing it remains’.

System

A subversion of Unity and Wwise, pushing them in directions not fully intended.

Wwise

is given additional play-speed and directional control through a custom Pure Data plugin. Its actor-mixer hierarchy is given sets of ‘radios’ which can be tuned to different samples; each sample can be tuned to a different play rate and direction.
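
By way of illustration, a minimal Unity-side sketch of driving one such radio, assuming the Wwise Unity integration. The event, switch-group, and RTPC names here (Play_Radio, Radio_Station, Radio_Speed, Radio_Direction) are placeholders rather than the project’s actual names, and the speed and direction values would ultimately be consumed by the Pure Data plugin.

// Hypothetical sketch: driving one 'radio' in the Wwise actor-mixer hierarchy
// from Unity. All event, switch, and RTPC names are placeholders; play speed
// and direction reach the custom Pure Data plugin as parameters exposed
// through Wwise.
using UnityEngine;

public class Radio : MonoBehaviour
{
    public string stationSwitchGroup = "Radio_Station"; // placeholder switch group
    public string speedRtpc = "Radio_Speed";             // placeholder RTPC
    public string directionRtpc = "Radio_Direction";     // placeholder RTPC (-1 or +1)

    void Start()
    {
        // Each radio loops continuously; tuning only changes what it plays.
        AkSoundEngine.PostEvent("Play_Radio", gameObject); // placeholder event name
    }

    // Tune the radio to a different sample (station).
    public void Tune(string station)
    {
        AkSoundEngine.SetSwitch(stationSwitchGroup, station, gameObject);
    }

    // Set play rate and direction, forwarded to the Pure Data plugin via RTPCs.
    public void SetSpeed(float speed, bool reversed)
    {
        AkSoundEngine.SetRTPCValue(speedRtpc, speed, gameObject);
        AkSoundEngine.SetRTPCValue(directionRtpc, reversed ? -1f : 1f, gameObject);
    }
}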

Unity

is the host platform for the piece. It hosts the elements of the system that it is not itself part of. It is the hub into which control is passed, and from which control is routed back into itself and into Wwise.

It generates and manages the visual elements, projecting videos onto zoetropes and flipbooks. It also utilises image effects to augment these visual elements.
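
As a rough illustration of the zoetrope idea (not the project’s actual code): a spinning object whose material is fed by Unity’s VideoPlayer through a RenderTexture, with rotation and play speed as independent parameters. The values below are arbitrary.

// Illustrative sketch of a 'zoetrope': a video is rendered to a texture on a
// spinning object, so spatial rotation and video play speed can be mapped to
// (or untethered from) the audio's play rate. Values are placeholders.
using UnityEngine;
using UnityEngine.Video;

[RequireComponent(typeof(VideoPlayer))]
public class Zoetrope : MonoBehaviour
{
    public RenderTexture target;          // assigned to the zoetrope's material
    public float rotationSpeed = 45f;     // degrees per second
    public float playSpeed = 1f;          // video play rate

    VideoPlayer video;

    void Start()
    {
        video = GetComponent<VideoPlayer>();
        video.renderMode = VideoRenderMode.RenderTexture;
        video.targetTexture = target;
        video.isLooping = true;
        video.Play();
    }

    void Update()
    {
        // Rotation and play speed are independent; the system decides how
        // (or whether) they are tied to the audio controls.
        video.playbackSpeed = playSpeed;
        transform.Rotate(Vector3.up, rotationSpeed * Time.deltaTime);
    }
}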

It is the place where the audio and visuals meet and generate congruence and dissonance, based in part on the player’s exertions (of ego or arbitrary organisation) upon it, and on the constraints of the system and media.

Control

One-to-many and many-to-one mappings

Perhaps mappings where each speed knob controls every radio (with the exception of the one related to it?). Each control would have a trigger threshold: if a large change is made suddenly, the whole thing lurches into something new, but small movements and adjustments are permitted. Perhaps this would allow for the removal of the radio knobs…
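
A speculative sketch of that mapping, reusing the hypothetical Radio component sketched above; the threshold and speed ranges are arbitrary, and nothing here is settled.

// Sketch of a one-to-many mapping: each knob nudges every radio except the one
// it 'belongs' to, and a sudden movement past a threshold lurches the whole
// system to new values. All names, ranges, and thresholds are placeholders.
using UnityEngine;

public class KnobMapping : MonoBehaviour
{
    public Radio[] radios;                 // hypothetical Radio components (see sketch above)
    public float lurchThreshold = 0.4f;    // change per update that triggers a lurch

    float[] lastValues;

    void Start()
    {
        lastValues = new float[radios.Length];
    }

    // Called when knob 'index' moves to a new value in [0, 1].
    public void OnKnobChanged(int index, float value)
    {
        float delta = Mathf.Abs(value - lastValues[index]);
        lastValues[index] = value;

        if (delta > lurchThreshold)
        {
            // Large, sudden change: throw every radio somewhere new.
            foreach (var radio in radios)
                radio.SetSpeed(Random.Range(0.25f, 2f), Random.value > 0.5f);
            return;
        }

        // Small adjustment: nudge every radio except the knob's own.
        for (int i = 0; i < radios.Length; i++)
        {
            if (i == index) continue;
            radios[i].SetSpeed(Mathf.Lerp(0.25f, 2f, value), false);
        }
    }
}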

At the moment volume/amplitude is fixed, although there are certain mappings to frequency: as it gets lower, it gets harder to hear. Perhaps volume could be part of a many-to-one mapping?

Media

Automatically gathered media, dislodged in the temporal (and therefore frequency and timbral) realm so as to be remixed, blurring the boundaries between discrete and ambient, field recording and found sound.

Audio

A fairly automatically gathered collection of pre-existing and pre-generated sounds. Currently there are two radios which tune to sets of approximately five-second tape loops from various old cassettes, some musical, some more mundanely domestic (an unknown family Christmas). These radios also contain one longer percussive sample. Four other radios contain sets of elements intended to have either percussive or ambient qualities (with the custom play-speed and directional controls these become, to some extent, interchangeable). These samples are all recorded, ranging from discrete recordings of clock ticks and chimes, cane swishes, and typewriter keys, to longer ten-second field recordings of the sea and of birdsong (lone birds and multiple).

The sound sources are determinate, but the system makes them, by degrees, indeterminate. A single sound can (currently) be ten different sounds. Effects could expand this. Tie visual effects to audio effects: bloom to delay, perhaps.
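
A speculative sketch of one such coupling, assuming a bloom-style image effect and a placeholder Wwise RTPC for a delay send; nothing here reflects the current system.

// Speculative coupling of a visual effect to an audio effect: a single value
// drives both a bloom-style image effect and a delay send in Wwise. The RTPC
// name and the threshold are placeholders.
using UnityEngine;

public class EffectCoupling : MonoBehaviour
{
    [Range(0f, 1f)] public float amount;    // set by the player or by the system
    public Behaviour bloom;                  // whichever bloom image effect is in use
    public string delayRtpc = "Delay_Send";  // placeholder Wwise RTPC name

    void Update()
    {
        // Crude version: bloom switches on past a threshold, while the delay
        // send scales continuously with the same value.
        if (bloom != null) bloom.enabled = amount > 0.2f;
        AkSoundEngine.SetRTPCValue(delayRtpc, amount * 100f, gameObject);
    }
}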

Video

Again, a rather automatically gathered collection of found videos from archive.org, such as old cartoons and educational videos, as well as videos from my iPhone of the sea, fire, etc.

It is possible to stream video into the system directly from the internet. It would be possible, with the right URLs and the right code, to randomly select a video from, say, archive.org.
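
With Unity’s VideoPlayer this is largely a matter of pointing the component at a URL rather than a clip; the URL below is a placeholder, and random selection would mean querying archive.org for item identifiers first.

// Sketch of streaming a video into the system by URL. The URL is a placeholder;
// selecting one at random would require fetching a list of candidate URLs.
using UnityEngine;
using UnityEngine.Video;

[RequireComponent(typeof(VideoPlayer))]
public class StreamedVideo : MonoBehaviour
{
    public string url = "https://example.org/some-video.mp4"; // placeholder URL

    void Start()
    {
        var video = GetComponent<VideoPlayer>();
        video.source = VideoSource.Url;
        video.url = url;
        video.isLooping = true;
        video.Play();
    }
}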

A further thought, though this is venturing into the realms of the outlandish, would be to create a Pure Data patch for Unity which recorded portions of audio from these videos as they were streamed in, with the player then remixing the audio and visuals as they came in. This would be a plugin on the audio buses which handle each video player. It would potentially result in the culling of Wwise from the system. It would also expand the sonic and visual limits of the performance. This could either be the next prototype of this project or the continuation of this work. Most likely the latter. A small amount of experimentation in this direction made me realise just how different at least the sonic component would be. It raises the question of the need for zoetropes. And if the entirety of archive.org’s video collection could be made available, why not the whole internet, or a sizeable and random enough chunk of it? Such a proposition raises questions of self-censorship, and suddenly it is a new and altogether murkier beast entirely.
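
A very rough sketch of the capture side of that idea, written here in C# rather than the proposed Pure Data bus plugin, and assuming the video’s audio is routed through an AudioSource; buffer length and routing are placeholders.

// Very rough sketch of the 'outlandish' idea: capture audio from a streamed
// video as it plays, into a buffer the player could later remix. The real
// version would live in a Pure Data patch on an audio bus, not a C# script.
using UnityEngine;
using UnityEngine.Video;

[RequireComponent(typeof(AudioSource), typeof(VideoPlayer))]
public class StreamCapture : MonoBehaviour
{
    public float secondsToKeep = 10f;      // placeholder length of the capture buffer

    float[] buffer;
    int writeHead;

    void Start()
    {
        var video = GetComponent<VideoPlayer>();
        var audio = GetComponent<AudioSource>();

        // Route the video's audio through this AudioSource so the filter below sees it.
        video.audioOutputMode = VideoAudioOutputMode.AudioSource;
        video.SetTargetAudioSource(0, audio);
        video.Play();

        // Stereo circular buffer holding the last few seconds of audio.
        buffer = new float[(int)(AudioSettings.outputSampleRate * secondsToKeep) * 2];
    }

    // Runs on the audio thread; copies incoming samples into the circular buffer.
    void OnAudioFilterRead(float[] data, int channels)
    {
        if (buffer == null) return;
        for (int i = 0; i < data.Length; i++)
        {
            buffer[writeHead] = data[i];
            writeHead = (writeHead + 1) % buffer.Length;
        }
    }
}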

Relationship between the two

The relationship between the audio and visual elements is a tricky one. There is resonance between the found/automatically gathered elements of the media. The behaviours and the mappings between them remain in flux, with the mapping between the play speed and direction of audio elements and visual rotation or play speed having been somewhat untethered. The concept of many-to-one or one-to-many mappings is yet to be explored further. As stated above, one way to strengthen the relationship would be mappings between visual and audio effects. This seems a good idea, as image effects provide a creative layer on the visual side of the system that is somewhat in deficit on the audio side.

Interface

The way in which the player or players play the instrument / intervene with the media through the system. This is obviously directly linked to the controls designed within the system. Mappings are manifest here, and through gesture the player discovers the behaviours and possibilities of the system and creates their composition. The interface itself has a role in how it elaborates or obfuscates control to the player. Are there thresholds beyond which the system will behave badly? Will it remap itself just as the player has gotten the hang of it? On the one hand it is preferred that the player divide their attention equally and solely between audio and visual, allowing one to form the other. In this ideal, the interface interrupts the connection to the visuals. But the player is then castigated or surprised to look up and discover change has occurred (it is important, though, that these changes not seem too arbitrary, or they will lose interest). Is the physical object important in and of itself? How can it relate to the media and the system? Since the moment to gather this component automatically seems to have passed, indeterminacy must be substituted with resonances with other elements of the work. Perhaps subversion, where the object is subverted and the interface itself acts as a point of resistance in the composition, forcing the player to return to the drawing board.

Performance

The player tunes the radios and the samples within them, finding balance or creating intentional dissonance between the timbre, rhythm, and frequency elements on offer, and forming congruence or dissonance with the visual elements (depending on how their process of rationalisation works).

First performance: the player is met with the chaos of a system thrown random variables. Through interaction, the player begins to understand the mappings of control, sound, and visual.

Subsequent performances: Familiarity creeps in. The player becomes aware of areas of control they favour.

Rationalisation. The player rationalises the output of the system into the audio-visual performance, following the dictates of the ego or of arbitrary organisation (setting all knobs to north).

What is the role of the performer/player beyond rationalisation? Surely that comes near to last.

They begin by assessing. This can come first, or happen simultaneously with the next step, which is intervening. They lock into a loop of intervention and assessment. They consider individual sonic components, examining and experimenting with their control; they zoom out to the whole tone of the piece. They consider the connections between sonic and visual elements. They try to find some balance between the two. I suppose there is an attempt to stabilise. Perhaps it is at this moment, when it has been the same for too long, that it will radically change.

I know what sounds I like to make on it, therefore I am on the road to learning to play it.

Composition

Indeterminate. Determined by the performance: the result of the player enacting upon the media through the system, based on feedback from the current state of the media through the system.

Is the system the score? It’s more like a composition. And also a method for performance/composition.

There are two composers.

I composed the system and defined its permutations; the player composes the immediate composition through performance, a push and pull with the system. They compose audio and visual based on what I have composed (the limits I have imposed).

My composition was created with the tools of game (and game sound) design and the affordances of interactive and non-linear sound and vision. These tools have a tendency to steer things in a certain way in terms of input and output, in both middleware and game engine; however, where possible I pushed against this, using very simple processes (making an object spin to create a zoetrope, looping and changing play speed) to create things not primarily intended for either middleware or game engine.

Audience

The player, or players; and the people watching and listening but not enacting. These occupy two distinct experiential spheres: the player employs a more focused (reduced?) mode of listening, directly engaging with the performance through the system, learning mappings and making decisions as to its trajectory; the passive audience takes in more of the over-tone of the piece(?)

Stuff in the Eco essay about this. The performer and the passive observer both undertake forms of greater and smaller reinterpretation, so in a sense anyone engaging with it is performing it. Rationalising it.

Power and subversion

“Work in movement”.