06/07/17 – In search of an aesthetic

There’s no need for a title here.

Once again it’s been a while since I jotted this stuff down. I need to be careful to keep on top of it; there’s a lot happening in my brain about this at the moment, and new ideas take prominence each day.

So, to wrap up where I left off in the last post: I added more sounds to the ‘percussion’ section and divided them up into blend containers as well. I tweaked the reverse playback patch and got the panner patch working. I then started working in Unity, finding their new Video Player component easy to work with, although sadly (and perhaps predictably) its playback speed cannot be set lower than zero (so no reverse playback).

I had had an idea about making a kind of zoetrope in Unity (spinning is my C# specialty), so I made a static camera in a scene, put an object dead centre of that with a spin script on it, then fitted four rectangular objects, large enough to fill the camera’s view, as children of the spinning object (a simple orbit hack). Using the midiJack scripts I then tied the rotation speed of the zoetrope to the playback speed of one of the tape loops. I put Video Players on the 3D objects and loaded them with several clips from http://www.archive.org. I tweaked the audio and visuals for a while, adding a second, transparent, primary-coloured zoetrope, slightly smaller than the first, mapped to the second tape loop’s playspeed controller. I continued to advance this idea, adding proportionally larger zoetropes loaded with different videos behind the first, and mapping MIDI button presses to switch these additional ones on or off, almost like changing channel (which, come to think of it, is an interesting idea).
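
For my own reference, here’s roughly what that spin script plus MIDI wiring looks like. A minimal sketch only, assuming MidiJack’s MidiMaster.GetKnob, and assuming the knob arrives on CC 1 (the actual CC number depends on the controller mapping):

```csharp
using UnityEngine;
using MidiJack;

// Minimal sketch of the zoetrope spin described above. Sits on the parent
// object at the centre of the camera's view; the four video quads are its
// children, so rotating the parent orbits them past the camera.
public class ZoetropeSpin : MonoBehaviour
{
    public int knobNumber = 1;          // placeholder CC number
    public float maxDegreesPerSecond = 180f;

    void Update()
    {
        // GetKnob returns 0..1; centre it so the knob can spin the zoetrope
        // in either direction, mirroring the tape loop's variable playspeed.
        float knob = MidiMaster.GetKnob(knobNumber, 0.5f);
        float speed = (knob - 0.5f) * 2f * maxDegreesPerSecond;
        transform.Rotate(0f, speed * Time.deltaTime, 0f);
    }
}
```

Toggling the extra zoetropes on and off (‘changing channel’) can then be a matter of watching MidiMaster.GetKeyDown for the relevant note numbers and flipping SetActive on each zoetrope’s parent object.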

Ultimately, here’s what I came up with:

I have found, in playing around with this, that first of all it’s far more fun to use than to watch being used. I also definitely want there to be a stronger relationship between visuals and audio, like in the previous piece. Additionally, Hannah and I mucked around with it, each of us using a separate controller, and that was very fun. I began imagining it as perhaps a two-player (in this context I suppose this is a pun between player of games and player of music) experience, and started envisaging an unconventional custom-built controller. Perhaps built into a wooden suitcase or something, and one that is capable of reconfiguring itself and defying the user. And, continuing the thought of meshing visuals with sound, equally mapping changes between the two players’ positions: ideally controlling brightness and contrast, as well as playspeed and rotation.
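
None of this is built yet, but as a note to self the mapping could start as simply as the sketch below. It assumes MidiJack again, two knobs on different CC numbers standing in for the two players, and a video material whose shader exposes _Brightness and _Contrast properties; all of these are placeholders, not part of the current prototype:

```csharp
using UnityEngine;
using MidiJack;

// Sketch of the two-player idea: each player keeps direct control of their
// zoetrope's rotation, while the *difference* between their knob positions
// pushes the image around, so the picture only settles when they agree.
public class TwoPlayerMapping : MonoBehaviour
{
    public int playerOneKnob = 1;       // placeholder CC numbers
    public int playerTwoKnob = 11;
    public Renderer videoQuad;          // quad with a shader exposing the
                                        // hypothetical _Brightness/_Contrast
    public Transform zoetrope;
    public float maxDegreesPerSecond = 180f;

    void Update()
    {
        float a = MidiMaster.GetKnob(playerOneKnob, 0.5f);
        float b = MidiMaster.GetKnob(playerTwoKnob, 0.5f);

        // Player one's gesture still drives rotation (and, elsewhere, playspeed).
        zoetrope.Rotate(0f, (a - 0.5f) * 2f * maxDegreesPerSecond * Time.deltaTime, 0f);

        // The gap between the two players warps brightness and contrast.
        float gap = Mathf.Abs(a - b);
        videoQuad.material.SetFloat("_Brightness", 1f - gap);
        videoQuad.material.SetFloat("_Contrast", 0.5f + gap);
    }
}
```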

I met Martin yesterday, after having skyped with him the week before. When we had skyped I showed him the video of the previous prototype, and he enjoyed it. Was quite possibly impressed, but it’s always hard to tell with him. I have definitely read that wrong before. Anyway. Yesterday we met in one of the studios and I got him to play with this new prototype. He played with it for a long time, pushing at it and being pulled by it, trying to find the places his ear wanted to lean him towards, being challenged when he thought he knew how to get there but was steered away from it.

We talked at length about the aesthetic goals of the work. About the rise of systems in place of, or alongside, traditional compositions; about the subversion of systems. He noted that this was successfully creating Max/MSP-style behaviour in Unity, therefore subverting the goals of the system I was using and, by doing so, challenging conceptions about Unity and Max. I told him how I really liked both prototypes, and that between them they occupy the two main but separate hierarchies in Wwise, and so create a ‘full’ project, while taking advantage of and subverting their locations. I suppose I’m subverting a great many parts of this work.

We talked about the media in the piece. About what it’s trying to say. For example, he wondered if one of the faces in one of the videos was Donald Trump. What does the media in it say? What does the system that’s creating this ‘new work’ say? It is subverted and misused; why?

One thought about the media is that there is some kind of congruence between the videos plucked at random from Archive.org and the tape loops: both disjointed relics of history and technology. They harmonise, but how does that harmony fit into the overall work?

I mentioned to Martin the issue of not wanting it to sound like itself. He was into that idea. One way to make it not sound like itself would be to add more media; however, this is not a solution but a dilution. On a long enough timeline the problem still remains. One partial solution would be to mess around with the graph on the playback speed controllers, making it stepped instead of linear, to sidestep the audible gesture of turning the knob. The knob could remain linearly mapped to the rotation speed of the zoetrope, for example, so there would still be a mapping of that gesture, but a schism between the auditory and visual mappings.
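
To note down what that split might look like in practice, here’s a sketch. It assumes the knob passes through Unity on its way to Wwise (one possible wiring; the prototype may route MIDI differently), and the CC number, step count and RTPC name ("PlaySpeed") are all placeholders:

```csharp
using UnityEngine;
using MidiJack;

// Sketch of the stepped-versus-linear idea: one knob, two mappings.
// The zoetrope keeps the raw linear value, while the value sent to the
// audio side is quantised into a handful of steps, so what you hear no
// longer tracks the gesture you see.
public class SteppedPlayspeed : MonoBehaviour
{
    public int knobNumber = 1;          // placeholder CC number
    public int steps = 8;               // how coarse the audio mapping is
    public Transform zoetrope;
    public float maxDegreesPerSecond = 180f;

    void Update()
    {
        float knob = MidiMaster.GetKnob(knobNumber, 0.5f);

        // Visual mapping stays linear, so the gesture remains legible.
        zoetrope.Rotate(0f, (knob - 0.5f) * 2f * maxDegreesPerSecond * Time.deltaTime, 0f);

        // Audio mapping is quantised: the playspeed RTPC only changes when
        // the knob crosses a step boundary. Assumes an RTPC named
        // "PlaySpeed" with a 0-100 range in the Wwise project.
        float stepped = Mathf.Round(knob * (steps - 1)) / (steps - 1);
        AkSoundEngine.SetRTPCValue("PlaySpeed", stepped * 100f, gameObject);
    }
}
```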

Another line of thought we traced was how to integrate the two pieces. He suggested that at the most basic they could be present as two separate wholes, almost like two tracks on an album. However, I like the idea of there being a transition from one to the other which shocks the player, and a remapping of the controller to coincide with this, where the controller challenges, or even controls, the player. But these transitions all need to be done under the auspices of the aesthetic, of what the thing is trying to say. Is this a commentary on the ubiquity of media in 2017? On the breakdown of the relationship between performer, piece and audience?

Briefly continuing thoughts about the first piece again: what music should it contain? What if I tried putting in something more recognisable, which it then remixes? Then you have one half where the player mixes some order out of chaos, and one half where (imagining the piece as the prototype stands) the system makes chaos out of order. The unknown from the known. Of course there was a deficit of control in the first prototype; that’s worth hanging onto. But perhaps it could be worked to my advantage.

In a message to Hannah this morning (she’s been super good at giving these things a serious think), I tried to outline all the parts of this work which can ‘say’ something:

  • The media used as input (audio and visual, its role and source).
  • The system which uses the media (and which is itself used and abused).
  • The relationship between user and interface.
  • The composition(s) that are created.
  • The relationship between user and the sonic and visual piece as it’s composed.
  • The effect (or lack) of the piece on an uninvolved audience.

Of course there are many more angles to look at it from, but these seem to be primary concerns in terms of locating an aesthetic.

There is also the question of how to make it not sound like itself.

I like the angle of the relationship to an audience, and perhaps the idea that this has a limited external impact, because its success lies in the push and pull between user and interface, sound and vision. I imagined variations on this idea where an audience generates a piece for one person. Perhaps along the lines of Twitch Plays Pokemon, but where the users are not aware. Say, for instance, using data from Twitter on when users are active across the globe, and mapping these datasets to changes in audio in a piece for one. It’s an intriguing notion. Perhaps an expansion on the idea of Radio Garden.

I’ve probably forgotten some stuff, but also probably rambled enough for now. I’ll do some practical work, then write more. Most likely.

I implemented some visual effects. Result is insane.