Candyman, Cove Park, and the documenting of a project unfolding.

In the summer of 2019 I became aware of the GEM extension for Pure Data, and began to think about the possibilities of building on the work I had begun with H Y P E R T R O P E, solely in Pd.

Concurrently with this, inspired by a dream, I had been experimenting with VHS tapes: cutting a sliver from one side of the tape (assuming this was where the audio was located, as in cinema film), the same width as audio cassette tape, and mounting it into a cassette tape housing. The resulting audio was imperfect, weird, and delightful. Spurred on, I purchased a copy of Candyman (1992) and began working my way through it using this process. I generated a lot of interesting material from this, and had a lot of interesting ideas. In particular, I was taken with the idea of developing a live performance using an audio-visual sampler built in Pd, with which I would perform a live remix, or “haunting”, of the film. I generated a proof-of-concept video (using audio loops created from the VHS, and footage cut from the trailer of the film):

I then approached a venue and was encouraged to find that the person responsible for bookings was into the idea, and a fan of the movie himself. However, he required that I look into licensing in order to show the film. After approaching the rights holders, the long and the short of it was that I would not be able to use the film or its audio in a performance.

This dispirited me for a few months. However, around the start of December 2019 I submitted an application for a place on an artists’ residency at Cove Park in Argyll. In my application I had stated that I intended to work on producing more audio-visual works like the pieces I had developed that year:

However, as the date drew nearer, I began to think (at the encouragement of my partner) that the work I had done on the Candyman project need not be in vain, and there was scope for converting the bones of the project into something which used original content.

At Cove Park I was staying in one of their ‘cubes’: essentially two converted shipping containers joined to create a combined living and working space, situated in the remote and beautiful hills facing Loch Long:

Unlike the previous residency I had done, there were only a few other artists in attendance, so I quickly found a productive rhythm: working through the morning and afternoon, and socialising in the evening.

The work came together quickly. I set out designing a system which would allow the user to record and loop up to three audio samples (of a maximum length of 10 seconds) at a time. Each time a new sample was generated, a random video would be selected from a pre-determined pool of videos. There would only ever be two videos playing at once, with the audio dynamics of the last created loop determining the crossfade between the two videos. The playback speed of each video is determined by the playback speed of its associated sample (which is user-controlled), and there are visual effects (motion blur, RGB colour shifts) which are also tied to these variables. Behold, a demonstration of the first prototype:
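The system itself is a Pd/GEM patch, but its control logic can be sketched in ordinary code. Below is a minimal illustrative model in Python, assuming my own (hypothetical) names for everything: three loop slots that displace the oldest when full, a random video paired with each new loop, and the newest loop's amplitude envelope driving the crossfade between the two active videos.

```python
import random

MAX_LOOPS = 3        # up to three audio loops at a time
MAX_LENGTH_S = 10.0  # maximum loop length in seconds


class AVSampler:
    """Illustrative sketch of the sampler's control logic (not the actual Pd patch)."""

    def __init__(self, video_pool):
        self.video_pool = list(video_pool)
        self.loops = []  # oldest first; each loop is a dict

    def record_loop(self, length_s):
        """Record a new loop; a random video from the pool is paired with it."""
        loop = {
            "length": min(length_s, MAX_LENGTH_S),
            "video": random.choice(self.video_pool),
            "speed": 1.0,
        }
        self.loops.append(loop)
        if len(self.loops) > MAX_LOOPS:
            self.loops.pop(0)  # the oldest loop is displaced
        return loop

    def set_speed(self, index, speed):
        """User-controlled playback rate; the paired video follows its sample."""
        self.loops[index]["speed"] = speed

    def crossfade(self, envelope):
        """Map the newest loop's amplitude envelope (0..1) to the mix
        between the two currently playing videos."""
        e = max(0.0, min(1.0, envelope))
        return {"video_a_gain": 1.0 - e, "video_b_gain": e}
```

In the real patch the envelope and speed values would also feed the motion blur and RGB-shift effects; here they are simply stored per loop.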

As can be seen in the above video, the audio is recorded through the laptop’s microphone (I opted not to bring my sound card to the residency), and the playback rate is controlled through a custom MIDI controller.
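The video doesn't show the mapping between controller and playback rate, but a plausible sketch is an exponential curve over the 7-bit MIDI range, so that equal knob movements feel like equal pitch steps. The 0.25x–4x range and the midpoint-near-1x behaviour are my assumptions, not the patch's actual values.

```python
def cc_to_rate(cc_value, lo=0.25, hi=4.0):
    """Map a 7-bit MIDI CC value (0..127) to a playback rate.

    The mapping is exponential between lo and hi, so the midpoint of the
    knob lands near 1x speed. Range values are illustrative assumptions.
    """
    t = cc_value / 127.0
    return lo * (hi / lo) ** t
```

With these defaults, CC 0 gives 0.25x, CC 127 gives 4x, and a centred knob sits close to normal speed.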

One of the artists also in residence while I was there was Dan Shay, a filmmaker and visual artist based in Glasgow. Dan was curious about what I was working on and, after I showed him where I was with it, proposed providing me with some visual material taken from a residency he’d done sailing to St. Kilda. We then set up a small performance in one of the spaces in the main building at Cove Park, and I performed a piece we titled Sinc for the other artists present, and for my partner, who had come over for the weekend to visit me. Here are several examples of what was happening:

The above video is the audio-visual output of the system. This was then projected on a screen in the room.

Footage taken by my partner during the performance

It was thrilling seeing the project grow arms and legs, and really validating to get such positive feedback from the people I’d shown it to. However, many questions were arising, particularly about the role randomisation plays in the sampler. On the one hand, randomisation allows the system to subvert user comprehension, which is something I am keen on; on the other, it results in an absence of intended narrative, and means the device becomes merely a “machine for the generation of affect”. Is there some way to structure these elements differently to allow for more focused or structured performances? I wonder if in some way I am afraid of the effort or vision that would require, and so have so far stayed the path with randomness, disorder, and the free-association juxtapositions the device creates.

Additionally there are questions about how performance with this device works. At Cove Park, I was situated so that the audience could see me and what I was doing (in terms of generating sounds and controlling the system). How important is that? Is that vital, or trivial? I suppose in some ways it depends on the nature of the sounds being added.

On the final day of the residency, I filmed my partner painting with watercolours. I then used these new videos to create additional visual components for the system. I created this short video using my ‘foley-esque’ approach to sound generation, and a combination of the watercolour videos and Dan’s nautical ones:

Returning to the notion of the role I would play in the performance, and in an effort to move away from the foley sounds, I devised a version where the visual components were solely watercolour videos and the audio content was generated by a synthesiser. The way the audio was configured, I couldn’t hear the synth as it went into the system, so I would randomly pick a preset, then hammer away on some keys for a few seconds to generate a loop. I made the following short videos with this process:

Although these videos help to showcase the range of content this device can generate, they also highlight the problems inherent in something so open-ended. Even now I have trouble describing what it is exactly, and what it’s capable of. I worry that imposing more structure upon the system, its input, or its output will detract from the endless possibilities of what it can generate; however, I’m also aware that in such an unstructured state it is missing focus, concision, and forward movement (hence the project having been on ice since Cove Park).

So that brings us roughly up to the present day. Aside from sorting out some minor technical problems, I haven’t worked on the project in months. Although I can feel the dust stirring, who knows when it will take hold of me again. I feel the insecurities I have with regard to next steps are largely why it is in its current restful state. This feels like a big, personal project, so I’m in no hurry to rush it, but I also don’t want to lose the threads completely, especially when so many aspects of it remain uncertain or unanswered. For example, what should it be called? How does naming the thing even work? Does the system have one name, and each performance another? For now the system is called City of Dreadful Night, a line taken from Angela Carter’s The Passion of New Eve, which I was reading while at Cove Park. A friend suggested calling the system ‘the hummingbird machine’, which is something I’m also fond of; however, until it has grown more I’ll resist naming it further.