Something I’ve noted between this reading and the past few readings on the ‘liveness’ of live coding is that many of us do not really have a choice in how we approach our performances yet. The ideal ‘live’ coding as presented by the author is a spectrum of options: those who create everything from scratch in their performances, those who ‘pre-gram’ some parts of the performance, and those who simply reorganize pre-written functions.

I think that even in our drum circle sessions, it is clear that it is difficult to start from scratch or to only pre-gram some parts, and most of us have thus far preferred to pre-gram and rehearse the performance. It is thus more of a ‘coding performance’ than a ‘live coding performance’. I am not saying this is a bad thing, as the differences are indistinguishable to the untrained eye (probably only the professor could tell whether something was done ad hoc or pre-planned simply by looking at it); it is simply a reflection of the experience we have with the live coding environments thus far. It might be possible for us as students to approach the ‘from scratch’ end of the live coding spectrum in the future, but for now, it is something that I can only look at and think about. It’s akin to the journey of a person learning the piano: they learn to read and play sheet music first before learning to write their own.

Video Demo

Sound Composition

Both sound and visual design were built around the main idea of our project: a horror-esque live coding performance. It was definitely unconventional for a live coding performance, but the idea stuck because our group was more excited about audience reactions than about the performance itself.

We borrowed many sound elements from horror movie soundtracks: ambient noises, ominous drums, and banging doors. The performance starts with some ambient noise (an ‘ab’ sound played with a tweaked class example), thumping sounds (jvbass), and crows (crows) chirping in the background. We then transition through a few different main ‘melodies’: a haunting vibraphone sound (supervibe), footsteps (a custom sample), and a few other samples that sounded unnerving to us.
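A rough sketch of how those opening layers stack up (sample names as above, but the patterns and numbers here are approximations rather than our actual performance code):

d1 $ slow 4 $ s "ab" # gain 0.8 # room 0.6        -- ambient drone bed
d2 $ s "~ jvbass ~ ~ jvbass ~ ~ ~" # lpf 400      -- low, irregular thumps
d3 $ slow 2 $ s "crows*2" # gain 0.7 # pan rand   -- crows scattered across the stereo field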

The buildup happens through the ‘footsteps’ custom sample we have. We speed up the footsteps as we fade out the other sound elements, to make it sound like someone is running away from something. At its peak, we transition to the jumpscare, before fading to a scene that’s reminiscent of TV static. The sound for the TV static scene is our interpretation of TV white noise through TidalCycles, and we made good use of the long sample examples that were provided to us in class.
d2 $ jux rev $ slow 8 $ striateBy 32 (1/4) $ s "sheffield" # begin 0.1 # end 0.3 # lpf 12000 # room 0.2 # cut 1
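Backing up a step, the footstep buildup itself worked roughly like this: re-evaluating the footsteps pattern at progressively higher speeds while pulling the gain down on the other layers (the numbers are illustrative, not the exact values we used):

-- footsteps getting faster as the buildup approaches its peak
d4 $ fast 2 $ s "footsteps" # gain 1.1
d4 $ fast 4 $ s "footsteps" # gain 1.2

-- meanwhile the other layers fade out and drop away
d1 $ slow 4 $ s "ab" # gain 0.4
d3 silence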

Visual Composition

The visuals are based on an image of a hallway with a door at the end. The main idea behind it was to zoom in towards the door as the jumpscare approaches. We intensified the image with Hydra by adding dark spots, saturating the image, and creating a flickering effect as the performance progressed. We finish on a shot of the zoomed-in door before transitioning to the jumpscare.
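A minimal sketch of the Hydra side, assuming the hallway image has already been loaded into s1 (the numbers and the particular flicker trick are approximations, not our performance code):

// creep towards the door, push the colours, add drifting dark patches and an occasional flicker
src(s1)
  .scale(() => 1 + time * 0.05)                          // slow zoom towards the door
  .saturate(1.5)
  .contrast(1.3)
  .mult(noise(3, 0.1).thresh(0.4, 0.1))                  // drifting dark spots
  .brightness(() => (Math.random() > 0.97 ? -0.4 : 0))   // occasional flicker
  .out(o0)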

Getting the jumpscare video to work (please don’t visit the video.src unprepared) took us a surprising amount of time, as we struggled to find a place to host the video. Uploading it to Google Drive didn’t work because of CORS errors, and uploading to YouTube didn’t work either. We finally found a host that worked: Imgur.

// create a hidden video element to use as a Hydra source
var video = document.createElement('video');
video.loop = false;
video.crossOrigin = "anonymous"; // required so Hydra can read pixels from a cross-origin video
video.autoplay = true;
video.src = 'https://i.imgur.com/uFj91DQ.mp4';
// once enough of the video has buffered, register it as source s0 and put it on screen
video.oncanplaythrough = function() {
  s0.init({ src: video, dynamic: true });
  src(s0).out();
};

After the jumpscare, we transition to TV static, reminiscent of what you might see after being caught by a monster in a video game, or of an old TV with no signal.


Work Division

Jun and Aadhar primarily worked on TidalCycles while Maya worked on Hydra, the same arrangement we had for the first drum circle project last week. However, we found ourselves giving a lot more feedback to each other on each part of the project instead of isolating ourselves to the parts we were working on, and we think that helped create a more cohesive project. A big part of our project was repeatedly practicing and coordinating our synchronous changes for the jumpscare: the jumpscare sound in TidalCycles and the jumpscare video in Hydra needed to be triggered at the same time for the performance to feel coherent. We decided to use the ‘scale’ in Hydra as an internal indicator to signal when we needed to get ready to trigger the jumpscare. When the scale of the room reaches [32], both parts of the performance trigger the jumpscare sequence.

First, I thought the reading was relatively hard to follow because of how many unfamiliar names and references were used, and because seemingly random nouns and sentences were bolded, making them feel important to know before you could understand the reading. It was not an encouraging read to see huge chunks of bolded text that were just names of artists, bands, and pieces that I had never heard of.

What I gained from the reading was a dive into the term ‘artist-musician’, and how interdisciplinary works are becoming more common with time. It is interesting that even today, the term ‘interdisciplinary’ is still used as a buzzword of sorts to highlight an artist, as interdisciplinarity is usually regarded as a favorable addition to any project. The author lists examples of successful interdisciplinary works and notes that “in the pop music scene of recent years, in particular, the dual profession of artist-musician/musician-artist is no longer anything of note”. I agree to a certain extent that today it might be harder to find a ‘pure’ artist or musician who is not influenced by any other field.

I liked the comparison of different live coding platforms to languages, and I agreed with the author that the platform/language you work with will heavily dictate how you approach things. When I was working with Cascade for my research project, I found myself leaning towards making simple HTML shapes, as that was what Cascade afforded me on the surface. With Hydra, I found myself making heavy use of the oscillators, as that was what was available. Could I make anything I visualized in my imagination in both platforms? Probably! But it would take so much work, and in a way, being limited by the constraints/affordances of Cascade/Hydra let me think more about making the most of what I was given, instead of being forced to answer a question as abstract as ‘make anything you want’.

I found it funny how the author emphasized the ephemeral nature of live coding, especially the “live coders [that] celebrate not saving their code”. In our class, I found myself and many of my classmates “pre-gramming” (as called by the author) as much as we could to make the performance as smooth as possible. Perhaps this is just a side-effect of being new to the platform, or having little knowledge, but I’m still fearful of doing the ‘live’ part of live coding, and would rather trust the pre-gramming approach.

As the author compares computers to instruments, I wonder whether a syntax error in a live coding performance could be treated like playing a wrong note. However, I think the comparison is not entirely fair to either side. I find live coding platforms like TidalCycles to be an abstraction layer between the creator and the music itself. With a guitar or piano, there is no such layer between the creator and the music, and you can easily map your physical sense of self (of your fingers and hands) to the music being produced. There is a big physical disconnect with live coding platforms, which depend heavily on visuals to make sure you’re playing the right notes. You have to look at the code, process that information with your eyes, and then further work out what that block of code will sound like. Live coding loses access to one of the best features of being human: proprioception. If I play an instrument well enough, I can make music just by feel, even with my eyes closed. I suppose you could argue that you can type with your eyes closed, but that feels like a bit of a stretch for making music…

Idea

The main idea behind my composition project was a combination of my previous TidalCycles audio demo and Hydra visuals demo from Week 2 and Week 3. I really liked the sunset-beach aesthetic of my Hydra visuals, so I knew I wanted my audio to accompany that relaxing, laid-back feeling.

Audio:

My previous TidalCycles demo was me trying to recreate Oh Honey by The Delegations, and I expanded on this idea by mashing it up with Groovin’ by The Young Rascals. The two songs I chose to mash up are from the 1960s–1970s, and they are R&B/blues/soul songs, which fit the aesthetic I wanted to go for.

To mash up the songs, I listened to both and picked out elements I liked from each. I liked the main theme of Oh Honey more, so I took the bassline, guitars, and chords from it. I liked the beat of Groovin’ more, so I took its maracas and clave pattern, and also included its melody line. Combining the two melodies did not work as well as I expected, as the two songs are in different keys. I attempted to transpose Groovin’s melody into Oh Honey’s key of G major, but it didn’t sound good after the transposition, so I kept Groovin’ in its original key even though it was off-key, as I thought it sounded better. Combining the non-melodic beats was much easier as they are not key-specific: drums, maracas, claves.
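A rough sketch of the kind of transposition I tried in TidalCycles (the note pattern and synth below are placeholders, not my actual Groovin’ line):

-- placeholder melody line, in its original key
d2 $ n "0 3 5 7 ~ 5 3 0" # s "superpiano"

-- the same line shifted up five semitones to sit in the other song's key;
-- this is the kind of transposition I tried before deciding the untransposed version sounded better
d2 $ n ("0 3 5 7 ~ 5 3 0" |+ 5) # s "superpiano"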

Visuals:

My previous Hydra demo had a similar theme of a sunset/ocean view, but I expanded on the visuals by using a modified oscillator shader that starts off with larger bands that become smaller towards the end, adding sea reflections, and adding a sky to the background.
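A minimal sketch of the shrinking-bands idea (the numbers, colours, and the wobble standing in for the water are illustrative; my actual patch also layered the reflections and sky on top):

// oscillator whose bands start large (low frequency) and get smaller as time passes
osc(() => Math.min(10 + time * 0.5, 60), 0.05, 0.8)
  .color(1.0, 0.6, 0.3)            // warm sunset tint
  .modulate(osc(6, 0.1), 0.08)     // gentle wobble standing in for the water surface
  .out(o0)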

The visuals were controlled by three MIDI patterns from TidalCycles. The sun would groove to the melody, the sea to the drum pattern, and the sea reflections were affected by the chord patterns. I left the sky untouched, as I felt that moving too many elements at once might be too chaotic and make it difficult to discern the beats of the song. In the final version, I think it was easy to identify which visual element corresponded to which element in the music, which I was pretty proud of.
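The plumbing behind this is roughly the usual Tidal-to-Hydra MIDI route: on the Tidal side, patterns like ccv "0 64 127" # ccn 0 # s "midi" send control-change values out alongside the music (assuming a virtual MIDI device is already configured in SuperDirt), and on the Hydra side a WebMIDI listener stores the latest CC values so they can drive parameters. The CC number and scaling below are illustrative, not my actual mapping:

// store the most recent value of each MIDI CC, normalised to 0..1
let cc = Array(128).fill(0)
navigator.requestMIDIAccess().then((midi) => {
  midi.inputs.forEach((input) => {
    input.onmidimessage = (msg) => {
      const [status, index, value] = msg.data
      if ((status & 0xf0) === 0xb0) cc[index] = value / 127   // control change messages only
    }
  })
})

// e.g. let the drum pattern (sent on CC 0) push the sea around
osc(30, 0.05, 0.8).scale(() => 1 + cc[0] * 0.3).out(o0)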

Overall

Overall, I was really proud of the buildup and fadeout of the song, as the beat patterns worked really well when introduced or faded out one by one. I was a little disappointed with the drop, as I couldn’t find a way to make it sound more energetic without making it too muddy and messy, so I left it as it was.

Cascade

Self-described as “a live coding parasite to Cascading Style Sheets”, this live coding platform is built on top of CSS, one of the main elements of any webpage. As CSS is a styling language, and Cascade uses CSS as the input for creating music, this creates a very nice side effect: you get the visuals as part of creating the audio.


How it works

How is CSS used to create music? Simply put, some key CSS properties were chosen to be interfaced with sound. These specific properties were chosen because they have strong graphical effects and few dependencies on other CSS properties. For example, background-color impacts the visuals heavily, so it was chosen. Font-size is not a good candidate for this purpose because it can depend on parent properties (em/rem font sizes) and it does not affect the visuals much.
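For illustration only, this is the kind of ordinary CSS a Cascade piece is built from; the actual property-to-sound mapping is the one summarized in the documentation picture below, not something I am reproducing here:

/* strong, self-contained visual properties: good raw material for sonification */
.kick {
  background-color: #d03030;  /* absolute and visually loud */
  width: 120px;               /* absolute size, independent of the parent element */
}

/* relative font sizes depend on the parent element and barely change the visuals,
   so a property like this is a poor candidate */
.hat {
  font-size: 1.2em;
}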

A summary picture of the documentation is attached below.



Limitations

At first use, I enjoyed how easily the audio and visuals tied in with each other. Unlike what we had to do in Hydra/TidalCycles, there is no need to create each element separately. By just creating a beat that sounded decent, I also got the bonus of the visuals already being created for me, synced with the music. This was both a boon and a limitation. As the audio is directly dependent on the visuals, you are limited in what you can express and tied to the options available in Cascade. And because it uses CSS as its interface, creating complex visuals means performing complicated DOM manipulations on your own to achieve something that other live coding platforms offer in a single function.

Slides (accessible only to NYU emails!)

The basic idea I got from the reading was that composed music is currently too boring and predictable, and she develops this idea of ‘random corruption’ to make music more interesting, making it a point to differentiate it from ‘random generation’. I think that’s fair enough. As I was reading her article, I was about to bring out my pitchfork, as I didn’t believe in random generation for music: as expressive as music is, there are rules and guidelines you can play by to make it more interesting. For example, in piano composition, a common trick to make a left-hand chord plus right-hand melody sound more interesting is to arpeggiate the left-hand chord. Does that make it predictable? Sure, maybe, but it’s just one of many ways of approaching music composition. It has sounded nice for hundreds of years, and it will be difficult for randomly generated notes to approach the history and impact of that approach.

However, that wasn’t the point she was making, so I was thankful and put my pitchfork down. Random corruption is a much more palatable idea to me, and when Prof Aaron demonstrated the randomly dropped notes in class last Thursday I understood that there are some uses for it. However, this still relies on a composed piece, with random corruption removing information from the composition, not adding entropy/random information to it (random generation). A composed piece is order brought together by a person from the chaos of all possible notes. Removing some information from this order will still sound nice; adding completely random noise to it will not. That is an important distinguishing feature between the two.
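In TidalCycles terms, the randomly dropped notes we saw are essentially what degradeBy does: it removes a random fraction of events from an otherwise composed pattern. The melody below is just a placeholder to show the idea:

-- a fully composed line
d1 $ n "0 4 7 12 7 4 0 ~" # s "superpiano"

-- the same line with roughly 30% of its events randomly dropped:
-- information is removed from the composition, nothing random is added to it
d1 $ degradeBy 0.3 $ n "0 4 7 12 7 4 0 ~" # s "superpiano"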

After writing this critique, I went on to listen to Laurie Spiegel’s album The Expanding Universe (1980), and I liked it! It sounded rather compositional, with possibly a few hints of randomly dropped notes (or at least that’s what I thought). It was released before this article was written, so I’m not sure if she was already inspired by this information-theory model or if it was still in the works. I then listened to her newer album, Obsolete Systems (2001), and it definitely felt more like ‘random generation’, and I liked it less because of that.