Video Demo

Sound Composition

Both the sound and visual design were built around the main idea of our project — a horror-esque live coding performance. It was definitely unconventional for live coding performances, but the idea stuck because our group was more excited about audience reactions than about the performance itself.

We borrowed many sound elements from horror movie soundtracks — ambient noise, ominous drums, and banging doors. The performance starts with some ambient noise (an ‘ab’ sound played with a tweaked class example), thumping sounds (jvbass), and crows (crows) cawing in the background. We then transition to a few main ‘melodies’: a haunting vibraphone sound (supervibe), footsteps (a custom sample), and a few other samples that sounded unnerving to us.

The buildup happens through our custom ‘footsteps’ sample. We speed up the footsteps while fading out the other sound elements to make it sound like someone is running away from something. At its peak, we transition to the jumpscare, before fading to a scene that’s reminiscent of TV static. The sound for the TV static scene is our interpretation of TV white noise in TidalCycles, and we made good use of the long-sample examples provided in class.
d2 $ jux rev $ slow 8 $ striateBy 32 (1/4) $ s "sheffield" # begin 0.1 # end 0.3 # lpf 12000 # room 0.2 # cut 1
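The footsteps buildup itself could be sketched roughly like this; the ‘footsteps’ sample is our own, but the exact pattern below is an illustrative reconstruction rather than our performance code:

```haskell
-- illustrative sketch: accelerate the footsteps while fading the ambience.
-- Re-evaluate with growing `fast` values (2, 3, 4 ...) as the buildup peaks:
d3 $ fast 2 $ s "footsteps" # gain 1.1

-- meanwhile, lower the ambient layer's gain a little on each evaluation
d1 $ s "ab" # gain 0.4
```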

Visual Composition

The visuals are based on an image of a hallway with a door at the end. The main idea was to zoom in toward the door as the jumpscare approaches. We intensified the image with Hydra by adding dark spots, saturating the image, and creating a flickering effect as the performance progressed. We end with a shot of the zoomed-in door before transitioning to the jumpscare.

Getting the jumpscare video to work (please don’t visit the video.src unprepared) took us a surprising amount of time, as we struggled to find a place to host the video. Uploading the video to Google Drive didn’t work because of CORS errors, and uploading to YouTube didn’t work either. We finally found a host that worked: Imgur.

// create a hidden video element and load the jumpscare clip from Imgur
var video = document.createElement('video');
video.loop = false;
video.crossOrigin = "anonymous";
video.autoplay = true;
video.src = 'https://i.imgur.com/uFj91DQ.mp4';
// wait until the video can play through before wiring it into Hydra's s0 source
video.oncanplaythrough = function() {
  s0.init({ src: video, dynamic: true });
  src(s0).out();
};

After the jumpscare, we transition to TV static, reminiscent of what you might see after being caught by a monster in a video game.


Work Division

Jun and Aadhar primarily worked on TidalCycles while Maya worked on Hydra, the same arrangement we had for the first drum circle project last week. However, we found ourselves giving a lot more feedback to each other on every part of the project instead of isolating ourselves in our own parts, and we think that helped create a more cohesive piece. A big part of our project was repeatedly practicing and coordinating the synchronized jumpscare: TidalCycles had to trigger the jumpscare sound at the same moment Hydra triggered the jumpscare video, or the performance wouldn’t feel coherent. We decided to use Hydra’s ‘scale’ value as an internal indicator among ourselves to signal when we needed to get ready. When the scale of the room reaches 32, both parts of the performance trigger the jumpscare sequence.

Hydra (Nicholas):

We decided to give our drum circle project a floaty underwater-esque energy. When messing around with video inputs, I found a GIF of jellyfish swimming in the Hydra docs as a starting point.

The main thing I disliked about this GIF was that it didn’t loop perfectly, which didn’t look good when it was the focal point of the visuals. As a result, I decided to place it within a wave of color derived from an oscillator with several types of modulation applied.

Having the colors come over the screen in a wave made the looping more seamless and created the opportunity to add something in the background. Buzz Lightyear was the perfect candidate: a perfectly looping, ominous, and slightly funny GIF.

Placing the sad Buzz Lightyear GIF below the flood of colors adds to Buzz’s expression of hopelessness. I found this a bit funny, and combining two different sources to manipulate also gave us more opportunities to mess around and improvise during the performance.

TidalCycles (Ian, Chenxuan, Bato):

Rather than splitting our roles into strict parts, we decided to freely work on the audio together and see what came out of it. This unrestrained kind of jamming had yielded satisfactory results for our previous meetings, and we thought it would be best to run with what had been effective for us. That’s the spirit of live coding, after all—spontaneous inspiration!

We started by laying out a simple combination of chords alternating between two patterns using the supersaw synth, which sounded quite airy and gave off house music vibes. Ian added a nice panning effect to it, which gave it a sense of dimension and made it sound more dynamic. On top of this, we added a catchy melody that fit with the chords using the “arpy” sample, which was then made glitchy with the jux (striate) function.
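A minimal sketch of that texture might look like the following; the chord voicings and melody here are placeholders rather than our actual lines:

```haskell
-- two alternating chords on supersaw, panned back and forth for dimension
d1 $ slow 2 $ n "<[0,4,7] [5,9,12]>" # s "supersaw" # pan sine # room 0.4

-- a catchy arpy melody, made glitchy with jux (striate n)
d2 $ jux (striate 8) $ n "0 4 7 12 7 4" # s "arpy"
```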

We also experimented with a few drum and bass patterns to go along with the melodies. An attempt that went particularly well paired the “drum” samples with a random number generator AND a low pass filter driven by a sine oscillator. This combination created constant variation in both the timbre and the spatial placement of the drums, which kept things adequately erratic and exciting.
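That combination — a random sample picker plus a sine-swept low pass filter — could be sketched like this (the numbers are illustrative, not our exact values):

```haskell
-- eight drum hits per cycle with randomized sample choice and a cutoff
-- that sweeps between 400 Hz and 4 kHz along a sine wave each cycle
d3 $ s "drum*8" # n (irand 8) # lpf (range 400 4000 sine)
```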

We tried to match the atmosphere of the audio with the visuals. This meant that we chose instruments that fit a more floaty-ish vibe—the supersaw synth was a surprisingly decent choice—and gave it more of that underwater feel by adding some reverb (the “room” effect) and spatial dimension (the “pan” effect). The drum line with the oscillating lpf values also lent the overall audio a fluid, mesmerizing quality that went along with the aquatic theme.

SOUND COMPOSITION

For the sound composition, we used the chopped sample from Magnetic-ILLIT as the main reference. Inspired by the arpeggio in its intro, we chopped off different parts of this sample to create a new composition with original rhythms. The music starts with a reverbed piano sound, using TidalCycles functions like jux rev and off, and slowly builds up by adding elements like drums and hi-hats. The main melody is composed of different ‘beep’ sounds sliced from different parts of the arpeggio sample, creating a glitchier aesthetic. We mostly focused on creating a combination of sounds that go well together, carefully curating different samples and trying different techniques to handle the long sample.
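A rough sketch of this approach, with the hypothetical sample name `magnetic` standing in for our chopped sample:

```haskell
-- reverbed opening: mirror the channels and layer an octave-up offset copy
d1 $ jux rev $ off 0.125 (|+ n 12) $ splice 8 "0 2 4 6" $ s "magnetic" # room 0.6

-- glitchy 'beep' melody sliced from different points of the arpeggio
d2 $ splice 8 "1 3 5 [7 5]" $ s "magnetic" # crush 6
```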

VISUALS COMPOSITION

For the visuals, we first started with an image, as we did for our first group project. We then worked on adding different effects that transformed our piece from a red building to a variety of different shapes and effects.

We ran into some struggles with the image loading, and thanks to the Professor, we got this code that loads the image onto the screen:

image = document.createElement('img')
image.crossOrigin = "anonymous"
image.src = "https://blog.livecoding.nyuadim.com/wp-content/uploads/photo-1711873317754-11f6de89f7ae-scaled.jpg"
loaded = () => {
   s0.init({ src: image })
   src(s0).out()
   console.log("Image loaded");
}
if (image.complete) {
   loaded()
} else {
   image.addEventListener('load', loaded)
}

Don’t forget to let flock load before running anything! (You can check from the browser console.)

We started by using .modulatePixelate(osc(10), 200).saturate(0)

where modulatePixelate produced interesting effects and saturate(0) removes the colors, which we reintroduce later with colorama, choosing colors and gradients different from the ones in the image. We also use mult and blend to add styles, remove parts of the screen, and vary the effect, repeating this for our main audio interaction alongside other effects.

We’ve played with gradually reducing the numbers in the pixelate function to simplify our visuals. Our plan is to incorporate it as the final function, making it easier for the next group to take over from us.

WORK DIVISION

When we first met, the work naturally divided into audio and visuals, with Fatema and Marta working on visuals and Jeongin on audio; that was where our interests lay and what worked best for our group. Marta and I worked on different visuals and met to try various effects and explore possibilities for developing different styles. We then worked with Jeongin to align our effects, adding interactions and ccv audio effects to sync the visuals with the audio and establish a set style for our live coding piece.

Concept:

The project came together after we decided we wanted to create a faster track compared to our previous one. After Noah laid down some breakbeats and Aakarsh some textural pads, we settled on an atmospheric breakcore sound. Raya and Juanma took the visuals from this starting point and created an eclectic mix of old cartoon footage with Hydra glitch effects.

Sound Composition:

The first sound is Sophie’s “It’s Okay To Cry” put through striateBy to create an ambient pad. Then a voice line from Silent Hill 2 goes through a similar process. A superzow arp with a long sustain kicks in, creating a bleeding, noisy, driving sound, and a trem adds rhythm to this sustained, blended arp. A jungle bass and a cut-up Amen break kick in next. A superpwm with a shorter sustain follows.

The whole song goes double time now. The break is replaced with a faster, glitchier cut-up, and a rhythmic chopped-up voice memo replaces the superpwm arp. These are transitioned in with a gradual manual sweep of a low pass filter. The superpwm comes back now, crushed to add more granularity to the sound texture.

The song eventually returns to the original tempo, cutting out the vocal fragments, while the rest of the instrumentals gradually fade out.

Visual Composition:

The visual composition initializes video playback from a URL of “The Mechanical Cow” and first starts in black and white. It then applies a modulation effect to create a slightly scaled black-and-white version of the video, intensifies the grayscale effect, and blends it with the original output. The colorful-variation blocks introduce color dynamics, kaleidoscopic effects, and rotation modulations controlled by time-based functions, enhancing the visual complexity. We then increase the intensity of the visual effects with higher-frequency oscillations, more complex colorama applications, and modulations that respond dynamically to the sine of the time. For the final layer we apply color changes and scaling based on the audio’s frequency spectrum, combined with masked oscillations to produce rhythmic visuals ideal for accompanying drum beats.

Code:

d1 $ jux rev $ slow 16 $ striateBy 64 (1/8) $ sound "sophie:0" # room "0.9" # size "0.9" # up "-2" # gain 0.8
d2 $ jux rev $ slow 16 $ striateBy 64 (1/8) $ sound "scream:0" # room "0.9" # size "0.9" # up "1"

d5 $ slow 8 $ s "jungbass" <| n (run 20) # krush 8 


xfadeIn 2 64 $ fast 1 $ chunk 4 (|- note 5) $ jux rev $ 
  arp "<converge diverge>" $ note "<cs3'min9'11'o>"
  # sound "superzow" # gain 0.7  # tremr "8" # tremdp 1
  # sustain 1 # room 0.3 # sz 0.5  
  # lpf 1000 # lpq 0.2 # crush 2

xfadeIn 3 64 $ fast 1 $ jux rev $ 
  arp "<pinkydown>" $ note "<cs3'min9'11'o>"
  # sound "superpwm" # gain 0.8 
  # sustain 0.2 # room 0.3 # sz 0.5  
  # hpf 500 # hpq 0.2 # crush 2

setcps(0.7)

hush

d6 $ slow 4  $  s "amencutup" <| n (shuffle 8 $ run 32) # speed "{1,2,3}%8" # gain 1 # room 0.4 # sz 0.6 # krush 4 # lpf 6000
d6 $ slice 8 "[<0 1> 2] [<3*8 4 0> <4 3 1>]" $ s "breaks152" # gain 1.35 # legato 1 # room 0.3 # sz 0.6 # lpf 6000 # krush 2
d3 $ slow 2 $ slice 8 "1 [1*2] 2 3 2 [4*2] [~ 3] [5 [5*2]]" $ s "vmemo:2" # gain 1.7 # legato 1 # room 0.3 # sz 0.9 # krush 4 # lpf 3000

d6 $ silence
d3 $ silence


d10 $ ccv "120 30 110 40" # ccn "1" # s "midi"
d11 $ slow 4 $ ccv "[<0 50> 127] [<0 50> <177 30>]" # ccn "0" # s "midi" 
d12 $ ccv "127 60 127 90" # ccn "2" # s "midi"
d13 $ ccv "20 40 60 80 100 120 127 0" # ccn "3" # s "midi"
s2.initVideo('https://upload.wikimedia.org/wikipedia/commons/c/ce/The_Mechanical_Cow_%281927%29_silent_version.webm')

src(s2).out(o0)

//b&w 1
src(s2).modulate(src(s2), [0,1]).scale(0.8).out(o0)

//b&w 2
src(s2).color(-1.5, -1.5, -1.5).blend(o0).rotate(-0.5, -0.5).modulate(shape(4).rotate(0.5, 0.5).scale(2).repeatX(2, 2).modulate(o0).repeatY(2, 2)).out(o0)

//colorful variation 
src(s2).blend(osc(5, 0.5, ()=>cc[2]*0.02)
    .kaleid([3,4,5,7,8,9,10].fast(0.1))
    .color(0.5, 0.3)
    .colorama(0.4)
    .rotate(0.009,()=>Math.sin(time)* -0.00001 )
    .modulateRotate(o0,()=>Math.sin(time) * 0.003)
    .modulate(o0, 0.9)
    .scale(0.9))
    .out(o0)

//colorful variation 2
src(s2).modulate(src(s2), ()=>cc[2])
  .blend(osc(5, 0.5, 0.1)
    .kaleid([3,4,5,7,8,9,10].fast(0.1))
    .color(0.5, 0.3)
    .colorama(0.4)
    .rotate(0.009, ()=>Math.sin(time) * -0.0001)
    .modulateRotate(o0, ()=>Math.sin(time) * 0.003)
    .modulate(o0, 0.9)
    .scale(0.9))
  .out(o0)

//more distortion (add colorama details, osc 10)
src(s2).modulate(src(s2), ()=>cc[2])
  .blend(osc(10, 0.5, ()=> 0.1 + 0.9*Math.sin(time*0.05))
    .kaleid([3,4,5,7,8,9,10].fast(0.1))
    .color(0.5, 0.3)
    .colorama(()=> 0.5 + 0.5*Math.sin(time))
    .rotate(0.009, ()=>Math.sin(time) * -0.0001)
    .modulateRotate(o0, ()=>Math.sin(time) * 0.003)
    .modulate(o0, 0.6)
    .scale(0.9))
  .out(o0)

//super distortion
src(s2).rotate(0).modulate(src(s2), ()=>cc[0])
  .blend(osc(10, 0.5, ()=> 0.1 + 0.9*Math.sin(time*0.05))
    .kaleid([3,4,5,7,8,9,10].fast(0.1))
    .color(0.5, 0.3)
    .colorama(()=> 0.5 + 0.5*Math.sin(time))
    .rotate(0.009, ()=>Math.sin(time) * -0.0001)
    .modulateRotate(o0, ()=>Math.sin(time) * 0.003)
    .modulate(o0, 0.6)
    .scale(0.9))
  .out(o0)

src(s2)
.mult(osc(20,-0.1,1).modulate(noise(3,1)).rotate(0.7))
.posterize([3,10,2].fast(0.5).smooth(1))
.modulateRotate(o0)
.out()

//vibrant circle layer
src(s2).add(noise(2, 1)).color(0, 0, 3).colorama(0.4).out()

//vibrant circle layer with MIDI
src(s2).add(noise(()=>cc[1], 1)).color(0, 0, 3).colorama(0.4).out()

//Transition
src(s2).add(noise(()=>cc[1]*0.3, 1)).scale(()=> a.fft[2]*5).color(0, 0, 3).colorama(0.4).out(o0)

//drum vibes
src(s2)
.color(() => a.fft[2]*2,0, 1)
.modulate(noise(() => a.fft[0]*10))
.scale(()=> a.fft[2]*5)
.layer(
  src(o0)
  .mask(osc(10).modulateRotate(osc(),90,0))
  .scale(() => a.fft[0]*2)
  .luma(0.2,0.3)
)
.blend(o0)
.out(o0)

hush()

Work Distribution:

The entire project came together pretty much simultaneously; the visuals were shaped by the audio and vice versa. Everyone contributed wherever possible, from technical work to design choices. More specifically, Aakarsh worked on the synths and pads, Noah came up with the drums and rhythm, Raya worked on the Hydra part to create the visual layers on top of the video, and Juanma came up with the video and worked on the MIDI syncing.

Hoffmann and Naumann trace the roots of artist-musicians back to figures like Leonardo da Vinci, establishing a long-standing tradition of interdisciplinary genius that challenges the modern compartmentalization of artistic professions. This historical lens invites a contemplation on the essence of creativity itself—is it not the spirit of inquiry and boundless exploration that defines true artistry, irrespective of medium? The concept of the “all-round artist” resonates with my understanding of art as a fluid expression of human experience, unbounded by rigid categorizations. It prompts one to consider how contemporary artists might draw upon this tradition to navigate and transcend the increasingly blurred lines between disciplines.

The move towards abstraction in both art and music reflects a shift from representational to conceptual modes of expression. The authors highlight the role of abstraction in fostering a form of universal communication:

“The main focus of modernist art was therefore on the basic elements (color, forms, tones, etc.) and the basic conditions (manner and place of presentation) of artistic production.” So the question arises: in what ways does abstraction in music influence abstraction in the visual arts, and vice versa?

The exploration of synesthesia and the case studies of Kandinsky and Schoenberg exemplify the profound interplay between seeing and hearing, revealing how artists and musicians have sought to create immersive and multisensory experiences. This intersection fascinates me, as it encapsulates the quest for a holistic artistic expression that engages all senses, thereby amplifying the impact and reach of the artwork.

The role of art schools in fostering interdisciplinary and multidisciplinary work underscores the importance of educational environments in shaping the artists of the future. As someone who values the transformative power of education, I see art schools as crucial incubators for challenging traditional boundaries and nurturing the next generation of artist-musicians. This prompts further reflection on how curricula and institutional structures might evolve to better support this cross-pollination of ideas and techniques.

In conclusion, I believe Hoffmann and Naumann’s work encourages us to reconsider the fluid boundaries between artistic disciplines, urging a deeper appreciation for the complex dialogues that have shaped the evolution of art and music.

With an increasing push toward specialization in the current economic environment, Artist-Musicians, Musician-Artists by Justin Hoffmann and Sandra Naumann serves as a calming point for my anxieties. The text is a great, compact overview of multidisciplinary traditions spanning the 20th century to the early 21st century. Enamored by their descriptions of both artists I had heard of and those I hadn’t, I was prompted to check out many of these pieces while reading. Their highlighting of fashion icons like Vivienne Westwood and club spaces in the LES was of particular interest to me, as I had been exploring intersections of fashion, club spaces, and media arts in my writings at university over the last few years. The multi-modality of our major, Interactive Media, had always seemed a given to me, since I had been engaging with various forms of media together even before I started this degree. This rabbit hole had always seemed a natural path: starting out with music led me to make album covers, which led me to music videos, and the sequence kept building until I ended up with video games, installations, websites, and short films. However, realizing that this amorphousness of mediums is a relatively recent turn of events, spearheaded by experimental practitioners, I have gained a new appreciation for and understanding of the depth of our medium.

It was interesting to see the background / history of how the definition or range of art and music were pushed further. I think the part that stood out most to me was the turning point where art schools provided the environment in which students could try new things, becoming “the point of origin for interdisciplinary and multidisciplinary work.”

It’s also interesting to see how pop started, with “the principle that a good punk song only needed three chords applied just as much as the do-it-yourself attitude.” I think this is a mindset that still prevails, even in our course right now. It’s noteworthy that the combination of a good environment and a shift in people’s perspectives opened the field to new definitions of musicians and artists, and what they could do.