Visuals (Aadhar & Chenxuan)

The idea was to combine Hydra visuals with an animation overlaid on top. Aadhar drew a character animation of a man falling, which sat on top of the visuals and drove the story and the sounds. Blender was used to draw the frames and render the animation.

The first issue that came up with overlaying the animation was making the background of the animated video transparent. We tried hard to bring a video with a transparent background into Hydra, but no matter what we did the background rendered as black. Instead, we turned the background transparent in Hydra itself, using its luma function, which was much easier.
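
A minimal sketch of that approach, assuming the clip is loaded into s0 and the background shaders render to o1 (the file name and threshold values here are placeholders; the right threshold depends on how dark the background actually renders):

s0.initVideo('falling.mp4') // hypothetical file name
src(o1) // the shader background
  .layer(src(s0).luma(0.1, 0.05)) // luma keys out near-black pixels, leaving them transparent
  .out(o0)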

Then, because the video was two minutes long, we couldn't get all of it into Hydra: Hydra only accepts 15-second-long clips. So we had to chop it up into eight 15-second pieces and trigger each video at the right time to make the animation flow. This wasn't as smooth as we thought it would be. It took a lot of rehearsals to get used to triggering the videos at the right time, and the timing never fully came together even by the end; the videos would loop before we could trigger the next one, which we did our best to cover, and the final performance really reflected that effort. Other than the animation itself, different shaders were used to create the background.
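
Triggering the pieces might look something like this (a sketch; the chunk file names are placeholders, and in performance each call is evaluated live at the right moment):

// load and show the nth 15-second chunk on top of the background
function playChunk(n) {
  s0.initVideo('fall_part' + n + '.mp4') // fall_part1.mp4 ... fall_part8.mp4 assumed names
  src(o1).layer(src(s0).luma(0.1, 0.05)).out(o0)
}
playChunk(1) // then playChunk(2), playChunk(3), ... as the animation progresses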

Chenxuan was responsible for this part of the project. We created a total of six shaders. Notably, the shader featuring the colorama effect appears more two-dimensional, which aligns better with the animation style. This is crucial because it ensures that the characters and the background seem to exist within the same layer, maintaining visual coherence.

However, we encountered several issues with the shaders, primarily a variety of errors during loading; each shader seemed to manifest its own problem. For example, some shaders had data type conflicts between integers and floats, while others declared the same ‘x’ or ‘y’ variable more than once, causing conflicts within the code.
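
Both error classes are common in GLSL, which is strictly typed and does not promote integer literals to floats. Illustrative fragments (not our actual shader code):

float b = 1;    // error: int literal cannot implicitly convert to float
float b = 1.0;  // fix: use a float literal

float x = uv.x;
float x = uv.y; // error: 'x' redeclared in the same scope
float y = uv.y; // fix: use a distinct name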

Additionally, the shaders displayed inconsistently across platforms. On Pulsar they performed as expected, but on Flok the display split into four distinct windows, which complicated our testing and development process.

Code for the visuals can be found here.

Audio (Ian)

The audio is divided into three main parts: one before the animation, one during the animation, and one after the animation. The part before the animation features a classic TidalCycles buildup—the drums stack up, a simple melody using synths comes in, and variation is given by switching up the instruments and effects. This part lasts for roughly a minute, and its end is marked by a sample (Transcendence by Nujabes) quietly coming in. Other instruments fade out as the sample takes center stage. This is when the animation starts to come in, and the performance transitions to the second part.
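
A schematic of that kind of stacking in TidalCycles (a sketch using generic Dirt-Samples and SuperDirt names, not the actual patterns used):

d1 $ s "bd*4" # gain 1.1                             -- kick enters first
d2 $ s "~ hh ~ hh" # gain 0.9                        -- hats stack on top
d3 $ note "c4 e4 g4 e4" # s "superpiano" # room 0.3  -- simple synth melody
d4 $ slow 2 $ s "bev" # gain 0.6 # lpf 800           -- the sample quietly fading in underneath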

The animation starts by focusing on the main character’s closed eyes. The sample, sounding faraway at first, grows louder and more pronounced as the character opens their eyes and begins to fall. This is the first of six identifiable sections within this part, and it continues until the character appears to become emboldened with determination, at which point a different sample (Sunbeams by J Dilla) comes in. This second section continues until the first punch, with short samples (voice lines from Parappa the Rapper) adding to the conversation from that point onwards.

Much of the animation features the character falling through the sky, punching through obstacles on the way down. We thought the moments where these punches occur would be great for emphasizing the connection between the audio and the visuals. After some discussion, we decided to achieve this by switching both the main sample and the visuals (using shaders) with each punch. Each punch is also made audible through a punching and crashing sound effect. As there are three punches in total, the audio changes three times after the aforementioned second section. These are the third through fifth sections (one sample from Inspiration of My Life by Citations, two samples from 15 by pH-1).
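
One punch cue could be sketched in TidalCycles like this (the sample names and CC numbers are placeholders, not the actual clips or routing):

punchOne = do {
  once $ s "clak" # gain 1.4 # room 0.4;  -- punch/crash one-shot
  d1 $ s "mainsample:2" # gain 1.1;       -- cut to the next main sample (placeholder name)
  once $ ccv "2" # ccn "3" # s "midi";    -- hypothetical MIDI cue for the shader switch
}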

The character eventually falls to the ground, upon which the animation rewinds quickly and the character falls back upwards. A record scratch sound effect is used to convey the rewind, and a fast-paced, upbeat sample (Galactic Funk by Casiopea) is used to match the sped-up footage. This is the sixth and final section of this part. The animation ends by focusing back on the character’s closed eyes, and everything fades out to allow for the final part to come in.

The final part seems to feature another buildup. A simple beat is made using the 808sd and 808lt instruments. A short vocal(ish?) sample is then played a few times with varying effects, as if to signal something left to be said—and indeed there is.

Code for the audio and the lyrics can be found here.

The spirit of Live Coding:

Embrace the spontaneity

Today’s me might fail me

But there’s always tomorrow

나도 기다려져 내일이! (I can’t wait for tomorrow either!)

Breakdown: Noah and Aakarsh worked mainly on the music; Aakarsh made parts 2, 3, 6, 7, and 8, while Noah made parts 4 and 5. Nicholas made all the visual effects, while the group mostly decided together on the videos and text to be displayed.

Audio Code

Audio

The music is inspired by hyperpop, dariacore, digicore, and other internet-originated microgenres. The albums Dariacore, Dariacore 2, and Dariacore 3 by the artist Leroy were the particular inspirations we had in mind. People on the internet jokingly describe dariacore as maxed-out plunderphonics, and the genre’s ADHD-esque hyper-intensity, coupled with its meme-culture-infused pop sampling, was what particularly attracted me and Noah. While it originally started as a dariacore project, this 8-track project eventually ended up spanning multiple genres to provide narrative arcs and various downtempo-uptempo sections. This concept is inspired by Machine Girl’s うずまき (Uzumaki), a song that erratically cuts between starkly different music genres and emotional feels. We wanted our piece to combine that song’s compositional style with the feel of a DJ set. Here’s a description of the various sections:

Pt 1: Midwest Emo Family guy + Girls want Girls + 808 Mafia and Metro Boomin Producer tags

Pt 2: Girls Want Girls + Trap (?) drums

Pt 3: Toosie Slide + Jersey Club drums and sfx

Pt 4: Nujabes’ kodama[interlude] + 1980s News 12 Long Island Broadcast + Westside Gunn adlibs

Pt 5: Eightiesheadachetape’s the bowling alley + Playboi Carti’s Long Time – Intro + Pierre Bourne Producer tag + Think Break

Pt 6: Bad Bunny’s Dakiti + Captain Sparklez’s Revenge + Taylor Swift’s Love Story + Baile Funk Drums

Pt 7: Taylor Swift -> Justin Bieber’s Love Yourself Ambient

Pt 8: Playboi Carti’s Shoota + PinkPantheress’s Boy Is A Liar + Crystal Castles’s Kept + Lil Uzi’s Just Wanna Rock + Khaleeji Drums

Viz Code

Viz

For the visuals, we wanted to incorporate pop culture references and find the border between insanity and craziness. We used a combination of real people, anime, and NYUAD references to keep the viewer guessing what’ll come next. I got around Hydra’s restrictions on videos by developing my own FloatingVideo class, which let us play videos in p5 and layer them over our visuals. I also found a lot of use in the blend and layer functions, which allowed us to combine different videos and sources onto the canvas.
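
The class itself isn’t reproduced here; a minimal sketch of the idea, with assumed names and constructor arguments, might look like:

// draw a looping video at an arbitrary position on a p5 canvas fed into Hydra
class FloatingVideo {
  constructor(p5, url, x, y, w, h) {
    this.p5 = p5;
    this.vid = p5.createVideo(url, () => this.vid.loop()); // start looping once loaded
    this.vid.hide(); // hide the DOM element; we draw the frames ourselves
    this.x = x; this.y = y; this.w = w; this.h = h;
  }
  draw() { // call from p5.draw each frame
    this.p5.image(this.vid, this.x, this.y, this.w, this.h);
  }
}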

Hydra (Marta & Fatema)

For the visual side, we decided to begin with vibrant visuals characterized by dynamic, distorted light trails. Our initial code loaded an image, modulated it with a simple oscillator, and blended the result with the original image, producing a blur effect. As we progressed, we integrated more complex functions based on various modulations.

As the project evolved, our goal was to synchronize the visuals more seamlessly with the music, increasing in intensity as the musical layers deepened. We incorporated a series of mult(shape()) calls to calm the visuals down during slower beats.
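
For instance, multiplying a busy oscillator by a soft shape mask dims everything outside the shape (a generic example, not our exact chain):

osc(20, 0.1, 1.2)
  .mult(shape(4, 0.4, 0.5)) // soft square mask: edges fade to black, calming the frame
  .out(o0)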

Finally, we placed all the visuals in an array and used CCV to update them upon the addition of each new layer of music. This enabled us to synchronize the transitions between the music and visuals. Additionally, we integrated CCs into the primary visual functions to enhance the piece with a more audio-reactive experience.
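
The switching logic can be sketched like this (stand-in visuals; the CC index and scaling are placeholders for however the MIDI bridge delivers values):

visuals = [() => osc(10).out(o0), () => noise(3).out(o0)] // stand-ins for our functions
let current = -1
update = () => { // Hydra calls update once per frame
  let next = Math.floor(cc[0] * 128) - 1 // decode the section index sent from Tidal
  if (next >= 0 && next !== current) { current = next; visuals[current % visuals.length](); }
}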

Check out the final code for visuals.

Tidalcycles (Bato & Jeongin)

For our final composition, our group created a smooth blend of UK Garage and House music, set at a tempo of 128 BPM. The track begins with a mellow melody that progresses up and down in the E-flat minor scale. On top of this melody, we layered a groovy UK Garage loop, establishing the mood and setting the tone of the composition.

To gradually introduce rhythm to the composition, Jeong-In layered various drum patterns, adding hi-hats, claps, and bass drums one by one. On top of Jeong-In’s drums, we introduced another classic UK Garage drum loop, which completed the rhythmic structure of the composition.

Furthermore, we incorporated a crisp bass sound, which gave the overall composition a euphoric vibe. After introducing this element, we abruptly cut off the drums to create a dramatic transition. At this point, we added a new melodic layer, changing the atmosphere and breaking up the repetitiveness of the track. Over this new layer, we reintroduced the previously used elements but in a different order and context, giving the composition a fresh perspective.

Additionally, we used a riser to smoothly transition into our drum loop and incorporated a sea wave sound effect to make the sound more dynamic. We ended the composition with a different variation of our base melody, using the jux rev function.
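
jux applies a transformation to the right channel only, so jux rev plays a pattern forwards in the left ear and reversed in the right (a generic example, not our actual melody):

d1 $ jux rev $ n "0 3 5 7" # s "superpiano" # room 0.3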

Check out the final code for music.

Final Project Documentation: Jun Ooi, Raya Tabassum, Maya Lee, Juan Manuel Rodriguez Zuluaga

Video Link: https://drive.google.com/file/d/15Rmvtz3kL-ofHJQ6K3GiCsjlAorl1_80/view?usp=drive_link

An important element of the practice of Live Coding is how it challenges our current perspective on and use of technology. We’ve talked about it through the lens of notation, liveliness(es), and temporality, among others. Our group, the iCoders, wanted to explore this with our final project. We discussed how dependent we are on our computers, for many of us Macs. What is our input, and what systems are there to receive it? What do four people editing a Google document simultaneously indicate about the liveliness of our computer? With this in mind, we decided to construct a remix of the sounds in Apple computers. These are some of the sounds we hear most from day to day, and we thought it would be fun to take them out of context, break their patterns, and dance to them. Perhaps a juxtaposition between the academic and the non-academic, the technical and the artsy. We wanted to make it an EDM remix because this is what we usually hear at parties, and we believed the style would work really well. We began creating and encountered important inspirations throughout the process.

During one of our early meetings, we had the vast library of Apple sounds but were struggling a bit with pulling something together. We decided to look for someone who had done something similar to our idea and found this video by Leslie Way, which helped us A LOT.

  • Mac Remix: https://www.youtube.com/watch?v=6CPDTPH-65o

Compositional Structure: From the very beginning we wanted our song to be “techno.” Nevertheless, once we found Leslie Way’s remix, we thought the vibe it goes for is very fitting for Apple’s sounds. After testing and playing around with sounds a lot, we settled on the idea of a slow, “cute” beginning using only (or mostly) Apple sounds. Here, the computer would slowly get overwhelmed by all our IM assignments and open tabs. The computer would then crash, and we would introduce a “techno” section. Then we’d loosely emulate the songs we’d been listening to. After many, many iterations, we reached a structure like this:

The song begins slow, grows, there is a shutdown, it quickly grows again and there is a big drop. Then the elements slowly fade out, and we turn off the computer because we “are done with the semester.”

Sound: The first thing we did once we chose this idea was find a library of Apple sounds. We found many of them on a webpage and added those we considered necessary from YouTube. We used these, along with the Dirt-Samples (mostly for drums), to build our performance. We used some of the songs linked above to mirror the beats and instruments, but also did a lot of experimentation. Here is the code for our TidalCycles sketch:

hush

boot
one
two
three
four

do  -- 0.5, 1 to 64 64
  d1 $ fast 64 $ s "apple2:11 apple2:10" -- voice note sound
  d2 $ fast 64 $ s "apple2:0*4" # begin 0.2 # end 0.9 # krush 4 # crush 12 # room 0.2 # sz 0.2 # speed 0.9 # gain 1.1
  once $ ccv "6" # ccn "3" # s "midi"
  d16 $ ccv "10*127" # ccn "2" # s "midi";

shutdown
reboot
talk
drumss
once $ s "apple3" # gain 2
macintosh
techno_drums
d11 $ silence -- silence mackintosh
buildup_2
drop_2
queOnda
d11 $ silence -- mackin
d11 $ fast 2 $  striate 16 $ s "apple3*1"  # gain 1.3 -- Striate
back1
back2
back3
back4


panic
hush
-- manually trigger mackin & tosh to spice up sound
once $ s "apple3*1" # begin 0.32 # end 0.4 # krush 3 # gain 1.7 -- mackin
once $ s "apple3*1" # begin 0.4 # end 0.48 # krush 3 # gain 1.7 -- tosh


-- d14 $ s "apple3*1" # legato 1 # begin 0.33 # end 0.5 # gain 2
-- once $ s "apple3*1" # room 1 # gain 2


hush

boot = do{
  once $ s "apple:4";
  once $ ccv "0" # ccn "3" # s "midi";
}
one = do {
  d1 $ slow 2 $ s "apple2:11 apple2:10"; -- voice note sound
  d16 $ slow 2 $ ccv "<30 60> <45 75 15>" # ccn "2" # s "midi";
  once $  slow 1 $ ccv "1" # ccn "3" # s "midi";
}
two = do {
  d3 $ qtrigger $ filterWhen (>=0) $ s "apple2:9 {apple2:13 apple2:13} apple2:0 apple2:3" # gain 1.5 # hpf 4000 # krush 4;
  xfadeIn 4 2 $ slow 2 $ qtrigger $ filterWhen (>=0) $ s "apple2:7 apple2:8 <~ {apple2:7 apple2:7}> apple2:7" # gain 0.8 # krush 5 # lpf 3000;
  d16 $ ccv "15 {40 70} 35 5" # ccn "2" # s "midi";
  once $ ccv "2" # ccn "3" # s "midi";
}
three = do {
  xfadeIn 2 2 $ qtrigger $ filterWhen (>=0) $ s "apple2:0*4" # begin 0.2 # end 0.9 # krush 4 # crush 12 # room 0.2 # sz 0.2 # speed 0.9 # gain 1.1;
  xfadeIn 12 2 $ qtrigger $ filterWhen (>=2) $ slow 2 $ s "apple:7" <| note (arp "up" "f4'maj7 ~ g4'maj7 ~") # gain 0.8 # room 0.3;
  xfadeIn 6 2 $ qtrigger $ filterWhen (>=3) $ s "apple2:11 ~ <apple2:10 {apple2:10 apple2:10}> ~" # krush 3 # gain 0.9 # lpf 2500;
  d16 $ ccv "30 ~ <15 {15 45}> ~" # ccn "2" # s "midi";
  once $ ccv "3" # ccn "3" # s "midi";
}
four = do {
  -- d6 $ s "bd:4*4";
  d5 $ qtrigger $ filterWhen (>=0) $ s "apple2:2 ~ <apple2:2 {apple2:2 apple2:2}> ~" # krush 16 # hpf 2000 # gain 1.1;
  xfadeIn 11 2 $ qtrigger $ filterWhen (>=1) $ slow 2 $ "apple:4 apple:8 apple:9 apple:8" # gain 0.9;
  d16 $ qtrigger $ filterWhen (>=0) $ slow 2 $ ccv "10 20 30 40 ~ ~ ~ ~ 60 70 80 90 ~ ~ ~ ~" # ccn "2" # s "midi";
  once $ ccv "4" # ccn "3" # s "midi";
}
buildup = do {
  d11 $ silence;
  once $ ccv "5" # ccn "3" # s "midi";
  d1 $ qtrigger $ filterWhen (>=0) $ seqP [
    (0, 2, s "apple:4*1" # cut 1),
    (2, 3, s "apple:4*2" # cut 1),
    (3, 4, s "apple:4*4" # cut 1),
    (4, 5, s "apple:4*8" # cut 1),
    (5, 6, s "apple:4*16" # cut 1)
  ] # room 0.3 # speed (slow 6 (range 1 2 saw)) # gain (slow 6 (range 0.9 1.3 saw));
  d6 $ qtrigger $ filterWhen (>=0) $ seqP [
    (0, 2, s "808sd {808lt 808lt} 808ht 808lt"),
    (2,3, fast 2 $ s "808sd {808lt 808lt} 808ht 808lt"),
    (3,4, fast 3 $ s "808sd {808lt 808lt} 808ht 808lt"),
    (4,6, fast 4 $ s "808sd {808lt 808lt} 808ht 808lt")
  ] # gain 1.4 # speed (slow 6 (range 1 2 saw));
  d12 $ qtrigger $ filterWhen (>=0) $ seqP [
      (0, 1, slow 2 $ s "apple:7" <| note (arp "up" "f4'maj7 ~ g4'maj7 ~")),
      (1, 2, slow 2 $ s  "apple:7*2" <| note (arp "up" "f4'maj7 c4'maj7 g4'maj7 c4'maj7")),
      (2, 3, fast 1 $ "apple:7*4" <| note (arp "up" "f4'maj7 c4'maj7 g4'maj7 c4'maj7")),
      (3, 4, fast 1 $ s "apple:7*4" <| note (arp "up" "f4'maj7 c4'maj7 g4'maj7 c4'maj7")),
      (4, 6, fast 1 $ s "apple:7*4" <| note (arp "up" "f4'maj9 c4'maj9 g4'maj9 c4'maj9"))
    ] # cut 1 # room 0.3 # gain (slow 6 (range 0.9 1.3 saw));
  d16 $ qtrigger $ filterWhen (>=0) $ seqP [
      (0, 2, ccv "20"),
      (2, 3, ccv "50 80" ),
      (3, 4, ccv "40 60 80 10" ),
      (4, 5, ccv "20 40 60 80 10 30 50 70" ),
      (5, 6, ccv "20 40 60 80 10 30 50 70 5 25 45 65 15 35 55 75" )
    ] # ccn "2" # s "midi";
}
shutdown = do {
  once $ s "print:10" # speed 0.9 # gain 1.2;
  once $ ccv "7" # ccn "3" # s "midi";
  d1 $ silence;
  d2 $ qtrigger $ filterWhen (>=1) $ slow 4 $ "apple2:0*4" # begin 0.2 # end 0.9 # krush 4 # crush 12 # room 0.2 # sz 0.2 # speed 0.9 # gain 1.1;
  d3 $ silence;
  d4 $ silence;
  d5 $ silence;
  d6 $ silence;
  d7 $ silence;
  d8 $ silence;
  d9 $ silence;
  d10 $ silence;
  d11 $ silence;
  d12 $ silence;
  d13 $ silence;
  d14 $ silence;
  d15 $ silence;
}
reboot = do {
  once $ s "apple:4" # room 1.4 # krush 2 # speed 0.9;
  once $ ccv "0" # ccn "3" # s "midi";
}
talk = do {
  once $ s "apple3:1" # begin 0.04 # gain 1.5;
}
drumss = do {
  d12 $ silence;
  d13 $ silence;
  d5 $ silence;
  d6 $ fast 2 $ s "808sd {808lt 808lt} 808ht 808lt" # gain 1.2;
  d8 $ s "apple2:3 {apple2:3 apple2:3} apple2:3!6" # gain (range 1.1 1.3 rand) # krush 4 # begin 0.1 # end 0.6 # lpf 2000;
  d7 $ s "apple2:9 {apple2:13 apple2:13} apple2:0 apple2:3" # gain 1.3 # lpf 2500 # hpf 1500 # krush 3;
  d9 $ s "feel:5 ~ <feel:5 {feel:5 feel:5}> ~" # krush 3 # gain 0.8;
  d10 $ qtrigger $ filterWhen (>=0) $ degradeBy 0.1 $ s "bd:4*4" # gain 1.5 # krush 4;
  d11 $ qtrigger $ filterWhen (>=0) $ s "hh*8";
  xfadeIn 14 2 $ "jvbass ~ <{jvbass jvbass} {jvbass jvbass jvbass}> jvbass" # gain (range 1 1.2 rand) # krush 4;
  xfadeIn 15 1 $ "bassdm ~ <{bassdm bassdm} {bassdm bassdm bassdm}> bassdm" # gain (range 1 1.2 rand) # krush 4 # delay 0.2 # room 0.3;
  d10 $ ccv "1 0 0 0 <{1 0 1 0} {1 0 1 0 1 0}> 1 0" # ccn "4" # s "midi";  
  once $ ccv "8" # ccn "3" # s "midi";
}
dancyy = do {
  d1 $ s "techno:4*4" # gain 1.2;
  d2 $ degradeBy 0.1 $ fast 16 $ s "apple2:13" # note "<{c3 d4 e5 f2}{g3 a4 b5 c2}{d3 e4 f5 g2}{a3 b4 c5 d2}{e3 f4 g5 e2}{f3 f4 f5 f2}{a3 a4 a5 a2}{b3 b4 b5 b2}>" # gain 1.2;
}
macintosh = do {
  d11 $ s "apple3*1" # legato 1 # begin 0.33 # end 0.5 # gain 2;
  once $ s "apple:4";
  once $ ccv "7" # ccn "3" # s "midi";
}
techno_drums = do {
  once $ ccv "10" # ccn "3" # s "midi";
  d14 $ ccv "1 0 ~ <{1 0 1 0} {1 0 1 0 1 0}> 1 0" # ccn "2" # s "midi";
  d6 $ s  "techno*4" # gain 1.5;
  d7 $ s " ~ hh:3 ~ hh:3 ~ hh:3 ~ hh:3" # gain 1.5;
  d8 $ fast 1 $ s "{~ apple2:7}{~ hh}{~ ~ hh hh}{ ~ hh}" # gain 1.3;
  d9 $ fast 1 $ s "{techno:1 ~ ~ ~}{techno:1 ~ ~ ~}{techno:1 techno:3 ~ techno:1}{~ techno:4 techno:4 ~} " # gain 1.4;
  d4 $ "jvbass ~ <{jvbass jvbass} {jvbass jvbass jvbass}> jvbass" # gain (range 1 1.2 rand) # krush 4;
  d15 $ "bassdm ~ <{bassdm bassdm} {bassdm bassdm bassdm}> bassdm" # gain (range 1 1.2 rand) # krush 4 # delay 0.2 # room 0.3;
}
buildup_2 = do {
  d7 $ qtrigger $ filterWhen (>=0) $ seqP [
     (0, 11, s " ~ hh:3 ~ hh:3 ~ hh:3 ~ hh:3" # gain (slow 11 (range 1.5 1.2 isaw))),
     (11, 12, silence)
  ];
  d8 $ qtrigger $ filterWhen (>=0) $ seqP [
     (0, 11, s "{~ apple2:7}{~ hh}{~ ~ hh hh}{ ~ hh}" # gain (slow 11 (range 1.3 1 isaw))),
     (11, 12, silence)
  ];
  d9 $ qtrigger $ filterWhen (>=0) $ seqP [
     (0, 11, s "{techno:1 ~ ~ ~}{techno:1 ~ ~ ~}{techno:1 techno:3 ~ techno:1}{~ techno:4 techno:4 ~}" # gain (slow 11 (range 1.5 1.2 isaw))),
     (11, 12, silence)
  ];
  d4 $ qtrigger $ filterWhen (>=0) $ seqP [
     (0, 11, s "jvbass ~ <{jvbass jvbass} {jvbass jvbass jvbass}> jvbass" # gain (range 1 1.2 rand) # krush 4),
     (11, 12, silence)
  ];
  d13 $ qtrigger $ filterWhen (>=0) $ seqP [
     (0, 11, s "bassdm ~ <{bassdm bassdm} {bassdm bassdm bassdm}> bassdm" # gain (range 1 1.2 rand) # krush 4 # delay 0.2 # room 0.3),
     (11, 12, silence)
  ];
  d11 $ qtrigger $ filterWhen (>=0) $ seqP [
     (0, 1, s "apple3" # cut 1 # begin 0.3 # end 0.5 # gain 1.7),
     (1, 2, silence),
     (2, 3, s "apple3" # cut 1 # begin 0.3 # end 0.5 # gain 1.9),
     (3, 4, silence),
     (4, 5, s "apple3" # cut 1 # begin 0.3 # end 0.5 # gain 2.1),
     (5, 6, silence),
     (6, 7, s "apple3" # cut 1 # begin 0.3 # end 0.5 # gain 2.1),
     (7, 8, silence),
     (11, 12, s "apple3" # cut 1 # begin 0.3 # end 0.5 # gain 2.3)
  ];
  d1 $ qtrigger $ filterWhen (>=0) $ seqP [
      (0, 5, s "apple:4*1" # cut 1),
      (5, 7, s "apple:4*2" # cut 1),
      (7, 8, s "apple:4*4" # cut 1),
      (8, 9, s "apple:4*8" # cut 1),
      (9, 10, s "apple:4*16" # cut 1)
    ] # room 0.3 # gain (slow 10 (range 0.9 1.3 saw));
    d2 $ qtrigger $ filterWhen (>=0) $ seqP [
      (0, 5, s "sn*1" # cut 1),
      (5, 7, s "sn*2" # cut 1),
      (7, 8, s "sn*4" # cut 1),
      (8, 9, s "sn*8" # cut 1),
      (9, 11, s "sn*16" # cut 1)
    ] # room 0.3  # gain (slow 11 (range 0.9 1.3 saw)) # speed (slow 11 (range 1 2 saw));
   d16 $ qtrigger $ filterWhen (>=0) $ seqP [
      (0, 5, ccv "10"),
      (5, 7, ccv "5 10" ),
      (7, 8, ccv "5 10 15 20" ),
      (8, 9, ccv "5 10 15 20 25 30 35 40" ),
      (9, 10, ccv "40 45 50 55 60 65 70 75 80 85 90 95 100 110 120 127" )
    ] # ccn "6" # s "midi";
    once $ ccv "11" # ccn "3" # s "midi";
}
queOnda = do {
  d11 $ fast 4 $ s "apple3" # cut 1 # begin 0.3 # end 0.54 # gain 2;
  d14 $ ccv "1 0 ~ <{1 0 1 0} {1 0 1 0 1 0}> 1 0" # ccn "2" # s "midi"
} -- que onda!
drop_2 = do {
  d5 $ qtrigger $ filterWhen (>=0) $ s "apple2:2 ~ <apple2:2 {apple2:2 apple2:2}> ~" # krush 8 # gain 1.1;
  d7 $ qtrigger $ filterWhen (>=0) $ s "apple2:9 {apple2:13 apple2:13} apple2:0 apple2:3" # gain 1.6 # lpf 3500 # hpf 1000 # krush 3;
  d8 $ qtrigger $ filterWhen (>=0) $ s "apple2:3!6 {apple2:3 apple2:3} apple2:3" # gain (range 0.8 1.1 rand) # krush 16 # begin 0.1 # end 0.6 # lpf 400;
  d10 $ qtrigger $ filterWhen (>=0) $ degradeBy 0.1 $ s "apple2:0*8" # begin 0.2 # end 0.9 # krush 4 # room 0.2 # sz 0.2 # gain 1.3;
  d12 $ fast 1 $ s "{~ hh} {~ hh} {~ ~ hh hh} {~ hh}" # gain 1.3;
  d13 $ fast 1 $ s "{techno:1 ~ ~ ~}{techno:1 ~ ~ ~}{techno:1 techno:3 ~ techno:1}{~ techno:4 techno:4 ~} " # gain 1.4;
  d4 $ s "realclaps:1 realclaps:3" # krush 8 # lpf 4000 # gain 1;
  d15 $ qtrigger $ filterWhen (>=0) $ s "apple:0" <| note ("c4'maj ~ c4'maj7 ~") # gain 1.1 # room 0.3 # lpf 400 # hpf 100 # delay 1;
  d2 $ fast 4 $ striate "<25 5 50 15>" $ s "apple:4" # gain 1.3;
  d14 $ fast 4 $ ccv "1 0 1 0" # ccn "2" # s "midi";
  once $ ccv "12" # ccn "3" # s "midi";
  d10 $ ccv "1 0 1 0 1 0 1 0" # ccn "4" # s "midi";  
} -- Striate
back1 = do {
  d3 $ silence;
  d15 $ silence;
  d11 $ silence;
  d13 $ silence;
  d1 $ s "apple2:11 apple2:10"; -- voice note sound
  d16 $ slow 2 $ ccv "<30 60> <45 75 15>" # ccn "2" # s "midi";
}
back2 = do {
  d1 $ silence;
  d4 $ silence;
  d6 $ silence;
  d12 $ silence;
  d16 $ ccv "15 {40 70} 35 5" # ccn "2" # s "midi";
}
back3 = do {
  xfadeIn 2 3 $ silence;
  d16 $ ccv "30 ~ <15 {15 45}> ~" # ccn "2" # s "midi";
}
back4 = do{
  once $ ccv "0" # ccn "3" # s "midi";
  d11 $ qtrigger $ filterWhen (>=0) $ seqP [
     (1, 2, s "apple3:1" # room 1 # gain 2),
     (8, 9, s "apple:4" # room 3 # size 1)
  ];
  xfadeIn 7 1 $ s "apple2:9 {apple2:13 apple2:13} apple2:0 apple2:3" # gain 1.6 # lpf 3500 # hpf 1000 # krush 3 # djf 1;
  xfadeIn 2 1 $ silence;
  xfadeIn 10 1 $ silence;
  d5 $ silence;
  d8 $ silence;
  d9 $ silence;
}

d7 $ s "apple2:9 {apple2:13 apple2:13} apple2:0 apple2:3" # gain 1.6 # lpf 3500 # hpf 1000 # krush 3

d7 $ fadeOut 10 $ s "apple2:9 {apple2:13 apple2:13} apple2:0 apple2:3" # gain 1.6

drop_2
back1
back2
back3
back4

panic
 -- Macintosh





queOnda
panic

once $ s "apple3:1" # begin 0.04 # gain 1.2


d1 $ slow 2 $ "apple:4 {~  apple:7 apple:7 apple:8} {apple:9 apple:9} {apple:4  ~ ~ apple2:9}" # cut 1 # note "c5 g4 f5 b5"
d12 $ fast 1 $ s "{~  hh ~ ~}{hh ~}{~ hh ~ hh}{hh hh}" # gain 1.3
d15 $ s "hh:7*4"
d16 $ degradeBy 0.2 $ s "hh:3*8" # gain 1.4

d1 $ silence

d1 $ slow 2 $ "apple:4 {~  apple:7 apple:7 apple:8} {apple:9 apple:9} {apple:4  ~ ~ apple2:9}" # cut 1 # note "[c5 g4 f5 b5]"

d1 $ slow 2 $ "apple:4 {~  apple:7 apple:7 apple:8} {apple:9 apple:9} {apple:4  ~ ~ apple2:9}" # cut 1 # note "[c5 e5 a5 c5]"

d2 $ s  "techno*4"
d12 $ fast 1 $ s "{~ hh}{~ hh}{~ ~ hh hh}{ ~ hh}" # gain 1.3

d1 $ slow 2 $ "apple:4 {~  apple:7 apple:7 apple:8} {apple:9 apple:9} {apple:4  ~ ~ apple2:9}" # cut 1 # note "c5 g4 f5 b5" # speed 2

hush

d12 $ s "apple:4*4" # cut 1
d12 $ silence


techno_drums

drop_2 = do
  d12 $ fast 1 $ s "{~ hh}{~ hh}{~ ~ hh hh}{ ~ hh}" # gain 1.3
  d13 $ fast 1 $ s "{techno:1 ~ ~ ~}{techno:1 ~ ~ ~}{techno:1 techno:3 ~ techno:1}{~ techno:4 techno:4 ~} " # gain 1.4
  d2 $ fast 4 $ striate "<7 30>" $ s "apple:4*1" # gain 1.3 -- Striate

drop_2
hush

-- MIDI
-- bassdm ~ <{bassdm bassdm} {bassdm bassdm bassdm}> bassdm
d14 $ ccv "1 0 ~ <{1 0 1 0} {1 0 1 0 1 0}> 1 0" # ccn "2" # s "midi"
d15 $ ccv "120 30 110 40" # ccn "1" # s "midi"
d14 $ fast 2 $ ccv "0 1 0 1" # ccn "2" # s "midi"
d13 $ fast 1 $ ccv "0 10 127 13" # ccn "6" # s "midi"
d16 $ fast 2 $ ccv "127 {30 70} 60 110" # ccn "0" # s "midi"

--d16 $ fast 2 $ ccv "0 0 0 0" # ccn "3" # s "midi"

-- test midi channel 4
d1 $ s " ~ ~ bd <~ bd>"
d16 $ ccv "0 1" # ccn "4" # s "midi"


-- choose timestamp in video example

-- https://www.flok.livecoding.nyuadim.com:3000/s/frequent-tomato-frog-61217bfc

--d8 $ s "[[808bd:1] feel:4, <feel:1*16 [feel:1!7 [feel:1*6]]>]" # room 0.4 # krush 15 # speed (slow "<2 3>" (range 4 0.5 saw))

Visuals: It was very important to us that our visuals matched the clean aesthetic of Apple and the cute, dancy aesthetic of our concept. We worked very hard on making sure that our elements aligned well with each other. In the end, we have three main visuals in the piece:

  1. A video of tabs being opened, referencing multiple IM classes
  2. The Apple logo on a white screen, with glitch lines during the shutdown
  3. An imitation of the iconic purple mountain wallpaper

We modify all of them accordingly so the composition feels cohesive. To build them, we used p5.js (the latter two) and Hydra. Here is the code we built:

function logo() {
  let p5 = new P5()
  s1.init({src: p5.canvas})
  src(s1).out(o0)
  p5.hide();
  p5.background(255, 255, 255);
  let appleLogo = p5.loadImage('https://i.imgur.com/UqV7ayC.png');
  p5.draw = ()=>{
    p5.image(appleLogo, (width - 400) / 2, (height - 500) / 2, 400, 500);
  }
}
function visualsOne() {
  src(o1).out()
  s0.initVideo('https://upload.wikimedia.org/wikipedia/commons/b/bb/Screen_record_2024-04-30_at_5.54.36_PM.webm')
  src(s0).out(o0)
  render(o0)
}
function visualsTwo() {
  src(s0)
  .hue(() => 0.2 * time)
  .out(o0)
}
function visualsThree() {
  src(s0)
  .hue(() => 0.2 * time + cc[2])  
  .rotate(0.2)
  .modulateRotate(osc(3), 0.1)
  .out(o0)
}
function visualsFour() {
   src(s0)
  .invert(()=>cc[3])
  .rotate(0.2)
  .modulateRotate(osc(3), 0.1)
  .color(0.8, 0.2, 0.5)
  .scale(() => Math.sin(time) * 0.1 + 1)
  .out(o0)
}
function visualsFive() {
   src(s0)
    .rotate(0.2)
    .modulateRotate(osc(3), 0.1)
    .color(0.8, 0.2, 0.5)
    .scale(()=>cc[1]*3)
    .out(o0)
}
function oops() {
   src(s0)
    .rotate(0.2)
    .modulateRotate(osc(3), 0.1)
    .color(0.8, 0.2, 0.5)
     .scale(()=>cc[1]*0.3)
     .scrollY(3,()=>cc[0]*0.03)
    .out(o0)
}
function shutdown() {
  osc(4,0.4)
          .thresh(0.9,0)
          .modulate(src(s2)
            .sub(gradient()),1)
            .out(o1)
      src(o0)
        .saturate(1.1)
        .modulate(osc(6,0,1.5)
          .brightness(-0.5)
          .modulate(
              noise(cc[1]*5)
              .sub(gradient()),1),0.01)
        .layer(src(s2)
          .mask(o1))
          .scale(1.01)
          .out(o0)
}
function glitchLogo() {
  let p5 = new P5()
  s1.init({src: p5.canvas})
  src(s1).out()
  p5.hide();
  p5.background(255, 255, 255, 120);
  p5.strokeWeight(0);
  p5.stroke(0);
  let prevCC = -1
  let appleLogo = p5.loadImage('https://i.imgur.com/UqV7ayC.png');
  p5.draw = () => {
    p5.image(appleLogo, (width - 400) / 2, (height - 500) / 2, 400, 500);
    let x = p5.random(width);
    let length = p5.random(100, 500);
    let depth = p5.random(1,3);
    let y = p5.random(height);
    p5.fill(0);
    let ccActual = (cc[4] * 128) - 1;
    if (prevCC !== ccActual) { 
      prevCC = ccActual;
    } else { // do nothing if cc value is the same
      return
    }
    if (ccActual > 0) { // only draw when ccActual > 0
      p5.rect(x, y, length, depth); 
    }
  }
}
//function macintosh() {
  // osc(2).out()
//}
function flashlight() {
  src(o1)
    .mult(osc(2, -3, 2)) //blend is better or add
    //.add(noise(2))//
    //.sub(noise([0, 2]))
    .out(o2)
  src(o2).out(o0)
}
function wallpaper() {
  s2.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/appleWallpaper-scaled.jpg");
  let p5 = new P5();
  s1.init({src: p5.canvas});
  src(s1).out(o1);
  //src(o1).out(o0);
  src(s2).layer(src(s1)).out();
  p5.hide();
  p5.noStroke();
  p5.background(255, 255, 255, 0); //transparent background
  p5.createCanvas(p5.windowWidth, p5.windowHeight);
  let prevCC = -1;
  let colors = [
    p5.color(255, 198, 255, 135),
    p5.color(233, 158, 255, 135),
    p5.color(188, 95, 211, 135),
    p5.color(142, 45, 226, 135),
    p5.color(74, 20, 140, 125)
  ];
  p5.draw = () => {
    let ccActual = (cc[4] * 128) - 1;
    if (prevCC !== ccActual) { 
      prevCC = ccActual;
    } else { // do nothing if cc value is the same
      return
    }
    if (ccActual <= 0) { // only draw when ccActual > 0
      return;
    }
    p5.clear(); // Clear the canvas each time we draw
    // Draw the right waves
    for (let i = 0; i < colors.length; i++) {
      p5.fill(colors[i]);
      p5.noStroke();
      // Define the peak points manually
      let peaks = [
        {x: width * 0.575, y: height * 0.9 + p5.random((i - 1) * (height * 0.14), i * (height * 0.14))},
        {x: width * 0.6125, y: height * 0.74 + p5.random((i - 1) * (height * 0.14), i * (height * 0.14))},
        {x: width * 0.675, y: height * 0.54 + p5.random((i - 1) * (height * 0.14), i * (height * 0.14))},
        {x: width * 0.75, y: height * 0.7 + p5.random((i - 1) * (height * 0.14), i * (height * 0.14))},
        {x: width * 0.8125, y: height * 0.4 + p5.random((i - 1) * (height * 0.14), i * (height * 0.14))},
        {x: width * 0.8625, y: height * 0.5 + p5.random((i - 1) * (height * 0.14), i * (height * 0.14))},
        {x: width * 0.9, y: height * 0.2 + p5.random((i - 1) * (height * 0.14), i * (height * 0.14))},
        {x: width * 0.95, y: height * 0 + p5.random((i - 1) * (height * 0.14), i * (height * 0.14))},
        {x: width, y: height * 0 + p5.random((i - 1) * (height * 0.14), i * (height * 0.14))},
        {x: width, y: height * 0.18 + p5.random((i - 1) * (height * 0.14), i * (height * 0.14))}
      ];
      // Draw the shape using curveVertex for smooth curves
      p5.beginShape();
      p5.vertex(width * 0.55, height);
      // Use the first and last points as control points for a smoother curve at the start and end
      p5.curveVertex(peaks[0].x, peaks[0].y);
      // Draw the curves through the peaks
      for (let peak of peaks) {
        p5.curveVertex(peak.x, peak.y);
      }
      // Use the last point again for a smooth ending curve
      p5.curveVertex(peaks[peaks.length - 1].x, peaks[peaks.length - 1].y);
      p5.vertex(width * 1.35, height + 500); // End at bottom right
      p5.endShape(p5.CLOSE);
    }
    // Draw the left waves
    for (let i = 0; i < colors.length; i++) {
      p5.fill(colors[i]);
      p5.noStroke();
      // Define the peak points relative to the canvas size
      let peaks = [
        {x: 0, y: height * 0.1 + p5.random((i - 1) * (height * 0.14), i * (height * 0.14))},
        {x: width * 0.1 + p5.random(width * 0.025), y: height * 0.18 + p5.random((i - 1) * (height * 0.14), i * (height * 0.14))},
        {x: width * 0.1875 + p5.random(width * 0.025), y: height * 0.36 + p5.random((i - 1) * (height * 0.14), i * (height * 0.14))},
        {x: width * 0.3125 + p5.random(width * 0.025), y: height * 0.26 + p5.random((i - 1) * (height * 0.14), i * (height * 0.14))},
        {x: width * 0.5 + p5.random(width * 0.025), y: height * 0.5 + p5.random((i - 1) * (height * 0.12), i * (height * 0.12))},
        {x: width * 0.75, y: height * 1.2}
      ];
      // Draw the shape using curveVertex for smooth curves
      p5.beginShape();
      p5.vertex(0, height); // Start at bottom left
      // Use the first and last points as control points for a smoother curve at the start and end
      p5.curveVertex(peaks[0].x, peaks[0].y);
      // Draw the curves through the peaks
      for (let peak of peaks) {
        p5.curveVertex(peak.x, peak.y);
      }
      // Use the last point again for a smooth ending curve
      p5.curveVertex(peaks[peaks.length - 1].x, peaks[peaks.length - 1].y);
      p5.vertex(width * 0.75, height * 2); // End at bottom right
      p5.endShape(p5.CLOSE);
    }
  };
}
function buildup() {
  src(s2).layer(src(s1)).invert(()=>cc[6]).out();
}
function flashlight() {
  src(o1)
    .mult(osc(2, -3, 2)) //blend is better or add
    //.add(noise(2))//
    //.sub(noise([0, 2]))
    .out()
}
var visuals = [
  () => logo(),
  () => visualsOne(),
  () => visualsTwo(), // 2 
  () => visualsThree(),
  () => visualsFour(), // 4
  () => visualsFive(),
  () => oops(),  // 6 
  () => shutdown(),
  () => glitchLogo(),  // 8
  () => macintosh(),
  () => wallpaper(), // 10
  () => buildup(),
  () => flashlight() // 12
]
src(s0)
  .layer(src(s1))
  .out()
var whichVisual = -1
update = () => {
  ccActual = cc[3] * 128 - 1
  if (whichVisual != ccActual) {
    if (ccActual >= 0) {
      whichVisual = ccActual;
      visuals[whichVisual]();
    }
  }
}
render(o0)

// cc[2] controls colors/invert
// cc[3] controls which visual to trigger
// cc[4] controls when to trigger p5.js draw function

hush()

let p5 = new P5()
  s1.init({src: p5.canvas})
  src(s1).out()
  p5.hide();
  p5.background(255, 255, 255, 120);
  p5.strokeWeight(0);
  p5.stroke(0);
  let appleLogo = p5.loadImage('https://i.imgur.com/UqV7ayC.png');
  let prevCC = -1;     // last CC value seen, so we only redraw on change
  let midiCCValue = 0; // current CC value from the MIDI handler
  function setupMidi() {
    // Open Web MIDI Access
    if (navigator.requestMIDIAccess) {
      navigator.requestMIDIAccess().then(onMIDISuccess, onMIDIFailure);
    } else {
      console.error('Web MIDI API is not supported in this browser.');
    }
    function onMIDISuccess(midiAccess) {
      let inputs = midiAccess.inputs;
      inputs.forEach((input) => {
        input.onmidimessage = handleMIDIMessage;
      });
    }
    function onMIDIFailure() {
      console.error('Could not access your MIDI devices.');
    }
    // Handle incoming MIDI messages
    function handleMIDIMessage(message) {
      const [status, ccNumber, ccValue] = message.data;
      console.log(message.data)
      if (status === 176 && ccNumber === 4) { // MIDI CC Channel 4
        prevCC = midiCCValue;
        midiCCValue = ccValue;
        if (midiCCValue === 1) {
          prevCC = midiCCValue;
          p5.redraw();
        }
      }
    }
  }
  p5.draw = () => {
    p5.image(appleLogo, (width - 400) / 2, (height - 500) / 2, 400, 500);
    let x = p5.random(width);
    let length = p5.random(100, 500);
    let depth = p5.random(1,3);
    let y = p5.random(height);
    p5.fill(0);
    p5.rect(x, y, length, depth); // here I'd like to trigger this function via midi 4
  }
p5.noLoop()
setupMidi()

hush()

Contribution: Our team met regularly and had constant communication through WhatsApp. Initially, Maya and Raya focused on building the visuals while Jun and Juanma focused on building the audio. Nevertheless, progress happened mostly during meetings where we would all come up with ideas and provide immediate feedback. For example, it was Juanma’s idea to recreate the Apple wallpaper.

Once we had a draft, the roles blurred a lot. Jun worked with Maya on incorporating MIDI values into the p5.js sketches, and with Juanma on organizing the visuals into an array so they could be triggered through Tidal functions. Raya worked on the video visuals. Juanma focused on the latter part of the sound and on writing the Tidal functions, while Jun focused on the earlier part and on cleaning up the code. Overall, we are very proud of our debut as a Live Coding band! We worked very well together, and feel that we constructed a product where our own voices can be heard. A product that is also fun. Hopefully you all dance! 🕺🏼💃🏼

First of all, I found some interpretations of Live Coding interesting. “Live Coding is shaped by different genealogies of ideas and practices, both philosophical and technological,” so one needs a very deep understanding of liveness. At the same time, the article mentions that liveness refers not only to human performance but also to nonhuman “machine liveness,” which I think is one reason people need such a deep understanding of liveness: they need a deep understanding of the “nonhuman” as well.

Secondly, the author states that Live Coding is not about writing code in advance. However, at our current level, it is almost impossible to be completely on the spot. I remember that during the first group performance, our group did a lot of coding live on stage, which was a big challenge for me. In performing, as the article mentions, you can’t just focus on one note; instead, you have to generate from a higher-order process. Working in a group, I learned a lot from how Bato would write notes very casually, followed by more at random. What surprised me was that just by putting them together, even without much manipulation, they could sound great. So I don’t think the article’s statement that “technique doesn’t matter” fits Live Coding with music that well. I learned Live Coding because I saw a lot of Live Coding performances in New York, and both the art form and the logic behind it appealed to me. I was attracted to the limitations and the technology, to be honest, but was most drawn to the art form itself. My approach is what the article refers to as “composed improvisation, or improvisation with a composed structure.” Live Coding’s liveness is what sets it apart from other forms of code, and it is what makes it most attractive.

This chapter delves into the interesting idea of the liveness of live coding. I find it very interesting that it is a hotly debated topic, yet one that is mostly up to one’s own interpretation. Can a live coding set truly be live with preprogramming? If there isn’t code there to begin with, is it fine to still make music beforehand and use those elements in your performance? Is that truly live coding? I find this a really interesting topic because, going into this class, I sort of assumed that live coding performances were always completely improvised, with people working off one another on the fly. However, the more I thought about it, the more I realized that’s probably not always the case. From my own experience trying to live code, a lot of the time there is preprogrammed material, and if not, material that people messed around with before a performance. This is what I do, anyway. It takes a lot of skill, knowledge of all the sounds and workings of the code, and coordination to make something out of nothing on the spot. When I would do improv solos on the saxophone, it’s not like I was always playing stuff on the fly. A lot of the time I would play along to the track beforehand, analyzing the chord progressions and the key, listening to other sax solos, and workshopping random ideas until I found things that sounded nice or fit the song. That way, when it came to the actual performance, I could incorporate those small pieces into my solo. For me, the same applies to live coding. I think almost everyone has to do this; you are drawing from your own musical influences in any improv performance, because I’m pretty sure there is not a single person who does improv performances and doesn’t listen to music. So I believe there is no issue with preprogramming or having pieces of music that are incorporated into a live set. But if you can do it off the dome like that, kudos to you.

I found the reading interesting in how it shows that the “live” component of live coding is gradually evolving. I think there are many unexplored avenues where live coding can intersect with other forms of media, and I’m excited to explore these ideas throughout the rest of the semester as part of the final project. Through this form of self-expression, I think we can also continue to blur the boundary between person and machine, perform as a singular entity, and see how computing can give way to a wider variety of art forms.