Process

Our process began with a clear vision of the environment we wanted to create: an abstract narrative that subtly tells the story of a group of friends watching TV and embarking on a surreal, psychedelic trip. We didn’t want to portray this explicitly; the goal was to evoke the strange sensations and shifting experiences they go through, using a mix of visual cues and atmospheric design. With the concept in place, we knew the project would need creative, unusual visuals right from the start, paired with immersive psytrance or liquid drum and bass audio to match the tone and energy of the story.

Once we had a solid sense of the visual and sonic direction, we dedicated ourselves to an intense 12-hour live coding jam session (with a couple of runs to the Baqala for snacks), which we streamed on Instagram. The session became a space for spontaneous experimentation and rapid development, where we started shaping the core of the experience. Although we made significant progress during the jam, the following days, especially after Tuesday, revealed lingering technical and timing issues that still needed to be resolved. Fixing these became the focus of our attention as we polished the final piece.

Audio

Aadil and I handled most of the audio. We focused on a few effective voice samples to highlight the parts of the performance we thought needed more attention, drawing on our favourite songs (e.g., “Everything In Its Right Place” by Radiohead) and favourite genres (techno, hardgroove). The buildup layered ambient textures and chopped samples with increasing intensity to simulate anticipation: we stretched legato ambience, gradually intensified techno patterns, and used MIDI CC values to sync modulation effects like low-pass filters and bit-crush.
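As a rough illustration of that CC mapping (a sketch, not our exact patch, and the helper name is ours): a 7-bit MIDI CC value can be scaled into a filter-cutoff range like this.

```javascript
// Hypothetical helper (not from our patch): scale a 7-bit MIDI CC
// value (0-127) into a low-pass cutoff range, the same idea we used
// when syncing Tidal's ccv patterns to filter sweeps.
function ccToCutoff(cc, lowHz, highHz) {
  const t = Math.min(Math.max(cc, 0), 127) / 127; // clamp, normalize to 0..1
  return lowHz + t * (highHz - lowHz);            // linear interpolation
}

console.log(ccToCutoff(0, 200, 400));   // → 200
console.log(ccToCutoff(127, 200, 400)); // → 400
```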

In the midsections, we used heavy 808 kicks, distorted jungle breaks, and glitchy acid lines (like the “303” patterns) to keep up the tension and energy. For the end sections, we wanted to have some big piano synths that brought home the feeling of a comedown. The tonal shift was meant to echo that feeling of emotional release, what it feels like when a trip starts to settle.

Visuals

The goal was to mirror the full arc of a trip, with visuals locked to every change in the track. Mo Seif first sketched a concept for each moment, then built the look layer‑by‑layer, checking each draft against the audio until they matched perfectly. We had seven primary sections, each tied to a distinct musical cue.

1. Intro – “Sofa & Tabs”

We’re slouched on a couch, half‑watching TV. The friends decide to take the journey of their lifetimes; the trip timer starts.

2. Onset – “TV Gets Wavy”

The first tingle hits. The TV image begins to undulate – colors drifting, lines bending. A slow warp effect hints that reality is about to buckle.

3. First Peak – “Nixon + Rectangles”

Audio: vintage Nixon sample followed by drum drop.

Visuals: explosion of rectangle‑shaped, ultra‑psychedelic patterns that sync to each snare hit. The crowd pops; everything feels bigger, faster, weirder.

4. Chiller Section 

A short breather featuring three “curated” GIFs:

Lo Siento, Wilson – pure goofy laughter.

Sassy the Sasquatch – laughter + tripping out.

Pikachu tripping – those paranoid, deep‑thought vibes.

Together they nail the mood‑swings of a trip.

5. Meta Moment – “Pikachu Breaks the 4th Wall”

Pikachu dissolves into a live shot of the same GIF playing on my laptop in the dorm while we’re editing. Filming ourselves finishing the piece made it hilariously meta; syncing it to the beat was a nightmare, but it clicked.

6. Street‑Fighter Segment – “Choose Your Fighter”

Inspiration: I was playing GTA a while back and pictured myself in the game as Trevor. We wanted to recreate that feeling and put ourselves in a video game.

Build: we took 4‑5 photos of each of us, turned them into looping GIFs, and dropped them onto the classic character‑select screen with p5.js.

Plot Twist: Mo’s fighter “dies” (tongue out), then smash‑cut to an Attack on Titan GIF – as if he resurrected.

7. Final Drop & Comedown – “Hard‑Groove + CC Sync”

The last drop pivots to a hard‑groove techno feel. Every strobe and colour hit is driven by MIDI CC values mapped to the track. We fade back to the original couch shot: the three of us staring straight into the lens, coming down – sweaty, wired, grinning.
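For anyone curious how the beat sync worked under the hood, here is a minimal sketch (the helper name `ccToFrame` is ours, not from the performance code): a normalized CC value picks a GIF frame, the idea behind driving p5.js `setFrame()` from Tidal’s MIDI CC stream.

```javascript
// Hypothetical helper: map a normalized CC value (0..1) to a valid
// GIF frame index, so the animation position follows the music.
function ccToFrame(ccNorm, numFrames) {
  const t = Math.min(Math.max(ccNorm, 0), 1);                 // clamp to 0..1
  return Math.min(Math.floor(t * numFrames), numFrames - 1);  // keep index in range
}

// e.g. inside the p5 draw loop (mo being an animated GIF image):
// mo.setFrame(ccToFrame(cc[0], mo.numFrames()));
```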

Here are the GIFs we made for the Street Fighter visuals:

Here is the code we used:
Tidal:

-- Let's watch some TV + mini build up + mini drop (SEC 1: "hey guys why don't we watch some tv")

d1 $ chop 2 $ loopAt 16 $ s "ambience" # legato 3 # gain 1 # lpf (range 200 400 sine)
   
once $ "our:3" # up "-5"
     
d10 $ fast 2 $ "our:4" # up "-6"

d2 $ slow 1 $ s "techno2:2*4" # gain 0.9 # room 0.1 

--cc (eval separately)
d16 $ fast 16 $ ccv "<0 100 80 20 0 100 80 20>" # ccn "2" # s "midi"


d4 $ ghost $ slow 2 $ s "rm*16" # gain 0.75 # crush 2 # lpf 2500 # lpq (range 0.4 0.6 sine)

d5 $ stack [
    n "0 ~ 0 ~ 0 ~ 0 ~" # s "house",
    n "11 ~ 11 ~ 11 ~ 11 ~" # s "808bd" # speed 1 # squiz 0 # nudge 0.01 # release 0.4 # gain 0.3,
    slow 1 $ n "8 ~ 8 8 ~ 8 ~ 8" # s "jungle"
]


d6
  $ linger 1
      $ n "[d3@2 d3 _ d3 _ d3 _ _ c3 _]/1"
     -- $ n "[d3 d3 c3 d3 d3 d3 c3 d3 f3 _ _ f3 _ _ c3]/2"
     -- $ n "[f3 _ _ g3 _ _ g3 _]*2"
  # s "supergong" # gain 1.2 # lpf 100 # lpq 0.5 # attack 0.04 # hold 2 # release 0.1


  d7 $ stack [randslice 8 $ loopAt 8 $ slow 2 $ jux (rev) $ off 0.125 (|+| n "<12 7 5>") $ off 0.0625 (|+| n "<5 3>") $ cat [
  n "0 0 0 0",
  n "5 5 5 5",
  n "4 4 4 4",
  n "1 1 1 1"
  ]] # s "303" # gain 0.9 # legato 1 # cut 2 # krush 2

d10 $ fast 2 $ ccn "0*128" # ccv (range 200 400 $ sine) # s "midi"


--at the end of first visual
d1 silence
d2 silence
d5 silence
d6 silence
d7 silence


-- drugs r enemy (before the drop)
once $ s "sample:2" # gain 1.2

-- THE drums (the drop)
d11 $ stack [fast 2 $ s "[bd*2, hh*4, ~ cp]"] # gain 1.2


--after drums 
d9 $ stack [
    slow 1 $ s "techno2:2*4" # gain 0.9 # room 0.1,
   stack [
      n "0 ~ 0 ~ 0 ~ 0 ~" # s "house",
      n "11 ~ 11 ~ 11 ~ 11 ~" # s "808bd" # speed 1 # squiz 0 # nudge 0.01 # release 0.4 # gain 0.3,
      slow 1 $ n "8 ~ 8 8 ~ 8 ~ 8" # s "jungle"
  ],
     linger 1
        $ n "[d3@2 d3 _ d3 _ d3 _ _ c3 _]/1"
       -- $ n "[d3 d3 c3 d3 d3 d3 c3 d3 f3 _ _ f3 _ _ c3]/2"
       -- $ n "[f3 _ _ g3 _ _ g3 _]*2"
    # s "supertron" # gain 0.8 # lpf 100 # lpq 0.5 # attack 0.04 # hold 2 # release 0.1,
   stack [randslice 8 $ loopAt 8 $ slow 2 $ jux (rev) $ off 0.125 (|+| n "<12 7 5>") $ off 0.0625 (|+| n "<5 3>") $ cat [
    n "0 0 0 0",
    n "5 5 5 5",
    n "4 4 4 4",
    n "1 1 1 1"
    ]] # s "303" # gain 0.9 # legato 1 # cut 2 # krush 2, 
    fast 16 $ ccv "<0 100 80 20 0 100 80 20>" # ccn "2" # s "midi"
] # gain 0

d11 silence
d9 silence 

-- START XFADE WHEN READY FOR GIF MUSIC

d10
$ whenmod 16 4 (|+| 3)
$ jux (rev . (# s "arpy") . chunk 4 (iter 4))
$ off 0.125 (|+| 12)
$ off 0.25 (|+| 7)
$ n "[d1(3,8) f1(3,8) e1(3,8,2) a1(3,8,2)]/2" # s "arpy"
# room 0.5 # size 0.6 # lpf (range 200 8000 $ slow 2 $ sine)
# resonance (range 0.03 0.6 $ slow 2.3 $ sine)
# pan (range 0.1 0.9 $ rand)
# gain 0.6 

-- -->7


d16 $ fast 16 $ ccv "<0 100 80 20 0 100 80 20>" # ccn "2" # s "midi"



-- GIF section: sudden drop from the mini drop, chill background matching music + sample audio for gifs (SEC 2.1)
-- GIF SECTION MUSIC

-- 1) DONNY 

-- 2) WILSONNNNNN
once $ s "wilson"  # gain 1.4

-- 3) PIKAPIKA
once $ s "pikapika" # gain 1.4 

-- background silence WHEN ICE SPICE
d10 silence -- aadil


-- Ice Spice Queen (SEC 2.2)
once $ "our:5" # gain 2.5 --j


-- Start boss music LOUD, reverse drop off glitchy into us fighting (SEC 3)
d1 $ fast 2 $ s "techno2:2*4" # gain 1.2 # room 0.1
   --SELECT UR FIGHTER CC VALUES 
d10 $ fast 4 $ ccn "0*128" # ccv (range 200 400 $ sine) # s "midi"
-- d16 $ slow 1 $ ccn "0*128" # ccv (range 0.9 1.2 $ slow 2 $ rand) # s "midi"
-----------------------------------------------------------

--cc for street fight
--d16 $ fast 16 $ ccv "0 60 0 70" # ccn "0" # s "midi"
-- 

-- WHEN STREET FIGHT MO VS AADIL
d2 $ stack [
  sometimesBy 0.25 (|*| up "<2 5>") $
  sometimesBy 0.2 (|-| up "<2 1>")
  $ jux (rev) $
  n "[a4 b4 c4 a4]*4" # s "superhammond" # cut 4 # distort 0.3 # up "-9"
  # lpf (range 200 7000 $ slow 2 $ sine) # resonance ( range 0.03 0.5 $ slow 3 $ cosine) # octave (choose [4, 5, 6, 3]),
  sometimesBy 0.15 (degradeBy 0.125) $
  s "reverbkick*16" # n (irand(8)) # distort 0 # speed (range 0.9 1.2 $ slow 2 $ rand) # gain 0.9
] # room 0.5 # size 0.5 # pan (range 0.2 0.8 $ slow 2 $ sine) # gain 0.9


d5 $ fast 2 $ (|+| n "12")$ slowcat [ 
n "0 ~ 0 2 5 ~ 4 ~", 
n "2 ~ 0 2 ~ 4 7 ~", 
n "0 ~ 0 2 5 ~ 4 ~", 
n "2 ~ 0 2 ~ 4 7 ~", 
n "12 11 0 2 5 ~ 4 ~", 
n "2 ~ 0 2 ~ 4 7 ~", 
n "0 ~ 0 2 5 ~ 4 ~", 
n "2 ~ 0 2 ~ 4 ~ 2"
] # s "supertron" # release 0.7 # distort 10 # krush 10 # room 0.5 # hpf 8000 # gain 0.7

-- silence d2 when the next DO starts playing
d2 silence

-- Quiet as in im dead, mini build up, mini drop after lick (SEC 4)

do {
  d5 $ qtrigger $ filterWhen (>=0) silence;
  d4 $ qtrigger $ filterWhen (>=0) $ stack[
    s "hammermood ~" # room 0.5 # gain 1.8 # up "8",
    fast 2 $ s "jvbass*2 jvbass*2 jvbass*2 <jvbass*6 [jvbass*2]!3>" # krush 9 # room 0.7
  ] # speed (slow 4 (range 1 2 saw));
  d3 $ qtrigger $ filterWhen (>=8) $ seqP [
    (0, 1, s "808bd:2*4"),
    (1,2, s "808bd:2*8"),
    (2,3, s "808bd:2*16"),
    (3,4, s "808bd:2*32")
  ] # room 0.3 # hpf (slow 4 (100*saw + 100)) # speed (fast 4 (range 1 2 saw)) # gain 0.8;
}

d1 $ "[reverbkick(3,8), jvbass(3,8)]" # room 0.5 # krush 6 # up "-9" # gain 1

--

drop_deez = do
{
  d5 $ qtrigger $ filterWhen (>=0) $ fast 4 $ chop 2 $ loopAt 8 $ s "drumz:1" # gain 1.8  # legato 3 # cut 3;
  d6 $ qtrigger $ filterWhen (>=4) $ s "jvbass" # gain 1 # room 4.5;
  d10 $ qtrigger $ filterWhen (>=6) $ loopAt 4 $ s "acapella" # legato 3 # gain (range 0.7 1.5 saw);
  d7 $ qtrigger $ filterWhen (>=0) silence;
  d8 $ qtrigger $ filterWhen (>=0) silence;
  d2 $ qtrigger $ filterWhen (>=0) silence;
  d3 $ qtrigger $ filterWhen (>=0) silence;
  d4 $ qtrigger $ filterWhen (>=0) silence;
  d9 $ qtrigger $ filterWhen (>=8) $ s "amencutup*16"  # n (irand(8))  # speed "2 1" # gain 1.8 # up "-2"
 }

d11 $ slow 1 $ ccn "0*128" # ccv (range 1 62 saw) # s "midi" 

drop_deez 

--after drop settles
do
  d1 silence
  d6 silence

-- Quieting down with the couches, drug bad sample (SEC 5)

do {
  d2 $ qtrigger $ filterWhen (>=6) silence;
  d6 $ qtrigger $ filterWhen (>=4) silence;
  d9 $ qtrigger $ filterWhen (>=0) silence;
  d10 $ qtrigger $ filterWhen (>=0) silence;
  d5 $ qtrigger $ filterWhen (>=8) silence;
  d1 $ sound "our:1" # cut 4 # gain 1.5;
  d12 $ qtrigger $ filterWhen (>=0) $ slow 4.1 $ ccv "10 5 4 3" # ccn "0" # s "midi";
}


d1 silence

once $ s "drugsrbad" # gain 1.4

hush

Here is the Hydra code:

//SEC 1 

s0.initImage("https://i.imgur.com/enIHg2L.jpeg"); //start scene
s1.initImage("https://i.imgur.com/SRNmgQR.jpeg"); //tv

src(s0).scale(.95).out()

// Synced with Radiohead sample
inc = 1;
startZoom = 1;
update = () => {
  // ramp the TV zoom up to its cap, then hold there
  if (startZoom && inc < 3.4) {
    inc += 0.01;
  }
}
//tv zoom
src(s1) 
  //.scale(()=>inc)
   //.scale(3.4)
   //.modulate(noise(()=>cc[2],1))
   //.modulate(s1,()=>cc[2]*5)
   //.colorama(()=>cc[2]*0.1)
   //.modulateKaleid(osc(()=>cc[2]**2,()=>ccActual[2],20),0.01) //changed to 0.1 to 0.01
  .out() 


//first visual
src(o0)
	.hue("tan(st.x+st.y)")
	.colorama("pow(tan(st.x),tan(st.y))")
	.posterize("sin(st.x)*10.0", "cos(st.y)*10.0")
    .luma(()=>cc[2],()=>cc[2])
	.modulatePixelate(src(o0)
		.shift("cos(st.x)", "sin(st.y)")
		.scale(1.01), () => Math.sin(time / 10) * 10, () => Math.cos(time / 10) * 10)
	.layer(osc(1, [0, 2].reverse()
			.smooth(1 / Math.PI)
      .ease('easeInOutQuad')
			.fit(1 / Math.E)
			.offset(1 / 5)
			.fast(1 / 6), 300)
		.mask(shape(4, 0, 1)
			.colorama([0, 1].ease(() => Math.tan(time / 10))
				.fit(1 / 9)
				.offset(1 / 8)
				.fast(1 / 7))))
	.blend(o0, [1 / 100, 1 - (1 / 100)].reverse()
		.smooth(1 / 2)
		.fit(1 / 3)
		.offset(1 / 4)
		.fast(1 / 5))
	.out()

//SEC 2.1 (GIFS)

//gorilla
s2.initVideo("https://i.imgur.com/YMApoXd.mp4");
src(s2).out() 

//wilson
s2.initVideo("https://i.imgur.com/lEVy8F2.mp4");
src(s2).luma(0.1).colorama(0.1).out()

//pikachu
s0.initVideo("https://i.imgur.com/TS10M9l.mp4"); //pikachu
src(s0).out()

//load next clips
s1.initImage("https://i.imgur.com/mW62iXz.jpeg"); //still pikachu

//still pikachu
src(s1).out()

//SEC 2.2 (QUEEN)

//gang
s2.initVideo("https://i.imgur.com/i7msTNY.mp4"); //GANG
src(s2)
  .modulate(s0,0.05)
  .colorama(0)
  .out()

//SEC 3

//choose sf
s2.initImage("https://i.imgur.com/E4SPvyb.jpeg"); //choose your player
src(s2).colorama(()=>ccActual[0]*0.001).modulate(noise(0.2,()=>cc[0]*0.001)).out()

//gang street fighter
let sf1 = new P5();
s3.init({src: sf1.canvas})
sf1.out
sf1.hide()
let mo = sf1.loadImage("https://media0.giphy.com/media/v1.Y2lkPTc5MGI3NjExMDc2NGU0aGEwZmJwZXQ0ZWttZng5aGNqdWp6azlja2ZmeXBlMGs0aiZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9cw/3I1aZCTDfNCd0Fclr4/giphy.gif")
let aadil = sf1.loadImage("https://media.giphy.com/media/gQFjPwZm0pMqW8yoUY/giphy.gif")
let back = sf1.loadImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/background.jpg")
mo.play();
aadil.play();
sf1.scale(0.9);
sf1.fill(255);
sf1.draw = ()=>{
  //sf1.fill(255)
  //let img = ba.get();
  sf1.image(back, 0, 0, sf1.width, sf1.height);
  //sf1.video(ba,0,0 sf1.width sf1.height);
  // sf1.image(mo, 250, sf1.height - mo.height); // bottom-left
  //sf1.image(aadil, sf1.width - aadil.width-100, sf1.height+5 - aadil.height); // bottom-right
//sf1.height/1.3 - mo.height
  sf1.image(mo, sf1.width/6, sf1.height/1.3 - mo.height, sf1.height/1.5, sf1.width/2.5); // bottom-left
  sf1.image(aadil, (sf1.width/1.5) - aadil.width, sf1.height/1.3- aadil.height, sf1.height/1.5, sf1.width/2.5); //
  sf1.fill(255);
  // uncomment below to sync the gifs with tidal
  // mo.setFrame(Math.floor(cc[0] * (mo.numFrames() - 1)));
  // aadil.setFrame(Math.floor(cc[0] * (aadil.numFrames() - 1)));
}
src(s3).out()


//SEC 4

//load clips
s0.initImage("https://i.imgur.com/dPIOZnu.jpeg"); // mo dead
s1.initImage("https://i.imgur.com/Ukz54v3.png"); //mo really dead


//mo dead
src(s0).out()

//mo really dead
inc = 1;
startZoom = 1;
update = () => {
  // slower ramp for the "really dead" zoom, capped at 2.1
  if (startZoom && inc < 2.1) {
    inc += 0.005;
  }
}
src(s1)
  //.scale(()=>inc)
  .out()

//lickkkk
s2.initVideo("https://i.imgur.com/hvNHjEg.mp4"); //lick
src(s2).scale(1.3)
  //.diff(s2,0.005)
  //.colorama(1)
  //.modulate(s2,0.05)
  .out()

//post lick
osc(()=>cc[1]*100, 0.003, 1.6)
.modulateScale(osc(cc[1]*10, 0.7
  , 1.1)
      .kaleid(2.7))
  .repeat(cc[1]*3, cc[1]*3)
.modulate(o0, 0.05)
  .modulateKaleid(shape(4.2, cc[1], 0.8))
  //.add(o0,cc[1])
  .out(o0);

//SEC 5

//couch
s3.initImage("https://i.imgur.com/dGW1rt5.png"); //couch
inc = 1;
startZoom = 1;
update = () => {
  // couch zoom ramp, capped at 9
  if (startZoom && inc < 9) {
    inc += 0.01;
  }
}
src(s3)
.repeat(()=>ccActual[0],()=>ccActual[0])
  //.scale(()=>inc)
  .out()

Here is a frame from our livestream:

At one point we had two drops and just couldn’t proceed creatively from there. But we went back through all of our previous blog posts and realized that we could think of almost anything and already had the tools to build it.

Thank you, Aaron, for all the help. I think I speak for all three of us when I say that, as graduating seniors, we really needed the fun we had in this class.

The 14-week journey has finally ended, and it was time to show everyone what we’d been working on and how we grew throughout the semester! We were inspired by Alice in Wonderland when brainstorming our final performance, hence our funky team name. While composing, however, we decided not to follow the Alice in Wonderland narrative from start to end; instead, we mixed in some random visuals and sounds while keeping the general flow of the composition rooted in Alice in Wonderland.

We wanted contrast and build-up in the visuals and sounds between the starting point and the ending point, so Phase 1 began with black-and-white visuals and quieter, more mysterious audio to hint at what was to come later in the composition.

In Phase 2, we began to include very obvious Alice references (i.e., video and audio of the door closing and opening, teacups, the Alice in Wonderland soundtrack, etc.). The climax of Phase 2 was the appearance of the Cheshire Cat image; I also added a sound clip of “a Cheshire cat” from the movie, and this signaled the transition into Phase 3, the “craziest” phase of our composition.

We tried to make the visuals and audio in Phase 3 as engaging as possible because it was the final part of our performance, so there were lots of fast beats and loud melodies. We also wanted to end with “a bang,” so we used the chorus of “Gnarly” by Katseye and threw in a little surprise dance party to end our performance! We chose this song because its beats felt very similar to something we’d create in Tidal, and the repetition of the word “gnarly” fit the intriguing, slightly unpredictable vibe we wanted for our last phase.

I want to give a special shoutout to Emilie and Rashed for being down to join me even though it was very last minute. 🙂 Because we wanted it to be a total surprise and to seem like a “spontaneous” attempt to gauge audience engagement, they climbed up on stage only when I gave them the signal, so it looked out of the blue rather than like they had been waiting on stage beforehand. We’re happy that it all worked out well!

Although there was a definite shift from the serene, calm visuals and audio to the crazy, vibrant point we reached by the end of our composition, we kept the mysterious, otherworldly, fantastical, and intriguing vibe running through the whole performance so that the audience was still shown a somewhat coherent picture.

With that being said, I’ll stop yapping and post the code here now:

Hydra code (Adilbek — phase 1 + phase 2 till the Cheshire cat part; Jannah — ending of phase 2 + phase 3):

// Init values
basePath = "https://blog.livecoding.nyuadim.com/wp-content/uploads/"
videoNames = [basePath+"tea-1.mov", basePath+"kettle.mov"]
vids = []
allLoaded = false
loadedCount = 0
for (let i=0; i<videoNames.length; i++){
	vids[i] = document.createElement('video')
	vids[i].autoplay = true
	vids[i].loop = true
  vids[i].crossOrigin="anonymous"
  vids[i].src = videoNames[i]
	vids[i].addEventListener(
	"loadeddata", function () {
	  loadedCount += 1;
		console.log(videoNames[i]+" loaded")
		if (loadedCount == videoNames.length){
			allLoaded = true;
			console.log("All loaded");
		}
	}, false);
}
whichVid = 1

s0.init({src: vids[0]})

update = () =>{
  if (whichVid != ccActual[10]){
    whichVid = ccActual[10];
    s0.init({src: vids[whichVid]})
  }
}

// Phase 1
// Calm visuals with transition to Alice in Wonderland
// Hydra 1.1
osc(() => Math.sin(time) * cc[0] * 200, 0)
  .kaleid(() => cc[0] * 200)
  .scale(1, 0.4)
  .out()


// Hydra 1.2
osc(200, 0)
  .kaleid(200)
  .scale(1, 0.4)
  .scrollX(0.1, 0.01)
  .mult(
    osc(() => cc[0] * 200, 0)
      .kaleid(200)
      .scale(1, 0.4)
  )
  .out();

// Hydra 1.3
shape(1,0.3)
  .rotate(2)
  .colorama(3)
  .thresh(.2)
  .modulatePixelate(noise(()=>(cc[0])*2.5,.1))
  .out(o0)

//birds???
//where are we
// Hydra 1.4
shape(20,0.1,0.01)
  .scale(() => Math.sin(time)*3)
  .repeat(() => Math.sin(time)*10)
  .modulateRotate(o0)
  .scale(() => Math.sin(time)*2)
  .modulate(noise(()=>cc[0]*1.5,0))
  .rotate(0.1, 0.9)
.out(o0)


// Hydra 1.5
src(o0)
  .layer(src(o0).scale(() => 0.9 + cc[0] * 0.2).rotate(.002))
  .layer(shape(2, .5).invert().repeat(3,2).luma(.2).invert().scrollY(.1, -.05).kaleid(3).rotate(.1,.2))
  .out()

// Phase 2
// Door appears
// Tea cup appears
// Tea clinking - video
// Door opening and entering the door - video
// ------ Build Up ----------
// Furniture appears again
// Furniture rotating around the screen
// Cheshire Cat starts to appear during the buildup and fully visible on drop
// Cheshire Cat flicker with left beats

// Hydra 2.1
s3.initVideo("https://blog.livecoding.nyuadim.com/wp-content/uploads/door-animation.mov")

src(s3).scale(() => {
    const video = s3.src
    const scaleStartTime = 0
    video.currentTime = cc[0] * 2
    const scaleAmount = Math.max(1.2, (video.currentTime - scaleStartTime) * 1.6)
    return scaleAmount
  }).out()

//who's there
//creak

// Hydra 2.2
// s2.initVideo("https://blog.livecoding.nyuadim.com/wp-content/uploads/tea.mov")
// src(s2).invert().scale(1.15).out()
src(s0).invert().scale(1.15).out()

// Hydra 2.3
s2.initVideo("https://blog.livecoding.nyuadim.com/wp-content/uploads/tea.mov")
src(s2).invert().scale(0.1).rotate(() => Math.sin(time) * 0.1).kaleid(() => cc[1] * 32).out()

// Hydra 2.4
voronoi(350,0.15)
    .modulateScale(osc(8).rotate(Math.sin(time)),.5)
    .thresh(.8)
    .modulateRotate(osc(7),.4)
    .thresh(.7)
    .diff(src(o0).scale(1.8))
    .modulateScale(osc(2).modulateRotate(o0,.74))
    .diff(src(o0).rotate([-.012,.01,-.002,0]).scrollY(0,[-1/199800,0].fast(0.7)))
    .brightness([-.02,-.17].smooth().fast(.5))
    .out()

// Hydra 2.5 cat meow
//what is a cheshire cat??
s3.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/cheshire-cat.png")


src(s3).blend(
    noise(18)
    .colorama(9)
     .posterize(2)
     .kaleid(50)
    .mask(
        shape(25, 0.25)
          .modulateScale(noise(400.5, 0.5))
      )
      .mask(shape(400, 1, 2.125))
      .modulateScale(
        osc(6, 0.125, 0.05)
          .kaleid(50)
      )
      .mult(
        osc(20, 0.05, 2.4)
          .kaleid(50),
        0.25
      )
      .scale(1.75, 0.65, 0.5)
      .modulate(noise(0.5))
      .saturate(6)
      .posterize(4, 0.2)
      .scale(1.5),
    0.7
  )  
  .rotate(()=> Math.sin(time))
  .scale(() => Math.cos(time))
.out()




//transition to phase 3
src(s3).blend(
shape([4,5,6].fast(0.1).smooth(1),0.000001,[0.2,0.7].smooth(1))
.color(0.2,0.4,0.3)
.scrollX(()=>Math.sin(time*0.27))
.add(
  shape([4,5,6].fast(0.1).smooth(1),0.000001,[0.2,0.7,0.5,0.3].smooth(1))
  .color(0.6,0.2,0.5)
  .scrollY(0.35)
  .scrollX(()=>Math.sin(time*0.33)))
.add(
  shape([4,5,6].fast(0.1).smooth(1),0.000001,[0.2,0.7,0.3].smooth(1))
  .color(0.2,0.4,0.6)
  .scrollY(()=> cc[0]*2)
  .scrollX(()=>Math.sin(time*0.41)*-1))
.add(
      src(o0).shift(0.001,0.01,0.001)
      .scrollX([0.05,-0.05].fast(0.1).smooth(1))
      .scale([1.05,0.9].fast(0.3).smooth(1),[1.05,0.9,1].fast(0.29).smooth(1))
      ,0.85)
  .modulate(voronoi(10,2,2)),
  0.7
  )
  .rotate(() => Math.sin(time))
  .scale(() => Math.cos(time))
  .out()


//
shape([4,5,6].fast(0.1).smooth(1),0.000001,[0.2,0.7].smooth(1))
.color(0.2,0.4,0.3)
.scrollX(()=>Math.sin(time*0.27))
.add(
  shape([4,5,6].fast(0.1).smooth(1),0.000001,[0.2,0.7,0.5,0.3].smooth(1))
  .color(0.6,0.2,0.5)
  .scrollY(0.35)
  .scrollX(()=>Math.sin(time*0.33)))
.add(
  shape([4,5,6].fast(0.1).smooth(1),0.000001,[0.2,0.7,0.3].smooth(1))
  .color(0.2,0.4,0.6)
  .scrollY(()=> cc[0]*2)
  .scrollX(()=>Math.sin(time*0.41)*-1))
.add(
      src(o0).shift(0.001,0.01,0.001)
      .scrollX([0.05,-0.05].fast(0.1).smooth(1))
      .scale([1.05,0.9].fast(0.3).smooth(1),[1.05,0.9,1].fast(0.29).smooth(1))
      ,0.85)
.modulate(voronoi(10,2,2))
.out();

// Phase 3

//hydra 3.0
osc(18, 0.1, 0).color(2, 0.1, 2)
.mult(osc(20, 0.01, 0)).repeat(2, 20).rotate(0.5).modulate(o1)
.scale(1, () =>  (cc[0]*8 + 2)).diff(o1).out(o0)
osc(20, 0.2, 0).color(2, 0.7, 0.1).mult(osc(40)).modulateRotate(o0, 0.2)
.rotate(0.2).out(o1)

//hydra 3.1
  osc(()=> cc[0]*10,3,4) 
 .color(0,4.2,5)
 .saturate(0.4)
 .luma(1,0.1, (6, ()=> 1 + a.fft[3]))
 .scale(0.7, ()=> cc[0]*0.5) //change to * 5
 .diff(o0)// o0
 .out(o0)// o1

//hydra 3.2
  osc(5, 0.9, 0.001)
    .kaleid([3,4,5,7,8,9,10].fast(0.1))
    .color(()=>cc[0], 4)
    .colorama(0.4)
    .rotate(()=>cc[0],()=>Math.sin(time)* -0.001 )
    .modulateRotate(o0,()=>Math.sin(time) * 0.003)
    .modulate(o0, 0.9)
    .scale(()=>cc[0])
    .out(o0)

//hydra 3.3
osc(()=>cc[0]*100, -0, 1)
	.pixelate(40.182)
	.kaleid(() => cc[0] * 19 + 3)
	.rotate(()=>cc[0]*20, 0.125)
	.modulateRotate(shape(3) //change shape maybe
		.scale(() => Math.cos(time) * 2) //change to 5
		.rotate(()=>cc[0], -0.25))
	.diff(src(o0)
		.brightness(0.3))
	.out();

//hydra 3.4
osc(15, 0.01, 0.1).mult(osc(1, -0.1).modulate(osc(2).rotate(4,1), 20))
.color(0,2.4,5) //change to cc[0]
.saturate(0.4)
.luma(1,0.1, (6, ()=> 1 + a.fft[3]))
.scale(()=> cc[0], ()=> 0.7 + a.fft[3])
.diff(o0)// o0
.out(o0)// o1


//hydra 3.5
osc(49.633, 0.2, 1)
	.modulateScale(osc(40, ()=>cc[0]*0.3, 1)
		.kaleid(15.089))
	.repeat(()=>cc[0], 2.646)
	.modulate(o0, 0.061)
    .scale(()=>cc[0], ()=> 1+ a.fft[3])
	.modulateKaleid(shape(4, 0.1, 1))
	.out(o0);


//crazyy 3.5
s0.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/door.png")
s1.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/cup.png")
s2.initVideo("https://blog.livecoding.nyuadim.com/wp-content/uploads/tea.mov")

src(s2) //alternate between s0,s1,s2,s3
.blend(
  osc(5, 0.9, 0.001) //change first to cc[0]
    .kaleid([3,4,5,7,8,9,10].fast(0.1))
    .color(()=>cc[0], 4)
    .colorama(0.4)
    .rotate(()=>cc[0],()=>Math.sin(time)* -0.001 )
    .modulateRotate(o0,()=>Math.sin(time) * 0.003)
    .modulate(o0, 0.9)
    .scale(0.9))
   .rotate(() => Math.sin(time))
   .scale(() => Math.cos(time))
    .out(o0)

//end
  src(s3)
    .modulate(noise(3, 0.2))
    .modulate(noise(()=>(cc[0]+cc[1])*1,0.3))
    .blend(src(o0).scale(1.01), 0.7)
    .out(o0)

  s2.initVideo("https://blog.livecoding.nyuadim.com/wp-content/uploads/Gnarlyy.mp4")
  src(s2).blend(
    osc(5,0.9,0.01)
     .kaleid([3,4,5,7,8,9,10].fast(0.1)))
      // .colorama(0.1) //adilbek got it!!
     .out()


hush()

// Phase 3.5
// Dancing people with "Not Gradebale" text






osc(200,0).kaleid(200).scale(1, 0.4).scrollX(0.1, 0.01).out()


s0.initCam()
src(s0).saturate(2).contrast(1.3).layer(src(o0).mask(shape(4,2).scale(0.5,0.7).scrollX(0.25)).scrollX(0.001)).modulate(o0,0.001).out(o0)
 osc(15, 0.01, 0.1).mult(osc(1, -0.1).modulate(osc(2).rotate(4,1), 20))


osc(13,0,1)
  .modulate(osc(21,0.25,0))
  .modulateScale(osc(34))
  .modulateKaleid(osc(55),0.1,1)
  .out()

Tidal code (Clara — phase 1 + phase 2, Jiho — phase 3):

-- phase 1
-- hydra 1.1

d13 $ ccv "20 90" # ccn "0" # s "midi" -- change from 20 90 to 10 70
d2 $ slow 1 $ s "~ hh" # gain 2
d4 $ slow 2 $ s "superpiano" # n (range 60 72 $ sine) # sustain 0.1 # room 0.5 # gain 1.2 -- start as slow 2, then $ fast 2 

d3 $ s "birds:3"
-- d3 $ "birds" -- alt between birds and birds:3

-- hydra 1.2
d1 $ n ("<c2 a2 g2>")
  # s "notes"
  # gain ((range 0.6 0.9 rand) * 1.2)
  -- # legato 1
  -- # room 0.8
  # size 0.95
  # resonance 0.5
  # pan (slow 5 sine)
  # cutoff (range 500 1200 $ slow 4 sine)
  # detune (range (-0.1) 0.1 $ slow 3 sine)

  d9 $ n ("<[c3,fs3,g3] [~ c4]>*2 <[a2,gs3,e3,b2]*3 [~ d4,fs4]>")
    # s "notes"
    # gain (range 0.6 0.9 rand)
    # legato 0.7
    # room 0.6
    # size 0.8
    # resonance 0.4
    # pan (slow 5 sine)

-- hydra 1.3

  d2 $ ccv "17 [~50] 100 ~" # ccn "0" # s "midi"
     
  d3 $ n "e5 ~ ~ a5 fs5 ~ e5 ~ ~ ~ c5 ~ ~ a5 ~ ~"
  # s "notes"
  # legato 1
  # gain 1
  -- # pan (slow 4 sine)
  # room 0.7
  # size 0.9

-- d9 silence

-- hydra 1.4
do
  d2 $ ccv (segment 8 "0 20 64 127 60 30 127 50") # ccn "0" # s "midi"
  d3 $ n "[0 4 7 12]!4"
    # s "notes"
    # gain "1"
    # legato "0.5"
    # speed "1"
    -- # room "0.8"
    # lpf 2000

-- Hydra 1.5
  d5 $ s "[~ drum]*2" -- drum *4, then *2
    # gain "1.3"
    # delay "0.3"
    # delayfeedback "0.2"
    # speed "1"

do
  d1 $ s "bd(5,8)"
    # n (irand 5)
    # gain "1.1"
    # speed "0.6"
    # lpf 600
  d2 $ struct "<t(3,120) t(3,27,120)>" $ ccv ((segment 128 (range 127 0 saw))) # ccn "0" # s "midi"
     
-- END OF PHASE 1

-- d3 silence
-- d5 silence

-- PHASE 2: alice in wonderland
-- Hydra 2.1
do
  d2 $ ccv "[[0 ~ 50 127 127]]" # ccn "0" # s "midi"
  d1 $ s "[[door:1 ~ door:2 ~ ~]]" # gain 3 
     # speed 1

d4 $ slow 2 $ s "superpiano" # n (range 60 72 $ sine) # sustain 0.1 # room 0.5 # gain 1.5

d10 $ s "alice" # gain 1

d3 $ slow 2 $ s "mug" # gain 1.8

d10 silence

-- Hydra 2.2

do
  d2 $ ccv "[[1 0 1 0]]" # speed 1.2 # ccn "10" # s "midi"
  d4 $ slow 4
    $ s "[ [glass:3 ~ glass:5 ~]]"
    # gain 1.5
    # speed 1.2
    -- drum
    # shape (choose [0.3, 0.6])
    # room 0.4
    # delay 0.25
    -- glass
    # lpf (slow 4 $ range 800 2000 sine)
    # pan (slow 8 $ sine)

-- d10 silence

d5 $ every 3 (rev)
      $ s "space"
      # n (run 5 + rand)
      # octave "<5 6>"
      # speed (rand * 0.5 + 0.8)
      # lpf (slow 16 $ range 800 2000 sine)
      # resonance 0.4
      # orbit "d5"

-- Hydra 2.3
do
  d2 $ ccv "80 [10 ~] [30 ~] ~" # ccn "1" # s "midi"
  d1 $ s "bd bd sd bd" # gain 1.5

d3 $ slow 2 $ s "superpiano"
    # n (scale "minor pentatonic" "0 2 4 7 11" + "<12 -12>")
    # octave 5
    # sustain 8
    # legato 0.8
    # gain 1
    # lpf (slow 16 $ range 800 2000 sine)
    # room 0.7
    # delay 0.75
    -- # delayfeedback 0.8
    # speed (slow 4 $ range 0.9 1.1 sine)
    # pan (slow 16 sine)
    -- # vowel "ooh"
    # orbit "d5"

d7 $ stack [
  slow 4 $ s "pad:1.5" # gain 0.9,
  s "bass*2" # room 0.5 # gain 1.2
]

-- d1 silence

-- Hydra 2.4
do
  d6 $ s "~"
  d8 $ every 2 (|+ speed 0.2) $ slow 2 $ sound "hh*8" # gain 1.5 # hpf 3000 # pan rand
  d5 $ s "~ bass:1" # gain 2 # speed 0.5 # lpf 300 # room 0.4
  d6 $ every 2 (0.25 ~>) $
    s "~ cp"
    # gain 1.5
    # speed 1.2
  d8 $ s "hh hh hh <hh*6 [hh*2]!3>" # gain 1.5
  d5 $ s "[~ ~ bass:1]"
    # gain 1.8
    # speed 0.6
    # lpf 400
    # room 0.3
  d3 $ qtrigger $ filterWhen (>=0) $ seqP [
      (0,1, s "[bd bd] [sd hh]"),
      (1,2, s "[bd bd bd bd] [sd hh]"),
      (2,3, s "[bd bd bd bd bd bd] [sd hh]"),
      (3,4, s "[bd bd bd bd bd bd bd bd] [sd hh]")
    ] # gain (slow 4 (range 0.8 1 saw))

-- Hydra 2.5
d9 $ s "cat3" # gain 5.2
    # orbit "-1"
    # dry "1"
    # room "0"
    # delay "0"
    # shape "0"
    # resonance "0"
    # delay "0"
    # delayfeedback "0"
    # lpf "20000"

d4 $ stack [
  s "<bd sn cp hh>" # speed "1 1.5 2",
  s "808bd:4(3,8) 808sd:7(5,8)" # gain 1.1
]

d8 $ stack [
s "moog" >| note (arp "up" (scale "major" ("[0,2,4,6]") + "a5")) # room 0.4 # gain 0.7,
ccv 0 # ccn 1 # s "midi"
]

do
  d4 $ s "bd bd sd bd cp odx mt <[bd*2]!8>" # gain 1.5
  d2 $ ccv (segment 8 "0 20 64 [50 90]") # ccn "0" # s "midi"
   
d9 $ fast 2 $ s "moog" >| note (arp "up" (scale "major" ("[0,2,4,6]") + "a5")) # room 0.4 # gain 1 # squiz 0.3

-- Hydra 2.6 

do -- change from d4 to d1
  d1 $ sound "feel:2*8"
    # gain 1.9 # speed (range 1 3.5 $ saw)
    # cutoff (range 800 2000 $ sine) # resonance 0.2
    # room 0.5 # accelerate 0.5
    # sz 0.5 # crush 1
  d2 $ ccv "0 20 64 90 0 30 70 112" # ccn "0" # s "midi"

do
  d2 silence
  d3 silence
  d6 silence
  d7 silence
  d8 silence
  d9 silence

-- END OF PHASE 2

do
  d3 $ jux rev $ "bass:4*2 <bass:4 [bass*4]!2>"
    # room 0.3 # gain 5
    # shape 0.7
  d1 $ ccv "0 40 64 14 70 112" # ccn "0" # s "midi"

d4 $ iter 4 $ sound "hh*2 hh*4 hh*2 <[hh] hh*2!2>"
  # room 0.3 # shape 0.4
  # gain 1.7
  # speed (range 1.3 1.6 $ slow 4 sine)

do
  d5 $ fast 2 $ s "sine" >| note (arp "up" (scale "major" ("[2,0,-4,6]"+"<-8 4 -2 5 3>") + "f5"))
    # room 0.4 # gain 1.4
    # legato 3
    # pan (slow 8 $ sine)
  d1 $ ccv "40 12 60 25 <34 70>" # ccn "0" # s "midi"

do
  d6 $ s "arpy*4 arpy@1 ~ ~"
    # legato 2.5
    # up "f6 a5 c3 g6" # shape 0.7 # gain 1.3
  d7 $ s "hh*8 ~ ~ cp!2 ~"
    # gain 3 # shape 0.5  # resonance 0.5 # krush 0.3
  d1 $ ccv "55 14 20 ~ ~ 70 112" # ccn "0" # s "midi"

d8 $ s "gnarly:1@1.2"
  # cut 1 # shape 0.9 # gain 7

do
  d10 $ s "bd*2 drum*4 <sd:1 feel:16> [~ bd?]"
    # gain "4.5 5 4" # shape 0.8
  d1 $ ccv "[23, 45] [45, 12] [51 90]" # ccn "0" # s "midi"

d9 $ iter 4 $ sound "{<arpy:3(4,8) arpy:5(3,8) arpy:2(7,8)>}%2"
  # n "7 32 11 6 21 17 10 3"
  # room 0.5 # speed 2 # gain 1.4 # shape 0.2

do
  d2 $ silence
  d3 $ silence
  d4 $ silence
  d5 $ silence
  d6 $ silence
  d7 $ silence
  d10 $ silence

do
  d2 $ slow 1.25 $ s "sine" >| note (arp "up" (scale "min" ("[7,5,8,3,2,7,8,3,9,5]") + "a5"))
    # shape 0.9 # gain 7 # sz 0.4
  d1 $ ccv "45 ~ ~ 12 ~ ~ 75" # ccn "0" # s "midi"

do
  d3 $ s "[bd*2, drum:2*4, <sd:4(3,8) feel:12(5,8)>, [~ bd:7?]]"
    # gain "5 4 6" # shape 0.5
    # squiz (range 1.5 3 $ slow 8 sine)
  d1 $ ccv "12 51 30 ~ ~ [90 37]" # ccn "0" # s "midi"

d4 $ s "[newnotes:3*2, ~ newnotes:5*4?]" # gain 3 # cut 1
    # n "-1" # shape 0.2
    # squiz (range 1 1.5 $ slow 8 sine)

do
  d5 $ s "[~, sd:3*4, [~ sd:5@2 sd:6*2]?]"
    # gain 4 # speed 1.5
    # size 0.4
  d6 $ s "[~, metal:2(5,8), ~, metal:4(3,8)]"
    # gain 1.7 # speed 0.7
    # pan (slow 16 sine) # room 0.6
  d7 $ stack [
    s "feel:2*8" # gain "1",
    s "bass:11*8" # gain "1.6" # speed "1.2" # pan "-0.5",
    s "hh:4*4" # gain "3" # speed "0.7" # pan "0.5"
    ] # krush (range 0 2 $ rand)
  d8 $ silence
  d1 $ ccv "32 15 ~ ~ 69 ~ ~ [15 37]" # ccn "0" # s "midi"

do
  d9 $ stack [
    s "feel:2*8" # gain "1.4",
    s "bass:11*8" # gain "1.6" # speed "1.6" # pan "-0.5",
    s "hh:4*4" # gain "3" # speed "1" # pan "0.5"
    ] # krush (range 0 2 $ rand)
  d8 $ s "gnarly:1"
    # cut 1 # shape 0.9 # gain 7 # speed 3

do
  d2 $ silence
  d3 $ silence
  d5 $ silence
  d9 $ silence

do
  once $ s "msam:2"
    # gain 5 # legato 4
  hush

  once $ s "msam:4"
    # gain 5

Aaaand here’s our final performance video!!! Hope you guys enjoy it. 🙂

Last but not least, here are some future improvements we want to make, along with some limitations we ran into, from Jiho:

I’ve always struggled with creating impactful beat drops, and I think they’re especially important in rave music because they really shape how the audience responds. Looking back, I feel I could’ve done a better job and spent more time developing that section. It was a similar experience with the composition project—I kept layering different sound lines because each previous version felt dull or lacking, and most of the added elements ended up contributing more to the buildup than to the drop itself. Personally, I think the buildup ended up being stronger than the actual beat drop. Moreover, while the integration of the first gnarly sound worked well, the ending felt too abrupt. That’s partly because the idea of using “gnarly” music was added later in the process; if I had built my music code around those gnarly beats from the start, the overall transitions would’ve been smoother and more cohesive. On a similar note, another area I’d personally like to work on is incorporating external sound files into my compositions. I feel this is where I currently lack creativity, and watching other groups, including Clara’s, really inspired me. They were able to integrate external audio so seamlessly, and it made their pieces feel more dynamic and refined. It’s something I want to explore further to expand the range and depth of my own sound work in the future.

We did it…

Our concept was inspired by the musical Wicked. We wanted to create a piece that was playful but also showcased our personality. This project encapsulates not just the skills we learnt across this course, but also how we learnt to improvise efficiently and stay on track with our vision.

Ziya and Linh were in charge of the visuals for this performance. Since we were inspired by the musical “Wicked”, we wanted to develop a similar theme for the visuals, including images from “Wicked” itself as well as a broader theme of witches. As the audio starts at a simple, slow pace, we used simple patterns with small changes driven by MIDI values to match the sample. From the start, we knew we wanted to include the theme of the musical itself in our performance, as we were big fans of it, but we of course did not want to make it entirely Wicked-centered; therefore, we thought of telling the story of Wicked through campus cats. We decided we’d take pictures and videos and edit them to relate to the Wicked theme. However, while executing this, we quickly grew tired of those images and decided to draw images directly from the musical itself.

One of our prominent challenges was working on transitions, particularly the part where we switched between two images. We found that we had to be very careful about which functions and details to include, and at what time. Furthermore, we had to listen carefully to the sound to ensure that the beat drop and the overall transitions stayed in sync. One strategy we adopted was layering the first and second visuals on top of each other, then fading out the first visual to reveal the second. We also decided to change the visuals significantly after the drop, since the musical color and timbre change at that point.
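As a rough sketch of that layering strategy in Hydra (the image files and the CC channel here are placeholders, not our actual performance sources), both visuals can sit in one chain, with the mix amount driven from Tidal over MIDI so the first image fades out to reveal the second:

```javascript
// hypothetical sources: s0 = first visual, s1 = second visual
s0.initImage("first-visual.png")
s1.initImage("second-visual.png")

src(s0)
  .blend(src(s1), () => cc[1]) // cc[1] at 0 shows s0; ramping toward 1 fades it out
  .out()
```

On the Tidal side, easing the corresponding ccv pattern from 0 up to 127 over a few cycles produces the fade.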

Another challenge was syncing with the sound so that the audience could see the relationship between the changes in the visuals and the audio. As Linh did not have extensive experience in music, she could not tell when the audio changed just by listening. Therefore, we asked Luke and Rashed to signal us when they were moving to a new section so that the visuals could adapt to the change.

Staying on theme was important to us, and we had two: Brat and Wicked. Somewhere in a corner of TikTok this crossover subculture exists, so we decided to bring it to the stage at NYUAD in our performance. This came through in the sounds we used, such as “365”, sampled from Charli XCX, the originator of “Brat”. Brat also fit well with Wicked, as both are prominently associated with the colour green: Brat with neon green, Wicked with a darker green. Colour was just as important to us, and in Wicked there are two main colours, green and pink, representing the opposing sides of Glinda and Elphaba. Hence, throughout our entire performance we referenced these two colours, hopefully in a manner that did not seem too repetitive.

voronoi(100, 0.15) //shape(2,0.15)
  .thresh(0.8)
  .modulateRotate(osc(10), 0.4, () => cc[0]*50) // cc
  .thresh(0.5)
  .diff(src(o0).scale(1.8))
  .modulateScale(osc(10) // cc
  .modulateRotate(o0, 0.74))     
  .diff(src(o0))
  .mult(osc(()=>cc[0], 0.1, 3))
  .out()

hush()

// VIDEO SECTION
s0.initVideo("https://blog.livecoding.nyuadim.com/wp-content/uploads/the-bratty-vid.mp4")
p5 = new P5()
vid = p5.createVideo("https://blog.livecoding.nyuadim.com/wp-content/uploads/the-bratty-vid.mp4");
vid.size(window.innerWidth, window.innerHeight);
vid.hide()
p5.draw=()=> {
  let img = vid.get();
  p5.image(img, 0, 0, p5.width, p5.height); // redraw the video frame by frame in p5
}
s0.init({src: p5.canvas})
vid.play()
src(s0).out()

s5.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/wckd-scaled.png")
src(s5)
  //.modulateRotate(osc(10), 0.4, () => cc[0]*50) // cc
  .scale(0.6,() => cc[0]*1)
  .scrollX(2, 1)
  .out()

hush()

s0.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/witch-hat.png")
s1.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/oz-img.png")
s2.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/witch-kingdom.png")
s3.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/nessarose-cat.png")

//  -- HAT SECTION
src(o0)
  .layer(src(s0)
  .add(o1)
  .scale(()=>0.5 + cc[2])
  )
  .out(o1)
render(o1)

hush()

render()


// o3 -> o0 -> scale -> pixelate -> ccActual
src(s2)
  .diff(src(s1).diff(src(o3).scale(()=>cc[0])))
   .diff(src(o1))
  // .blend(src(s1), ()=>ccActual[4])
  // .diff(src(o0))
  // .modulateRotate(o0)
  // .scale(() => cc[0]*2)
  .out(o3)
render(o3)

hush()

s0.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/boq-img.png")
s3.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/nessarose-cat.png")

// look glinda pt 2 
src(s3)
  .scale(()=>cc[5]/2)
  .blend(
        src(s0).invert().luma(0.3).invert().scale(0.5)
        .rotate(()=> (cc[2] - 0.5)* 50 * 0.02)
        .scale(()=>cc[3]*0.5)
        //.modulateScale(osc(5, 0.1), () => cc[0])
    , ()=>cc[6])
  .out()
src(o2)
  .layer(src(o0))
  .out(o1)
render(o1)

render()
hush()

////////////////////////////////
let p5 = new P5()
let lastCountdown = null;
let ellipses = [];
p5.hide();
s4.init({src: p5.canvas})
p5.noFill()
p5.strokeWeight(20);
p5.stroke(255);
p5.draw = () => {
  p5.background(0);
  p5.fill(255);
  p5.textAlign(p5.CENTER, p5.CENTER);
  p5.textSize(200);
  // Get the current CC value
  let ccValue = 1; // or cc[0] if it's from another source
  // Decide which text to display based on the CC value
  if (ccValue == 1) {
    p5.text("wicked",cc[0]*p5.width, p5.noise(cc[0]*2)*p5.height);
  }
}
src(s4).mult(osc(10, 0, 3)).modulate(voronoi(10, 0.5, 2))
  .luma(0.1)
  .repeat(() => cc[2]*10, () => cc[2]*10)
  .out(o4)
render(o4)

/// NEW PROPOSED VISUALS
a.show()
a.setBins(8)
a.setSmooth(0.8)  
solid(1, 0, 1) // pink
    .mask(
      shape(999, 0.5, 0.5)
        .scale(() => a.fft[1] + 0.2)
         .scrollX(-0.3) 
    )
    .layer(
      solid(0, 1, 0.5) // green
        .mask(
          shape(4, 0.5, 0.5)
            .scale(() => a.fft[1] + 0.2)
            // .scrollX(0.3)
        )
    )
    // .modulate(voronoi(999,3),0.8)
    // .modulatePixelate(noise(55,0.5))
    // .modulate(noise(0.9, 0.1))
    .out()

hush()


hush()

In terms of the audio, we initially wanted to combine the idea of campus cats with telling the story of Act 1 of Wicked the musical. After further experimenting, we realized we would have to split our performance into nine different sections (one for each of the songs in Act 1), so we abandoned the campus cats and put our energy into three main songs from Act 1: The Wizard and I, What Is This Feeling, and Defying Gravity. However, Rashed could not handle making the performance about one singular thing, because he likes, as he says, “mixing things together that don’t really make sense but they somehow also do make sense.” So he suggested adding Brat. The thing is, we did not know how we would add that element, since it is an entirely different concept, until Rashed found a clip of Abby Lee Miller saying “Oh, that sounded really bratty,” and after further discussion we decided that clip would be a perfect transition from Wicked to Brat. But that was not enough: Rashed wanted to add more, and he brought up the idea of using “Crazy” by Le Sserafim, because he also likes voguing and the song would reach the K-pop lovers as well as the gaming community, since it has been used in various games and edits. So we added it. Rashed then suggested adding one more thing from a Nicki Minaj song, but he wanted to make it Wicked-themed, to which we said yes as long as he censored the one curse word, and he agreed. After further discussion, we decided that Rashed and Ziya would say/sing the first part of “What Is This Feeling” to add a humorous aspect, and Rashed would perform the Nicki Minaj part and the ending War Cry, which is a reference to Cynthia Erivo’s Target commercial.

At first, when we approached the composition, we were not sure how it would sound. We knew the songs from Wicked are already very theatrical, professionally composed and sung; what we came up with was our interpretation, our own twist on the music. We worked on the composition section by section: intro, build-up, drop, bridge, interlude, and final ending. Starting out, we let every idea that came across our minds be realized in code. In our first attempt, we ended up with a composition that had a runtime of roughly 7–8 minutes. After many more rehearsals, we realized that every section seemed to exist on its own terms and didn’t connect much to the previous or next section, which was a little frustrating given that each section sounded so good on its own. We worked a lot on the transitions between sections. After multiple rehearsal attempts, we realized the main issue with the transitions was the sonic palette itself: we were using too many different samples, with almost every pattern using a distinct sample. We figured a good way to fix the problem was to narrow down the number of samples used, so we replaced the samples in a few patterns with samples we already had, or reused a few of them. Specifically, putting patterns that share the same sample next to each other helped a lot with smoothing out the transitions. Moving forward, the lesson learned is that simplicity is better than complexity. At first, each section had a lot of sounds stuffed together, but as we progressed we had to cut down the sounds and patterns, either combining a few or removing some completely. Critical thinking and feedback from the visual team, our classmates, and Professor Aaron helped us reflect on the composition. Cleaning up and tightening the composition took a lot of time because we had to rearrange, add, and remove patterns here and there.
In addition to cleaning up, we also had to keep the filters and effects consistent. We ran into a few issues, but eventually things got sorted out thanks to Professor Aaron’s help. One final note: even though it is live coding, we also had a good time composing the music with code and mixing the sounds together. I wish we had more time to develop our live composing skills.

Overall, we are proud of our final composition and the way we were able to execute our idea in a unique yet smooth manner, to the extent that the audience enjoyed it too.

Tidal code:

setcps(135/60/4) 

do
  once $ s "loath"
  p "background" $ slow 2 $ ccv ((segment 10 (range 0 127 saw))) # ccn "0" # s "midi"
        
once $ s "loath:5"

do
  resetCycles
  d11 $ loopAt 4 $ s "360b:3"
  p "360b visuals" $ ccv "20 90 40" # ccn "0" # s "midi"
     
hush

--blonde--

do -- evaluate the hat section
  d1 $ s "gra" # legato 1 -- add in scrollX
  p "hat" $ ccv "50 65" # ccn "2" # s "midi"

d2 $ s "hh*8"
   
   --

do
  d3 $ fast 2 $ n "1*2" # s "bd" # amp 1 -- comment out diff    
  p "background" $ fast 4 $ ccv ((segment 10 (range 0 127 saw))) # ccn "0" # s "midi" -- change voronoi to shape

do
  d3 $ fast 2 $ n "0 1*2 2 1*2" # s "bd" # amp 7
  p "switch" $ ccv "0 1" # ccn "4" # s "midi"  -- comment out the blend


d4 $ s "909(5,16)" 

do -- do the shape NOT BLEND -- EVALUATE PIXELATE SECTION
  d5 $ s "bass1:11*4" # speed "2" # gain 1 # cutoff "70"
  --p "pixelate" $ ccv ((segment 4 (range 0 100 saw))) # ccn "0" # s "midi"
  p "pixelate" $ ccv "80 100 120 127" # ccn "0" # s "midi"

--


d6 $ swingBy (1/3) 4 $ sound "hh:13*4" # speed "0.5" # hcutoff "7000" # gain 1

d7 $ jux rev $ fast 0.5 $ s "crzy:6" # gain 0.6 # legato 1 


hush

d1 silence    
d2 silence
d3 silence
d4 silence
d5 silence
d6 silence
d7 silence
d8 silence

hush

-- build-up and beatdrop --


lookGlinda = do 
  d1 $ qtrigger $ filterWhen (>=0) $ 
    seqP 
      [ (0, 1, s "bd*4"  # room 0.3)
      , (1, 2,  s "bd*8"  # room 0.3)
      , (2, 3,  s "bd*16" # room 0.3)
      , (3, 4,  s "bd*32" # room 0.3)
      ]
      # hpf (range 100 1000 $ slow 4 saw)
      # speed (range 1 4 $ slow 4 saw)
      # gain 1.2
      # legato 0.5
  p "popkick visual" $ qtrigger $ filterWhen (>=0) $ 
    seqP
      [ (0, 1,  ccv "30 60 90 120" # ccn "0" # s "midi")
      , (1, 2,  ccv "15 30 45 60 75 90 120" # ccn "0" # s "midi")
      , (2, 3,  ccv "30 10 20 40 50 60 70 80 90 100 110 120 10 30 60" # ccn "0" # s "midi")
      , (3, 4, ccv ((segment 32 (range 0 127 saw)))  # ccn "0" # s "midi")
      ]
  d2 $ qtrigger $ filterWhen (>=0) $ 
    seqP 
      [ (0, 4, stack 
          [ s "~ cp" # room 0.9
          , fast 2 $ s "hh*4 ~ hh*2 <superchip*2>"
          ])
      ]
      # room 0.4
      # legato 1
      # gain (range 1 6 rand)
      # speed (range 1 2 $ slow 4 saw)
  d3 $ qtrigger $ filterWhen (>=0) $ 
    seqP 
      [ (4, 5, s "boq:1" # room 0.3 # gain 2) ]


lookGlinda


-- LOOK GLINDA P2 --
do
  lookGlinda
  p "disappear" $ qtrigger $ filterWhen (>=0) $ 
      seqP 
        [ (0, 4, ccv "0" # ccn "6" # s "midi") ]
  p "disappearcat" $ qtrigger $ filterWhen (>=0) $ 
      seqP 
        [ (0, 4, ccv "0" # ccn "5" # s "midi") ]
  p "me" $ qtrigger $ filterWhen (>=0) $ 
    seqP 
      [ (4, 5, ccv "[0 40 80 100 120] ~" # ccn "5" # s "midi") ]
  p "boq" $ qtrigger $ filterWhen (>=0) $ 
    seqP 
      [ (4, 5, ccv "0 127" # ccn "6" # s "midi") ]

once $ s "boq:1"

d11 silence


-- beat drop --
hush

-- change to :1 --
do  -- uncomment the rotate
  d1 $ fast 1 $ s "crzy" # legato 1 # gain 1.5
  p "rotate" $ ccv "100 20" # ccn "2" # s "midi"

d1 silence

-- Act like witches, dress like crazy --

d5 $ fast 2 $ sound "bd:13 [~ bd] sd:2 bd:13" # krush "4" # gain 2

do
  setcps(120/60/4)
  resetCycles
  d5 silence
  d1 $ loopAt 4 $ s "crzy:4" # gain 1.2
  p "rotate" $ ccv "100 20" # ccn "2" # s "midi"

   
hush
d1 silence
d2 silence


d3 silence
hush


-- LOOK AT GLINDA PART 2

do --comment out scale
  d4 $ slice 32 "2" $ sound "twai:1" # gain 1
  p "scale" $ slow 2 $ ccv "100 20" # ccn "3" # s "midi"

d6 $ striate 8 $ s "ykw" # legato 1 # gain 1.2


do
  d7 $ fast 0.5 $ s "oiia" # gain 1.1 # speed 0.8
  p "shape-popular" $ fast 1 $ ccv "40 120 40" # ccn "0" # s "midi"




-------------------------------------------------------------------------

hush


do 
  d10 $ sound "bd:13 [~ bd] sd:2 bd:13" # krush "4" # gain 1.8
  p "d10 sound" $ ccv "124*3 [~ 10] 10*2 30*13" # ccn "2" # s "midi"

d10 silence
    
d12 $ s "gra"

hush


d1 silence
d2 silence
d3 silence
d4 silence
d5 silence
d6 silence 
d7 silence
d8 silence
d9 silence
d10 silence
hush

--ENDING VORONOI-- 

do 
 d12 silence
 once $ s "chun"


-- defying gravity --
once $ s "defy:2" # gain 1.5

d1 $ loopAt 1.2 $ s "defy:6"

hush
     
d2 $ fast 2 $ sometimes (|+ n 12) $ scramble 4 $ n "<af5 ef6 df6 f5> df5 ef5 _" # s "superpiano" # legato 2
# pitch2 4
--change 4 to 8
# pitch3 2
# voice 0
# orbit 2
# room 0.1
# size 0.7
# slide 0
# speed 1
# gain 1.2
# accelerate 0
# cutoff 200

d3 $ slow 4 $ n "af5 ~ ef5 ~ df5 ~ f5 ~"
  # s "supersaw"
  # gain 0.6
  # attack 0.2
  # sustain 2
  # release 3
  # cutoff 800
  # room 0.9
  # size 0.8

d4 $ n "<[0 ~ 1 ~][~ 0 1 ~]>" # s "tink:4"


do
  d5 $ slow 2 $ sound "superpiano:2" <| up "af5 af5 ef6 df6 ~*4 f5 af5 ~*24 df5 ~*2 f5 ef5 ~*12" # gain "0.6" # room "0.9"
  p "endingpiano" $ slow 2 $ ccv "30 30 40 35 ~*4 50 50 ~*24 30 ~*2 50 40 ~*12" # ccn "0" # s "midi"

--d5 $ sound "superpiano:2" <| up "g5 f6 ~ [e6 c6]"

d5 $ slow 2 $ sound "superpiano:2" <| up "af5 af5 ef6 df6 ~*4 f5 af5 ~*12 df5 ~*2 f5 ef5 ~*8" # gain "0.6" # room "0.9"
   
hush
   
once $ s "defy:5"

Code on Github

Performance Video:

Final Documentation Live Coding

In our group, Mike was in charge of the music, Ruiqi worked on the visuals, and Rebecca worked on both and also controlled the MIDI values and the ASCII text.

Visual

Personally, I’ve started thinking of Hydra more as a post-processing tool than a starting point for visuals. I’ve gotten a bit tired of its typical abstract look, but I still love how effortlessly it adds texture and glitchy effects to existing visuals. That’s why I chose to build the base of the visuals in Blender and TouchDesigner, then bring them into Hydra to add that extra edge.
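A minimal sketch of that pipeline (the file name is a placeholder, not one of our actual renders): bring a pre-rendered Blender or TouchDesigner clip in as an external source, and let Hydra act purely as a post-processing layer on top of it.

```javascript
// hypothetical pre-rendered clip exported from Blender/TouchDesigner
s0.initVideo("render.mp4")

src(s0)
  .contrast(1.2)                  // push the black/white/red base harder
  .modulate(noise(3, 0.1), 0.03)  // light glitchy displacement for texture
  .out()
```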

As always, I’m drawn to a black, white, and red aesthetic—creepy and dark visuals are totally my thing. I pulled inspiration from a previous 3D animation I made, focusing on the human body, shape, and brain. In the beginning, I didn’t have a solid concept. I was just exploring faces, masks, bodies—seeing what looked “cool.” Then I started bringing some renders into Hydra and tried syncing up with what Mike was creating. We quickly realized that working separately made our pieces feel disconnected, so we adjusted things a bit to make the whole thing feel more cohesive.

At one point, I found myself overusing modulatePixelate() and colorama()—literally slapping them on everything. That’s when I knew I needed to change things up, so I went into TouchDesigner and used instancing to build a rotating visual from a box, which gave the piece a nice shift in rhythm and form.

In the end, I’m proud of what I made. The visuals really reflect my style, and it felt great combining tools I’ve picked up along the way—it made me feel like a real multimedia artist. I’m also super thankful for my teammates. Everyone put in so much effort, and even though some issues popped up during the final performance, it didn’t really matter. We knew we had given it our all. Big love to the whole cyber brain scanners crew.

Here are some images and videos we made in Blender and TouchDesigner for the performance:

Audio

For the whole performance, we were trying to build on several keywords: space, cyberpunk, and heavy distortion. I drew inspiration from Chicago house, glitch, and industrial music for how to make the sounds raw and wild, to correspond with the sketches for the visuals.

In the early iterations of the performance, our theme was a space odyssey for cyborgs, so I thought a continuous beeping sound from a robot would be a fitting way to start the performance. Though we later built something slightly different, we still agreed the intro was effective at grabbing the audience’s attention, so we chose to keep it.

For the build-up, I really liked the idea of using human voices as a transition into the second part. To echo the theme, I picked a recording of the crew of Discovery, a NASA space shuttle orbiter, testing the communication system.

The aesthetic of the visuals reminded me to keep the audio minimalistic. Instead of layering more and more tracks as the performance progressed, I used different variants of the main melody, adding effects like lpf, crush, and chop. The original sample for the main melody is a one-shot synth, and these effects helped make it sound intense, creepy, and distorted.

In the second part, we wanted to make the audience feel hyped, so I focused more on the sound design of the drums. The snare created depth in the sound, and the clap invited the audience to interact with us. The glitch sample was chosen to match the pixel noise coming from the visuals.

It’s really amazing to see how we have evolved as a group since the very first drum circle project, and it is a pleasure to work together and exchange ideas to make everything better.

Communication with the audience

To do live coding as a performance, we decided to use some extra methods to communicate with the audience. Typically, a performer might communicate with the audience directly via a microphone, but that might undermine the consistency of the audio we were creating. Live coders might also type something in comments, which takes advantage of the nature of live coding, but comments might be too small compared to the visual effects, and it might be hard for the audience to notice them.

Finally, we came up with the idea of creating ASCII art. ASCII art has been part of the coding community for a long time, especially when it comes to live coding: in Sonic Pi, one of the best-known live coding platforms, users are greeted by an ASCII art title when the software starts. We wanted to hype up the audience by adding some ASCII art to our flok panel, which also made use of the flok layout and let those who don’t read code pay attention to the code panel.

We really managed to hype up the audience, and we express our big thanks and love to the community that has supported us throughout this semester.

👽👽👽💻🧠❤❤❤

Reading Artist-Musicians, Musician-Artists made me think about how blurry the line between disciplines really is, and maybe always has been. Looking up Paul Klee’s work was also interesting, as he literally structured his paintings like musical compositions. It reminded me of how we use TidalCycles and Hydra, where coding becomes a tool to create a hybrid performance, a balance between live, rhythmic, and visual elements. The part about intensity over virtuosity also stood out. It made me think of how, in live coding, it’s not about being super polished; it’s about being present and responsive. Mistakes, randomness, and improvisation are part of the experience, and sometimes even enhance it. Sometimes in Tidal, we throw in randomness just to see what the system gives back. That unpredictability feels exciting, like giving up some control and letting the tool collaborate with you. What I found especially interesting was how often artists, like Cornelia Schleime, shifted between disciplines because they had to, whether due to censorship, economics, or needing a new form to express something. It made me realize that interdisciplinary practice isn’t always just an aesthetic choice; it often carries a sense of urgency or necessity. Are labels like artist, musician, or performer even useful anymore? Or are they just there for institutions and funding applications? When we do live coding, these lines feel less and less relevant.

I think it’s interesting to see how the combination of art and sound has existed since very early times, even before the advent of live coding. It’s fascinating how people have integrated other disciplines into their own areas of interest. I remember watching a video about how sound can shape sand into different patterns. It feels like this kind of interdisciplinarity with sound helped lay the foundation for experimentation, with those sand patterns forming an early step toward sound visualization.

As we move into the digital era, people now have access to tools that make it easy to create both art and sound compositions. This has blurred the line between musicians and artists in the digital world, as mentioned in the reading, opening up opportunities for new forms of sound and visuals that can complement each other. This kind of co-evolution between the senses powerfully represents the concept of “multidisciplinary” work, as fields with distinct terminology and skill sets begin to build on one another to elevate both art and music to new levels.

Synesthesia is very commonly seen in contemporary art and I think it plays an even bigger role in the context of live coding. In the narrative of the artist-musician/musician artist, being able to interact with one form of sense as an artist and another as a musician becomes particularly potent when the tools themselves facilitate this blend. 

Abstract motifs in music and visuals, and especially the idea of counterpoint both visually and musically, are something I would look into as a tool for expression, as artists sought a universal language beyond representation. Just as musical counterpoint involves the interplay of independent melodic lines, visual counterpoint can be seen in the juxtaposition and interaction of distinct visual elements, like colour, form, or rhythm, within a composition, as in Hans Richter’s work.

The article also describes the programming personality that moved into music. This is something we see a lot in the live coding community; going back to Orca and Devine Lu Linvega from earlier in the semester, we see individuals who are really good programmers creating tools that help them realize the musical ideas and motifs they carry. The ability to wield coding languages as instruments for their artistic voice has made these individuals really good musicians and performers too. So in the scope of live coding, it goes beyond merely touching on two disciplines; one ability really nourishes the other.

In the very wide scope of topics the article covers, art as expression and art as a tool for fun are both explored. In Fiorucci Made Me Hardcore (1999), there is one section where the dance looks entirely performative; while this may be beside the point, I think it’s very cool to see the performative side of the English club scene, in the same way that there are YouTube tutorials on how to dance at a rave.