The 14-week journey has finally come to an end, and it was time to show everyone what we'd been working on and how we grew throughout the semester! We were inspired by Alice in Wonderland when brainstorming for our final performance, hence our funky team name. While composing, however, we decided not to follow the Alice in Wonderland narrative strictly from start to end; instead, we mixed in some unrelated visuals and sounds while keeping the general flow of the composition grounded in Alice in Wonderland.

We wanted contrast and build-up in both the visuals and the sounds between the starting point and the ending point, so Phase 1 began with black-and-white visuals and quieter, more mysterious audio to hint at what was coming later in the composition.

In Phase 2, we began to include very obvious Alice references (e.g. video and audio of the door closing and opening, teacups, the Alice in Wonderland soundtrack), and the climax of Phase 2 was the appearance of the Cheshire cat image. I also added a sound clip of the line "a Cheshire cat" from the movie, which served as the signal for the transition into Phase 3, the "craziest" phase of our composition.

We tried to make the visuals and the audio in Phase 3 as engaging as possible because this was the final part of our performance, so there were a lot of fast beats and loud melodies. We also wanted to end with "a bang," so we used the chorus of the song "Gnarly" by Katseye and had a little surprise dance party to end our performance! We chose this song because its beats sounded very similar to something we'd create in Tidal, and the repetition of the word "gnarly" fit well with the intriguing, slightly unpredictable vibe we wanted for our last phase.

I want to give a special shoutout to Emilie and Rashed for being down to join me even though it was very last minute. 🙂 Because we wanted it to be a total surprise and to seem like a "spontaneous" attempt to gauge audience engagement, they climbed onto the stage only when I gave them the signal, so it looked out of the blue rather than like they had been waiting on stage beforehand. We're happy it all worked out well!

Although there was a definite shift from the serene, calm visuals and audio to the crazy, vibrant point we reached by the end of the composition, we tried to keep a mysterious, otherworldly, fantastical, and intriguing vibe throughout the whole performance so that the audience was still presented with a somewhat coherent picture.

With that being said, I'll stop yapping and post the code here now:

Hydra code (Adilbek — phase 1 + phase 2 till the Cheshire cat part; Jannah — ending of phase 2 + phase 3):

// Init values
basePath = "https://blog.livecoding.nyuadim.com/wp-content/uploads/"
videoNames = [basePath + "tea-1.mov", basePath + "kettle.mov"]
vids = []
allLoaded = false
loadedCount = 0
for (let i = 0; i < videoNames.length; i++) {
  vids[i] = document.createElement('video')
  vids[i].autoplay = true
  vids[i].loop = true
  vids[i].crossOrigin = "anonymous"
  vids[i].src = videoNames[i]
  vids[i].addEventListener("loadeddata", function () {
    loadedCount += 1
    console.log(videoNames[i] + " loaded")
    if (loadedCount == videoNames.length) {
      allLoaded = true
      console.log("All loaded")
    }
  }, false)
}
whichVid = 1

s0.init({src: vids[0]})

// switch the video feeding s0 whenever the value on cc 10 changes (driven from Tidal with ccn "10")
update = () => {
  if (whichVid != ccActual[10]) {
    whichVid = ccActual[10]
    s0.init({src: vids[whichVid]})
  }
}

// Phase 1
// Calm visuals with transition to Alice in Wonderland
// Hydra 1.1
osc(() => Math.sin(time) * cc[0] * 200, 0)
  .kaleid(() => cc[0] * 200)
  .scale(1, 0.4)
  .out()


// Hydra 1.2
osc(200, 0)
  .kaleid(200)
  .scale(1, 0.4)
  .scrollX(0.1, 0.01)
  .mult(
    osc(() => cc[0] * 200, 0)
      .kaleid(200)
      .scale(1, 0.4)
  )
  .out();

// Hydra 1.3
shape(1,0.3)
  .rotate(2)
  .colorama(3)
  .thresh(.2)
  .modulatePixelate(noise(()=>(cc[0])*2.5,.1))
  .out(o0)

//birds???
//where are we
// Hydra 1.4
shape(20,0.1,0.01)
  .scale(() => Math.sin(time)*3)
  .repeat(() => Math.sin(time)*10)
  .modulateRotate(o0)
  .scale(() => Math.sin(time)*2)
  .modulate(noise(()=>cc[0]*1.5,0))
  .rotate(0.1, 0.9)
.out(o0)


// Hydra 1.5
src(o0)
  .layer(src(o0).scale(() => 0.9 + cc[0] * 0.2).rotate(.002))
  .layer(shape(2, .5).invert().repeat(3,2).luma(.2).invert().scrollY(.1, -.05).kaleid(3).rotate(.1,.2))
  .out()

// Phase 2
// Door appears
// Tea cup appears
// Tea clinking - video
// Door opening and entering the door - video
// ------ Build Up ----------
// Furniture appears again
// Furniture rotating around the screen
// Cheshire Cat starts to appear during the buildup and fully visible on drop
// Cheshire Cat flicker with left beats

// Hydra 2.1
s3.initVideo("https://blog.livecoding.nyuadim.com/wp-content/uploads/door-animation.mov")

src(s3).scale(() => {
    const video = s3.src
    const scaleStartTime = 0
    video.currentTime = cc[0] * 2
    const scaleAmount = Math.max(1.2, (video.currentTime - scaleStartTime) * 1.6)
    return scaleAmount
  }).out()

//who's there
//creak

// Hydra 2.2
// s2.initVideo("https://blog.livecoding.nyuadim.com/wp-content/uploads/tea.mov")
// src(s2).invert().scale(1.15).out()
src(s0).invert().scale(1.15).out()

// Hydra 2.3
s2.initVideo("https://blog.livecoding.nyuadim.com/wp-content/uploads/tea.mov")
src(s2).invert().scale(0.1).rotate(() => Math.sin(time) * 0.1).kaleid(() => cc[1] * 32).out()

// Hydra 2.4
voronoi(350,0.15)
    .modulateScale(osc(8).rotate(Math.sin(time)),.5)
    .thresh(.8)
    .modulateRotate(osc(7),.4)
    .thresh(.7)
    .diff(src(o0).scale(1.8))
    .modulateScale(osc(2).modulateRotate(o0,.74))
    .diff(src(o0).rotate([-.012,.01,-.002,0]).scrollY(0,[-1/199800,0].fast(0.7)))
    .brightness([-.02,-.17].smooth().fast(.5))
    .out()

// Hydra 2.5 cat meow
//what is a cheshire cat??
s3.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/cheshire-cat.png")


src(s3).blend(
    noise(18)
    .colorama(9)
     .posterize(2)
     .kaleid(50)
    .mask(
        shape(25, 0.25)
          .modulateScale(noise(400.5, 0.5))
      )
      .mask(shape(400, 1, 2.125))
      .modulateScale(
        osc(6, 0.125, 0.05)
          .kaleid(50)
      )
      .mult(
        osc(20, 0.05, 2.4)
          .kaleid(50),
        0.25
      )
      .scale(1.75, 0.65, 0.5)
      .modulate(noise(0.5))
      .saturate(6)
      .posterize(4, 0.2)
      .scale(1.5),
    0.7
  )  
  .rotate(()=> Math.sin(time))
  .scale(() => Math.cos(time))
.out()




//transition to phase 3
src(s3).blend(
shape([4,5,6].fast(0.1).smooth(1),0.000001,[0.2,0.7].smooth(1))
.color(0.2,0.4,0.3)
.scrollX(()=>Math.sin(time*0.27))
.add(
  shape([4,5,6].fast(0.1).smooth(1),0.000001,[0.2,0.7,0.5,0.3].smooth(1))
  .color(0.6,0.2,0.5)
  .scrollY(0.35)
  .scrollX(()=>Math.sin(time*0.33)))
.add(
  shape([4,5,6].fast(0.1).smooth(1),0.000001,[0.2,0.7,0.3].smooth(1))
  .color(0.2,0.4,0.6)
  .scrollY(()=> cc[0]*2)
  .scrollX(()=>Math.sin(time*0.41)*-1))
.add(
      src(o0).shift(0.001,0.01,0.001)
      .scrollX([0.05,-0.05].fast(0.1).smooth(1))
      .scale([1.05,0.9].fast(0.3).smooth(1),[1.05,0.9,1].fast(0.29).smooth(1))
      ,0.85)
  .modulate(voronoi(10,2,2)),
  0.7
  )
  .rotate(() => Math.sin(time))
  .scale(() => Math.cos(time))
  .out()


//
shape([4,5,6].fast(0.1).smooth(1),0.000001,[0.2,0.7].smooth(1))
.color(0.2,0.4,0.3)
.scrollX(()=>Math.sin(time*0.27))
.add(
  shape([4,5,6].fast(0.1).smooth(1),0.000001,[0.2,0.7,0.5,0.3].smooth(1))
  .color(0.6,0.2,0.5)
  .scrollY(0.35)
  .scrollX(()=>Math.sin(time*0.33)))
.add(
  shape([4,5,6].fast(0.1).smooth(1),0.000001,[0.2,0.7,0.3].smooth(1))
  .color(0.2,0.4,0.6)
  .scrollY(()=> cc[0]*2)
  .scrollX(()=>Math.sin(time*0.41)*-1))
.add(
      src(o0).shift(0.001,0.01,0.001)
      .scrollX([0.05,-0.05].fast(0.1).smooth(1))
      .scale([1.05,0.9].fast(0.3).smooth(1),[1.05,0.9,1].fast(0.29).smooth(1))
      ,0.85)
.modulate(voronoi(10,2,2))
.out();

// Phase 3

//hydra 3.0
osc(18, 0.1, 0).color(2, 0.1, 2)
.mult(osc(20, 0.01, 0)).repeat(2, 20).rotate(0.5).modulate(o1)
.scale(1, () =>  (cc[0]*8 + 2)).diff(o1).out(o0)
osc(20, 0.2, 0).color(2, 0.7, 0.1).mult(osc(40)).modulateRotate(o0, 0.2)
.rotate(0.2).out(o1)

//hydra 3.1
  osc(()=> cc[0]*10,3,4) 
 .color(0,4.2,5)
 .saturate(0.4)
 .luma(1,0.1, (6, ()=> 1 + a.fft[3]))
 .scale(0.7, ()=> cc[0]*0.5) //change to * 5
 .diff(o0)// o0
 .out(o0)// o1

//hydra 3.2
  osc(5, 0.9, 0.001)
    .kaleid([3,4,5,7,8,9,10].fast(0.1))
    .color(()=>cc[0], 4)
    .colorama(0.4)
    .rotate(()=>cc[0],()=>Math.sin(time)* -0.001 )
    .modulateRotate(o0,()=>Math.sin(time) * 0.003)
    .modulate(o0, 0.9)
    .scale(()=>cc[0])
    .out(o0)

//hydra 3.3
osc(()=>cc[0]*100, -0, 1)
	.pixelate(40.182)
	.kaleid(() => cc[0] * 19 + 3)
	.rotate(()=>cc[0]*20, 0.125)
	.modulateRotate(shape(3) //change shape maybe
		.scale(() => Math.cos(time) * 2) //change to 5
		.rotate(()=>cc[0], -0.25))
	.diff(src(o0)
		.brightness(0.3))
	.out();

//hydra 3.4
osc(15, 0.01, 0.1).mult(osc(1, -0.1).modulate(osc(2).rotate(4,1), 20))
.color(0,2.4,5) //change to cc[0]
.saturate(0.4)
.luma(1,0.1, (6, ()=> 1 + a.fft[3]))
.scale(()=> cc[0], ()=> 0.7 + a.fft[3])
.diff(o0)// o0
.out(o0)// o1


//hydra 3.5
osc(49.633, 0.2, 1)
	.modulateScale(osc(40, ()=>cc[0]*0.3, 1)
		.kaleid(15.089))
	.repeat(()=>cc[0], 2.646)
	.modulate(o0, 0.061)
    .scale(()=>cc[0], ()=> 1+ a.fft[3])
	.modulateKaleid(shape(4, 0.1, 1))
	.out(o0);


//crazyy 3.5
s0.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/door.png")
s1.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/cup.png")
s2.initVideo("https://blog.livecoding.nyuadim.com/wp-content/uploads/tea.mov")

src(s2) //alternate between s0,s1,s2,s3
.blend(
  osc(5, 0.9, 0.001) //change first to cc[0]
    .kaleid([3,4,5,7,8,9,10].fast(0.1))
    .color(()=>cc[0], 4)
    .colorama(0.4)
    .rotate(()=>cc[0],()=>Math.sin(time)* -0.001 )
    .modulateRotate(o0,()=>Math.sin(time) * 0.003)
    .modulate(o0, 0.9)
    .scale(0.9))
   .rotate(() => Math.sin(time))
   .scale(() => Math.cos(time))
    .out(o0)

//end
  src(s3)
    .modulate(noise(3, 0.2))
    .modulate(noise(()=>(cc[0]+cc[1])*1,0.3))
    .blend(src(o0).scale(1.01), 0.7)
    .out(o0)

  s2.initVideo("https://blog.livecoding.nyuadim.com/wp-content/uploads/Gnarlyy.mp4")
  src(s2).blend(
    osc(5,0.9,0.01)
     .kaleid([3,4,5,7,8,9,10].fast(0.1)))
      // .colorama(0.1) //adilbek got it!!
     .out()


hush()

// Phase 3.5
// Dancing people with "Not Gradeable" text






osc(200, 0).kaleid(200).scale(1, 0.4).scrollX(0.1, 0.01).out()


s0.initCam()
src(s0).saturate(2).contrast(1.3).layer(src(o0).mask(shape(4,2).scale(0.5,0.7).scrollX(0.25)).scrollX(0.001)).modulate(o0,0.001).out(o0)
osc(15, 0.01, 0.1).mult(osc(1, -0.1).modulate(osc(2).rotate(4,1), 20)).out()


osc(13,0,1)
  .modulate(osc(21,0.25,0))
  .modulateScale(osc(34))
  .modulateKaleid(osc(55),0.1,1)
  .out()

Tidal code (Clara — phase 1 + phase 2, Jiho — phase 3):

-- phase 1
-- hydra 1.1

d13 $ ccv "20 90" # ccn "0" # s "midi" -- change from 20 90 to 10 70
d2 $ slow 1 $ s "~ hh" # gain 2
d4 $ slow 2 $ s "superpiano" # n (range 60 72 $ sine) # sustain 0.1 # room 0.5 # gain 1.2 -- start as slow 2, then $ fast 2 

d3 $ s "birds:3"
-- d3 $ "birds" -- alt between birds and birds:3

-- hydra 1.2
d1 $ n ("<c2 a2 g2>")
  # s "notes"
  # gain ((range 0.6 0.9 rand) * 1.2)
  -- # legato 1
  -- # room 0.8
  # size 0.95
  # resonance 0.5
  # pan (slow 5 sine)
  # cutoff (range 500 1200 $ slow 4 sine)
  # detune (range (-0.1) 0.1 $ slow 3 sine)

  d9 $ n ("<[c3,fs3,g3] [~ c4]>*2 <[a2,gs3,e3,b2]*3 [~ d4,fs4]>")
    # s "notes"
    # gain (range 0.6 0.9 rand)
    # legato 0.7
    # room 0.6
    # size 0.8
    # resonance 0.4
    # pan (slow 5 sine)

-- hydra 1.3

  d2 $ ccv "17 [~50] 100 ~" # ccn "0" # s "midi"
     
  d3 $ n "e5 ~ ~ a5 fs5 ~ e5 ~ ~ ~ c5 ~ ~ a5 ~ ~"
  # s "notes"
  # legato 1
  # gain 1
  -- # pan (slow 4 sine)
  # room 0.7
  # size 0.9

-- d9 silence

-- hydra 1.4
do
  d2 $ ccv (segment 8 "0 20 64 127 60 30 127 50") # ccn "0" # s "midi"
  d3 $ n "[0 4 7 12]!4"
    # s "notes"
    # gain "1"
    # legato "0.5"
    # speed "1"
    -- # room "0.8"
    # lpf 2000

-- Hydra 1.5
  d5 $ s "[~ drum]*2" -- drum *4, then *2
    # gain "1.3"
    # delay "0.3"
    # delayfeedback "0.2"
    # speed "1"

do
  d1 $ s "bd(5,8)"
    # n (irand 5)
    # gain "1.1"
    # speed "0.6"
    # lpf 600
  d2 $ struct "<t(3,120) t(3,27,120)>" $ ccv ((segment 128 (range 127 0 saw))) # ccn "0" # s "midi"
     
-- END OF PHASE 1

-- d3 silence
-- d5 silence

-- PHASE 2: alice in wonderland
-- Hydra 2.1
do
  d2 $ ccv "[[0 ~ 50 127 127]]" # ccn "0" # s "midi"
  d1 $ s "[[door:1 ~ door:2 ~ ~]]" # gain 3 
     # speed 1

d4 $ slow 2 $ s "superpiano" # n (range 60 72 $ sine) # sustain 0.1 # room 0.5 # gain 1.5

d10 $ s "alice" # gain 1

d3 $ slow 2 "mug" # gain 1.8

d10 silence

-- Hydra 2.2

do
  d2 $ ccv "[[1 0 1 0]]" # speed 1.2 # ccn "10" # s "midi"
  d4 $ slow 4
    $ s "[ [glass:3 ~ glass:5 ~]]"
    # gain 1.5
    # speed 1.2
    -- drum
    # shape (choose [0.3, 0.6])
    # room 0.4
    # delay 0.25
    -- glass
    # lpf (slow 4 $ range 800 2000 sine)
    # pan (slow 8 $ sine)

-- d10 silence

d5 $ every 3 (rev)
      $ s "space"
      # n (run 5 + rand)
      # octave "<5 6>"
      # speed (rand * 0.5 + 0.8)
      # lpf (slow 16 $ range 800 2000 sine)
      # resonance 0.4
      # orbit "d5"

-- Hydra 2.3
do
  d2 $ ccv "80 [10 ~] [30 ~] ~" # ccn "1" # s "midi"
  d1 $ s "bd bd sd bd" # gain 1.5

d3 $ slow 2 $ s "superpiano"
    # n (scale "minor pentatonic" "0 2 4 7 11" + "<12 -12>")
    # octave 5
    # sustain 8
    # legato 0.8
    # gain 1
    # lpf (slow 16 $ range 800 2000 sine)
    # room 0.7
    # delay 0.75
    -- # delayfeedback 0.8
    # speed (slow 4 $ range 0.9 1.1 sine)
    # pan (slow 16 sine)
    -- # vowel "ooh"
    # orbit "d5"

d7 $ stack [
  slow 4 $ s "pad:1.5" # gain 0.9,
  s "bass*2" # room 0.5 # gain 1.2
]

-- d1 silence

-- Hydra 2.4
do
  d6 $ s "~"
  d8 $ every 2 (|+ speed 0.2) $ slow 2 $ sound "hh*8" # gain 1.5 # hpf 3000 # pan rand
  d5 $ s "~ bass:1" # gain 2 # speed 0.5 # lpf 300 # room 0.4
  d6 $ every 2 (0.25 ~>) $
    s "~ cp"
    # gain 1.5
    # speed 1.2
  d8 $ s "hh hh hh <hh*6 [hh*2]!3>" # gain 1.5
  d5 $ s "[~ ~ bass:1]"
    # gain 1.8
    # speed 0.6
    # lpf 400
    # room 0.3
  d3 $ qtrigger $ filterWhen (>=0) $ seqP [
      (0,1, s "[bd bd] [sd hh]"),
      (1,2, s "[bd bd bd bd] [sd hh]"),
      (2,3, s "[bd bd bd bd bd bd] [sd hh]"),
      (3,4, s "[bd bd bd bd bd bd bd bd] [sd hh]")
    ] # gain (slow 4 (range 0.8 1 saw))

-- Hydra 2.5
d9 $ s "cat3" # gain 5.2
    # orbit "-1"
    # dry "1"
    # room "0"
    # delay "0"
    # shape "0"
    # resonance "0"
    # delay "0"
    # delayfeedback "0"
    # lpf "20000"

d4 $ stack [
  s "<bd sn cp hh>" # speed "1 1.5 2",
  s "808bd:4(3,8) 808sd:7(5,8)" # gain 1.1
]

d8 $ stack [
s "moog" >| note (arp "up" (scale "major" ("[0,2,4,6]") + "a5")) # room 0.4 # gain 0.7,
ccv 0 # ccn 1 # s "midi"
]

do
  d4 $ s "bd bd sd bd cp odx mt <[bd*2]!8>" # gain 1.5
  d2 $ ccv (segment 8 "0 20 64 [50 90]") # ccn "0" # s "midi"
   
d9 $ fast 2 $ s "moog" >| note (arp "up" (scale "major" ("[0,2,4,6]") + "a5")) # room 0.4 # gain 1 # squiz 0.3

-- Hydra 2.6 

do -- change from d4 to d1
  d1 $ sound "feel:2*8"
    # gain 1.9 # speed (range 1 3.5 $ saw)
    # cutoff (range 800 2000 $ sine) # resonance 0.2
    # room 0.5 # accelerate 0.5
    # sz 0.5 # crush 1
  d2 $ ccv "0 20 64 90 0 30 70 112" # ccn "0" # s "midi"

do
  d2 silence
  d3 silence
  d6 silence
  d7 silence
  d8 silence
  d9 silence

-- END OF PHASE 2

do
  d3 $ jux rev $ "bass:4*2 <bass:4 [bass*4]!2>"
    # room 0.3 # gain 5
    # shape 0.7
  d1 $ ccv "0 40 64 14 70 112" # ccn "0" # s "midi"

d4 $ iter 4 $ sound "hh*2 hh*4 hh*2 <[hh] hh*2!2>"
  # room 0.3 # shape 0.4
  # gain 1.7
  # speed (range 1.3 1.6 $ slow 4 sine)

do
  d5 $ fast 2 $ s "sine" >| note (arp "up" (scale "major" ("[2,0,-4,6]"+"<-8 4 -2 5 3>") + "f5"))
    # room 0.4 # gain 1.4
    # legato 3
    # pan (slow 8 $ sine)
  d1 $ ccv "40 12 60 25 <34 70>" # ccn "0" # s "midi"

do
  d6 $ s "arpy*4 arpy@1~ ~"
    # legato 2.5
    # up "f6 a5 c3 g6" # shape 0.7 # gain 1.3
  d7 $ s "hh*8 ~ ~ cp!2 ~"
    # gain 3 # shape 0.5  # resonance 0.5 # krush 0.3
  d1 $ ccv "55 14 20 ~ ~ 70 112" # ccn "0" # s "midi"

d8 $ s "gnarly:1@1.2"
  # cut 1 # shape 0.9 # gain 7

do
  d10 $ s "bd*2 drum*4 <sd:1 feel:16> [~bd?]"
    # gain "4.5 5 4" # shape 0.8
  d1 $ ccv "[23, 45] [45, 12] [51 90]" # ccn "0" # s "midi"

d9 $ iter 4 $ sound "{<arpy:3(4,8) arpy:5(3,8) arpy:2(7,8)>}%2"
  # n "7 32 11 6 21 17 10 3"
  # room 0.5 # speed 2 # gain 1.4 # shape 0.2

do
  d2 $ silence
  d3 $ silence
  d4 $ silence
  d5 $ silence
  d6 $ silence
  d7 $ silence
  d10 $ silence

do
  d2 $ slow 1.25 $ s "sine" >| note (arp "up" (scale "min" ("[7,5,8,3,2,7,8,3,9,5]") + "a5"))
    # shape 0.9 # gain 7 # sz 0.4
  d1 $ ccv "45 ~~ 12 ~~ 75" # ccn "0" # s "midi"

do
  d3 $ s "[bd*2, drum:2*4, <sd:4(3,8) feel:12(5,8)>, [~ bd:7?]]"
    # gain "5 4 6" # shape 0.5
    # squiz (range 1.5 3 $ slow 8 sine)
  d1 $ ccv "12 51 30 ~~ [90 37]" # ccn "0" # s "midi"

d4 $ s "[newnotes:3*2, ~ newnotes:5*4?]" # gain 3 # cut 1
    # n "-1" # shape 0.2
    # squiz (range 1 1.5 $ slow 8 sine)

do
  d5 $ s "[~, sd:3*4, [~ sd:5@2 sd:6*2]?]"
    # gain 4 # speed 1.5
    # size 0.4
  d6 $ s "[~, metal:2(5,8), ~, metal:4(3,8)]"
    # gain 1.7 # speed 0.7
    # pan (slow 16 sine) # room 0.6
  d7 $ stack [
    s "feel:2*8" # gain "1",
    s "bass:11*8" # gain "1.6" # speed "1.2" # pan "-0.5",
    s "hh:4*4" # gain "3" # speed "0.7" # pan "0.5"
    ] # krush (range 0 2 $ rand)
  d8 $ silence
  d1 $ ccv "32 15 ~ ~ 69 ~~ [15 37]" # ccn "0" # s "midi"

do
  d9 $ stack [
    s "feel:2*8" # gain "1.4",
    s "bass:11*8" # gain "1.6" # speed "1.6" # pan "-0.5",
    s "hh:4*4" # gain "3" # speed "1" # pan "0.5"
    ] # krush (range 0 2 $ rand)
  d8 $ s "gnarly:1"
    # cut 1 # shape 0.9 # gain 7 # speed 3

do
  d2 $ silence
  d3 $ silence
  d5 $ silence
  d9 $ silence

do
  once $ s "msam:2"
    # gain 5 # legato 4
  hush

  once $ s "msam:4"
    # gain 5

Aaaand here’s our final performance video!!! Hope you guys enjoy it. 🙂

Last but not least, here are some limitations we ran into and future improvements we want to make, coming from Jiho:

I've always struggled with creating impactful beat drops, and I think they're especially important in rave music because they really shape how the audience responds. Looking back, I feel I could've done a better job and spent more time developing that section. It was a similar experience with the composition project: I kept layering different sound lines because each previous version felt dull or lacking. Most of the added elements ended up contributing more to the buildup than to the drop itself, and personally I think the buildup turned out stronger than the actual beat drop. Moreover, while the integration of the first gnarly sound worked well, the ending felt too abrupt. That's partly because the idea of using the "gnarly" music was added later in the process; if I had built my music code around those gnarly beats from the start, the overall transitions would've been smoother and more cohesive. On a similar note, another area I'd like to work on is incorporating external sound files into my compositions. I feel like this is where I currently lack creativity, and watching others, including Clara, really inspired me; they were able to integrate external audio so seamlessly, and it made their pieces feel more dynamic and refined. It's something I want to explore further to expand the range and depth of my own sound work in the future.
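On that last point, here is a quick sketch for my own future reference of how external sound files typically get into a Tidal setup in the first place: custom folders (like the ones behind sample names such as "gnarly", "alice", or "msam" above) have to be loaded into SuperDirt before Tidal can call them by name. This is only my assumption about the standard SuperDirt approach, and the path below is a placeholder rather than our actual setup:

// SuperCollider, evaluated after SuperDirt has booted (placeholder path)
// each subfolder (e.g. .../gnarly/) becomes a sample name usable in Tidal as s "gnarly"
~dirt.loadSoundFiles("~/livecoding-samples/*");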

This reading unexpectedly turned out to be one of my favorites from this semester, because exploring the relationship between a musician and an artist has always intrigued me. As someone who has been closely tied to both the musical and the artistic worlds since I was young, there were moments when I was confused about which one I wanted to choose, or which was a "better fit" as a career path; to be frank, I'm still standing at that crossroads. This is probably why the reading felt so relatable and intriguing: it talks about how the boundaries between different categories of art are becoming blurred, especially with the rise of technology in both music and art, which has allowed many artists to become "multiple artists," or artists who refuse to be confined to a single category.

I found it particularly interesting how club spaces in the 1990s became the new "institutions," acting as new kinds of galleries and museums where hybrid work could get made. Reading about this also reminded me of my time in Berlin, where we learned about the underground art scenes of the early 1900s and the evolution of club culture in Germany throughout the 20th century. It was also around this time that works in which music and visual art are conceptually and technologically intertwined started to become popular, showing how digitalization didn't just provide new tools but fundamentally redefined the relationship between music and visual art, as well as between artist and audience.

It was fascinating to read about two artists who took polar-opposite approaches to what it means to "live code," in terms of where the liveness lies and what counts as live coding. I personally felt more similar to Deadmau5 because, like him, I like to have a structure or demo of what's going to happen even if it's not entirely concrete, especially for the music. Unlike Hydra, where the stakes of improvising your visuals are lower, I think your audio should have a build-up and a storytelling aspect that's clear to the audience; after all, it's much more noticeable when the different audio elements don't harmonize.

“The liveness in live coding is fulfilled through a performer’s activity in generating the sound, rather than a performer’s presence as a figurehead in a spectacle.”

I thought the above quote was quite interesting, because it sums up the writer's belief that the essence of live coding lies in the performer actually performing and thinking on the spot about the composition, rather than just hitting the play button, which is what the writer says Deadmau5's performances amount to. This made me question how much I am really "live coding," because although I've been tweaking things on the spot, I still had the majority of the code planned out before my performances. Does this mean that I wasn't fulfilling the "live" aspect of live coding either?

Finally, the writer's conclusion that live coding is a practice that opens us up to the "unbounded exploration of the musical potential of materials" made me realize that one of the most important mindsets to have in live coding is not being afraid of making mistakes, which are bound to happen, especially if I respect live coding's liveness and improvise more on the spot.

Here’s the YouTube link to my demo.

Here’s also my Hydra code…

//hydra

let p5 = new P5()
s0.init({src: p5.canvas})
// in a browser you'll want to hide the canvas
p5.hide();

// no need for setup
p5.noFill()
p5.strokeWeight(20);
p5.stroke(255);

let circlePositions = [
  { x: p5.width / 4, y: p5.height / 2, size: 300 }, // First circle
  { x: (p5.width / 4) * 3, y: p5.height / 2, size: 300 } // Second circle
];

p5.draw = () => {
  p5.background(0);

  // first circle
  p5.ellipse(circlePositions[0].x, circlePositions[0].y, circlePositions[0].size, circlePositions[0].size);

  // second circle
  p5.ellipse(circlePositions[1].x, circlePositions[1].y, circlePositions[1].size, circlePositions[1].size);
}
p5.draw = ()=>{
  p5.background(0);
  if (cc[1]==1){
    p5.ellipse(p5.width/2,p5.height/2,600*cc[0]+300*p5.noise(cc[0]),600*cc[0]+300*p5.noise(cc[0]));
  } else {
    p5.ellipse(p5.noise(cc[0]*2)*p5.width,cc[0]*p5.height,300,300);
  }
}

src(s0).modulate(noise(3, 0.6), 0.03).mult(osc(1, 0, 1)).diff(src(o1)).out()
src(s0).modulate(noise(2, 0.9), .3).mult(osc(10, 0, 1)).diff(src(o1)).out()
src(s0).modulate(noise(5, 5), .9).mult(osc(80, 30, 100)).diff(src(o1)).out()

// feedback effects --> .1 - .6, osc 0 - 10
src(s0).modulate(noise (4, 1.5), .6).mult(osc(0, 10, 1)).out(o2)
src(o2)
  .modulate(src(o1).add(solid(0, 0), -0.5), 0.005)
  .blend(src(o0).add(o0).add(o0).add(o0), 0.1)
  .out(o2)
  render(o2)

hush()

…and my Tidal!

//tidal

d3 $ s "superpiano" >| note (scale "major" ("[7 11 2 4 7 21 4 2]") + "15") # room 0.4
d1 $ juxBy 0.6 (slow 8) $ sound "bd cp sn hh:3" # gain 1.5
d4 $ juxBy 0.6 (slow 3) $ s "blip" # gain 1.7

-- 1 to 4, then to 8
d2 $ whenmod 16 8 (# ccv ((segment 128 (range 0 127 saw)))) $ struct "<t(8,8)>" $ ccv ((segment 128 (range 40 120 rand))) # ccn "0" # s "midi"

The theme I had for this composition project was recreating nature, or more specifically, a tropical jungle. The reason was pretty simple — I grew up in Taiwan, where we were always graced with sunlight and a hot, tropical climate, and this warm, happy feeling was what I missed the most during my time in Berlin and New York last year, so I wanted to recreate this atmosphere for all of us to enjoy. I also thought the idea of attempting to portray nature through technology — which is anything but natural — would be fun. 🙂

After a lot of fooling around, trying various audio samples, and playing with how the instruments would harmonize, I created a composition with a bright, happy, and somewhat dreamlike vibe, both in its visuals and in its audio. I first picked out some sounds I found myself drawn to, such as the pelog scale, the sine instrument, claps, and beats with the occasional unexpected, irregular rhythm, and then, once I started assembling them, I tweaked parts so they harmonized with each other.

Creating a build-up, as well as deciding what my big "drop" was going to be, was a bit more difficult because I felt pressure to make them catchy and impactful. While I wanted them to be pleasant to most of the audience's ears, I also didn't want them to be too generic, since there seem to be certain formulas for both the build-up and the drop that a lot of people reuse in their compositions. I did end up using the classic acceleration of rhythm and pitch for my build-up, but my drop strayed a long way from a classic "beat drop": rather than an overpowering bass or beat, I wanted a fuller, richer harmony of all the instruments I'd been using up to that point, with added sounds like birds chirping and multiple sine melodies mimicking the songbirds.

For my visuals, I used a lot of vibrant colors and waves/rounded lines/circles. I also imported an image of the jungle at the end to match my beat drop as a “big reveal” of the composition’s final destination.

Here’s the video:

Here’s the Tidal code:

d1 $ s "~ ~ cp ~" # gain 1
d2 $ ccv "<127 50 127 50>" # ccn "0" # s "midi"

d1 $ s "bd [~bd] cp ~" # gain 1
d2 $ ccv "17 [~50] 100 ~" # ccn "0" # s "midi"

d5 $ scramble 4 $ s "sine" >| note ((scale "minor" "<[-4 [10 3]]>"))
d2 $ scramble 4 $ ccv "10 127 5 30" # ccn "0" # s "midi"

d1 $ s "bd bd sd bd cp odx mt <[bd*2]!8>" # gain 1
d2 $ struct "t(4,8)" $ ccv ((segment 128 (range 127 0 saw))) # ccn "0" # s "midi"

d5 $ struct "<t(2,4)>" $ s "sine" # note ((scale "pelog" "c'maj f'maj")) # room 0.3
d2 $ ccv (segment 8 "0 84 10 127") # ccn "0" # s "midi"

d7 $ s "sine*16" # note ((scale "pelog" "-5 .. 10"))
d2 $ ccv (segment 8 "0 20 64 127 60 30 127 ~") # ccn "0" # s "midi"

d3 $ struct "<t(3,8) t(3,8,1)>" $ s "sine" # note "<[1 3 1] [5 13 10]>" # room 0.4
d2 $ struct "<t(3,120) t(3,27,120)>" $ ccv ((segment 128 (range 127 0 saw))) # ccn "0" # s "midi"

d5 $ s "cp*4" # gain 1
d2 $ struct "<t(4,8)>" $ ccv ((segment 128 (range 127 0 saw))) # ccn "0" # s "midi"

d1 $ s "[bd bd bd bd] [sd cp] [bd bd] [cp bd]" # gain 1.1
d2 $ ccv "[127 20 70 0] [100 10] [80 ~] [0 ~]" # ccn "0" # s "midi"
d3 $ s "hh hh hh <hh*6 [hh*2]!3>" # gain 1.5

-- d7 $ s "sine*8" # note "[[1 1] 5 8 10 5 8]" # room 0.2

-- BUILDUP

do {
  d9 silence;
  d7 silence;
  d8 silence;
  d9 $ qtrigger $ filterWhen (>=0) $ s "808cy" <| n (run 16); -- 25 808 cymbals
  d10 $ qtrigger $ filterWhen (>=0) $ seqP [
    (0,1, s "sine*2"),
    (1,2, s "sine*4"),
    (2,3, s "sine*8"),
    (3,4, s "sine*16")
  ] # gain (slow 4 (range 0.8 1.2 saw)) # speed (slow 4 (range 2 4 saw));
  d1 $ qtrigger $ filterWhen (>=0) $ seqP [
      (0,1, s "[bd bd] [sd cp]"),
      (1,2, s "[bd bd bd bd] [sd cp]"),
      (2,3, s "[bd bd bd bd bd bd] [sd cp]"),
      (3,4, s "[bd bd bd bd bd bd bd bd] [sd cp]")
  ] # gain (slow 4 (range 0.8 1 saw));
  d3 $ qtrigger $ filterWhen (>=0) $ seqP [
    (0, 1, s "cp:2*2"),
    (1, 2, s "cp:2*4"),
    (2, 3, s "cp:2*8"),
    (3, 4, s "cp:2*16")
  ] # room 0.3 # hpf (slow 4 (1000*saw + 100)) # speed (slow 4 (range 1 2 saw));
}

-- drop
nature_go_Crazy = do
  d8 $ qtrigger $ filterWhen (>=0) $ s "blip*8" # gain 1 # note "[[6 8] 13 10 18 6 10]"
  d3 $ qtrigger $ filterWhen (>=0) $ struct "<t(3,8) t(3,8,1)>" $ s "sine" # note "<[1 3 1] [5 13 10]>" # room 0.4
  d12 $ qtrigger $ filterWhen (>=0) $ s "sine*16"
      # note "[[15 17 10] [8 20] [27 8] 25]"
      # room 0.4
      # gain "0.8"
      # pan "<0.2 0.8 0.5>"
  d1 $ qtrigger $ filterWhen (>=0) $ s "[bd bd sd bd] [bd sd] [bd cp] [sd bd]" # gain 1
  d9 $ qtrigger $ filterWhen (>=0) $ s "birds"
  -- d14 $ slow 2 $ s "arpy" <| up "c'maj(3,8) f'maj(3,8) ef'maj(3,8,1) bf4'maj(3,8)"
  -- d15 $ s "bass" <| n (run 4) -- four short bass sounds, nasty abrupt release
  -- d15 $ slow 3 $ s "bassdm" <| n (run 24)
  d14 $ qtrigger $ filterWhen (>=0) $ s "can" <| n (run 8) # gain 2
  d16 $ qtrigger $ filterWhen (>=0) $ s "<808lt:6(3,8) 808lt:6(5,8,1)>" <| n (run 8) # squiz 2 # gain 2 # up "-2 -12 -14"
  d10 $ qtrigger $ filterWhen (>=0) $ s "[bd bd cp bd bd cp bd bd] [sd cp]"

d2 $ ccv "127 ~ 70 20 [120] [40] [90]" # ccn "0" # s "midi"

nature_go_Crazy

d8 silence
d1 silence
d2 $ ccv "20 ~ 80 40 [120 ~] [20 ~] 127" # ccn "1" # s "midi"
d16 silence

d10 silence
d3 silence
d14 silence
d2 $ ccv "0 ~ [50 127] ~ 20 ~" # ccn "0" # s "midi"
d5 silence
d9 silence

hush

And here’s the Hydra code:

//start!!

// first shape
shape(999, 0.3, 0.01).modulate(noise(2, 0.5)).luma(()=>cc[0],0.0).modulate(noise(()=>(cc[0]+cc[1])*1,0.3)).out(o0)

// 2nd
osc(10, 0.1, 1)
  .modulate(noise(2, 0.5))
  .mask(shape(999, 0.3, 0.3))
  .scale(1.5)
  .luma(()=>cc[0],0.0).modulate(noise(()=>(cc[0]+cc[1])*1,0.3)).out(o0)

// 3rd shape; later add .repeat(2,2), and then change to .repeat(2,2)
osc(10, 0.1, 1).modulate(noise(20, 0.9)).mask(shape(99, 0.2, 1)).luma(() => cc[0] / 127, 0.2).modulate(noise(()=>(cc[0]+cc[1])*1,0.3)).repeat(2,2).rotate(() => cc[2] * 0.1, 0.6).out(o0)

// 5th shape --> at first noise (30), then change to 10
osc(10,0.1,1).rotate(2).layer(osc(30,0,1)).modulate(noise (10,0.03),.5).luma(()=>cc[0],0.0).modulate(noise(()=>(cc[0]+cc[1])*1,0.3)).out(o0)

// 6th
osc(10, 0.1, 1).modulate(noise(2,0.5).luma(0.4,.03)).modulate(noise(()=>(cc[0]+cc[1])*1,0.3)).out(o0)

//7th; change noise 3 to 9, osc 20 to 40, and add  .posterize(5, 0.5)
osc(40, 0.2, 1)
  .kaleid(4)
  .modulateScale(noise(9, 0.5), 0.2)
  .blend(noise(3, 0.5))
  .modulate(noise(()=>(cc[0]+cc[1])*1,0.3))
  .out(o0)

      shape(2, 0.6, 0.4)
        .repeat(3, 3)
        .modulateScale(noise(2, 0.1))
        .mult(gradient().hue(0.5))
        .modulate(noise(()=>(cc[0]+cc[1])*1,0.3))
        .out(o0)

        s0.initImage("https://i.pinimg.com/736x/83/8c/d6/838cd6e2a27f7887e49d869f1857742c.jpg")

        noise(2, 0.5)
          .contrast(2)
          .modulate(noise(2, 0.1))
          .brightness(-0.3)
          .colorama(0.1)
          .diff(o0, 0.1)
          .modulate(noise(()=>(cc[0]+cc[1])*1,0.3))
          .out(o0)

  src(s0)
  .modulate(noise(4, 0.2))
  .modulate(noise(()=>(cc[0]+cc[1])*1,0.3))
  .kaleid(5)
  .out(o0)

  src(s0)
    .modulate(noise(2, 0.2))
    .modulate(noise(()=>(cc[0]+cc[1])*1,0.3))
    .modulate(osc(()=>(cc[0]*10+5), 0.1).rotate(()=>(cc[1]*0.5)))
    .out(o0)

  src(s0)
    .modulate(noise(3, 0.2))
    .modulate(noise(()=>(cc[0]+cc[1])*1,0.3))
    .blend(src(o0).scale(1.01), 0.7)
    .out(o0)

Here's my progress so far with my composition piece! I'm pretty satisfied with my visual progression, but I'm thinking of adding many more layers to my audio, because I didn't have enough time to experiment with and develop it before last Thursday's class.

I also had a question about syncing Tidal and Hydra: is there a way for me to match the audio to the visuals without setting a specific rhythm in front of ccv/ccn? What I'm doing in my code is d2 $ struct "t(3,8)" $ ccv ((segment 128 (range 127 0 saw))) # ccn "0" # s "midi", because I found that the easiest way to do it, but I realized it might not be necessary to write such a long, complicated line just for the ccv/ccn part.
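One rough workaround I've been considering (just a sketch of an assumption on my part, not something confirmed in class) is to write the sound pattern and the ccv pattern together in a single stack on one connection. It doesn't remove the rhythm from the ccv line, but it keeps the two rhythms side by side so they're easier to keep in sync:

-- sketch: keep the drums and the MIDI values in one place so they stay in step
d1 $ stack [
  s "bd [~ bd] cp ~" # gain 1,
  ccv "127 [~ 50] 100 ~" # ccn "0" # s "midi"
]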

Thank you, professor! And happy belated birthday. 🙂

“Composers sculpt time. In fact, I like to think of my work as time design.” — Kurokawa

Whether it's simultaneously stimulating multiple sensory systems or mixing rational and irrational realities and emotions, one of the biggest themes in Kurokawa's compositions seems to be tying together, all at once, elements and worlds that appear completely different from one another; and to me, it's hard to imagine what kind of unexpected harmony his technique might bring. Kurokawa seems to take things that already exist and are accessible to him (i.e. nature's disorderly ways, hyper-rational realities) and give them a twist that acts as a "shock" factor, provoking curiosity and confusion in the recipient, which I thought was similar to how other artists draw inspiration for their works. However, I also came to realize that a lot of these other artists focus on producing visual works, such as photographs, paintings, and installations, which made me wonder how sound engineers and composers can apply this "mixing reality with hyper-reality" to their work; I imagine it might be more difficult for them, because the collision of the two worlds seems much more prominent, and thus easier to read, in visual artworks than in audio.

I found his remark about how the computer acted as a gateway for him into graphic design and the "art world" fascinating, because this was one of the discussions we had in class a few weeks ago about whether the technology tools we have these days make it easier or harder for us to create art and music. Because I had been inseparable from more "traditional" techniques such as painting and sculpting since childhood before transitioning into "tech-y" art, I thought creating work with technology was harder and more limiting for artists like me, since learning about computers always felt like a scary realm. However, I can see how, for many people, it can be the exact reverse, especially if they didn't have any background or experience in creating artwork or music. Regardless of which circumstance you relate to more, I think this new relationship between technology and art opens up entirely new fields to both sides and allows us to expand our creativity and explore all kinds of possibilities in a much broader way.