We did it…

Our concept was inspired by the musical Wicked. We wanted to create a piece that was playful but also showcased our personality. This project encapsulates not just the skills we learnt across this course, but also how we learnt to improvise efficiently and stay on track with our vision.

Ziya and Linh were in charge of the visuals for this performance. Since we were inspired by the musical “Wicked”, we wanted to develop a matching theme for the visuals, including images from “Wicked” itself and a broader theme of witches. As the audio starts at a simple, slow pace, we used simple patterns with small changes driven by MIDI values to match the sample. From the start, we knew we wanted to weave the musical itself into our performance, as we were big fans of it, but we did not want to make it entirely Wicked-centered; instead, we thought of telling the story of Wicked through campus cats. We decided we would take pictures and videos and edit them to relate to the Wicked theme. However, while executing this, we quickly grew sick of those images and decided to draw images directly from the Wicked musical itself.
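As a rough illustration of that MIDI-to-visual mapping (a sketch in plain JavaScript, not our performance code; the function names are made up), the ccv values Tidal sends on a ccn channel arrive in Hydra's cc[] array scaled from the MIDI range 0–127 down to 0–1:

```javascript
// Tidal sends ccv values in the MIDI range 0-127 on a ccn channel;
// on the Hydra side they arrive in a cc[] array normalized to 0-1,
// which arrow functions like () => cc[0] re-read on every frame.
const normalizeCC = (value) => Math.min(Math.max(value, 0), 127) / 127;

// Driving a visual parameter from a cc value, in the spirit of
// .scale(0.6, () => cc[0] * 1): a base amount modulated by the controller.
const scaleFromCC = (raw, base = 0.6) => base * normalizeCC(raw);

console.log(normalizeCC(0));   // → 0
console.log(normalizeCC(127)); // → 1
console.log(scaleFromCC(127)); // → 0.6
```

Because the arrow function is re-evaluated every frame, the visual parameter stays locked to whatever pattern Tidal is currently sending, which is what made the "simple changes" follow the sample.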

One of our prominent challenges was working on transitions, particularly the part where we switched between two images. We found that we had to be very careful about which functions and details to include, and at what time. Furthermore, we had to listen carefully to the sound to ensure that the beat drop and the overall transitions stayed in sync. One strategy we adopted was layering the first and second visuals on top of each other for the transition; then we could simply fade out the first visual to reveal the second. We also decided to change the visuals significantly after the drop, as the musical color and timbre change at that point.
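Numerically, that fade-out is just a linear crossfade. A minimal sketch (plain JavaScript, illustrative names, not our performance code) of what a blend amount such as the () => cc[6] we used computes per pixel:

```javascript
// The layered-transition idea as arithmetic: both visuals are rendered,
// and a single fade parameter t (0 -> 1) moves the mix from entirely the
// first visual to entirely the second.
const crossfade = (first, second, fade) => {
  const t = Math.min(Math.max(fade, 0), 1); // clamp so the mix never overshoots
  return first * (1 - t) + second * t;
};

console.log(crossfade(1, 0, 0)); // → 1 (only the first visual)
console.log(crossfade(1, 0, 1)); // → 0 (first visual fully faded out)
```

Driving the fade parameter from a cc value meant the reveal could be triggered exactly on the beat from the Tidal side.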

Another challenge was syncing with the sound so that the audience could see the relationship between the changes in the visuals and the audio. As Linh did not have extensive experience in music, she could not always tell by ear when the audio was changing. Therefore, we asked Luke and Rashed to signal us whenever they were moving to a new section so that the visuals could adapt to the change.

Staying on theme was important to us, and we had two: Brat and Wicked. Somewhere in a corner of TikTok this crossover subculture exists, so we decided to bring it to the stage at NYUAD. This was exemplified through the sounds we used, such as “365”, sampled from Charli XCX, the originator of Brat. Brat also fit well with Wicked, as both are prominently associated with the colour green: Brat with neon green, Wicked with a darker green. Colour was just as important to us, and in Wicked there are two main colours, green and pink, representing the opposing sides of Glinda and Elphaba. Hence, throughout our entire performance we referenced these two colours, hopefully in a manner that did not seem too repetitive.

voronoi(100, 0.15) //shape(2,0.15)
  .thresh(0.8)
  .modulateRotate(osc(10), 0.4, () => cc[0]*50) // cc
  .thresh(0.5)
  .diff(src(o0).scale(1.8))
  .modulateScale(osc(10) // cc
  .modulateRotate(o0, 0.74))     
  .diff(src(o0))
  .mult(osc(()=>cc[0], 0.1, 3))
  .out()

hush()

// VIDEO SECTION
s0.initVideo("https://blog.livecoding.nyuadim.com/wp-content/uploads/the-bratty-vid.mp4")
p5 = new P5()
vid = p5.createVideo("https://blog.livecoding.nyuadim.com/wp-content/uploads/the-bratty-vid.mp4");
vid.size(window.innerWidth, window.innerHeight);
vid.hide()
p5.draw=()=> {
  let img = vid.get();
  p5.image(img, 0, 0, p5.width, p5.height); // redraws the video frame by frame in p5
}
s0.init({src: p5.canvas})
vid.play()
src(s0).out()

s5.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/wckd-scaled.png")
src(s5)
  //.modulateRotate(osc(10), 0.4, () => cc[0]*50) // cc
  .scale(0.6,() => cc[0]*1)
  .scrollX(2, 1)
  .out()

hush()

s0.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/witch-hat.png")
s1.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/oz-img.png")
s2.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/witch-kingdom.png")
s3.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/nessarose-cat.png")

//  -- HAT SECTION
src(o0)
  .layer(src(s0)
  .add(o1)
  .scale(()=>0.5 + cc[2])
  )
  .out(o1)
render(o1)

hush()

render()


// o3 -> o0 -> scale -> pixelate -> ccActual
src(s2)
  .diff(src(s1).diff(src(o3).scale(()=>cc[0])))
   .diff(src(o1))
  // .blend(src(s1), ()=>ccActual[4])
  // .diff(src(o0))
  // .modulateRotate(o0)
  // .scale(() => cc[0]*2)
  .out(o3)
render(o3)

hush()

s0.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/boq-img.png")
s3.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/nessarose-cat.png")

// look glinda pt 2 
src(s3)
  .scale(()=>cc[5]/2)
  .blend(
        src(s0).invert().luma(0.3).invert().scale(0.5)
        .rotate(()=> (cc[2] - 0.5)* 50 * 0.02)
        .scale(()=>cc[3]*0.5)
        //.modulateScale(osc(5, 0.1), () => cc[0])
    , ()=>cc[6])
  .out()
src(o2)
  .layer(src(o0))
  .out(o1)
render(o1)

render()
hush()

////////////////////////////////
let p5 = new P5()
let lastCountdown = null;
let ellipses = [];
p5.hide();
s4.init({src: p5.canvas})
p5.noFill()
p5.strokeWeight(20);
p5.stroke(255);
p5.draw = () => {
  p5.background(0);
  p5.fill(255);
  p5.textAlign(p5.CENTER, p5.CENTER);
  p5.textSize(200);
  // Get the current CC value
  let ccValue = 1; // or cc[0] if it's from another source
  // Decide which text to display based on the CC value
  if (ccValue == 1) {
    p5.text("wicked",cc[0]*p5.width, p5.noise(cc[0]*2)*p5.height);
  }
}
src(s4).mult(osc(10,0,3)).modulate(voronoi(10, 0.5, 2))
  .luma(0.1)
      .repeat(()=>cc[2]*10, ()=>cc[2]*10)
  .out(o4)
render(o4)

/// NEW PROPOSED VISUALS
a.show()
a.setBins(8)
a.setSmooth(0.8)  
solid(1, 0, 1) // pink
    .mask(
      shape(999, 0.5, 0.5)
        .scale(() => a.fft[1] + 0.2)
         .scrollX(-0.3) 
    )
    .layer(
      solid(0, 1, 0.5) // green
        .mask(
          shape(4, 0.5, 0.5)
            .scale(() => a.fft[1] + 0.2)
            // .scrollX(0.3)
        )
    )
    // .modulate(voronoi(999,3),0.8)
    // .modulatePixelate(noise(55,0.5))
    // .modulate(noise(0.9, 0.1))
    .out()

hush()

In terms of the audio, we initially wanted to combine the idea of campus cats with telling the story of Act 1 of Wicked the musical. After further experimenting, we realized we would have to split our performance into nine different sections, one for each song in Act 1, so we abandoned the campus cats and put our energy into three main songs from Act 1: The Wizard and I, What Is This Feeling?, and Defying Gravity. However, Rashed could not handle making the performance about one singular thing, because he likes, as he says, “mixing things together that don’t really make sense but they somehow also do make sense”, so he suggested adding Brat. The thing is, we did not know how to add that element, because it is an entirely different concept. Then Rashed found a clip of Abby Lee Miller saying “Oh, that sounded really bratty”, and after further discussion we decided that clip would be the perfect transition from Wicked to Brat.

But that was not enough. Rashed wanted to add more and brought up the idea of using “Crazy” by Le Sserafim, because he also likes voguing and the song would reach the K-pop lovers as well as the gaming community, since it has been used in various games and edits, so we added it. Rashed suggested adding one more thing from a Nicki Minaj song, but he wanted to make it Wicked-themed; we said yes on the condition that he censor the one curse word, to which he agreed. After further discussion, we decided that Rashed and Ziya would say/sing the first part of the song “What Is This Feeling?” to add a humorous aspect, and that Rashed would do the Nicki Minaj part and the ending war cry, which is a reference to Cynthia Erivo’s Target commercial.

At first, when we approached the composition, we were not sure how it would sound. We knew the songs from Wicked are already very theatrical, professionally composed and sung; what we came up with was our interpretation, our own twist on the music. We worked on the composition section by section: intro, build-up, drop, bridge, interlude, and final ending. Starting out, we let every idea that came across our minds be realized in code, and in our first attempt we ended up with a composition that ran roughly 7–8 minutes. After many more rehearsals, we realized that every section seemed to exist on its own terms and didn’t connect much to the previous or next one, which was a little frustrating given how good each section sounded on its own.

We worked a lot on the transitions between sections. After multiple rehearsal attempts, we realized the main issue with the transitions was the sonic palette itself: we were using too many different samples, with almost every pattern using a distinct one. We figured that a good way to fix this was to narrow down the number of samples used, so we replaced the samples in a few patterns with ones we already had, or reused a few of them. Specifically, putting patterns that share the same sample next to each other helped a lot with smoothing out the transitions. The lesson learned is that simplicity is better than complexity. At first, each section had a lot of sounds stuffed together, but as we progressed we had to cut down the sounds and patterns, either combining a few or removing some completely. Critical thinking and feedback from the visual team, our classmates, and Professor Aaron helped us reflect on the composition. Cleaning up and tightening the composition took a lot of time, because we had to rearrange, add, and remove patterns here and there.
In addition to cleaning up, we also had to keep the filters and effects consistent. We ran into a few issues, but things eventually got sorted out thanks to Professor Aaron’s help. One final thing: even though it is live coding, we also had a good time composing the music with code and mixing the sounds together. I wish we had more time to develop the live-composing skill.

Overall, we are proud of our final composition and of the way we executed our idea in a unique yet smooth manner, to the point that the audience enjoyed it too.

Tidal code:

setcps(135/60/4) 

do
  once "loath"
  p "background" $ slow 2 $ ccv ((segment 10 (range 0 127 saw))) # ccn "0" # s "midi"
        
once "loath:5"

do
  resetCycles
  d11 $ loopAt 4 $ s "360b:3"
  p "360b visuals" $ ccv "20 90 40" # ccn "0" # s "midi"
     
hush

--blonde--

do -- evaluate the hat section
  d1 $ s "gra" # legato 1 -- add in scrollX
  p "hat" $ ccv "50 65" # ccn "2" # s "midi"

d2 $ "hh*8"
   
   --

do
  d3 $ fast 2 $ n "1*2" # s "bd" # amp 1 -- comment out diff    
  p "background" $ fast 4 $ ccv ((segment 10 (range 0 127 saw))) # ccn "0" # s "midi" -- change voronoi to shape

do
  d3 $ fast 2 $ n "0 1*2 2 1*2" # s "bd" # amp 7
  p "switch" $ ccv "0 1" # ccn "4" # s "midi"  -- comment out the blend


d4 $ s "909(5,16)" 

do -- do the shape NOT BLEND -- EVALUATE PIXELATE SECTION
  d5 $ s "bass1:11*4" # speed "2" # gain 1 # cutoff "70"
  --p "pixelate" $ ccv ((segment 4 (range 0 100 saw))) # ccn "0" # s "midi"
  p "pixelate" $ ccv "80 100 120 127" # ccn "0" # s "midi"

--


d6 $ swingBy (1/3) 4 $ sound "hh:13*4" # speed "0.5" # hcutoff "7000" # gain 1

d7 $ jux rev $ fast 0.5 $ s "crzy:6" # gain 0.6 # legato 1 


hush

d1 silence    
d2 silence
d3 silence
d4 silence
d5 silence
d6 silence
d7 silence
d8 silence

hush

-- build-up and beatdrop --


lookGlinda = do 
  d1 $ qtrigger $ filterWhen (>=0) $ 
    seqP 
      [ (0, 1, s "bd*4"  # room 0.3)
      , (1, 2,  s "bd*8"  # room 0.3)
      , (2, 3,  s "bd*16" # room 0.3)
      , (3, 4,  s "bd*32" # room 0.3)
      ]
      # hpf (range 100 1000 $ slow 4 saw)
      # speed (range 1 4 $ slow 4 saw)
      # gain 1.2
      # legato 0.5
  p "popkick visual" $ qtrigger $ filterWhen (>=0) $ 
    seqP
      [ (0, 1,  ccv "30 60 90 120" # ccn "0" # s "midi")
      , (1, 2,  ccv "15 30 45 60 75 90 120" # ccn "0" # s "midi")
      , (2, 3,  ccv "30 10 20 40 50 60 70 80 90 100 110 120 10 30 60" # ccn "0" # s "midi")
      , (3, 4, ccv ((segment 32 (range 0 127 saw)))  # ccn "0" # s "midi")
      ]
  d2 $ qtrigger $ filterWhen (>=0) $ 
    seqP 
      [ (0, 4, stack 
          [ s "~ cp" # room 0.9
          , fast 2 $ s "hh*4 ~ hh*2 <superchip*2>"
          ])
      ]
      # room 0.4
      # legato 1
      # gain (range 1 6 rand)
      # speed (range 1 2 $ slow 4 saw)
  d3 $ qtrigger $ filterWhen (>=0) $ 
    seqP 
      [ (4, 5, s "boq:1" # room 0.3 # gain 2) ]


lookGlinda


-- LOOK GLINDA P2 --
do
  lookGlinda
  p "disappear" $ qtrigger $ filterWhen (>=0) $ 
      seqP 
        [ (0, 4, ccv "0" # ccn "6" # s "midi") ]
  p "disappearcat" $ qtrigger $ filterWhen (>=0) $ 
      seqP 
        [ (0, 4, ccv "0" # ccn "5" # s "midi") ]
  p "me" $ qtrigger $ filterWhen (>=0) $ 
    seqP 
      [ (4, 5, ccv "[0 40 80 100 120] ~" # ccn "5" # s "midi") ]
  p "boq" $ qtrigger $ filterWhen (>=0) $ 
    seqP 
      [ (4, 5, ccv "0 127" # ccn "6" # s "midi") ]

once $ "boq:1"

d11 silence


-- beat drop --
hush

-- change to :1 --
do  -- uncomment the rotate
  d1 $ fast 1 $ s "crzy" # legato 1 # gain 1.5
  p "rotate" $ ccv "100 20" # ccn "2" # s "midi"

d1 silence

-- Act like witches, dress like crazy --

d5 $ fast 2 $ sound "bd:13 [~ bd] sd:2 bd:13" # krush "4" # gain 2

do
  setcps(120/60/4)
  resetCycles
  d5 silence
  d1 $ loopAt 4 $ s "crzy:4" # gain 1.2
  p "rotate" $ ccv "100 20" # ccn "2" # s "midi"

   
hush
d1 silence
d2 silence


d3 silence
hush


-- LOOK AT GLINDA PART 2

do --comment out scale
  d4 $ slice 32 "2" $ sound "twai:1" # gain 1
  p "scale" $ slow 2 $ ccv "100 20" # ccn "3" # s "midi"

d6 $ striate 8 $ s "ykw" # legato 1 # gain 1.2


do
  d7 $ fast 0.5 $ s "oiia" # gain 1.1 # speed 0.8
  p "shape-popular" $ fast 1 $ ccv "40 120 40" # ccn "0" # s "midi"




-------------------------------------------------------------------------

hush


do
  d10 $ sound "bd:13 [~ bd] sd:2 bd:13" # krush "4" # gain 1.8
  p "d10 sound" $ ccv "124*3 [~ 10] 10*2 30*13" # ccn "2" # s "midi"

d10 silence
    
d12 $ s "gra"

hush


d1 silence
d2 silence
d3 silence
d4 silence
d5 silence
d6 silence 
d7 silence
d8 silence
d9 silence
d10 silence
hush

--ENDING VORONOI-- 

do 
 d12 silence
 once $ s "chun"


-- defying gravity --
once $ s "defy:2" # gain 1.5

d1 $ loopAt 1.2 $ "defy:6" 

hush
     
d2 $ fast 2 $ sometimes (|+ n 12) $ scramble 4 $ n "<af5 ef6 df6 f5> df5 ef5 _" # s "superpiano" # legato 2
# pitch2 4
--change 4 to 8
# pitch3 2
# voice 0
# orbit 2
# room 0.1
# size 0.7
# slide 0
# speed 1
# gain 1.2
# accelerate 0
# cutoff 200

d3 $ slow 4 $ n "af5 ~ ef5 ~ df5 ~ f5 ~"
  # s "supersaw"
  # gain 0.6
  # attack 0.2
  # sustain 2
  # release 3
  # cutoff 800
  # room 0.9
  # size 0.8

d4 $ n "<[0 ~ 1 ~][~ 0 1 ~]>" # s "tink:4"


do
  d5 $ slow 2 $ sound "superpiano:2" <| up "af5 af5 ef6 df6 ~*4 f5 af5 ~*24 df5 ~*2 f5 ef5 ~*12" # gain "0.6" # room "0.9"
  p "endingpiano" $ slow 2 $ ccv "30 30 40 35 ~*4 50 50 ~*24 30 ~*2 50 40 ~*12" # ccn "0" # s "midi"

--d5 $ sound "superpiano:2" <| up "g5 f6 ~ [e6 c6]"

d5 $ slow 2 $ sound "superpiano:2" <| up "af5 af5 ef6 df6 ~*4 f5 af5 ~*12 df5 ~*2 f5 ef5 ~*8" # gain "0.6" # room "0.9"
   
hush
   
once $ s "defy:5"

https://drive.google.com/file/d/1ZPple_A4kVd4Ttdrza2H5VuWWIC72h00/view?usp=sharing

Visuals (Linh & Ziya)

We wanted to start with a simple pattern, a circle, and then add more layers on top of it. Starting from one circle, we tried simple changes such as scale and modulateScale. We also wanted to add feedback to the visual, so we added another layer of o0 to the screen. Finally, because two of us were working on the visuals, we each created different visuals and used mult to combine them into the final output. Ziya decided to work with the ccn and ccv values to add a more dynamic feel to the performance, but overall our aim was to synthesize all our parts into one.

shape(200, 0.5, 1.5)
  .scale(0.5,0.5)
  .color([0.5, 2].smooth(1),()=>cc[1], ()=>cc[0])
  .repeat(2,2)
  .modulateScale(osc(3,0.5), -0.6)
  .add(o0, 0.5)
  .scale(0.9)
  .out()
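The `.add(o0, 0.5)` feedback layer in the block above can be modeled as a scalar recurrence. A minimal sketch (plain JavaScript, hypothetical function name) showing why earlier frames persist as decaying trails:

```javascript
// A scalar model of Hydra feedback (illustrative, not our performance code):
// each frame's output is the new input plus a scaled copy of the previous
// output buffer (o0), the way .add(o0, 0.5) keeps trails of earlier frames.
const feedbackFrame = (current, previous, amount = 0.5) =>
  current + previous * amount;

// Three frames of a constant input: earlier frames persist but decay.
let out = 0;
for (const frame of [1, 1, 1]) out = feedbackFrame(frame, out);
console.log(out); // → 1.75  (1 + 0.5 + 0.25)
```

Each pass through the loop halves the contribution of everything seen before, so the trail fades geometrically instead of accumulating forever.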

osc(5, 1, 90)
.kaleid(99)
.modulate(noise(1.9, 0.1))
.color(0.8,0.9,0.9)
.brightness(0.5)
.out(o1)

render()

render(o2)

voronoi(100, 0.15)
  .modulateScale(osc(8).rotate(Math.sin(time)), .5)
  .thresh(0.8)
  .modulateRotate(osc(7), 0.4)
  .thresh(0.)
  .diff(src(o0).scale(1.8))
  .modulateScale(osc(2).modulateRotate(o0, 0.74))
  .diff(src(o0).rotate([-0.012, 0.01, -0.002, 0]).scrollY(0,[-1/19980, 0].fast(0.7)))
  .brightness([-0.02, -0.17].smooth().fast(0.5))
  .out()
hush()

Sound (Rashed & Luke)

Rashed:

When I started working on the audio, I wanted to use a sample. I really like music that starts with an angelic, harmonic vibe and then shifts into chaos. I was listening to a song the other day called GOLDWING, and I decided to sample the first couple of seconds. I then started experimenting with legato along with chop and striate. I decided to play with the vocals instead of having my main sound be a drum or bass sound (inspired by your washing machine). At first, improvising was really hard, and live coding felt like a really fixed and concrete method, which is why I disliked it at first. However, after meetings with my group mates, I realized that some things are my strong suits and some things are not, which is why I decided to go with audio rather than visuals. I found it really entertaining to experiment with the audio as we were recording. Even for the parts we had previously written, I wanted to experiment with what I could do with what I already had, to the point where I found myself digging through my memories for random samples I had used in previous projects and implementing them without taking a second to think whether they fit the vibe we were going for.

Also! I decided to add another sample from one of the best songs ever created called Headlock (hence the sample name) by Imogen Heap because I found one of the instruments she used in her song very interesting and I wanted to somehow incorporate that into the performance. I decided to do this last second but I really pushed for it and thank goodness everyone agreed. Very fun indeed.

Luke:

I was stuck in an endless loop of capstone and all the materials related to it, so I wasn’t able to meet with my groupmates. The music on my end has no script; I instead had to improvise my part during the drum circle performance. I followed Rashed’s cue and tried to figure out how to come in so that it matched the visuals and added to the music; his playing led the way for me to think on the spot about how I could accompany him in a real-time environment.

once $ jux rev $ s "gold"

d1 $ s "~ ~ ~ tink*2"
d8 $ ccv "~ ~ ~ 120 60" # ccn "1" # s "midi"
d9 $ ccv " 124 0 124 0 120" # ccn "0" #s "midi"
d10 $ ccv ((segment 128 (range 127 0 saw))) # ccn "0" # s "midi"
d1 $ s "tink" >| n (scale "major" ("[4*2 3*4 2*2 1*2]")+"1") 

d2 $ s "bass1" # room 0.9 # legato 1 # gain 1

d3 $ chop 2 $ s "gold*2" # legato 0.4

d4 $ jux rev $ striate 6 $ s "gold ~ ~ bass:2*3" # cut 1

d2 $ s "~ ~ bass1*2" # room 0.9 # legato 1 # gain 1

hush

d3 silence 

d4 silence 
 
d1 silence 

d4 $ jux (striate 8) $ jux rev $ s "gold*2"  # legato 1
d9 $ ccv "120 20" # ccn "0" # s "midi"


d5 $ jux rev $ "hh*12 ~ hh*4 ~ hh*8"

d5 $ "~ ~ ~ hh*2" # room 0.2 # legato 0.5 # gain 1.5

hush
d4 silence 
d6 silence
d7 silence
d5 silence 

d6 $ s "tink" >| n (scale "minor" ("[4 3 2 4 1]")+"0.8")
once $ s "lock"
d7 $ jux rev $ chop 6 $ s "lock*4" # legato 1 


hush

Originally, I had gone for one of the visuals from the website that was shared with us. However, Pulsar crashed towards the end, so I decided to use a simple visual I had made during my Intro to IM class. It’s a little over 2 mins (sorry :/ )

https://youtu.be/XYRRBaNS35w

Here is my Tidal code!

d2 $ struct "<t(3,8) t(5,8)>" $ s "casio" # n (run 8)

d4 $ struct "<t(3,8) t(5,8)>" $ ccv "<169 109 120 127>" 
  # ccn "0" 
  # s "midi"


d1 $ n ("e4 d c a b a b c7" |+ "<2 2 7 12>") # s "[superpiano, cp, bd, arpy, bd]"  # room 1

d3 $ struct "<t(3,8) t(5,8)>" $ ccv "<127 115 66 107>" 
  # ccn "0" 
  # s "midi"

d2 silence


hush

Hydra:

let p5 = new P5(); 
s0.init({ src: p5.canvas }); 
src(s0).out(); 

p5.hide(); 

let bubbles = [];

p5.draw = () => {
  if (bubbles.length === 0) {
    p5.createCanvas(window.innerWidth, window.innerHeight);
    for (let i = 0; i < 30; i++) {
      bubbles.push(new Bubble(p5.random(p5.width), p5.random(p5.height), p5.random(20, 100)));
    }
  }
  
  p5.background(137, 207, 240, 50);

  for (let i = 0; i < bubbles.length; i++) {
    bubbles[i].move();
    bubbles[i].display();
  }

  if (p5.frameCount % 15 === 0) {
    bubbles.push(new Bubble(p5.random(p5.width), p5.height, p5.random(20, 100)));
  }
};

class Bubble {
  constructor(x, y, r) {
    this.x = x;
    this.y = y;
    this.r = r;
    this.speed = p5.map(this.r, 20, 100, 2, 0.5);
    this.color = p5.color(p5.random(100, 255), p5.random(100, 255), p5.random(255), p5.random(100, 200));
  }

  move() {
    this.y -= this.speed;
    this.x += p5.random(-1, 1);
  }

  display() {
    p5.fill(this.color);
    p5.noStroke();
    p5.ellipse(this.x, this.y, this.r);
  }
}

src(s0)
    .mult(osc(2, () => cc[0] * 2, 3))
    .modulate(noise(() => cc[1] * 0.5))  
    .rotate( () => cc[0] * 0.5 )        
    .colorama(() => cc[0] * 1)       
    .out();

src(o2)
  .modulate(src(o1)
  .modulate(noise(() => cc[1] * 0.05))  
  .rotate( () => cc[2] * 0.2 ))
  .colorama(() => cc[0] * 2)       
  .blend(src(o0))
  .out(o2)

  
render(o2)

hush()

Apologies for the late submission; it slipped my mind to post one, though I had recorded my video prior to the class.

All in all, I had no idea what to expect with the composition. I had no idea what my personal stylistic choices were, so I struggled at the start with a concept. Therefore, I simply began by crafting a funky, upbeat, solid rhythm. I took my time becoming more familiar with the visuals, so I spent quite a bit of time experimenting with them, but not many of my results felt aligned with the direction my piece was heading. Then I thought to bring in a personal sound sample to spice things up, and I went with the first thing that came to mind: Pingu. I included the Noot Noot sample as I find Pingu to be the perfect embodiment of chaos but also a playful character (and also one of my favourite characters to exist). I wanted to ensure the visuals were in sync with the sound, and at the start I struggled, especially with finding the right ccv values; however, through a brute-force iterative trial-and-error session, I found a neat balance. I had started with a more subtle approach, but I found it was quite challenging to recognise, and I was worried that, given the time limit during the demos, I would not be able to execute it properly. Therefore, I went for bolder visuals with simpler beats. I noted that in class you said the sync between the visuals and the audio was not as evident, so I hope from this video you are able to find a more distinguishable link between them.

From the 0:27 mark, I introduce a new melody, and I wanted to represent it with squiggly lines to indicate its playful nature. This is then followed by even funkier, more playful beats such as casio and blip. Once I had found an interesting synchrony between casio and blip, I understood how I wanted to proceed, as this made it easy to create something that reflects a feeling of lightheartedness with a tinge of a spirited, lively approach. However, as I had Pingu in my vision, around the end of my video (4:00) I began to truly mess with the visuals and create something quite disorderly in nature, despite it being in sync with my sound.

I hope that you enjoyed!

Here is my code! (It’s a bit changed from the video since it is from the class demo)

Tidal

--- FINAL CODE

hush
d1 $ s "{808bd:5(3,4) 808sd:2(2,6)} " # gain 2 # room 0.3

d1 silence
d2 $ struct "{t(3,4) t(2,6) t(2,4)}" $ ccv ((segment 128 (range 127 0 saw))) # ccn "0" # s "midi"
hush

d3 $ fast 2 $ s "pluck" <| n (run 4) # gain 1 # krush 2
d2 $ ccv "0 20 64 127" # ccn "0" # s "midi"

d4 $ s "glasstap" <| n (run 4) # gain 1.5

d5 $ slow 2 $ s "arpy" <| up "c d e f g a b c6" # gain 1.5
d2 $ ccv " 9 19 36 99 80 87 45 100" # ccn "0" # s "midi"

d6  $ fast 2 $ s "casio" <| n (run 4) # gain 2
d3 $ qtrigger $ filterWhen (>=0) $ seqP [
  (0, 1, s "blip:1*4"),
  (1,2, s "blip:1*8"),
  (2,3, s "blip:1*12"),
  (3,4, s "blip:1*16")
] # room 0.3

d4 silence
hush
nooty = once $ sound "nootnoot:Noot" # squiz 1 # up "-2" # room 1.2 # krush 2
nooty
-- PART 2

d5 $ s "blip"  <| n (run 4)
  # krush 3
  # gain 1

d2 $ ccv "30 80 120 60" # ccn "0" # s "midi"
d6 silence

hush

d6 $ fast 2 $ s "control" <| n (run 2)
d7 $ fast 2 $ s "casio" <| n (run 4) # gain 0.9



d8 $ s "{arpy:5(3,4) 808sd(2,4)} " # gain 1

d2 $ struct "{t(3,4) t(2,4) t(2,4)}" $ ccv ((segment 128 (range 127 0 saw))) # ccn "0" # s "midi"
nootynooty = once $ sound "nootnoot:Noot" # legato 0.2 # squiz 1 # up "-2" # room 1.2 # krush 2

d6 silence
d10 $ qtrigger $ filterWhen (>=0) $ seqP [
  (0, 1, s "control:1*4"),
  (1,2, s "control:1*8"),
  (2,3, s "control:1*12"),
  (3,4, s "control:1*16")
] # room 0.3
nooty

hush

Hydra


//SHAPE ONE 

osc(20, 0.4, 1)
  .color(0.3, 1.2, 1.2)
  .rotate(() => cc[0] * 0.9 * 0.8)
  .kaleid(10)
  .modulateRotate(noise(() => (cc[0]) * 0.7, 0.6))
  .rotate(() => cc[0] * 1.1 * 1.8)
  .kaleid(30)
  .modulateRotate(noise(() => (cc[0]) * 0.9, 0.6))
  .out()
hush()

//SHAPE TWO 
osc(20, 0.3, 3)
  .color(1.3, 1.8, 2.9)
  .modulate(noise(() => (cc[0] + cc[1]) * 3, 1.4))
  .layer(
    osc(70, 0, 1)
      .luma(0.5, 0.1)
      .kaleid(10)
      .modulate(noise(() => (cc[0] + cc[1]) * 2, 0.4))
  )
  .out(o0)
hush()
//SHAPE THREE
shape(10,0.5).scale(1,1,2).repeat(30,9).modulate(noise(() => (cc[0] + cc[1]) * 9, 0.9)).out()

solid().out()
//SHAPE IV
osc(15, 2.6, 1.8)
  .color(1.2, 1.4, 1.2)
  .rotate(() => cc[0] * 0.9 *0.5)
  .kaleid(20)
  .modulateRotate(noise(() => (cc[0]) * 1.2))
.out()

hush()
//SHAPE V
osc(10, 30, 10)
.kaleid(99)
.modulate(noise(() => (cc[0] + cc[1]) * 1.9, 0.2))
.out(o0)

// noot
hush()

“Nature is disorder. I like to use nature to create order and show another side of it. I like to denature.” Kurokawa bends over the iMac, clicks through examples of his work on a hard drive, and digs out a new concert piece that uses NASA topographic data to generate a video rendering of the Earth’s surface. Peaks and troughs dance over a geometric chunk on the black screen, light years from the cabbie’s satnav. “The surface is abstract, but inside it’s governed by natural laws,” he says.

I find Kurokawa’s perspective on nature as disorder, and his desire to “denature” it, very interesting, particularly how it resonates with the tension between chaos and order that exists in both art and science. His use of natural data, such as NASA’s topographic information, to create a structured, arguably even surreal presentation of the Earth highlights the duality between the organic and the artificial. It suggests that while nature may appear unpredictable, it operates within a framework of fundamental laws that can be harnessed and reshaped through human interpretation.

Kurokawa ultimately challenges our perception of what is ‘natural’ and what is ‘artificial.’ His work demonstrates that the act of imposing order on nature does not necessarily strip it of its essence but rather reveals another dimension of its beauty—one that we might not perceive in its raw, untamed state.  

For my research project, I chose to experiment with the platform LiveCodeLab 2.5.

LiveCodeLab 2.5 is an interactive, web-based programming environment designed for creative coding and live performances. It allows users to create 3D visuals and generate sounds in real time as they type their code. The platform is particularly suited to live coding visuals, as the range of sound samples on offer is limited.

LiveCodeLab does, however, come with many examples to work with, which makes it an excellent introduction for younger audiences or those beginning their journey with live coding.

I was looking forward to experimenting with sound manipulation; unfortunately, I found that the platform works mainly with manipulating and editing visuals. Therefore, I decided to expand my range and start polishing my skills at live coding visuals.

https://drive.google.com/file/d/1YrtH6dgI-Y8YJtzzENxbCvzYfVMTkSlP/view?usp=sharing