I wanted to blend old and new Japan through a connecting thread in this composition project. I settled on the koto, a Japanese string instrument, as my principal sound. I found a YouTube video, trimmed it down to a single note in Audacity, and imported it into TidalCycles.

When making the melody for the first act, I tried to incorporate a lot of trills and filler notes, as is common in many koto songs, by breaking each note into 16th notes and offsetting high- and low-pitched sounds. For the visuals, I settled on a geometric pattern that felt Japanese and developed as the music progressed. I did this by creating thin rectangles, rotating them, and adding a mask using a rectangle that expanded outwards, revealing pieces of the pattern a little at a time.

shape(2,0.01,0).rotate(Math.PI/2).repeat(VERT_REP).mask(shape(2,()=>show_1,0).scrollY(0.5))
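
On the sound side, the trill-and-filler idea can be sketched in Tidal roughly like this (a hedged sketch, not my actual pattern; the `koto` sample name and note numbers are illustrative):

```haskell
-- Each beat is divided into four 16ths, alternating the melody
-- note with a higher "filler" pitch to imitate a koto trill.
-- "koto" stands in for the single-note sample trimmed in Audacity.
d1 $ n "[0 7 0 7] [2 9 2 9] [4 11 4 11] [2 9 2 9]"
   # s "koto" # legato 1
```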

For the transition, I added the typical Japanese train announcement to signify the shift in time period, but I didn’t get the transition down as smoothly as I hoped. My initial idea was to have train doors close and the whole scene shift, but I couldn’t get the visuals to work the way I wanted. Looking back, I would add the sound of a train moving while the visuals slide over to reveal the next scene.

For the second scene, I wanted to show the new era of Japan, aka walking around Shibuya woozy and boozy as hell. I found a video of someone walking through Tokyo and slapped on color filters as well as distortion. I tried adding hands to better replicate the first-person perspective, but the static PNG didn’t end up looking very good.

src(s0)
  .color(()=>colors[curr_val][0], ()=>colors[curr_val][1], 5)
  .scale(1.05)
  .modulateRotate(osc(1,0.5,0).kaleid(50).scale(0.5),()=>begin_rotate?0.1:0.05,0)
  .modulatePixelate(noise(0.02), ()=>begin_bug?7500:10000000)
  .add(
    src(s1) // za hando
    .rotate(() => (Math.sin(time/2.5)*0.1)%360)
    .scale(()=> 1 + 0.05 * (Math.sin(time/4) + 1.5))
  )
  .modulateScale(osc(4,-0.15,0).kaleid(50),()=>brightness*4) // za warudo
  .blend(src(o2).add(src(s3)).modulatePixelate(noise(0.03), 2000), ()=>brightness)
  .blend(solid([0,0,0]), ()=>(1-total_brightness))
  .out()


I controlled the modulation amounts via MIDI and added the JoJo time-stop sound effect to release and reintroduce tension. I also made the sound slow down and reverse when stopping the music.

  d2 $ qtrigger $ seqP [
    (15, 16, secondMelody 1 0 1),
    (15, 16, secondMelody 2 0 1),
    (15, 16, bass 0 0.5),
    (15, 16, alarm 0),
    (15, 16, announcement 0)
  ] # speed (smooth "1 -1");
  d3 $ qtrigger $ seqP [
    (15, 16, s "custom:8" # gain 1.75 # cut 1), -- stop time
    (18, 19, s "custom:9" # gain 0.75 # lpf 5000 # gain 2 # room 0.1 # cut 1 # speed 0.75) -- start time
  ];
  d4 $ qtrigger $ seqP [
    (18, 23, secondMelody 1 1 1),
    (18, 23, secondMelody 4 1 1),
    (18, 23, bass 1 1),
    (18, 23, alarm 1),
    (18, 23, announcement 1)
  ] # speed (slow 6 $ smooth "-1 1 1 1 1 1");

For my composition project, I wanted to create an introduction/algo-rave-y promotion for the podcast my friend Nour and I have been recording. This podcast, “Thursday Night ‘Live’ Starring Nour & Juanma,” is an effort to preserve some memories of our last semester at NYUAD. We’ve been interviewing our friends and sharing memories in the hopes that when we’re 50, we can listen to this and show it to our families. Maybe I will show them this project as well!

Sound

Our first episode began with the song ‘Me and Michael,’ which also happens to be our song. My first step in the composition was to figure out its chords. Once I did, I spent a LOT of time in TidalCycles playing around with them; I did not want to recreate the song, but to create something new based on it. After exploring various attempts and styles, I found that writing the chords myself in the superpiano synth sounded really good! I used these chords as a base for the composition and added percussion, custom samples, and other elements. All custom samples were taken from previous episodes of our podcast.

I initially built one loop with all of my elements. It had kick drums, hi-hats, snares, claps, melodic percussion, a sequencer-like melody, phrases of ‘Me and Michael’s’ melody, and the piano chords. I was happy with the result and began assembling and adding parts to form an actual structure.

I LOVE the sound of tuning forks, so I used the superfork synth as an alternative to superpiano in the beginning. This, I believe, gave depth to my piece. I also used superpwm as an alternative when building up, adding an effect to alter its pitch; I wanted this part to be quirky. I also tried to build an effective build-up and a drop using techniques we learned in class.

its_ThursDAYYYY = do
  d4 $ qtrigger $ filterWhen(>=0) $ slow 4 $ "superfork" >| note (scale "major" ("[0 -1 ~ -2][ -3 ~ -5 -3 ][~ -5 -3 -3][ -2 -3 ~ ~][ ~ 0 -3 -2 0][~] ") + "a5")  # room 0.4 # sustain 3
  d1 $ qtrigger $ filterWhen(>=0) $ slow 8 $ s "myCuts ~ ~ ~" # gain 0.7

nour_are_you_ready = do
  d1 $ qtrigger $ filterWhen(>=0) $ slow 7 $ "superfork" >| note (scale "major" ("[-9,0,2,5][-3,1,4,6][-5,2,4,6][-4,0,3,5][-6,1,3,5][-5,2,4,6][-7,2,4,7]") + "a5")  # pitch1 (fast 10 (range 0.1 1.5 rand)) # sustain 4 # gain 0.65
  d11 $ qtrigger $ filterWhen(>=0) $ s "hh:13*4" # gain 0.7
  d4 $ silence
  d13 $ ccv "1" # ccn "0" # s "midi"


yes_yes_just_finishing_capstone = do
  d1 $ qtrigger $ filterWhen(>=0) $ slow 7 $ "superpwm" >| note (scale "major" ("[-9,0,2,5][-3,1,4,6][-5,2,4,6][-4,0,3,5][-6,1,3,5][-5,2,4,6][-7,2,4,7]") + "a5")  # pitch1 (fast 10 (range 0.1 1.5 rand)) # sustain 2 # gain 0.65
  d11 $ degradeBy 0.2 $ s "hh:1 hh:8 hh:8 hh:8 hh:8 hh:8 hh:8 hh:8"
  d16  $ ccv "<25.4 38.1 50.8 63.5 >" # ccn "1" # s "midi"
  d13 $ ccv "2" # ccn "0" # s "midi"

ok_ready_lets_record = do
  d12 $ s "bd*4" # gain 0.9
  d16  $ fast 4 $ ccv "<38.1 50.8 63.5 76.2 >" # ccn "1" # s "midi"

reADYYY = do
  d1 $ qtrigger $ filterWhen(>=0) $ slow 7 $ "superpwm" >| note (scale "major" ("[-9,0,2,5][-3,1,4,6][-5,2,4,6][-4,0,3,5][-6,1,3,5][-5,2,4,6][-7,2,4,7]") + "a5")  # pitch1 (fast 10 (range 0.1 1.5 rand)) # sustain 2 # gain 0.65
  d2 $ qtrigger $ filterWhen(>=0) $ slow 7 $ "superpiano" >| note (scale "major" ("[-9,0,2,5][-3,1,4,6][-5,2,4,6][-4,0,3,5][-6,1,3,5][-5,2,4,6][-7,2,4,7]") + "a5")  # room 0.4 # sustain 4 # gain 0.8
  d3 $ qtrigger $ filterWhen(>=0) $ slow 7 $ "superpiano" >| note (scale "major" ("[-9,0,2,5][-3,1,4,6][-5,2,4,6][-4,0,3,5][-6,1,3,5][-5,2,4,6][-7,2,4,7] ") + "a4")  # room 0.4 # sustain 4 # gain 0.8
  d15 $ fast 8 $ ccv "<50 127 20 0>" # ccn "2" # s "midi"
  d13 $ ccv "3" # ccn "0" # s "midi"

play = do
  d1 $ qtrigger $ filterWhen(>=0) $ slow 5 $ "superpiano" >| note (scale "major" ("[2 1 ~ ~][~ 2 1 ~][2 2 1 ~][~ 2 2 2][3 1 ~ 2] ") + "a5")  # room 0.4 # sustain 4 # gain 0.76
  d2 $ qtrigger $ filterWhen(>=0) $ slow 7 $ "superpiano" >| note (scale "major" ("[-9,0,2,5][-3,1,4,6][-5,2,4,6][-4,0,3,5][-6,1,3,5][-5,2,4,6][-7,2,4,7]") + "a5")  # room 0.4 # sustain 4 # gain 0.8
  d3 $ qtrigger $ filterWhen(>=0) $ slow 7 $ "superpiano" >| note (scale "major" ("[-9,0,2,5][-3,1,4,6][-5,2,4,6][-4,0,3,5][-6,1,3,5][-5,2,4,6][-7,2,4,7] ") + "a4")  # room 0.4 # sustain 4 # gain 0.8
  d15 $ fast 16 $ ccv "<50 127 20 0>" # ccn "2" # s "midi"
  d13 $ ccv "4" # ccn "0" # s "midi"

but_MakeItCount = do
  d10 $ qtrigger $ filterWhen(>=0) $ degradeBy 0.05 $ s "bd:9*4"  # room 0.6
  d11 $ qtrigger $ filterWhen(>=0) $ degradeBy 0.2 $ s "hh*16"
  d12 $ qtrigger $ filterWhen(>=0) $ degradeBy 0.05 $ s "sn:13*8"
  d9 $ qtrigger $ filterWhen(>=0) $ s "~ <cp ~> ~ cp" # room 0.2
  d16  $ fast 8 $ ccv "<38.1 50.8 63.5 76.2 >" # ccn "1" # s "midi"

fun_stuff =  do {d3 $ degradeBy 0.05 $ qtrigger $ filterWhen (>=0) $ seqP [
    (0, 6, s "bd:9*4"),
    (1,6, s  "hh*16" # room 0.6),
    (1,6, s "sn:13*8"),
    (3,6, s "~ <cp ~> ~ cp" # room 0.2)
  ];
  d16 $ fast 4 $ ccv "<38.1 50.8 63.5 76.2 >" # ccn "1" # s "midi"
}

oops_lets_restart = do {d3 $ qtrigger $ filterWhen (>=0) $ seqP [
    (0, 1, s "hh:2*32" # room 0.3 # hpf (slow 4 (1000*saw + 100)) # speed (slow 4 (range 1 2 saw))),
    (1,2, s "hh:2*16" # room 0.3 # hpf (slow 4 (1000*saw + 100)) # speed (slow 4 (range 1 2 saw))),
    (2,3, s "hh:2*8" # room 0.3 # hpf (slow 4 (1000*saw + 100)) # speed (slow 4 (range 1 2 saw))),
    (3,4, s "hh:2*4" # room 0.3 # hpf (slow 4 (1000*saw + 100)) # speed (slow 4 (range 1 2 saw))),
    (3,4, s "321")
  ];
  d15 $ fast 8 $ ccv "<0 127 0 127>" # ccn "3" # s "midi";
  d13 $ ccv "5" # ccn "0" # s "midi"}

go_go_go = do
  d1 $ silence
  d2 $ silence
  d3 $ s "hh:2*4" # room 0.3 # hpf (slow 4 (1000*saw + 100)) # speed (slow 4 (range 1 2 saw))
  d4 $ qtrigger $ filterWhen(>=0) $ degradeBy 0.1 $ s "superfork*8" # up "cs5 d5 e5 cs5 b45 cs5 cs5 <cs5 b5 a5>" # room 0.4 # gain 1.5
  d13 $ ccv "6" # ccn "0" # s "midi"

welcome_to_tnl_featuringNourandJuanma = do
  d3 $ silence
  d2 $ qtrigger $ filterWhen(>=0) $ slow 4 $ s "Finally ~ ~ ~" # gain 1.5
  d13 $ ccv "9" # ccn "0" # s "midi"

podcasting = do
    d2 $ qtrigger $ filterWhen(>=0) $ slow 7 $ "superpiano" >| note (scale "major" ("[-9,0,2,5][-3,1,4,6][-5,2,4,6][-4,0,3,5][-6,1,3,5][-5,2,4,6][-7,2,4,7]") + "a5")  # room 0.4 # sustain 4 # gain 0.8
    d3 $ qtrigger $ filterWhen(>=0) $ slow 7 $ "superpiano" >| note (scale "major" ("[-9,0,2,5][-3,1,4,6][-5,2,4,6][-4,0,3,5][-6,1,3,5][-5,2,4,6][-7,2,4,7] ") + "a4")  # room 0.4 # sustain 4 # gain 0.8
    d13 $ ccv "7" # ccn "0" # s "midi"

senior_year = do
  d5 $ degradeBy 0.003 $ slow 0.25 $ "superpwm" >| note (scale "major" ("[2 3 4 <2 1 0>] ") + "a4")  # room 0.4 # gain 0.75
  d13 $ ccv "7" # ccn "0" # s "midi"

nour_and_juanma woohoo = do
  d6 $ slow woohoo $ s "nj" # gain 1.2

best_senior_podcast = do
  d5 $ degradeBy 0.003 $ slow 0.25 $ "superpwm" >| note (scale "major" ("[7 6 5 4] ") + "a5")  # room 0.4 # gain (slow 8(range 0.75 1 saw))
  d13 $ ccv "7" # ccn "0" # s "midi"
  d15 $ fast 4 $ ccv "<50 127 20 0>" # ccn "2" # s "midi"


climb =  d6 $ qtrigger $ filterWhen (>=0) $ seqP [
      (0, 1, s "bd*4" # room 0.3 # hpf (slow 4 (1000*saw + 100)) # speed (slow 4 (range 1 2 saw))),
      (1,2, s "bd*8" # room 0.3 # hpf (slow 4 (1000*saw + 100)) # speed (slow 4 (range 1 2 saw))),
      (2,3, s "bd*16" # room 0.3 # hpf (slow 4 (1000*saw + 100)) # speed (slow 4 (range 1 2 saw))),
      (3,4, s "bd*32" # room 0.3 # hpf (slow 4 (1000*saw + 100)) # speed (slow 4 (range 1 2 saw))),
      (4,5, s "vibe")
    ] # gain 1.3

shh = do
  d1 $ silence
  d2 $ silence
  d3 $ silence
  d4 $ silence
  d5 $ silence
  d7 $ silence
  d8 $ silence
  d9 $ silence
  d10 $ silence
  d11 $ silence
  d12 $ silence
  d13 $ ccv "9" # ccn "0" # s "midi"


boom = do
  d3 $ qtrigger $ filterWhen(>=0) $ slow 14 $ "superpiano" >| note (scale "major" ("[-9,0,2,5][-3,1,4,6][-5,2,4,6][-4,0,3,5][-6,1,3,5][-5,2,4,6][-7,2,4,7] ") + "a3")  # room 0.9 # gain 0.8
  d4 $ qtrigger $ filterWhen(>=0) $ degradeBy 0 $ fast 2 $ s "superfork*4" # up "cs6 d6 e6 <b45 cs5> " # room 0.4 # gain 1.8
  d5 $ degradeBy 0.003 $ slow 0.25 $ "superpwm" >| note (scale "major" ("[2 3 4 <2 1 0>] ") + "a4")  # room 0.4 # gain 0.75
  d10 $ qtrigger $ filterWhen(>=0) $ degradeBy 0.05 $ s "bd:9*4" # gain 1.5 # room 0.6
  d11 $ qtrigger $ filterWhen(>=0) $ degradeBy 0.2 $ s "hh*16" # gain 1.5
  d12 $ qtrigger $ filterWhen(>=0) $ degradeBy 0.05 $ s "sn:13*16"
  d9 $ qtrigger $ filterWhen(>=0) $ s "~ <cp ~> ~ cp" # room 0.2
  d13 $ ccv "10" # ccn "0" # s "midi"
  d16 $ fast 4 $ ccv "<115 0 50 100 >" # ccn "2" # s "midi"


boom_2 = do
  d3 $ qtrigger $ filterWhen(>=0) $ slow 14 $ "superpiano" >| note (scale "major" ("[-9,0,2,5][-3,1,4,6][-5,2,4,6][-4,0,3,5][-6,1,3,5][-5,2,4,6][-7,2,4,7] ") + "a3")  # room 0.9 # gain 0.8
  d4 $ qtrigger $ filterWhen(>=0) $ degradeBy 0 $ fast 2 $ s "superfork*4" # up "d6 fs6 e6 <a6 cs6> " # room 0.4 # gain 1.8
  d5 $ degradeBy 0.003 $ slow 0.25 $ "superpwm" >| note (scale "major" ("[2 3 4 <2 1 0>] ") + "a4")  # room 0.4 # gain 0.75
  d10 $ qtrigger $ filterWhen(>=0) $ degradeBy 0.05 $ s "bd:9*4" # gain 1.5 # room 0.6
  d11 $ qtrigger $ filterWhen(>=0) $ degradeBy 0.2 $ s "hh*16" # gain 1.5
  d12 $ qtrigger $ filterWhen(>=0) $ degradeBy 0.05 $ s "sn:13*16"
  d13 $ ccv "11" # ccn "0" # s "midi"
  d9 $ qtrigger $ filterWhen(>=0) $ s "~ <cp ~> ~ cp" # room 0.2
  d15 $ fast 16 $ ccv "<0 20 50 100>" # ccn "2" # s "midi"

last_stretch = do
  d1 $ degradeBy 0.003 $ slow 0.25 $ "superpwm" >| note (scale "major" ("[7 6 5 4] ") + "a5")  # room 0.4 # gain 0.9
  d13 $ fast 2 $ ccv "<11 7 4 10>" # ccn "0" # s "midi"

shh_2 = do
    d1 $ degradeBy 0.003 $ slow 0.5 $ "superpwm" >| note (scale "major" ("[7 6 5 4] ") + "a5")  # room 0.4 # gain (slow 8(range 0.9 0.5 saw))
    d2 $ silence
    d3 $ silence
    d5 $ silence
    d7 $ silence
    d8 $ silence
    d9 $ silence
    d10 $ silence
    d12 $ silence
    d13 $ ccv "9" # ccn "0" # s "midi"

available_only_on_our_google_drives = do
    d1 $ silence
    d4 $ qtrigger $ filterWhen(>=0) $ degradeBy 0 $ fast 2 $ s "superfork*4" # up "d6 fs6 e6 <a6 cs6> " # room 0.4 # gain (slow 16 (range 1.8 0.5 saw))
    d11 $ silence
    d6 $ silence
    d13 $ ccv "0" # ccn "0" # s "midi"


its_ThursDAYYYY
nour_are_you_ready
yes_yes_just_finishing_capstone
ok_ready_lets_record
reADYYY
play
oops_lets_restart
go_go_go
fun_stuff
but_MakeItCount
welcome_to_tnl_featuringNourandJuanma
podcasting
senior_year
nour_and_juanma 1
best_senior_podcast
climb
shh
boom
boom_2
last_stretch
shh_2
available_only_on_our_google_drives

hush

Composition Structure

Whenever we record, Nour and I always have one (or many) false starts. I thought this would be a good addition to the piece’s structure. If you look at my composition, you will see the piece build up once, go silent, build up again, and then drop. In the first build-up, I used many more components from ‘Me and Michael’. Then, using seqP, slowed percussion, and a custom sample from one of our false starts, I constructed the transition. For the second build-up, I added a lot more percussion and kept only the base chords from the song. After a transition using a slowed “bd”, there is silence. Nour says “umm, so that was the vibe” and the beat drops. The drop contains almost the same elements as the second build-up, with altered speeds, tempos, and octaves. For the end of the song, instruments are removed one by one, leaving only the melody, which fades out. I would imagine my piece looking something like this:

My composition structure is intended to follow the storyline of a Thursday evening: Nour finishes capstone, we meet to record, we have a couple of funny false starts, then we begin again, this time full force.

I used the code from the class example to toggle the visuals in Tidal Cycles.

loadScript('/Users/juanmanuelrodriguezzuluaga/Documents/LiveCoding_HW/launch.js')

s0.initVideo('/Users/juanmanuelrodriguezzuluaga/Documents/LiveCoding_HW/Composition_Vids/composite_S.mp4')
s1.initImage('/Users/juanmanuelrodriguezzuluaga/Documents/LiveCoding_HW/Composition_Vids/tittle-min.png')
s2.initVideo('/Users/juanmanuelrodriguezzuluaga/Documents/LiveCoding_HW/Composition_Vids/IMG_2673.mov')

visuals[0]()

// can use update and switch case with midi:
var whichVisual = 0
update = () =>{
  // very important! only change source once, when necessary
  if (whichVisual != ccActual[0]){
    whichVisual = ccActual[0];
    visuals[whichVisual]()
  }
}

// clear update
hush()
// OR (without stopping visuals all together)
update = ()=> {}

Visuals

Whereas I was quite traditional with the sound structure, I wanted my visuals to be a bit more chaotic. I knew I wanted three main elements: Nour and I doing our podcast, Nour’s capstone cell imaging, and colors. I drew a title and added the aforementioned elements. Then I played around with parameters and components to generate a visual for each part of the composition. My aim was to have a lot going on, but to keep the piece responsive to the beat and storyline. I tried my best to incorporate MIDI channels in the designs and to transmit the same story as the audio; to do so, I made sure the visuals were triggered from TidalCycles. I had a lot of fun manipulating Nour’s capstone images. They had a natural pulse, which was difficult to fit to the beat, but they also looked quite nice in kaleidoscope.

visuals = [
  ()=>{src(s0)
    .out()},// its_ThursDAYYYY 
  ()=>{  src(s0)
      .invert()
      .layer(src(s1)
        .mask(shape(4,0.99))
        .scale(1.6,()=>window.innerHeight/window.innerWidth, 0.356)
        .scrollX(0,-0.09))
      .out()}, // nour_are_you_ready
  ()=>{src(s0)
    .invert()
    .kaleid(()=>(cc[1]*10))
    .out()},//yes_yes_just_finishing_capstone & ok_ready_lets_record
  ()=>{src(s0)
    .invert()
    .kaleid(()=>(cc[1]*10))
    .layer(src(s2)
      .saturate()
      .mask(shape(()=>(cc[1]*10),0.9))
      .scale(0.7,()=>window.innerHeight/window.innerWidth,1)
      .scrollX(0, -0.05)
      .scrollY(9, ()=>cc[2]/6000+0.03))
      .out()}, //reADYYY
  ()=>{src(s0).invert().kaleid(()=>(cc[1]*10)).layer(src(s2).saturate().mask(shape(()=>(cc[1]*10),0.9)).scale(0.7,()=>window.innerHeight/window.innerWidth,1).scrollX(0, -0.05).scrollY(9, ()=>cc[2]/6000+0.03)).kaleid(8).rotate(0,0.1).out()},//play
  ()=>{src(s0)
    .invert()
    .kaleid(()=>(cc[1]*10))
    .layer(
      src(s2)
      .saturate()
      .mask(shape(()=>(cc[1]*10),0.9))
      .scale(0.7,()=>window.innerHeight/window.innerWidth,1)
      .scrollX(0, -0.05)
      .scrollY(9, ()=>cc[2]/6000+0.03))
      .kaleid(8)
      .rotate(0,-0.1)
      .saturate(()=>cc[3])
    .out()}, // oops_lets_restart
    ()=>{src(s2)
      .scale(()=>cc[1],1)
      .out()}, // go_go_go, fun_stuff, but_MakeItCount
    ()=>{osc(4,0.4) 
          .thresh(0.9,0)
          .modulate(src(s2)
            .sub(gradient()),1)
            .out(o1)
      src(o0)
        .saturate(1.1)
        .modulate(osc(6,0,1.5)
          .brightness(-0.5)
          .modulate(
              noise(cc[1]*5)
              .sub(gradient()),1),0.01)
        .layer(src(s2)
          .mask(o1))
          .scale(1.01)
          .out(o0)},
    ()=>{gradient(4).add(
          src(s2)
          .modulateRotate(
            noise(()=>(cc[2]*3),0.9)
            .luma()))
          .out()},
    ()=>{src(s2) // tnl_featuring_nour_and_juanma
    .out()},
    ()=> {shape(100,0.5,1.5) 
        .scale(0.5,0.5)
        .color([0.5,2].smooth(1),0.3,0)
        .repeat(2,2)
        .modulateScale(osc(()=>(cc[2]),0.5),-0.6)
        .add(o2,0.5)
        .scale(0.9)
        .out()},
      ()=>{gradient(1). 
        add(shape(100,0.5,1.5)
            .scale(0.5,0.5)
            .color([0.5,2].smooth(1),0.3,0)
            .repeat(2,2)
            .modulateScale(osc(()=>(cc[2]),0.5),-0.6)
            .add(o2,0.5)
            .scale(0.9))
            .out()}
      ]

whichVisual = 0

Class Feedback & Updates

One change I made based on class feedback was the removal of one of the visuals during the main build-up. It was originally supposed to come before our video faded into red, orange, and yellow, but I made a mistake when writing the code that triggers the visuals. Even though I would have liked to see the fade-out or breakdown into a gradient (as originally planned), I believe this visual was significantly more effective at this point in the structure than later, so I decided to remove the gradient altogether. You can see it in my code as visual number 8 (I believe).

LIVE Coding

For the in-person component, I practiced a LOT. I needed to make sure I knew exactly when to trigger all of the functions. I also included a small in-person introduction and ending to my project: by telling you all about my podcast while “nour_are_you_ready” displayed the title, and mentioning that the podcast is only available in our Google Drives as the beat faded out, I hoped to make the experience more immersive and engaging. This is not pictured in the video, so you’ll have to see me perform it again.

Final Product

I’m not sure if the video is working. In case it is not, see it here.

Tidal Sound

The two biggest things I want in the pieces I make are samples and some sort of cultural tie to me (or just something I am interested in). For this piece I really wanted to find an old Korean song to sample, which proved difficult since I cannot speak Korean whatsoever. After much digging and asking some family members for old songs they knew, I finally found a song that I liked: 님은 먼 곳에 (Ni-meun Meon Go-se; “You Are Far Away” is the translation, I think) by Kim Chu-Ja.

After listening to the song a bunch, I tried to take parts I thought sounded nice and put them together. This proved to be such a pain in Tidal, because I had to find specific timestamps across the entire song and make sure each slice was about one measure long. To make this easier, I found the song’s BPM and multiplied it by the speed I set the song to (1.2) to get the effective tempo. After that, it was mostly experimenting to find what sounded good together. Eventually, I found vocals I liked and arranged them in a pattern I thought sounded good, then put some percussion in to make the song sound a bit more like a hip-hop/lofi track.
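
The timestamp arithmetic works out roughly like this (a hedged sketch with made-up numbers; `bpm`, `songLen`, and the `kimchuja` sample name are illustrative, not my actual values):

```haskell
-- With the file sped up to 1.2x, the effective tempo is bpm * 1.2,
-- so one 4/4 measure lasts 4 * 60 / (bpm * 1.2) seconds.
-- begin/end take fractions of the whole file, so a one-measure
-- slice starting at t seconds is:
let bpm     = 90      -- original tempo (illustrative)
    songLen = 180     -- file length in seconds (illustrative)
    measure = 4 * 60 / (bpm * 1.2)
    grab t  = begin (t / songLen) # end ((t + measure) / songLen)
d1 $ s "kimchuja" # speed 1.2 # grab 42.0
```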

Most of the inspiration for my percussion came from listening to J Dilla’s Donuts and MF DOOM and Madlib’s Madvillainy, trying to emulate percussion patterns they use in a way that would also fit my song. I was deadass listening to Donuts on repeat for like a week straight, just trying to break down in my head how Dilla samples. I also attempted to put my percussion in “Dilla time” by nudging the snare and the cymbals a bit so they don’t all play at the same time. I especially like the rushed snare that Dilla likes to use, where it comes in just a fraction before the other instruments on the same beat. Since a lot of hip-hop songs are pretty repetitive, I wanted to do the same by having one long beat that keeps playing with little to no change between measures.
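
A minimal sketch of that “Dilla time” nudging, assuming SuperDirt’s nudge parameter (the drum choices and offsets are illustrative, not my actual beat):

```haskell
-- nudge shifts events by a number of seconds: a small negative
-- value rushes the snare slightly ahead of the grid, and a small
-- positive value drags the hats just behind it.
d2 $ stack [
    s "bd ~ bd ~",
    s "~ sn ~ sn" # nudge (-0.015),  -- rushed snare
    s "hh*8"      # nudge 0.01       -- lazy hats
  ]
```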

To make my song an actual composition, I wanted a beat switch bridged by some sort of speech. I had a hard time finding inspiration for one, so I kinda just ripped a speech from some TTS meme that ironically talked about negative things that plague today’s society, like microplastics or the conspiracy about 5G radio waves. For the beat switch itself, I wanted a slowed-down tempo using the amen break for a kind of weird breakcore-esque beat. I swapped around the order of some of my samples and added some new ones to make the second part of the beat. I wanted to add extra sounds, but I had trouble finding any that fit the overall sound. The other difficult part with the break beat was that it didn’t totally line up when it repeated, and I had to make the MIDI beat for it manually, since it was one long sample. I think I got it close enough, but I know it isn’t perfect. My ending is kinda bad because it just abruptly cuts out; I wanted an xfade, but I couldn’t get it to work for whatever reason.
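
For reference, the fade-out I was going for would look something like this with Tidal’s transition functions (a sketch, assuming xfadeIn behaves as documented):

```haskell
-- xfadeIn crossfades whatever is playing on a channel into a new
-- pattern over a given number of cycles; crossfading into silence
-- gives a gradual fade-out instead of an abrupt cut.
xfadeIn 1 8 $ silence  -- fade channel 1 out over 8 cycles
```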

Hydra Visuals

As for my visuals, it took me a long time to find inspiration for something that would fit the sound I had created. All I knew was that I wanted something really hectic visuals-wise. I ended up taking gameplay of two games I like: a speedrun from the FPS Ultrakill, and a pro match from the card game Magic: the Gathering. While I wanted hectic visuals, I also wanted enough clarity that you could actually tell what games were being played. I chose games because I felt that they, combined with the hectic visuals and the Korean song sample, sort of represented who I am. After tinkering with the visuals for some time, I got the calm and chaotic looks I wanted and managed to sync certain visuals to the MIDI, and with that I was essentially finished with my full composition.

Idea

The main idea behind my composition project was a combination of my previous TidalCycles audio and Hydra visuals demo for Week 2 and Week 3. I really liked the aesthetic of the sunset-beach vibe I had in my Hydra visuals, so I knew I wanted my audio to accompany this feeling of a relaxing, laid-back vibe.

Audio:

My previous TidalCycles demo was me trying to recreate Oh Honey by The Delegations, and I expanded on this idea by mashing it up with Groovin’ by The Young Rascals. Both songs are from the 1960s–1970s, and their R&B/blues/soul style fit the aesthetic I wanted to go for.

To mash up the songs, I listened to both and picked out elements I liked from each. I liked the main theme of Oh Honey more, so I took its bassline, guitars, and chords. I liked the beat of Groovin’ more, so I took its maracas and clave beat pattern, and also included its melody line. Combining the two melodies did not work as well as I expected, as the two songs are in different keys. I attempted to transpose Groovin’s melody into Oh Honey’s G major, but for some reason it didn’t sound good after transposing, so I kept Groovin’ in its original key even though it was off-key, as I thought it sounded better. Combining the non-melody beats was much easier, as drums, maracas, and claves are not key-specific.
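
The transposition attempt itself is simple in Tidal (a hedged sketch; the pitch pattern is illustrative, not the actual Groovin’ melody):

```haskell
-- |+ n 2 adds two semitones to every note in the pattern,
-- shifting the melody up a whole step without changing its shape.
let groovin = n "0 4 7 9" # s "superpiano"
d1 $ groovin |+ n 2  -- same melody, transposed up 2 semitones
```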

Visuals:

My previous Hydra demo had a similar sunset/ocean theme, but I expanded on it by using a modified oscillator shader that starts with larger bands that become smaller towards the end, adding sea reflections, and adding a sky to the background.

The visuals were controlled by three MIDI patterns from TidalCycles. The sun grooved to the melody, the sea to the drum pattern, and the sea reflections were affected by the chord patterns. I left the sky untouched, as I felt that moving too many elements at once might be too chaotic and make it difficult to discern the beats of the song. In the final version, I think it was easy to identify which visual corresponded to which element in the music, which I was pretty proud of.

Overall

Overall, I was really proud of the build-up and fade-out of the song, as the beat patterns worked really well when introduced one by one or faded out one by one. I was a little disappointed with the drop, as I couldn’t find a way to make it sound more energetic without making it too muddy and messy, so I left it as it is.

Tidal Composition

In terms of my musical composition, I’ve developed six distinct sound layers that blend nicely together. To organize the composition, I initially introduced these sounds sequentially, building towards a climactic moment. Halfway through the performance, I muted the first three tracks, creating a jazz-y shift in the musicality. Later, I reintroduced the three muted sounds one by one, gradually building up their presence before delicately fading them out towards the end of the composition. This gradual decrescendo provided a satisfying sense of closure. I think this structure effectively showcased the diverse range of notes and layers present in the composition.

d1 $ qtrigger $ filterWhen (>=0) $ stack[
    fast "<0.5 2>" $ s "lt:3 lt lt ~" # gain 0.8,
    s "909!4" # gain 1,
    ccv "2 4 1 1" # ccn 0 # s "midi"
  ]


d2 $ qtrigger $ filterWhen (>=0) $ stack [
   s "hh*2 hh*2 hh*2 <hh*6 [hh*2]!3>" # gain 0.8,
    ccv "45 90 270 360" # ccn 1 # s "midi"
  ]


d3 $ qtrigger $ filterWhen (>=0) $ stack [every 3 (hurry 2) $ sound "bd sd [~ bd] [cp bd*2]",
  sound "kurt:4(3,8)" # shape "0 0.98" # gain "0.5" # speed 1.04,
  struct "t" $ ccv (irand 15) # ccn 2 # s "midi"
]

d4 $ qtrigger $ filterWhen (>=0) $ stack[
    fast "<0.5 2>" $ s "lt:3 lt lt ~" # gain 0.5,
    s "909!4" # slow 2 ("<1.5 1>") # gain 1.2,
    s "hh*2 hh*2 hh*2 <hh*6 [hh*2]!3>" # gain 0.7,
    sound "sax(3,8) sax(5,8)" # legato 1 # n 3 # note "<[9 7] 5 [9 12]>" # sz 0.8 # room 0.4 # gain 0.8,
    struct "t" $ ccv (irand 127) # ccn 3 # s "midi"
  ]

d5 $ qtrigger $ filterWhen (>=0) $ stack[
  s "bd*4" # gain 1.4 # krush 10,
  struct "t t t t" $ ccv (irand 4) # ccn 4 # s "midi",
  ccv "2 4 1 1" # ccn 6 # s "midi"
]

d6 $ qtrigger $ filterWhen (>=0) $ stack [
  sound "sax(3,8) sax(5,8)" # legato 1 # n 3 # note "<[9 7] 5 [9 12]>" # sz 0.8 # room 0.4 # gain 0.8,
  struct "t" $ ccv (irand 6) # ccn 5 # s "midi"
]

Hydra visuals

Each layer of sound is assigned a distinct CC number, which sends MIDI data to Hydra. This allowed me to introduce variations to the original rectangle shape depending on the sound being played. It was particularly satisfying because silencing specific sounds would also disable particular visual distortions since the corresponding CC number wouldn’t be transmitted.

To add variety throughout the performance, I would comment out some lines to create the effect of visuals gradually building towards a climax. During the jazz-y drop, I disabled the pixelate line to allow for subtle changes in the oscillating visuals. As with the music, I wanted a gradual build-up and fade-out, so it was essential for me to begin and end my visual composition with just one line.

For the next performance, I’m eager to expand on the scope of visuals. I’m intrigued by how many visuals I was able to generate with just 14 lines of code.

update = () => {
  let n = ccActual[2]+1
  let k = ccActual[4]
  shape(2, ()=>ccActual[6]*0.1) //()=>ccActual[6]*0.1
    .modulate(osc(1)) //*0.2
    .pixelate(()=>ccActual[0])
    .rotate(()=>ccActual[1])
    .scrollX(1/n)
    .scrollY(1/k)
    .mult(shape(()=>ccActual[5],()=>ccActual[5]*0.2))
    .modulate(noise(()=>cc[3]*5,0,1))
    .scale(1, ()=>window.innerHeight/window.innerWidth,1)
    .out()
  }

“Creative Know-How and No-How” presents live coding as a vibrant tapestry of thought, where the act of coding transcends its technical underpinnings to become a medium of artistic expression. It challenges us to embrace the uncertainties of the creative process, to find value in the act of exploration itself, and to reconsider the ways in which we understand and engage with technology.

One of the most compelling aspects discussed is the concept of “play” within live coding. This notion, borrowed from Roger Caillois, characterizes live coding as an inherently uncertain activity, a form of artistic experimentation that defies conventional notions of purpose and function. It is a self-regulating activity that exists for its own sake, devoid of the pursuit of material gain, which is a radical departure from traditional views on productivity and creativity. This perspective invites us to reconsider the value we place on creative acts, urging us to see beyond tangible outcomes and appreciate the beauty of creation itself.

The parallels drawn between live coding and preindustrial loom weaving are particularly evocative. Both practices require a heightened awareness and a dynamic response to the evolving conditions of the creative process. This analogy not only highlights the deep-rooted connection between coding and weaving as forms of thinking-in-motion but also challenges the historical narrative that prioritizes the Jacquard loom’s role in conceptualizing computational logic. By doing so, it invites a reevaluation of ancient crafts as precursors to modern computational practices, suggesting a continuity in thought that transcends technological advancements.

Moreover, the reading delves into the philosophical underpinnings of live coding, drawing upon concepts like kairotic and mêtic intelligence, which emphasize the importance of timing in the creative process. These ideas underscore the adaptability and situational awareness crucial to live coding, where the coder navigates through a landscape of possibilities, guided by a sense of what could be rather than what is.

For me, the intention behind live coding can be divided into “knowing what I want to show” and “not knowing what I want to show.” When I come up with a concept first but am constrained by a lack of knowledge, I often fall into self-doubt: “What did I learn in Live Coding?” Other times there is no concept, and I am surprised and happy with whatever I make at random. After a month of learning, I often reflect on whether the things I make are just “to comfort myself.” However, this article argues that one of the important features of live coding is experiment: “Although many live coders acknowledge some formal training in computing, music, or artistic methods, the knowledge of the process required for live coding emerges often through experimentation, through the accumulation of trial and error, and through innumerable versions and iterations, tests, and attempts” (261). In IM, a lot of software requires substantial knowledge before you can produce anything, so I am still not entirely comfortable with the learning process of live coding. But learning these skills is not about the anxiety of seeing someone else use the same skills and thinking “they are doing it better than me”; it is about the enjoyment of acquiring a skill along the way. As the author says, “Within live coding, the challenge seems less one of responding with learned behavior or an already rehearsed script than of how to harness the potential unique to every contingent situation.”