Here’s the YouTube link to my demo.

Here’s also my Hydra code…

//hydra

let p5 = new P5()
s0.init({src: p5.canvas})
// in a browser you'll want to hide the canvas
p5.hide();

// no need for setup
p5.noFill()
p5.strokeWeight(20);
p5.stroke(255);

let circlePositions = [
  { x: p5.width / 4, y: p5.height / 2, size: 300 }, // First circle
  { x: (p5.width / 4) * 3, y: p5.height / 2, size: 300 } // Second circle
];

p5.draw = () => {
  p5.background(0);

  // first circle
  p5.ellipse(circlePositions[0].x, circlePositions[0].y, circlePositions[0].size, circlePositions[0].size);

  // second circle
  p5.ellipse(circlePositions[1].x, circlePositions[1].y, circlePositions[1].size, circlePositions[1].size);
}
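// live-coding progression: reassigning p5.draw below replaces the two-circle version above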
p5.draw = () => {
  p5.background(0);
  if (cc[1] == 1) {
    p5.ellipse(p5.width / 2, p5.height / 2, 600 * cc[0] + 300 * p5.noise(cc[0]), 600 * cc[0] + 300 * p5.noise(cc[0]));
  } else {
    p5.ellipse(p5.noise(cc[0] * 2) * p5.width, cc[0] * p5.height, 300, 300);
  }
}

src(s0).modulate(noise(3, 0.6), 0.03).mult(osc(1, 0, 1)).diff(src(o1)).out()
src(s0).modulate(noise(2, 0.9), .3).mult(osc(10, 0, 1)).diff(src(o1)).out()
src(s0).modulate(noise(5, 5), .9).mult(osc(80, 30, 100)).diff(src(o1)).out()

// feedback effects --> .1 - .6, osc 0 - 10
src(s0).modulate(noise(4, 1.5), .6).mult(osc(0, 10, 1)).out(o2)
src(o2)
  .modulate(src(o1).add(solid(0, 0), -0.5), 0.005)
  .blend(src(o0).add(o0).add(o0).add(o0), 0.1)
  .out(o2)

render(o2)

hush()
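Note: all the cc[…] references above assume a global cc array of MIDI controller values, normalized to 0–1, sent over from Tidal. Here is a minimal sketch of that WebMIDI glue in case it's useful (the class setup may wire this up differently):

// assumption: a global `cc` array of 128 controller values, normalized to 0-1
window.cc = new Array(128).fill(0)
navigator.requestMIDIAccess().then((access) => {
  access.inputs.forEach((input) => {
    input.onmidimessage = (msg) => {
      const [status, index, value] = msg.data
      // 0xB0-0xBF = control change on any MIDI channel
      if ((status & 0xf0) === 0xb0) cc[index] = value / 127
    }
  })
})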

…and my Tidal!

-- tidal

d3 $ s "superpiano" >| note (scale "major" ("[7 11 2 4 7 21 4 2]") + "15") # room 0.4
d1 $ juxBy 0.6 (slow 8) $ sound "bd cp sn hh:3" # gain 1.5
d4 $ juxBy 0.6 (slow 3) $ s "blip" # gain 1.7

-- 1 to 4, then to 8
d2 $ whenmod 16 8 (# ccv ((segment 128 (range 0 127 saw)))) $ struct "<t(8,8)>" $ ccv ((segment 128 (range 40 120 rand))) # ccn "0" # s "midi"
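-- (whenmod 16 8: during the second half of every 16 cycles, the full 0-127 saw sweep overrides the random 40-120 values)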

The theme I had for this composition project was recreating nature, or more specifically, a tropical jungle. The reason was pretty simple: I grew up in Taiwan, where we were always graced with sunlight and a hot, tropical climate. That warm, happy feeling was what I missed the most during my time in Berlin and New York last year, so I wanted to recreate the atmosphere for all of us to enjoy. I also thought the idea of attempting to portray nature through technology — which is anything but natural — would be fun. 🙂

After a lot of fooling around, trying various audio samples, and playing with how each instrument would harmonize, I created a composition with a bright, happy, and somewhat dreamlike vibe, both in its visuals and its audio. I first picked out sounds I found myself drawn to, such as the pelog scale, the sine instrument, claps, and the occasional beat with an unexpected, irregular rhythm. Then, once I started assembling them, I tweaked parts to make them harmonize with one another.

Creating a build-up and brainstorming what my big “drop” was going to be was a bit more difficult, because I felt pressure to make them “catchy” and impactful. While I did want the piece to be pleasant to the vast majority of the audience’s ears, I also didn’t want it to be too generic, since there seem to be certain formulas for both the build-up and the drop that a lot of people use in their compositions. I ended up using the classic acceleration of rhythm and pitch for my build-up, but my drop strayed a lot from what a classic “beat drop” might look like: instead of an overpowering bass or beat, I wanted a fuller, richer harmony of all the instruments I’d been using up to that point, with added sounds like birds chirping and multiple sine melodies mimicking the songbirds.

For my visuals, I used a lot of vibrant colors along with waves, rounded lines, and circles. I also imported an image of a jungle at the end, timed to my beat drop, as a “big reveal” of the composition’s final destination.

Here’s the video:

Here’s the Tidal code:

d1 $ s "~ ~ cp ~" # gain 1
d2 $ ccv "<127 50 127 50>" # ccn "0" # s "midi"

d1 $ s "bd [~bd] cp ~" # gain 1
d2 $ ccv "17 [~50] 100 ~" # ccn "0" # s "midi"

d5 $ scramble 4 $ s "sine" >| note ((scale "minor" "<[-4 [10 3]]>"))
d2 $ scramble 4 $ ccv "10 127 5 30" # ccn "0" # s "midi"

d1 $ s "bd bd sd bd cp odx mt <[bd*2]!8>" # gain 1
d2 $ struct "t(4,8)" $ ccv ((segment 128 (range 127 0 saw))) # ccn "0" # s "midi"

d5 $ struct "<t(2,4)>" $ s "sine" # note ((scale "pelog" "c'maj f'maj")) # room 0.3
d2 $ ccv (segment 8 "0 84 10 127") # ccn "0" # s "midi"

d7 $ s "sine*16" # note ((scale "pelog" "-5 .. 10"))
d2 $ ccv (segment 8 "0 20 64 127 60 30 127 ~") # ccn "0" # s "midi"

d3 $ struct "<t(3,8) t(3,8,1)>" $ s "sine" # note "<[1 3 1] [5 13 10]>" # room 0.4
d2 $ struct "<t(3,120) t(3,27,120)>" $ ccv ((segment 128 (range 127 0 saw))) # ccn "0" # s "midi"

d5 $ s "cp*4" # gain 1
d2 $ struct "<t(4,8)>" $ ccv ((segment 128 (range 127 0 saw))) # ccn "0" # s "midi"

d1 $ s "[bd bd bd bd] [sd cp] [bd bd] [cp bd]" # gain 1.1
d2 $ ccv "[127 20 70 0] [100 10] [80 ~] [0 ~]" # ccn "0" # s "midi"
d3 $ s "hh hh hh <hh*6 [hh*2]!3>" # gain 1.5

-- d7 $ s "sine*8" # note "[[1 1] 5 8 10 5 8]" # room 0.2

-- BUILDUP
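-- qtrigger + filterWhen (>=0) restart each pattern from cycle 0 at evaluation time,
-- so the four seqP stages below stay lined up with one another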

do {
  d9 silence;
  d7 silence;
  d8 silence;
  d9 $ qtrigger $ filterWhen (>=0) $ s "808cy" <| n (run 16); -- step through 16 of the 25 808 cymbal samples
  d10 $ qtrigger $ filterWhen (>=0) $ seqP [
    (0,1, s "sine*2"),
    (1,2, s "sine*4"),
    (2,3, s "sine*8"),
    (3,4, s "sine*16")
  ] # gain (slow 4 (range 0.8 1.2 saw)) # speed (slow 4 (range 2 4 saw));
  d1 $ qtrigger $ filterWhen (>=0) $ seqP [
      (0,1, s "[bd bd] [sd cp]"),
      (1,2, s "[bd bd bd bd] [sd cp]"),
      (2,3, s "[bd bd bd bd bd bd] [sd cp]"),
      (3,4, s "[bd bd bd bd bd bd bd bd] [sd cp]")
  ] # gain (slow 4 (range 0.8 1 saw));
  d3 $ qtrigger $ filterWhen (>=0) $ seqP [
    (0, 1, s "cp:2*2"),
    (1, 2, s "cp:2*4"),
    (2, 3, s "cp:2*8"),
    (3, 4, s "cp:2*16")
  ] # room 0.3 # hpf (slow 4 (1000*saw + 100)) # speed (slow 4 (range 1 2 saw));
}

-- drop
nature_go_Crazy = do
  d8 $ qtrigger $ filterWhen (>=0) $ s "blip*8" # gain 1 # note "[[6 8] 13 10 18 6 10]"
  d3 $ qtrigger $ filterWhen (>=0) $ struct "<t(3,8) t(3,8,1)>" $ s "sine" # note "<[1 3 1] [5 13 10]>" # room 0.4
  d12 $ qtrigger $ filterWhen (>=0) $ s "sine*16"
      # note "[[15 17 10] [8 20] [27 8] 25]"
      # room 0.4
      # gain "0.8"
      # pan "<0.2 0.8 0.5>"
  d1 $ qtrigger $ filterWhen (>=0) $ s "[bd bd sd bd] [bd sd] [bd cp] [sd bd]" # gain 1
  d9 $ qtrigger $ filterWhen (>=0) $ s "birds"
  -- d14 $ slow 2 $ s "arpy" <| up "c'maj(3,8) f'maj(3,8) ef'maj(3,8,1) bf4'maj(3,8)"
  -- d15 $ s "bass" <| n (run 4) -- four short bass sounds, nasty abrupt release
  -- d15 $ slow 3 $ s "bassdm" <| n (run 24)
  d14 $ qtrigger $ filterWhen (>=0) $ s "can" <| n (run 8) # gain 2
  d16 $ qtrigger $ filterWhen (>=0) $ s "<808lt:6(3,8) 808lt:6(5,8,1)>" <| n (run 8) # squiz 2 # gain 2 # up "-2 -12 -14"
  d10 $ qtrigger $ filterWhen (>=0) $ s "[bd bd cp bd bd cp bd bd] [sd cp]"

d2 $ ccv "127 ~ 70 20 [120] [40] [90]" # ccn "0" # s "midi"

nature_go_Crazy

d8 silence
d1 silence
d2 $ ccv "20 ~ 80 40 [120 ~] [20 ~] 127" # ccn "1" # s "midi"
d16 silence

d10 silence
d3 silence
d14 silence
d2 $ ccv "0 ~ [50 127] ~ 20 ~" # ccn "0" # s "midi"
d5 silence
d9 silence

hush

And here’s the Hydra code:

//start!!

// first shape
shape(999, 0.3, 0.01).modulate(noise(2, 0.5)).luma(()=>cc[0],0.0).modulate(noise(()=>(cc[0]+cc[1])*1,0.3)).out(o0)

// 2nd
osc(10, 0.1, 1)
  .modulate(noise(2, 0.5))
  .mask(shape(999, 0.3, 0.3))
  .scale(1.5)
  .luma(()=>cc[0],0.0).modulate(noise(()=>(cc[0]+cc[1])*1,0.3)).out(o0)

// 3rd shape; later add .repeat(2,2) and the cc-driven rotate
osc(10, 0.1, 1).modulate(noise(20, 0.9)).mask(shape(99, 0.2, 1)).luma(() => cc[0] / 127, 0.2).modulate(noise(()=>(cc[0]+cc[1])*1,0.3)).repeat(2,2).rotate(() => cc[2] * 0.1, 0.6).out(o0)

// 5th shape --> at first noise(30), then change to 10
osc(10,0.1,1).rotate(2).layer(osc(30,0,1)).modulate(noise(10,0.03),.5).luma(()=>cc[0],0.0).modulate(noise(()=>(cc[0]+cc[1])*1,0.3)).out(o0)

// 6th
osc(10, 0.1, 1).modulate(noise(2,0.5).luma(0.4,.03)).modulate(noise(()=>(cc[0]+cc[1])*1,0.3)).out(o0)

//7th; change noise 3 to 9, osc 20 to 40, and add .posterize(5, 0.5)
osc(40, 0.2, 1)
  .kaleid(4)
  .modulateScale(noise(9, 0.5), 0.2)
  .blend(noise(3, 0.5))
  .modulate(noise(()=>(cc[0]+cc[1])*1,0.3))
  .out(o0)

shape(2, 0.6, 0.4)
  .repeat(3, 3)
  .modulateScale(noise(2, 0.1))
  .mult(gradient().hue(0.5))
  .modulate(noise(()=>(cc[0]+cc[1])*1,0.3))
  .out(o0)

s0.initImage("https://i.pinimg.com/736x/83/8c/d6/838cd6e2a27f7887e49d869f1857742c.jpg")

noise(2, 0.5)
  .contrast(2)
  .modulate(noise(2, 0.1))
  .brightness(-0.3)
  .colorama(0.1)
  .diff(o0, 0.1)
  .modulate(noise(()=>(cc[0]+cc[1])*1,0.3))
  .out(o0)

src(s0)
  .modulate(noise(4, 0.2))
  .modulate(noise(()=>(cc[0]+cc[1])*1,0.3))
  .kaleid(5)
  .out(o0)

src(s0)
  .modulate(noise(2, 0.2))
  .modulate(noise(()=>(cc[0]+cc[1])*1,0.3))
  .modulate(osc(()=>(cc[0]*10+5), 0.1).rotate(()=>(cc[1]*0.5)))
  .out(o0)

src(s0)
  .modulate(noise(3, 0.2))
  .modulate(noise(()=>(cc[0]+cc[1])*1,0.3))
  .blend(src(o0).scale(1.01), 0.7)
  .out(o0)

Here’s my progress so far with my composition piece! I’m pretty satisfied with my visual progression, but I’m thinking of adding many more layers to the audio, because I didn’t quite have enough time to experiment with and develop it by last Thursday’s class.

I also had a question about syncing Tidal and Hydra: is there a way to match the audio to the visuals without setting an explicit rhythm in front of ccv/ccn? What I’m doing now is d2 $ struct "t(3,8)" $ ccv ((segment 128 (range 127 0 saw))) # ccn "0" # s "midi", because I found it the easiest approach, but I realized such a complicated line might not be necessary just for the ccv/ccn part.
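For comparison, here is the line I’m using now next to the shorter form I was wondering about (just a sketch; I haven’t tested whether the two stay in sync the same way):

-- current: struct supplies the rhythm, segment/range supply the values
d2 $ struct "t(3,8)" $ ccv (segment 128 (range 127 0 saw)) # ccn "0" # s "midi"

-- shorter: pattern the ccv values directly, so the rhythm comes from the pattern itself
d2 $ ccv "127 64 0" # ccn "0" # s "midi"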

Thank you, professor! And happy belated birthday. 🙂

“Composers sculpt time. In fact, I like to think of my work as time design.” — Kurokawa

Whether it be simultaneously stimulating multiple sensory systems or mixing rational and irrational realities and emotions, one of the biggest themes in Kurokawa’s compositions seems to be tying together, all at once, elements and worlds that appear completely different from one another; to me, it’s hard to imagine what kind of unexpected harmony his technique might bring. Kurokawa seems to take things that already exist and are accessible to him (i.e. nature’s disorderly ways, hyper-rational realities) and give them a twist that acts as a “shocker” factor, drawing curiosity and confusion from the recipient, which I thought was similar to other artists who do the same when drawing inspiration for their works. However, I also came to realize that many of these other artists focus on producing visual works, such as photographs, paintings, and installations, which made me wonder how sound engineers and composers could apply this “mixing reality with hyper-reality” to their own work. I imagine it might be more difficult for them, because the collision of two worlds is much more prominent, and thus easier to perceive, in visual artworks than in audio.

I found his remark about how the computer acted as a gateway for him into graphic design and the “art world” fascinating, because it echoed one of the discussions we had in class a few weeks ago about whether the technology tools we have these days make it easier or harder for us to create art and music. Because I had been inseparable from the more “traditional” art techniques, such as painting and sculpting, since childhood before transitioning into “tech-y” art, I thought creating works with technology was harder and more limiting for artists like me; learning about computers always felt like a scary realm. However, I can see how for many people it can be the exact reverse, especially if they have no background or experience in creating artworks or music. Regardless of which circumstance you relate to more, I think this new relationship between technology and art opens up entirely new fields on both sides, allowing us to expand our creativity and explore all kinds of possibilities in a much broader way.

A web-based audio-visual programming system, Tweakable prompts users to “make interactive and generative music, sound and visual art” by having them choose from a wide range of components to design their own algorithmic systems, while also setting up controls that let them adjust how the algorithm works in real time, hence the name “Tweakable.” According to the Wayback Machine, Tweakable.org was first active on June 6, 2002, was inactive from 2003 until 2020, and the version we currently have launched on December 17, 2020 (“Wayback”).

There are three main components in Tweakable: Data Input/Flow, Sequencing, and Audio, with an additional option to create custom modules. Data Input/Flow includes control inputs like sliders and MIDI and governs how information flows through the platform. Sequencing generates and transforms musical patterns using grids, scripts, and mathematical functions, while Audio converts sequences into sound through instruments, oscillators, and automated effects (Woodward). To create a new project, users first choose from a pre-built library of sequences, audio, video, effects, and more, then connect those components into an algorithmic system. Finally, once the system is built, they construct a user interface so the algorithm can be tweaked (“Tweakable”).

With its main goal of lowering the entry barrier to programming music and visual art, Tweakable invites users with no background knowledge not only to create their own works easily but also to share their projects “without worrying about missing dependencies,” since it’s web-based. As a website where users can tweak and experiment with parameters in real time, Tweakable was one of the pioneering live coding platforms at the time of its creation; not only does it embody live coding’s key characteristic of writing and modifying code in real time to create music and visuals, it also made the algorithmic generation of art more accessible and intuitive through its visual, component-based approach, opening the platform to anyone from total novice to expert (Woodward).

Finally, here’s a video of me playing around with Tweakable, and the slides are here 🙂

Works Cited

“Tweakable.” IRCAM Forum, forum.ircam.fr/projects/detail/tweakable/#project-intro-anchor. Accessed 12 Feb. 2025. 

“Wayback Machine.” Internet Archive, web.archive.org/. Accessed 12 Feb. 2025.

Woodward, Julian. Tweakable – NTNU, www.ntnu.edu/documents/1282113268/1290817988/WAC2019-CameraReadySubmission-10.pdf/bf702376-a6e4-a270-6581-f80f55bbbfec?t=1575408891372. Accessed 12 Feb. 2025.

Learning that groove is at the center of West African and African-American music, and that it plays a critical role in giving the “perception of a human, steady pulse in a musical performance,” made me think this might be the genre I look into when creating music for our own live performance. As a dancer myself, I’d love to make the audience feel the urge to just break into a dance while they listen to our rhythm. It was also interesting how altering such small details can completely change the nuance of the music; it doesn’t take much to transform it.

The fact that the backbeat is presumed to be “some very ancient human musical behavior,” one of the earliest musical attempts of humankind, and that we’re still using it as the backbone of our compositions after all these years, made me wonder whether a backbeat is crucial for all types of music or can be omitted by choice. Has it endured this long solely because it’s a necessity in creating music, or because it’s simply helpful, or a personal choice of style?
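As a quick illustration in Tidal terms (my own sketch, not from the reading), a minimal backbeat puts the snare on beats 2 and 4 against the kick on 1 and 3:

-- kick on beats 1 and 3, snare backbeat on 2 and 4
d1 $ s "bd sn bd sn"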

Finally, the comment about the current music industry, and how “rather convincing electronic tracks have replaced the drummer” on recorded tracks, reminded me of a question I’ve been harboring for a long time. I’ve always wondered whether the strings and orchestras in the background of songs are live-recorded backing tracks, or just electronic keyboard synthesizers with keys that mimic the sound of strings. Using simple keyboards would save a musician a lot of budget compared to hiring a live ensemble, and I wonder whether we’d even be able to tell the difference, since technology has evolved to the point where the tracks it produces are “rather convincing,” as the writer claims.

p.s. Here’s a quote that I thought was really powerful — I wanted to write it here so that I’ll keep it in mind as I produce projects in the future: “For what is soul in music, if not a powerfully embodied human presence?”

I thought the reading’s point that live coding is all about opening up rather than being exclusive was spot on with what I thought live coding was. Watching the performance during my freshman year, I felt included, almost as if I were part of the musical masterpiece being crafted right before my eyes, because I could see the entire process behind their code, step by step. And I remember the anticipation, the thrill, as I predicted what was going to happen next — the beat might drop at this moment, or the visuals might change this way. This is what I want to replicate for the audience in my own performance by the end of this semester, because a big part of live coding “involves showing the screen or making visible the coding process as part of a live performance.” If the audience isn’t incorporated into the performance, I believe it significantly diminishes what makes the experience unique and special.

I also liked how similar live coding is to what I think Interactive Media is, because at the center of live coding there’s an exchange of feedback between the audience and the coders/performers, along with an emphasis on being expressive, free, and present in the moment, which is what I believe Interactive Media artworks strive for. While there’s definitely a rough guideline on the performers’ part, it’s always open to change based on how the audience interacts and feels at the moment, adding a sprinkle of spontaneity by capturing the moment in which the performance is held.