Here is the link to the video.

This is the code:

Hydra:

let p5 = new P5()
s0.init({src: p5.canvas})
// in a browser you'll want to hide the canvas
p5.hide();
// no need for setup
p5.noFill()
p5.strokeWeight(10);
p5.angleMode(p5.DEGREES);
p5.stroke(255);
p5.noiseDetail(2,1)
// with tidal
let start = 10
p5.draw = () => {
  p5.background(0);
  p5.translate(p5.width / 2, p5.height / 2)
  let space = 10;
  for (let i = 0; i < 360; i += space) {
    let xoff = p5.map(p5.cos(i), -1, 1, 0, 10)
    let yoff = p5.map(p5.sin(i), -1, 1, 0, 10)
    let n = p5.noise(xoff + start, yoff + start);
    let h = p5.map(n, 0, 1, -150, 200)
    p5.rotate(space)
    // cc (normalized 0-1) and ccActual (raw 0-127) hold the CC values sent from Tidal
    p5.rect(200 + cc[0] * 5, ccActual[0], h, 1)
  }
  start += 0.01
}
src(s0).rotate(() => ccActual[0], 0.5).out(o1) // wrapped in a function so the angle keeps tracking MIDI; alternative: rotate(10, 0.5)
osc(24,0.5,3.5).mult(o1).out()

TidalCycles:

d1 $ whenmod 16 8 (# note (scale "major" ("[0,2,4] [1,3,5] [5,7,9]") + "c5")) $ s "[hh*8?, <superhammond:5(3,8) gtr(5,8)>, <clubkick(5,8) [drumtraks:6(3,8), bd(5,8,1)]>]" # room 0.95 # gain 1.4 # speed (slow 4 (range 1 2 square))
d2 $ whenmod 16 8 (# ccv ((segment 128 (range 0 127 saw)))) $ struct "<t(3,8) t(5,8)>" $ ccv ((segment 128 (range 40 120 rand))) # ccn "0" # s "midi"

I really enjoy the sound of drums and low-pitched tones because they feel closer to a heartbeat. Throughout this project, I spent a lot of time searching for different low-pitched sounds to combine. Initially, it was quite difficult to find the right combination because many of the sounds were too similar. To add more variation, I applied heavy distortion (using krush and squiz) to most of them, which helped create distinct textures and gave the overall composition more character.

I started the project using a Tidal file and then tried to connect the sound with Hydra. Since many of the music blocks were built from various rhythms, it was quite difficult to represent them visually in Hydra. One solution I came up with was to make each sound trigger a different visual change in Hydra. I especially enjoyed experimenting with basic shapes and movements, and I tried out different ways to make those shapes move in response to the sound.
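As a minimal sketch of that idea: each Tidal pattern writes to its own CC number with ccn, and Hydra reads the matching index of the global cc array. This assumes the usual Hydra MIDI setup that fills cc and ccActual from incoming CC messages, as the code in this post does; the parameter choices here are just illustrative.

// minimal sketch: each ccn "N" pattern in Tidal drives cc[N] here
// (assumes the standard Hydra MIDI snippet that populates cc/ccActual)
osc(10, 0.1, 1)
  .rotate(() => cc[0] * Math.PI) // responds to the ccn "0" pattern
  .scale(() => 0.5 + cc[1])      // responds to the ccn "1" pattern
  .out(o0)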

It was quite challenging to bring everything together into a cohesive composition because I wasn’t sure how to create a strong drop. I ended up starting with a verse, which repeats a few times throughout the piece, then gradually layered in different drums and bass sounds to build the chorus. To create a bridge, I used a variation of the verse, which helped lead into the buildup and eventually the drop. I finished the piece by working backwards, transitioning from the drop back into the chorus, and ending with a softer, more minimal sound to bring the composition to a close.

Hydra code:

noise(20)
  .pixelate(50, 50) //()=>ccActual[4]
  .blend(
    osc(10, 0.001, 1)
      .hue(() => cc[4])
  ).blend(solid(), 0.5) //()=>cc[3]
  .out(o1)

gradient([1,2,4])
  .mult(osc(40, 0, 1).color(()=>cc[1]*2, ()=>cc[1], 1))
  .modulate(noise(()=>cc[1] + cc[0]*3))
  .brightness(0.4)
  .mask(shape(3, 0.2, 0.1).rotate(()=>cc[0]).scrollY(-0.25)) //()=>cc[2]/3
  .modulate(voronoi(25, 2, 0.5), 0.1)
  .add(
    shape(3, 0.2, 0.1) //()=>cc[2]/3
      .rotate(()=>(-(1-cc[0])))
      .color(()=>cc[1], ()=>cc[1], ()=>cc[1])
      .scrollY(0.25)
      .modulate(voronoi(25, 2, 0.5), 0.1)
  )
  .luma(0.05)
  .out(o0)

src(o1).layer(src(o2)).out(o0)

TidalCycles code:

d1 $ slow 4 $ note "c3*4 d4*4 a3*8 g3*4" # s "superhammond:5" # legato 0.5
d4 $ ccv "[50 120] [50 120] [100 80 60 40] [50 120]" # ccn "0" # s "midi"

do
  d5 $ s "clubkick*4" # gain 0.8 # legato 1 # speed (slow 4 $ range 0.5 1 saw)
  d6 $ ccv "80 20 50 120" # ccn "1" # s "midi"

do
  d4 $ note "c3*2 d4*2 a3*4 g3*2" # s "arpy" # room 0.95 # gain 0.8 # speed "1.5 1"
  d3 $ ccv "80 40 [100 40] 120" # ccn "0" # s "midi"

do
  d1 $ s "~ cp" # gain 1.2 # room 0.5
  d4 $ s "<coins(3,8) hh(5,8,1)>" # room 1.2 # speed "2 1" # gain 1.3 # squiz 1.1 # up "-2"
  d3 $ struct "<t(3,8) t(5,8,1)>" $ ccv ((segment 128 (range 127 0 saw))) # ccn "0" # s "midi"

do
  d1 $ s "[glitch*4(3,8)]" # n (run 5) # vowel "a e i o" # room 0.45 # speed (slow 4 ("<2.5 1>")) # gain 1.2
  d6 $ ccv "[50 ~ ~ 120 ~ ~ 70 ~](3,8)" # ccn "0" # s "midi"
  d4 $ s "[sd <sd cp> ~ cp, hh*8]" # room 0.45 # speed (slow 4 ("<2.5 1>"))
  d7 $ ccv "50 <100 70> ~ 40" # ccn "1" # s "midi"

--verse 2 ADD cc[2] in hydra
do
  d1 $ degradeBy 0.4 $ s "808bd:1*8" # legato 1 # speed "1.5 1" # room 0.75
  d3 $ slow 4 $ note "c3*4 d4*4 a3*8 g3*4" # s "superhammond:5" # legato 0.5 # gain 1.1
  d4 $ ccv "[100 50] [100 50] [100 80 60 40] [100 50]" # ccn "2" # s "midi"
  d7 $ struct "t*8" $ ccv ((segment 128 (range 127 0 saw))) # ccn "0" # s "midi"
  d6 $ ccv "120" # ccn "1" # s "midi"

-- qtrigger restarts the pattern from cycle 0 when this is evaluated,
-- so the five-cycle seqP riser always plays from the top
built_up = do {
  d3 $ qtrigger $ filterWhen (>=0) $ seqP [
    (0, 1, s "[drum:5*4, hh*8]" # room 0.55),
    (1, 2, s "[drum:5*4, hh*8]" # room 0.65),
    (2, 3, s "[drum:5*4, hh*8]" # room 0.75),
    (3, 4, s "[drum:5*4, hh*8]" # room 0.85),
    (4, 5, s "[drum:5*4, hh*8]" # room 0.95)
  ] # speed (slow 5 (range 2 3 saw)) # gain (slow 5 (range 0.5 1.5 saw));
  d4 $ qtrigger $ filterWhen (>=0) $ seqP [
    (0, 5, struct "[t*50]" $ ccv (range 127 0 $ slow 5 saw) # ccn "3" # s "midi")
  ]
}

-- change hydra to cc[3]
built_up

bass = (stack[s "clubkick:5(3,8)", s "bassdm:22(3,8)"] # room 0.95 # speed ("<1 1.5>") # gain 1.2 # squiz 2 # up "-2" # krush 10)

do
  d2 $ bass
  d3 $ s "<bassdm:22(5,8) drumtraks:2(3,8,1)>" # room 0.95 # speed (slow 4 ("<1 1.5>")) # gain 1.1 # squiz 1.1
  d1 $ slow 4 $ note "c3*4 d4*4 a3*8 g3*4" # s "arpy:5" # legato 0.5 # gain 1.2
  d5 $ struct "<t(3,8) t(5,8,1)>" $ ccv ((segment 128 (range 127 20 saw))) # ccn "4" # s "midi"
  d4 $ ccv "[100 50] [100 50] [100 80 60 40] [100 50]" # ccn "2" # s "midi"

do
  d1 $ s "[<drumtraks:6(3,8)>, hh(5,8,1)]" # room 0.95 # krush 4 # speed (slow 4 ("<2 2.5>")) # up "-2"
  d2 $ s "stomp:5*4" # room 0.55 # krush 5 # speed (slow 4 "<2 1 1.5 2>")
  d3 $ struct "<t(3,8)>" $ ccv ((segment 128 (range 127 0 saw))) # ccn "2" # s "midi"
  d4 $ ccv "0 20 100 70" # ccn "0" # s "midi"

do
  d1 silence
  d5 $ s "clubkick*4" # gain 0.8 # legato 1 # speed (slow 4 $ range 0.5 1 saw)
  d6 $ ccv "80 20 50 120" # ccn "1" # s "midi"

do
  d4 $ note "c3*2 d4*2 a3*4 g3*2" # s "arpy" # room 0.95 # gain 0.8 # speed "1.5 1"
  d3 $ ccv "80 40 [100 40] 120" # ccn "4" # s "midi"

hush
d1 silence
d2 silence
d3 silence
d4 silence

Demo

Sorry, I cannot get the embedding to work. Here is the link.

People often compare music to writing mathematical sentences, as both can convey a story of their own. It therefore seems natural for sound and composition to represent both abstract and physical aspects of the world. Live coding, combined with visual elements, enhances this representation by engaging multiple senses, creating a richer, more immersive context for the audience. This multisensory experience aligns with Kurokawa’s central concept of “synaesthesia and the deconstruction of nature.” I believe that activating multiple senses simultaneously during a performance not only deepens the audience’s engagement but also highlights the intricate relationship between sound and visual representation. Much as sound waves and color frequencies correspond, the idea of synaesthesia leads me to wonder: if sound can be associated with specific colors, can colors, in turn, evoke specific sounds?

Furthermore, it is through the visual representation of sound that deeper meaning emerges. By transforming the auditory into the visual, the performance gains an additional layer of interpretation, embodying what Kurokawa refers to as the “decoding of nature.” This seamless fusion of “graphic analyses and re-syntheses” introduces a poetic quality to the performance, where sound and visuals breathe life into the piece. As a result, the work takes on the fluid, organic qualities of nature, embracing both its structure and its inherent noise and randomness.

Cascade is a web-based live coding environment introduced by Raphaël Bastide in 2021. It turns the HTML and CSS built into every web browser into a tool for creating sound and visuals. Since Cascade is entirely browser-based, no additional setup or external modules are required: users simply reference the core scripts from an HTML file and/or a CSS file to get started.

How does it work?

Cascade generates sound and visuals from the shapes and positions of elements on a webpage. Each property of an element (such as width, height, and position) influences a specific musical attribute. For instance, the width-to-height ratio determines the number of beats and steps, following Godfried Toussaint’s Euclidean rhythm distribution. This integration of web design with music encourages users to think about visual aesthetics and their contribution to sound production at the same time. When I tried it myself, I found it quite difficult to come up with a meaningful visual that also produced a good sound. However, because CSS animations are interpreted in real time, the visuals can become much more dynamic.
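To make the Euclidean idea concrete, here is a rough JavaScript sketch (my own illustration, not Cascade’s actual source) of how a number of beats is spread as evenly as possible across a number of steps; the (3,8) pattern it prints is the same tresillo used throughout the Tidal code above:

// rough illustration of Euclidean rhythm distribution (not Cascade's source):
// an onset falls wherever the running multiple of `beats` wraps around `steps`
function euclid(beats, steps) {
  const pattern = [];
  for (let i = 0; i < steps; i++) {
    pattern.push((i * beats) % steps < beats ? 'x' : '.');
  }
  return pattern.join('');
}

console.log(euclid(3, 8)); // "x..x..x." — the (3,8) rhythm used in the Tidal code
console.log(euclid(5, 8)); // "x.x.xx.x" — a rotation of the (5,8) pattern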

c.add('width:10vh; height:30px; background:cyan; top:80vh;')

For example, the line above adds a div to the body with a width of 10vh and a height of 30px. Its cyan background selects the corresponding instrument defined by Cascade, and the vertical position (top: 80vh) adjusts the pitch, making the note higher and louder.
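If the mapping works as described, varying these properties should change the result accordingly. For instance (hypothetical values, using the same c.add call shown above), a div placed higher up the page with a different background color should give a lower, quieter note on another instrument:

c.add('width:10vh; height:30px; background:orange; top:20vh;') // hypothetical: smaller top -> lower, quieter note; different color -> different instrument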

Why Cascade?

Cascade offers a gentle learning curve for those familiar with web development, as it introduces no new language or syntax beyond standard CSS and HTML. This makes it an accessible entry point for newcomers to live coding while providing a creative playground for experienced developers. It also fosters collaboration between performers and audiences: performers can modify the page and see and hear the results in real time, while audiences watch and listen to the evolving performance.

Cascade bridges the gap between web development and live coding, making it a powerful tool for exploring sound through visual design. It allows users to combine or learn both disciplines simultaneously. Every design decision affects the resulting sound, prompting a thoughtful approach to composition and layout. This blend of sound and visual design invites users to experience the intersection of aesthetics and music in new and exciting ways.

Demo


Link (in case the preview does not work)

Citation

“Cascade.” Raphaelbastide.com, 2025, raphaelbastide.com/cascade/#cascade-pool-collab-mode. Accessed 13 Feb. 2025.

Music originates from the human body. Every clap of the hands and stomp of the feet creates a rhythm, and these natural rhythms have inspired the development of instruments such as the snare drum and bass drum. Furthermore, the variations in pitch and frequency of notes mirror the complexity of human emotions, allowing music to carry and convey feelings that the audience can intuitively understand. Because music is born from the body and has the ability to transmit the abstract essence of human experience, I believe it comes alive through both those who create it and those who listen to it.

What makes music feel even more alive is the behavior we attribute to it. For instance, the slight delay between the bass drum and snare drum may stem from the natural coordination differences between our hands and feet, yet we accept this as an inherent quality of musical rhythm. Each performer is unique, and as a result, the same piece of music can be played in ways that evoke entirely different emotions. In this sense, music takes on the personality of the musician, becoming a deeply personal and expressive form of art. Thus, music serves as a powerful medium for self-expression in today’s world.

“To define something is to stake a claim to its future, to make a claim about what it should be or become,” said David Ogborn.

Because live coding is a performance rather than merely a display of code, defining it too strictly could limit its creative potential. Imposing rules or confining it within a rigid definition might restrict the freedom and diversity that coexist in live coding. In the creative field, where projects take on different forms and styles, I believe that leaving “live coding” undefined allows more possibilities to emerge and evolve in real time during live performances.

That said, my own interpretation of live coding is that it is a space where code is alive and constantly evolving. The code’s transformations can grow or come to an end, and the performance can be a solo act or an interactive experience with the audience. It is “alive” in the sense that it changes dynamically before the audience’s eyes, yet it is also static in a way, like a painting, where the computer is the canvas and each line of code is a brushstroke. This dual nature makes live coding an exciting form of art for both developers and audiences, shaping it into a unique live performance and artistic experience.