(The full visual appears at around 55 seconds.)
I started this project with the music first, then created the visuals and matched them to the sound.
At first, I followed Aaron’s instructions and just wrote some random blocks of composition. Some of them are bright, some of them are creepy (is this the right word?), and they sounded unrelated to each other. (Credit goes to my amazing roommate, who inspired me by saying it sounded like an agent lurking.)
Then I basically applied three steps: first, I adjusted the samples so the blocks used similar instruments; then, I shifted the scale from major to minor (or other scales I experimented with) so they shared a similar style; lastly, I fine-tuned some notes to make them feel like a coherent whole. I also read through the documentation on musical structure and learned a bit about the power of the silence between the build-up and the drop, so I kept it, but to tie it back to the theme I added a female voice saying “Warning.” Since this is a storytelling project to me, I also added a few narrative samples and an opening and ending to the structure.
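The scale adjustment, in particular, was mostly a matter of swapping the scale name in the note pattern. Roughly, the idea looked something like this (illustrative patterns, not the actual ones from the piece):

-- a bright fragment, written against the major scale
d1 $ s "808hc*4" # note (scale "major" "<0 2 4 7>")

-- the same fragment pulled into the darker shared palette
d1 $ s "808hc*4" # note (scale "minor" "<0 2 4 7>")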
For the visual part, I have always been a fan of simple red-and-black imagery, so I kept doing that. I decided very early on to stick to one main visual and evolve its form over time. This cone-like visual gives me the sense of a tunnel/terminal where the agent is lurking around. Part of the visual evolution follows how the sound sounds, but an even bigger part is determined by the story in my mind (the agent enters the terminal -> the terminal is like a maze with multiple entrances -> the agent finds the way in and gets inside -> the agent pulls out a gun and breaks into the secret place -> the fight begins -> open ending). I made the visuals follow this basic plot and created them from my imagination. Lastly, I tried to match them to the music through the MIDI communication between Tidal and Hydra.
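To be concrete about the MIDI part: on the Tidal side I pattern ccv (the value) and ccn (the controller number) on the midi target, and on the Hydra side cc[n] gives a normalized 0–1 value and ccActual[n] the raw 0–127 value (at least in the MIDI bridge setup I’m using), which then drive parameters like the shape radius or the repeat count. A minimal sketch of one such link:

-- Tidal: send controller 1, alternating between two values each cycle
d11 $ ccv "<20 100>" # ccn 1 # s "midi"

-- On the Hydra side this is read back with something like
-- shape(30, () => cc[1] * 1.3), i.e. the radius follows the controller.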
My Tidal Code
setcps(135/60/4)
hush
-- reset everything
do
  d11 $ ccv 0 # ccn 1 # s "midi"
  d12 $ ccv 1 # ccn 2 # s "midi"
  d13 $ ccv 0 # ccn 3 # s "midi"
  d14 $ ccv 0 # ccn 4 # s "midi"
do
  d1
    $ qtrigger $ filterWhen (>=0)
    $ struct "<[[t@3 t] t t@2]>" $ s "808hc"
    # cut 1
    # room 0.4
    # speed "<0.8 1 0.8 0.5>"
  d11 $ ccv 0 # ccn 1 # s "midi"
do
  d2
    $ qtrigger $ filterWhen (>=0)
    $ s "<hh(3,8,1) hh(5,8)>"
  d3 $ qtrigger $ filterWhen (>=0) $ s "~ 808hc*2 ~!2" # note (scale "minor" "<15 21 15 3>") # room 0.6
  d11 $ qtrigger $ ccv "<20 1>" # ccn 1 # s "midi"
  d12 $ ccv "1" # ccn 2 # s "midi"
do
  d4
    $ qtrigger $ filterWhen (>=0)
    $ struct "t!6 f [t t@2 f]"
    $ whenmod 4 3 ((# note (scale "minor" "[0,2,4]!4 [0,2,6] [0,2,4]!3"-12)))
    $ s "808hc"
    # note ((scale "minor" "<[<<22 20> 15> 7 3 1]>")-12)
    # legato 0.6
    # gain (range 0.7 1.3 sine)
    # room 0.6
  d11 $ qtrigger $ filterWhen (>=0) $ ccv "<<40 60> <40 60>>" # ccn 1 # s "midi"
  d12 $ ccv "<1 <2 3>>" # ccn 2 # s "midi"
hush
do
  d5 $ stack [
    s "bass3*8" # speed ("<1.2 1>") # room 0.4,
    slow 2 $ s "glitch:3(3,8) glitch:2"
    ]
  d6 $ s "hh*8"
  d12
    $ qtrigger $ filterWhen (>=0)
    $ ccv "<127 20 60 [[50 35] [20 10] [40 10] 10]>" # ccn 1 # s "midi"
do
  -- build-up: 8-cycle filter sweep and rising gain, hats doubling in speed
  d1 silence
  d6 silence
  d8 $ s "bass3(12,16)" # speed ("<1.2 1>") # room 0.4
  d5 $ qtrigger $ filterWhen (>=0) $ seqP [
    (0,8, s "sharp:2*4"
      # lpfbus 1 (segment 1000 (slow 8 (range 100 7000 saw)))
      # lpq 0.3
      # gain (slow 8 (range 0.4 2 saw))
      -- # speed (range 0.3 4 saw),
      # room 0.3)
    ]
  d9 $ stack [
    slow 2 (s "~ glitch:3"),
    struct "t!2 [f t] [t f]" $ s "[909 sd]!2" # room 0.6,
    qtrigger $ filterWhen (>=0) $ slow 4 $ s "hh*<4 8 16 32>"
    ]
  d6 $ s "~!3 <sharp:6 ~>" # room 0.3 # gain (slow 2 "<1 1.5>") # (slow 2 (lpf "<1000 10000>"))
  d11 $ qtrigger $ filterWhen (>=0) $ ccv (slow 8 (segment 8 (range 10 127 saw))) # ccn 1 # s "midi"
  d12 $ qtrigger $ filterWhen (>=0) $ ccv "<1 2 4 8 12 18 23 100>" # ccn 2 # s "midi"
  d13 $ qtrigger $ filterWhen (>=0) $ ccv "<0.1 1>" # ccn 3 # s "midi"
hush
do
  -- final section: main groove runs to about cycle 33, fading out from cycle 25
  d2 silence
  d3 silence
  d9 silence
  d8 silence
  d7
    $ qtrigger $ filterWhen (>=0) $ seqP [
      (1,25, every 2 (fast 2) $ s "bass3(5,8)" # note ("1..2") # room 0.3),
      (25,30, every 2 (fast 2) $ s "bass3(5,8)" # note ("1..2") # room 0.3 # gain (slow 12 (range 1 0.1 saw)))
      ]
  d6
    $ qtrigger $ filterWhen (>=0) $ seqP [
      (0,1, s "sharp:3" # gain 1.5 # room 1 # krush 5),
      (1,25, s "sharp:2*2" # speed "0.7" # gain (slow 25 (range 1.3 0.8 saw))),
      (1,17, slow 2 $ s "hardkick:5" # gain "<<2 1.3> 1.3 1.5 1.3>" # room 0.8 # lpf ("<10000 10000 800 10000>")),
      (1,20, s "909*4" # room 0.3 # gain 1.3),
      (1,25, s "hh*16" # gain (range 1 1.2 perlin)),
      (25,30, s "sharp:2*2" # speed "0.7" # gain (slow 8 (range 0.8 0 saw))),
      (31,33, s "<sharp:7 [~!3 sharp:9]>" # room "<1 1.5>" # gain "<1 1.2>")
      ] # room 0.5
  d4
    $ qtrigger $ filterWhen (>=0) $ seqP [
      (0,30, slow 2 $ chop "<32 16>" $ s "ade:6"
        # speed ("<2 3>")
        # room 0.2 # gain (range 1 1.8 saw))
      ]
  d11 $ qtrigger $ filterWhen (>=0) $ seqP [
    (0,25, ccv (segment 16 (range 100 1 saw))),
    (25,30, slow 5 $ ccv (segment 5 (range 100 0 saw))),
    (30,33, ccv "0")
    ] # ccn 1 # s "midi"
  d12
    $ qtrigger $ filterWhen (>=0) $ seqP [
      (0,1, ccv "1 0!64"),
      (1,10, ccv "<1 4 10 20>"),
      (25,30, ccv "1"),
      (30,33, ccv "1")
      ] # room 0.5 # ccn 2 # s "midi"
  d14
    $ qtrigger $ filterWhen (>=0) $ seqP [
      (0,1, ccv "1"),
      (1,2, ccv "0"),
      (3,17, slow 2 $ ccv "1 0!12")
      -- (25,33, ccv "1")
      ] # ccn 4 # s "midi"
hush
My Hydra Code
hush()
// intro
shape(30,()=>cc[1]*1.3)
.scale(1,()=>window.innerHeight/window.innerWidth,1)
.colorama(1)
.scrollY(0.1,0.1)
.diff(src(o1).scale(0.9),0.5)
// comment out for the 4th part
// .luma(0.12)
// .repeat(()=>ccActual[2],()=>ccActual[2])
.out(o1)
render(o1)
// build up
src(o1)
.repeat(()=>ccActual[2],()=>ccActual[2])
.modulate(noise(()=>ccActual[2]/3,()=>cc[2]*5))
.colorama(()=>ccActual[3])
.invert(()=>ccActual[4])
.out(o2)
render(o2)
//drop
src(o1)
.repeat(()=>ccActual[2],()=>ccActual[2])
.modulate(noise(()=>ccActual[2]/3,()=>cc[2]*5))
.hue(()=>cc[1]*50)
.colorama(()=>ccActual[3])
.modulateScale(noise(3,()=>cc[1]))
.invert(()=>ccActual[4])
.out(o2)
// final
shape(4,100).out(o0)
render(o0)
hush()
I’m still trying to tweak the musical structure because I’m not satisfied with how quickly it ends right now. The visuals are also pretty incomplete; this is still a work in progress.
Synesthesia lets a person experience one sensory input while automatically perceiving it through other senses as well. While Ryoichi Kurokawa is not a clinical synesthete—nor am I—I find myself intrigued by how his artistic process unfolds. In true synesthesia, the brain acts as a bridge between senses, but Kurokawa seems to approach it differently. He envisions the brain not just as a connector but as a target, an audience, or even a collaborator. As he puts it, “I want to stimulate the brain with sound and video at the same time.”
Exploring his portfolio, I had a strong sense that his works were created with TouchDesigner (it turns out they are not). As a new media student constantly exposed to different tools, my instinct was to research his technical choices. But then I came across his statement: “I don’t have nostalgia for old media like records or CDs, and I’m equally indifferent toward new technology.” This struck me. As an artist, he moves fluidly, guided not by tools but by his vision, much like the way he deconstructs nature’s inherent disorder only to reconstruct it in his own way. It’s not the medium that matters—it’s the transformation.
Watching Octfalls, I could already imagine the experience of standing within the installation, anticipating each moment, immersed in the sudden and precise synchronization of sound and visuals. As I explored more of his works, I noticed how they differ from what I have seen in many audiovisual performances, where sound usually takes precedence while visuals play more of a supporting role. In Kurokawa’s pieces, sound and visuals are equal partners, forming a unified whole. This made me reconsider my own approach—perhaps, instead of prioritizing one element over the other, a truly cooperative relationship between sound and visuals could be even more compelling.
Mercury is a beginner-friendly, minimalist, and highly readable language designed specifically for live-coding music performances. It was first developed in 2018 by Timo Hoogland, a faculty member at HKU University of the Arts Utrecht.
Mercury’s syntax is similar in spirit to JavaScript and Ruby, but written at a much higher level of abstraction. The audio engine and OpenGL visual engine are built on Cycling ’74 Max 8, using Max/MSP for real-time audio synthesis while Jitter and Node4Max handle the live visuals. Additionally, a web-based version of Mercury leverages the Web Audio API and Tone.js, making it more accessible.
How Mercury Works
The Mercury language is rooted in serialism, a style of musical composition in which parameters such as pitch, rhythm, and dynamics are expressed as series of values (called a list in Mercury) that adjust an instrument’s state over time.
In Mercury, code is executed sequentially from top to bottom, and a variable must be declared before it is used in an instrument instantiation that relies on it. The functionality is divided into three categories: global settings (such as the tempo) are adjusted with the set command; a list of values is defined with the list command; and to generate sound, an instrument—such as a sampler or synthesizer—is instantiated with the new command.
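To make this concrete, here is a rough sketch of what a short Mercury program can look like (reconstructed from my memory of the documentation’s examples, so treat the sample names and argument values as placeholders rather than verified code):

set tempo 140

list melody [0 3 7 12]

new sample kick_909 time(1/4)
new sample hat_909 time(1/8)
new synth saw note(melody 1) time(1/16)

Here set changes a global setting, list declares the series of pitch values, and each new line instantiates an instrument whose behavior (the values it steps through, its rhythmic interval, and so on) is shaped by the functions that follow it.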

Mercury in the Context of Live Coding
What makes Mercury stand out are the following highlights:
- A minimal, readable, and highly abstracted language: it fosters greater clarity and transparency between performers and audiences. By breaking down barriers in live coding, it enhances the immediacy of artistic expression. True to its name, Mercury embodies fluidity and quick thinking, enabling artists to translate their mental processes into sound and visuals effortlessly.
- A code-length limit that drives creativity: early versions of Mercury limited the performer to 30 lines of code, which encourages innovation and constant change by pushing iteration on the existing code rather than writing another long stretch of code.
Demo
Below is a performance demo of me playing around in the Mercury Web Editor Playground:
The article discusses the importance of human embodied presence in music, emphasizing how both intentional and unintentional “imperfections”—along with the physical movements of musicians—play a crucial role in shaping the “soul” of music.
At first, I found myself wondering the same thing: Does electronic music really have a soul? The perfection in electronic music—precise timing, flawless pitch, and speeds that human musicians cannot physically achieve—often creates a robotic and somewhat inaccessible quality. This seems to contrast with the warmth and expressiveness of human-performed music.
However, as I reflected on my own experiences with techno and electronic music, I realized that we are actually drawn to its cold, half-human, and futuristic aesthetic. As the author describes, it embodies a “disembodied, techno-fetishistic, futuristic ideal.” In this sense, electronic music’s unique identity is not about replicating human imperfection but about embracing a different kind of artistic expression.
The evolution of electronic music challenges us to rethink the essence of musical “soul.” Does music require human musicians physically playing instruments to be considered soulful? Ultimately, both electronic sounds and traditional instruments are merely mediums for artistic expression. Defining musical soul solely based on the medium—whether electronic or acoustic—seems arbitrary. If digital music is purely “cold,” does that mean acoustic instruments are purely “warm”?
Even when electronic music fully embraces mechanical perfection, it can still be deeply expressive, depending on how the artist uses it. As I mentioned earlier, techno and other electronic genres transform cold precision into something deeply moving. The soul of music does not come from imperfection alone, but from the wild and imaginative ideas of the artist. Rather than rigidly defining musical soul based on how “human” a sound is, we should recognize that it is the artist’s vision that gives music its depth, emotion, and meaning.
P.S. When I was reading the “backbeat” part and the microscopic observation that the snare drum is always played slightly later than the midpoint between two consecutive pulses, I tried to replicate the rhythm in Tidal (not sure if it’s right, but it sounds close). Then I searched on YouTube for drummers playing a backbeat. Unfortunately, I wasn’t able to hear the tiny time difference between the two. Maybe I need more listening training for this :)
d1 $ stack [
  s "bd ~ sn ~ bd bd sn ~",
  s "hh*8"
  ] # room 0.3
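One way to actually place the snare slightly behind the midpoint (which the pattern above doesn’t really do) might be to put the snare on its own layer and shift it late by a small fraction of a cycle; the 0.01-cycle offset here is just a guess:

-- kick and hats stay on the grid
d1 $ stack [
  s "bd ~ ~ ~ bd bd ~ ~",
  -- shift only the snare a hair later than the midpoint
  (0.01 ~>) $ s "~ ~ sn ~ ~ ~ sn ~",
  s "hh*8"
  ] # room 0.3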
These paragraphs explore the concept of live coding and why it attracts people. As an interdisciplinary practice combining coding, art, and improvised performance, live coding appeals to both technicians and artists. It provides a unique medium to appreciate the beauty of coding and the artistic aspects often hidden within what is typically seen as a highly technical and inaccessible field.
I encountered live coding for the first time while working as a student staff member at ICLC2024. These performances gave me a basic understanding of live coding as a layperson. Reading this article later deepened my perspective and sparked new thoughts.
The article describes live coding as a way for artists to interact with the world and each other in real-time through code. Watching live coding performances, I initially assumed artists focused entirely on their work, treating the performance as self-contained and unaffected by external factors. However, I may have overlooked the role of the audience, the venue, and the environment in inspiring the artists and adding new layers to the improvisation. As someone who loves live performances, I now see live coding as another form where interaction between the artists and their surroundings is crucial.
The article also mentions how projecting code on the screen as the main visual makes the performance more transparent and accessible. While I agree with this, it also raises a concern. A friend unfamiliar with live coding once referred to it as a “nerd party,” commenting that it’s less danceable than traditional DJ performances and difficult for non-coders—or even coders unfamiliar with live coding languages—to follow. I wonder if this limits the audience’s ability to understand and fully appreciate the performance or the essence of the art form. Although this may not be a significant issue, it’s something I’m curious about.