let p5 = new P5()
s0.init({src: p5.canvas})
src(s0).out()
let x = 0;
let y = 5;
let gridSize = 1000;
let r = 255;
p5.hide();
p5.strokeWeight(1);
p5.stroke(0);
p5.noFill();
p5.frameRate(20);
p5.draw = () => {
  p5.background(20);
  // grid spacing is driven by the MIDI control value cc[0] sent from Tidal
  for (x = 0; x < p5.width; x += gridSize * (0.7 + cc[0] / 2)) {
    for (y = 0; y < p5.height; y += gridSize * (0.7 + cc[0] / 2)) {
      p5.fill(p5.map(255, 0, 127, 0, 255));
      // jitter the square size slightly every frame
      let rand = p5.random(-5, 5);
      p5.rect(x, y, gridSize + rand, gridSize + rand);
    }
  }
}
// read the p5 grid from o0, apply MIDI-driven effects, and render the result from o1
src(o0)
  .invert(() => ccActual[1])
  .modulateScale(osc(20, [0.1, 0.6]))
  .mask(shape(30, () => cc[0] * 1.5).scale(1, window.innerHeight / window.innerWidth, 1))
  .modulatePixelate(noise(20, 10), 400)
  .mult(solid(1, () => ccActual[1], () => ccActual[1], 1))
  .out(o1)
render(o1)
Tidal Code
d1 $ slow 2 $ s "superhammond*8" # note (scale "lydian" ("[a3*2] ~ a3 a3 c4 [e4 g4]")) # gain 1.2 # lpf 1000
d2
$ fast 1
$ chop 64
$ "moog(5,8)"
# gain 1.2
# up "-1"
# legato 4
# crush (slow 2 (range 1.2 1.8 sine))
d3
$ fast 1
$ chop 32
$ "moog(5,8)"
# speed "3 2"
# gain 0.8
# up 4
# legato 4
d4 $ fast 0.5 $ chop 32 $ "ade:6" # speed "4 3" # room 1
p "ccMaskSize" $ ccv "127 30 60 5" # ccn "0" # s "midi"
p "ccColor" $ ccv "<1 0>" # ccn "1" # s "midi"
d5 $ slow 2 $ s "blip:8(3,8)" # gain 1.2 # up "<-1 1>"
d6 $ slow 2 $ jux rev $ s "hh*8 sn*4" # crush 8 # gain 0.8 # legato 1
Ruiqi Liu:
In this performance, I was in charge of the visuals. I stuck to my preferred minimalistic and geometrical style. I wrote the basic shapes with for loops in a p5 instance, and then added effects on top with Hydra. I also experimented with lines, circles, ellipses, and squares, and found that grids look best.
Rebecca & Mike: We had three main iterations on our tidal script when experimenting with long samples from different genres as we wanted to play with the chop and striate functions. In the early iterations of in-group jams, we developed our drum circle from acid and ambient samples respectively as these two genres have a strong contrast. The acid samples feature a strong energy and a psychedelic feeling, while the ambient samples create a peaceful and calm atmosphere of the audio. These practices helped us open up our mind and had a better idea of how to arrange patterns according to the various textures from and potential emotions brought by the samples. In the final version, we decided to use built-in samples and synth, and utilize effects like legato, and chop to create our own sounds instead of directly using ambient samples. We also wanted to include a transition and an ending to make our project more complete, so we added a strong crush effect to let the synth sound raw and distorted, while adding a strong color transition for the pattern. For the ending, we gradually minimized the pattern and the audio at the same time, and eventually kept the ambient synth on going.
Reading about “the instrumental impulse” in this paper reminded me of the psychological concept of Perceptual Narrowing, a cognitive phenomenon where, for efficiency, humans develop schemas that help them make fast decisions but, in doing so, lose access to a broader range of possibilities. When I was studying surrealism, the surrealists believed one way to return to authenticity was through Automatism: letting the mind wander freely, uncensored. This mirrors the spirit of improvisation, where intuition precedes logic and the unknown is welcomed. I believe this applies to live coding as well; in a broader sense, improvisation becomes a tool not just for performance but for rediscovering what expression and creativity can mean beyond established norms.
I was especially moved by the author’s framing of the computer as a romantic, expressive instrument—not just a cold rational machine. Just like traditional instruments, laptops in live coding are personalized by each performer’s unique techniques and approaches. In this way, every computer becomes a partner, emotionally bonded with the artist. This echoes how traditional musicians develop intimate relationships with their instruments, shaping their identity over time.
Lastly, I found the contrast between live coding and tools like Ableton or Radial insightful. While those tools prioritize accessibility and user-friendly design, live coding embraces the rawness of syntax and structural logic. From my perspective, neither is better or worse; they simply differ in intention. Live coding reflects the artist's passion for exploring the pure beauty of the machine, representing a more experimental, deeply embodied relationship with technology, one that reimagines both the machine and the human in the act of creation.
I started this project with the music first, and then created the visuals to match it.
At first, I followed Aaron’s instructions and just wrote some random blocks of compositions. Some of them were bright, some of them were creepy, and they sounded unrelated to each other. (I also have to credit my amazing roommate, who inspired me by saying it sounded like an agent lurking.)
Then I basically applied three steps: first I adjusted the samples so the blocks used similar instruments, then I tried shifting the scale from major to minor, or to other scales I experimented with, so they shared a similar style. Lastly, I fine-tuned some notes to make them feel like a coherent whole. I also read through documentation on musical structure and learned a bit about the power of silence between the build-up and the drop, so I kept it; to tie it to the theme, I added a female voice saying “Warning”. Since this is a storytelling project to me, I also added a few narrative samples and an opening and ending to the structure.
For the visual part, I have always been a fan of simple red-and-black imagery, so I kept doing that. I decided very early on to stick to one main visual and evolve it into different forms over time. This cone-like visual gives me the sense of a tunnel/terminal where the agent is lurking. Part of the visual evolution follows how the sound sounds, but an even bigger part is determined by the story in my mind (the agent enters the terminal -> the terminal is like a maze with multiple entrances -> the agent finds the terminal and gets in -> the agent pulls out a gun and breaks into the secret place -> the fight begins -> open ending). I made the visuals follow this basic plot and created them from my imagination. Lastly, I matched them to the music through the MIDI communication between Tidal and Hydra.
Synesthesia allows artists to experience one sensory input while automatically perceiving it through multiple senses. While Ryoichi Kurokawa is not a clinical synesthete—nor am I—I find myself intrigued by how his artistic process unfolds. In true synesthesia, the brain acts as a bridge between senses, but Kurokawa seems to approach it differently. He envisions the brain not just as a connector but as a target, an audience, or even a collaborator. As he puts it, “I want to stimulate the brain with sound and video at the same time.”
Exploring his portfolio, I had a strong sense that his works were created with TouchDesigner (it turns out they are not). As a new media student constantly exposed to different tools, my instinct was to research his technical choices. But then I came across his statement: “I don’t have nostalgia for old media like records or CDs, and I’m equally indifferent toward new technology.” This struck me. As an artist, he moves fluidly, guided not by tools but by his vision, much like the way he deconstructs nature’s inherent disorder only to reconstruct it in his own way. It is not the medium that matters; it is the transformation.
Watching Octfalls, I could already imagine the experience of standing within the installation, anticipating each moment, immersed in the sudden and precise synchronization of sound and visuals. As I explored more of his works, I noticed how they differ from what I have seen in many audiovisual performances, where sound usually takes precedence while visuals play more of a supporting role. In Kurokawa’s pieces, sound and visuals are equal partners, forming a unified whole. This made me reconsider my own approach; perhaps, instead of prioritizing one element over the other, a truly cooperative relationship between sound and visuals could be even more compelling.
Mercury is a beginner-friendly, minimalist, and highly readable language designed specifically for live-coding music performances. It was first developed in 2018 by Timo Hoogland, a faculty member at HKU University of the Arts Utrecht.
Mercury's structure is similar to JavaScript and Ruby, and it is written in a highly abstracted style. The audio engine and OpenGL visual engine are built on Cycling ’74 Max 8, using Max/MSP for real-time audio synthesis while Jitter and Node4Max handle live visuals. Additionally, a web-based version of Mercury uses WebAudio and Tone.js, making it more accessible.
How Mercury Works
The Mercury language is rooted in serialism, a musical composition style in which parameters such as pitch, rhythm, and dynamics are expressed as series of values (called lists in Mercury) that adjust an instrument's state over time.
In Mercury, code is executed sequentially from top to bottom, and variables must be declared before being used in an instrument instantiation if the instrument relies on them. The functionality is divided into three categories: the list command defines a series of values; the new command instantiates an instrument, such as a sampler or synthesizer, to generate sound; and the set command changes settings such as the global tempo.
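As a quick illustration, a minimal sketch in the spirit of the examples from the Mercury documentation might look like the following (the sample names kick_909 and hat_909 and the exact argument formats are assumptions based on that documentation and may differ between versions):

set tempo 110
list melody [0 3 7 12 9 7]
new sample kick_909 time(1/4)
new sample hat_909 time(1/4 1/8)
new synth saw note(melody 1) time(1/8) shape(1 1/16)

Here the list command defines a series of scale degrees, the two sample instruments play a kick on every quarter note and a hi-hat shifted to the off-beat, and the synth steps through the melody list one value per eighth note, with shape() giving each note a short envelope.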
Mercury in the Context of Live Coding
What makes Mercury stand out are the following highlights:
A minimal, readable, and highly abstracted language: it fosters greater clarity and transparency between performers and audiences. By lowering the barriers of live coding, it enhances the immediacy of artistic expression. True to its name, Mercury embodies fluidity and quick thinking, enabling artists to translate their mental processes into sound and visuals effortlessly.
Code length limit and creativity: early versions of Mercury limited the performer to 30 lines of code, which encourages innovation and constant change by pushing iteration on the existing code rather than writing ever longer scripts.
Demo
Below is a demo of me playing around in the Mercury Web Editor Playground: