I was really struck by the idea of live coding as a “technique of making strange.” The text mentions it in a creative sense, but what I noticed is how deep this simple act feels to me. When I use apps or websites, everything is designed to be smooth and invisible, as if the code is hidden and the choices are pre-made. I just click and things happen. It feels easy, but also a little like I am being led.
Live coding, by putting the raw code on screen for everyone to see, does the opposite. It makes the machine visible again. To me, that seems powerful. It turns coding from a private, technical task into a public conversation. That act of showing feels like a small act of resistance in a world that wants technology to feel natural and unchallengeable. It reminds me that I am a user, not just a consumer, and there is a big difference.
The reading states that “Live coding is about making software live.” As an aspiring software engineer, my experience with coding has always involved writing an algorithm, testing it, debugging it, and deploying it. Essentially, I have always programmed software to execute tasks that, once deployed, perform the same actions repeatedly for users. Yet throughout my programming experience, I have never once treated software as a live entity I could interact with in real time; the software was always pre-programmed, static. That is why I find the concept of live coding, where the software feels alive and can be interacted with during a performance, a fascinating way to blend artistic expression with such a rigid field.
To illustrate this liveness, the reading also points out that live coding is similar to live music performance. The real, social, and spiritual experience of music happens in the moment of performance, in the presence of the musician and the audience. Live coding embodies the same principle: it is a performance where the creation of music (through writing code) happens live, in front of an audience. There is no pre-recorded track; the music is generated in real time from the performer’s actions. Learning live coding seems a bit intimidating to me at this point, but I find the idea of learning the algorithms and methods to manipulate my laptop screen to express my ideas truly exciting.
Our process began with a clear vision of the environment we wanted to create: an abstract narrative that subtly tells the story of a group of friends watching TV and embarking on a surreal, psychedelic trip. While we didn’t want to portray this explicitly, the goal was to evoke the strange sensations and shifting experiences they go through, using a mix of visual cues and atmospheric design. With the concept in place, we knew the project would need creative and unusual visuals right from the start, paired with immersive psytrance or liquid drum and bass audio to match the tone and energy of the story.
Once we had a solid sense of the visual and sonic direction, we dedicated ourselves to an intense 12-hour live coding jam session (with a couple of runs to the Baqala for snacks), which we streamed on Instagram. This session became a space of spontaneous experimentation and rapid development, where we started shaping the core of the experience. Although we made significant progress during the jam, the following days, especially after Tuesday, revealed some lingering technical issues and timing problems that still needed to be resolved. These challenges became the focus of our attention as we worked toward polishing the final piece.
Audio
Aadil and I mostly handled the audio. We focused on using some effective voice samples to bring out parts of the performance we thought needed more attention, pulling samples from our favourite songs (e.g., “Everything In Its Right Place” by Radiohead) and favourite genres (techno, hardgroove). We built the buildup from layers of ambient textures and chopped samples with increasing intensity to simulate anticipation. Specifically, we manipulated legato ambience, gradually intensified techno patterns, and used MIDI CC values to sync modulation effects like low-pass filters and crush.
In the midsections, we used heavy 808 kicks, distorted jungle breaks, and glitchy acid lines (like the “303” patterns) to keep up the tension and energy. For the end sections, we wanted to have some big piano synths that brought home the feeling of a comedown. The tonal shift was meant to echo that feeling of emotional release, what it feels like when a trip starts to settle.
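The CC-sync idea above can be sketched in TidalCycles roughly like this (a minimal sketch, not our exact performance patterns; the sample name and MIDI target are assumptions about the setup):

```haskell
-- the same sine curve shapes the audible low-pass filter
-- and the MIDI CC stream that the visuals read
d1 $ s "techno2:2*4" # lpf (range 200 2000 sine) # crush 4
d2 $ ccv (segment 128 (range 0 127 sine)) # ccn "0" # s "midi"
```

Driving both the filter and the CC channel from one curve is what keeps the visuals locked to the audible modulation.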
Visuals:
The goal was to mirror the full arc of a trip, with visuals locked to every change in the track. Mo Seif first sketched a concept for each moment, then built the look layer‑by‑layer, checking each draft against the audio until they matched perfectly. We had seven primary sections, each tied to a distinct musical cue.
1. Intro – “Sofa & Tabs”
We’re slouched on a couch, half-watching TV, when the group decides to take the trip of their lifetimes; the trip timer starts.
2. Onset – “TV Gets Wavy”
The first tingle hits. The TV image begins to undulate – colors drifting, lines bending. A slow warp effect hints that reality is about to buckle.
3. First Peak – “Nixon + Rectangles”
Audio: vintage Nixon sample followed by drum drop.
Visuals: explosion of rectangle‑shaped, ultra‑psychedelic patterns that sync to each snare hit. The crowd pops; everything feels bigger, faster, weirder.
4. Chiller Section
A short breather featuring three “curated” GIFs:
Lo Siento, Wilson – pure goofy laughter.
Sassy the Sasquatch – laughter + tripping out
Pikachu tripping – those paranoid, deep‑thought vibes.
Together they nail the mood‑swings of a trip.
5. Meta Moment – “Pikachu Breaks the 4th Wall”
Pikachu dissolves into a live shot of the same GIF playing on my laptop in the dorm while we’re editing. Filming ourselves finishing the piece made it hilariously meta; syncing it to the beat was a nightmare, but it clicked.
6. Street‑Fighter Segment – “Choose Your Fighter”
Inspiration: I was playing GTA a while back and saw myself in the game as Trevor, so we wanted to recreate that feeling and put ourselves in a video game.
Build: we took 4‑5 photos of each of us, turned them into looping GIFs, and dropped them onto the classic character‑select screen with p5.js.
Plot Twist: Mo’s fighter “dies” (tongue out), then a smash-cut to an Attack on Titan GIF – as if he resurrected.
7. Final Drop & Comedown – “Hard‑Groove + CC Sync”
The last drop pivots to a hard‑groove techno feel. Every strobe and colour hit is driven by MIDI CC values mapped to the track. We fade back to the original couch shot: the three of us staring straight into the lens, coming down – sweaty, wired, grinning.
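The character-select screen from segment 6 was assembled in p5.js; the idea looks roughly like the sketch below (file names, frame counts, and layout are placeholders, not our actual assets):

```javascript
// sketch: flip through 4 photos per fighter to fake a looping GIF
// on a select-screen grid; asset names are hypothetical
let fighters = [];

function preload() {
  for (let f = 0; f < 3; f++) {
    let frames = [];
    for (let i = 0; i < 4; i++) frames.push(loadImage(`fighter${f}_${i}.png`));
    fighters.push(frames);
  }
}

function setup() {
  createCanvas(600, 200);
}

function draw() {
  background(0);
  let frame = floor(frameCount / 10) % 4; // advance a frame every 10 ticks
  for (let f = 0; f < 3; f++) {
    image(fighters[f][frame], f * 200, 0, 200, 200); // one cell per fighter
  }
}
```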
Here are the GIFs we made for the Street Fighter visuals:
Here is the Tidal code we used:
-- Let's watch some TV + mini build-up + mini drop (SEC 1)
-- "hey guys why don't we watch some tv"
d1 $ chop 2 $ loopAt 16 $ s "ambience" # legato 3 # gain 1 #lpf (range 200 400 sine)
once $ s "our:3" # up "-5"
d10 $ fast 2 $ s "our:4" # up "-6"
d2 $ slow 1 $ s "techno2:2*4" # gain 0.9 # room 0.1
--cc (eval separately)
d16 $ fast 16 $ ccv "<0 100 80 20 0 100 80 20>" # ccn "2" # s "midi"
d4 $ ghost $ slow 2 $ s "rm*16" # gain 0.75 # crush 2 # lpf 2500 # lpq(range 0.4 0.6 sine)
d5 $ stack [
n "0 ~ 0 ~ 0 ~ 0 ~" # s "house",
n "11 ~ 11 ~ 11 ~ 11 ~" # s "808bd" # speed 1 # squiz 0 # nudge 0.01 # release 0.4 # gain 0.3,
slow 1 $ n "8 ~ 8 8 ~ 8 ~ 8" # s "jungle"
]
d6
$ linger 1
$ n "[d3@2 d3 _ d3 _ d3 _ _ c3 _]/1"
-- $ n "[d3 d3 c3 d3 d3 d3 c3 d3 f3 _ _ f3 _ _ c3]/2"
-- $ n "[f3 _ _ g3 _ _ g3 _]*2"
# s "supergong" # gain 1.2 #lpf 100 # lpq 0.5 # attack 0.04 # hold 2 # release 0.1
d7 $ stack [randslice 8 $ loopAt 8 $ slow 2 $ jux (rev) $ off 0.125 (|+| n "<12 7 5>") $ off 0.0625 (|+| n "<5 3>") $ cat [
n "0 0 0 0",
n "5 5 5 5",
n "4 4 4 4",
n "1 1 1 1"
]] # s "303" # gain 0.9 # legato 1 # cut 2 # krush 2
d10 $ fast 2 $ ccn "0*128" # ccv (range 200 400 $ sine) # s "midi"
--at the end of first visual
d1 silence
d2 silence
d5 silence
d6 silence
d7 silence
-- drugs r enemy (before the drop)
once $ s "sample:2" # gain 1.2
-- THE drums (the drop)
d11 $ stack [fast 2 $ s "[bd*2, hh*4, ~ cp]"] # gain 1.2
--after drums
d9 $ stack [
slow 1 $ s "techno2:2*4" # gain 0.9 # room 0.1,
stack [
n "0 ~ 0 ~ 0 ~ 0 ~" # s "house",
n "11 ~ 11 ~ 11 ~ 11 ~" # s "808bd" # speed 1 # squiz 0 # nudge 0.01 # release 0.4 # gain 0.3,
slow 1 $ n "8 ~ 8 8 ~ 8 ~ 8" # s "jungle"
],
linger 1
$ n "[d3@2 d3 _ d3 _ d3 _ _ c3 _]/1"
-- $ n "[d3 d3 c3 d3 d3 d3 c3 d3 f3 _ _ f3 _ _ c3]/2"
-- $ n "[f3 _ _ g3 _ _ g3 _]*2"
# s "supertron" # gain 0.8 #lpf 100 # lpq 0.5 # attack 0.04 # hold 2 # release 0.1 ,
stack [randslice 8 $ loopAt 8 $ slow 2 $ jux (rev) $ off 0.125 (|+| n "<12 7 5>") $ off 0.0625 (|+| n "<5 3>") $ cat [
n "0 0 0 0",
n "5 5 5 5",
n "4 4 4 4",
n "1 1 1 1"
]] # s "303" # gain 0.9 # legato 1 # cut 2 # krush 2,
fast 16 $ ccv "<0 100 80 20 0 100 80 20>" # ccn "2" # s "midi"
] # gain 0
d11 silence
d9 silence
-- START XFADE WHEN READY FOR GIF MUSIC
d10
$ whenmod 16 4 (|+| 3)
$ jux (rev . (# s "arpy") . chunk 4 (iter 4))
$ off 0.125 (|+| 12)
$ off 0.25 (|+| 7)
$ n "[d1(3,8) f1(3,8) e1(3,8,2) a1(3,8,2)]/2" # s "arpy"
# room 0.5 # size 0.6 # lpf (range 200 8000 $ slow 2 $ sine)
# resonance (range 0.03 0.6 $ slow 2.3 $ sine)
# pan (range 0.1 0.9 $ rand)
# gain 0.6
-- -->7
d16 $ fast 16 $ ccv "<0 100 80 20 0 100 80 20>" # ccn "2" # s "midi"
-- GIF section: sudden drop from the mini drop, chill background matching music + sample audio for GIFs (SEC 2.1)
-- GIF SECTION MUSIC
-- 1) DONNY
-- 2) WILSONNNNNN
once $ s "wilson" # gain 1.4
-- 3) PIKAPIKA
once $ s "pikapika" # gain 1.4
-- background silence WHEN ICE SPICE
d10 silence -- aadil
-- Ice Spice Queen (SEC 2.2)
once $ s "our:5" # gain 2.5 --j
-- Start boss music LOUD, reverse drop off glitchy into us fighting (SEC 3)
d1 $ fast 2 $ s "techno2:2*4" # gain 1.2 # room 0.1
--SELECT UR FIGHTER CC VALUES
d10 $ fast 4 $ ccn "0*128" # ccv (range 200 400 $ sine) # s "midi"
-- d16 $ slow 1 $ ccn "0*128" # ccv (range 0.9 1.2 $ slow 2 $ rand) # s "midi"
-----------------------------------------------------------
--cc for street fight
--d16 $ fast 16 $ ccv "0 60 0 70" # ccn "0" # s "midi"
--
-- WHEN STREET FIGHT MO VS AADIL
d2 $ stack [
sometimesBy 0.25 (|*| up "<2 5>") $
sometimesBy 0.2 (|-| up "<2 1>")
$ jux (rev) $
n "[a4 b4 c4 a4]*4" # s "superhammond" # cut 4 # distort 0.3 # up "-9"
# lpf (range 200 7000 $ slow 2 $ sine) # resonance ( range 0.03 0.5 $ slow 3 $ cosine) # octave (choose [4, 5, 6, 3]),
sometimesBy 0.15 (degradeBy 0.125) $
s "reverbkick*16" # n (irand(8)) # distort 0 # speed (range 0.9 1.2 $ slow 2 $ rand) # gain 0.9
] # room 0.5 # size 0.5 # pan (range 0.2 0.8 $ slow 2 $ sine) #gain 0.9
d5 $ fast 2 $ (|+| n "12")$ slowcat [
n "0 ~ 0 2 5 ~ 4 ~",
n "2 ~ 0 2 ~ 4 7 ~",
n "0 ~ 0 2 5 ~ 4 ~",
n "2 ~ 0 2 ~ 4 7 ~",
n "12 11 0 2 5 ~ 4 ~",
n "2 ~ 0 2 ~ 4 7 ~",
n "0 ~ 0 2 5 ~ 4 ~",
n "2 ~ 0 2 ~ 4 ~ 2"
] # s "supertron" # release 0.7 # distort 10 # krush 10 # room 0.5 #hpf 8000 # gain 0.7
-- silence d2 when the next DO starts playing
d2 silence
-- Quiet as in im dead, mini build up, mini drop after lick (SEC 4)
do {
d5 $ qtrigger $ filterWhen (>=0) silence;
d4 $ qtrigger $ filterWhen (>=0) $ stack[
s "hammermood ~" # room 0.5 # gain 1.8 # up "8",
fast 2 $ s "jvbass*2 jvbass*2 jvbass*2 <jvbass*6 [jvbass*2]!3>" # krush 9 # room 0.7
] # speed (slow 4 (range 1 2 saw));
d3 $ qtrigger $ filterWhen (>=8) $ seqP [
(0, 1, s "808bd:2*4"),
(1,2, s "808bd:2*8"),
(2,3, s "808bd:2*16"),
(3,4, s "808bd:2*32")
] # room 0.3 # hpf (slow 4 (100*saw + 100)) # speed (fast 4 (range 1 2 saw)) # gain 0.8;
}
d1 $ s "[reverbkick(3,8), jvbass(3,8)]" # room 0.5 # krush 6 # up "-9" # gain 1
--
drop_deez = do
{
d5 $ qtrigger $ filterWhen (>=0) $ fast 4 $ chop 2 $ loopAt 8 $ s "drumz:1" # gain 1.8 # legato 3 # cut 3;
d6 $ qtrigger $ filterWhen (>=4) $ s "jvbass" # gain 1 # room 4.5;
d10 $ qtrigger $ filterWhen (>=6) $ loopAt 4 $ s "acapella" # legato 3 # gain (range 0.7 1.5 saw);
d7 $ qtrigger $ filterWhen (>=0) silence;
d8 $ qtrigger $ filterWhen (>=0) silence;
d2 $ qtrigger $ filterWhen (>=0) silence;
d3 $ qtrigger $ filterWhen (>=0) silence;
d4 $ qtrigger $ filterWhen (>=0) silence;
d9 $ qtrigger $ filterWhen (>=8) $ s "amencutup*16" # n (irand(8)) # speed "2 1" # gain 1.8 # up "-2"
}
d11 $ slow 1 $ ccn "0*128" # ccv (range 1 62 saw) # s "midi"
drop_deez
--after drop settles
do
d1 silence
d6 silence
-- Quieting down with the couches, drug bad sample (SEC 5)
do {
d2 $ qtrigger $ filterWhen (>=6) silence;
d6 $ qtrigger $ filterWhen (>=4) silence;
d9 $ qtrigger $ filterWhen (>=0) silence;
d10 $ qtrigger $ filterWhen (>=0) silence;
d5 $ qtrigger $ filterWhen (>=8) silence;
d1 $ sound "our:1" # cut 4 # gain 1.5;
d12 $ qtrigger $ filterWhen (>=0) $ slow 4.1 $ ccv "10 5 4 3" # ccn "0" # s "midi";
}
d1 silence
once $ s "drugsrbad" # gain 1.4
hush
I think at one point we had two drops and just couldn’t proceed creatively from there. But we went back through all of our previous blog posts and realized that whatever we could think of, we already had the tools to build it.
Thank you, Aaron, for all the help. I think I speak for all three of us when I say that, as graduating seniors, we really needed the fun we had in this class.
The 14-week journey has finally ended, and it was time to show everyone what we’ve been working on and how we grew throughout the semester! We were inspired by Alice in Wonderland when brainstorming for our final performance, hence our funky team name. However, while composing, we decided to stray from following Alice in Wonderland’s narrative from start to end, and instead mixed in some random visuals and sounds while still basing the general flow of the composition on Alice in Wonderland.
We wanted contrast and build-up in the visuals and sounds between the starting point and the ending point, so in Phase 1 we started with black-and-white visuals and quieter, mysterious audio to hint at what was to come later in the composition.
In Phase 2, we began to include very obvious Alice references (i.e., video and audio of the door closing and opening, teacups, the Alice in Wonderland soundtrack, etc.). The climax of Phase 2 was the appearance of the Cheshire cat image; I also added a sound clip saying “a Cheshire cat” from the movie, which signaled the transition into Phase 3, the “craziest” phase of our composition.
We tried to make the visuals and the audio as engaging as possible in Phase 3 because this was the final part of our performance, so there were a lot of fast beats and loud melodies. We also wanted to end with “a bang,” so we decided to use the chorus of the song “Gnarly” by Katseye and have a little surprise dance party to end our performance! We chose this song because its beats felt very similar to something we’d create in Tidal, and the repetition of the word “gnarly” seemed to fit the intriguing, slightly unpredictable vibe we wanted for our last phase.
I want to give a special shoutout to Emilie and Rashed for being down to join me even though it was very last minute. 🙂 Because we wanted it to be a total surprise and make it seem like it was a “spontaneous” attempt to gauge audience engagement, they climbed up on stage when I gave them the signal so that it looked out of the blue rather than them waiting on stage beforehand, and we’re happy that it all worked out well!
Although there was a definite shift from the serene, calm visuals and audio to the crazy, vibrant point we reached by the end of our composition, we still tried to keep the mysterious, otherworldly, fantastical, and intriguing vibe throughout the whole performance so that a somewhat coherent picture was portrayed to the audience.
With that being said, I’ll stop yapping and post the code here now:
Hydra code (Adilbek — phase 1 + phase 2 till the Cheshire cat part; Jannah — ending of phase 2 + phase 3):
-- phase 1
-- hydra 1.1
d13 $ ccv "20 90" # ccn "0" # s "midi" -- change from 20 90 to 10 70
d2 $ slow 1 $ s "~ hh" # gain 2
d4 $ slow 2 $ s "superpiano" # n (range 60 72 $ sine) # sustain 0.1 # room 0.5 # gain 1.2 -- start as slow 2, then $ fast 2
d3 $ s "birds:3"
-- d3 $ "birds" -- alt between birds and birds:3
-- hydra 1.2
d1 $ n ("<c2 a2 g2>")
# s "notes"
# gain ((range 0.6 0.9 rand) * 1.2)
-- # legato 1
-- # room 0.8
# size 0.95
# resonance 0.5
# pan (slow 5 sine)
# cutoff (range 500 1200 $ slow 4 sine)
# detune (range (-0.1) 0.1 $ slow 3 sine)
d9 $ n ("<[c3,fs3,g3] [~ c4]>*2 <[a2,gs3,e3,b2]*3 [~ d4,fs4]>")
# s "notes"
# gain (range 0.6 0.9 rand)
# legato 0.7
# room 0.6
# size 0.8
# resonance 0.4
# pan (slow 5 sine)
-- hydra 1.3
d2 $ ccv "17 [~50] 100 ~" # ccn "0" # s "midi"
d3 $ n "e5 ~ ~ a5 fs5 ~ e5 ~ ~ ~ c5 ~ ~ a5 ~ ~"
# s "notes"
# legato 1
# gain 1
-- # pan (slow 4 sine)
# room 0.7
# size 0.9
-- d9 silence
-- hydra 1.4
do
d2 $ ccv (segment 8 "0 20 64 127 60 30 127 50") # ccn "0" # s "midi"
d3 $ n "[0 4 7 12]!4"
# s "notes"
# gain "1"
# legato "0.5"
# speed "1"
-- # room "0.8"
# lpf 2000
-- Hydra 1.5
d5 $ s "[~ drum]*2" -- drum *4, then *2
# gain "1.3"
# delay "0.3"
# delayfeedback "0.2"
# speed "1"
do
d1 $ s "bd(5,8)"
# n (irand 5)
# gain "1.1"
# speed "0.6"
# lpf 600
d2 $ struct "<t(3,120) t(3,27,120)>" $ ccv ((segment 128 (range 127 0 saw))) # ccn "0" # s "midi"
-- END OF PHASE 1
-- d3 silence
-- d5 silence
-- PHASE 2: alice in wonderland
-- Hydra 2.1
do
d2 $ ccv "[[0 ~ 50 127 127]]" # ccn "0" # s "midi"
d1 $ s "[[door:1 ~ door:2 ~ ~]]" # gain 3
# speed 1
d4 $ slow 2 $ s "superpiano" # n (range 60 72 $ sine) # sustain 0.1 # room 0.5 # gain 1.5
d10 $ s "alice" # gain 1
d3 $ slow 2 $ s "mug" # gain 1.8
d10 silence
-- Hydra 2.2
do
d2 $ ccv "[[1 0 1 0]]" # speed 1.2 # ccn "10" # s "midi"
d4 $ slow 4
$ s "[ [glass:3 ~ glass:5 ~]]"
# gain 1.5
# speed 1.2
-- drum
# shape (choose [0.3, 0.6])
# room 0.4
# delay 0.25
-- glass
# lpf (slow 4 $ range 800 2000 sine)
# pan (slow 8 $ sine)
-- d10 silence
d5 $ every 3 (rev)
$ s "space"
# n (run 5 + rand)
# octave "<5 6>"
# speed (rand * 0.5 + 0.8)
# lpf (slow 16 $ range 800 2000 sine)
# resonance 0.4
# orbit "d5"
-- Hydra 2.3
do
d2 $ ccv "80 [10 ~] [30 ~] ~" # ccn "1" # s "midi"
d1 $ s "bd bd sd bd" # gain 1.5
d3 $ slow 2 $ s "superpiano"
# n (scale "minor pentatonic" "0 2 4 7 11" + "<12 -12>")
# octave 5
# sustain 8
# legato 0.8
# gain 1
# lpf (slow 16 $ range 800 2000 sine)
# room 0.7
# delay 0.75
-- # delayfeedback 0.8
# speed (slow 4 $ range 0.9 1.1 sine)
# pan (slow 16 sine)
-- # vowel "ooh"
# orbit "d5"
d7 $ stack [
slow 4 $ s "pad:1.5" # gain 0.9,
s "bass*2" # room 0.5 # gain 1.2
]
-- d1 silence
-- Hydra 2.4
do
d6 $ s "~"
d8 $ every 2 (|+ speed 0.2) $ slow 2 $ sound "hh*8" # gain 1.5 # hpf 3000 # pan rand
d5 $ s "~ bass:1" # gain 2 # speed 0.5 # lpf 300 # room 0.4
d6 $ every 2 (0.25 ~>) $
s "~ cp"
# gain 1.5
# speed 1.2
d8 $ s "hh hh hh <hh*6 [hh*2]!3>" # gain 1.5
d5 $ s "[~ ~ bass:1]"
# gain 1.8
# speed 0.6
# lpf 400
# room 0.3
d3 $ qtrigger $ filterWhen (>=0) $ seqP [
(0,1, s "[bd bd] [sd hh]"),
(1,2, s "[bd bd bd bd] [sd hh]"),
(2,3, s "[bd bd bd bd bd bd] [sd hh]"),
(3,4, s "[bd bd bd bd bd bd bd bd] [sd hh]")
] # gain (slow 4 (range 0.8 1 saw))
-- Hydra 2.5
d9 $ s "cat3" # gain 5.2
# orbit "-1"
# dry "1"
# room "0"
# delay "0"
# shape "0"
# resonance "0"
# delay "0"
# delayfeedback "0"
# lpf "20000"
d4 $ stack [
s "<bd sn cp hh>" # speed "1 1.5 2",
s "808bd:4(3,8) 808sd:7(5,8)" # gain 1.1
]
d8 $ stack [
s "moog" >| note (arp "up" (scale "major" ("[0,2,4,6]") + "a5")) # room 0.4 # gain 0.7,
ccv 0 # ccn 1 # s "midi"
]
do
d4 $ s "bd bd sd bd cp odx mt <[bd*2]!8>" # gain 1.5
d2 $ ccv (segment 8 "0 20 64 [50 90]") # ccn "0" # s "midi"
d9 $ fast 2 $ s "moog" >| note (arp "up" (scale "major" ("[0,2,4,6]") + "a5")) # room 0.4 # gain 1 # squiz 0.3
-- Hydra 2.6
do -- change from d4 to d1
d1 $ sound "feel:2*8"
# gain 1.9 # speed (range 1 3.5 $ saw)
# cutoff (range 800 2000 $ sine) # resonance 0.2
# room 0.5 # accelerate 0.5
# sz 0.5 # crush 1
d2 $ ccv "0 20 64 90 0 30 70 112" # ccn "0" # s "midi"
do
d2 silence
d3 silence
d6 silence
d7 silence
d8 silence
d9 silence
-- END OF PHASE 2
do
d3 $ jux rev $ s "bass:4*2 <bass:4 [bass*4]!2>"
# room 0.3 # gain 5
# shape 0.7
d1 $ ccv "0 40 64 14 70 112" # ccn "0" # s "midi"
d4 $ iter 4 $ sound "hh*2 hh*4 hh*2 <[hh] hh*2!2>"
# room 0.3 # shape 0.4
# gain 1.7
# speed (range 1.3 1.6 $ slow 4 sine)
do
d5 $ fast 2 $ s "sine" >| note (arp "up" (scale "major" ("[2,0,-4,6]"+"<-8 4 -2 5 3>") + "f5"))
# room 0.4 # gain 1.4
# legato 3
# pan (slow 8 $ sine)
d1 $ ccv "40 12 60 25 <34 70>" # ccn "0" # s "midi"
do
d6 $ s "arpy*4 arpy@1 ~ ~"
# legato 2.5
# up "f6 a5 c3 g6" # shape 0.7 # gain 1.3
d7 $ s "hh*8 ~ ~ cp!2 ~"
# gain 3 # shape 0.5 # resonance 0.5 # krush 0.3
d1 $ ccv "55 14 20 ~ ~ 70 112" # ccn "0" # s "midi"
d8 $ s "gnarly:1@1.2"
# cut 1 # shape 0.9 # gain 7
do
d10 $ s "bd*2 drum*4 <sd:1 feel:16> [~ bd?]"
# gain "4.5 5 4" # shape 0.8
d1 $ ccv "[23, 45] [45, 12] [51 90]" # ccn "0" # s "midi"
d9 $ iter 4 $ sound "{<arpy:3(4,8) arpy:5(3,8) arpy:2(7,8)>}%2"
# n "7 32 11 6 21 17 10 3"
# room 0.5 # speed 2 # gain 1.4 # shape 0.2
do
d2 $ silence
d3 $ silence
d4 $ silence
d5 $ silence
d6 $ silence
d7 $ silence
d10 $ silence
do
d2 $ slow 1.25 $ s "sine" >| note (arp "up" (scale "min" ("[7,5,8,3,2,7,8,3,9,5]") + "a5"))
# shape 0.9 # gain 7 # sz 0.4
d1 $ ccv "45 ~ ~ 12 ~ ~ 75" # ccn "0" # s "midi"
do
d3 $ s "[bd*2, drum:2*4, <sd:4(3,8) feel:12(5,8)>, [~ bd:7?]]"
# gain "5 4 6" # shape 0.5
# squiz (range 1.5 3 $ slow 8 sine)
d1 $ ccv "12 51 30 ~ ~ [90 37]" # ccn "0" # s "midi"
d4 $ s "[newnotes:3*2, ~ newnotes:5*4?]" # gain 3 # cut 1
# n "-1" # shape 0.2
# squiz (range 1 1.5 $ slow 8 sine)
do
d5 $ s "[~, sd:3*4, [~ sd:5@2 sd:6*2]?]"
# gain 4 # speed 1.5
# size 0.4
d6 $ s "[~, metal:2(5,8), ~, metal:4(3,8)]"
# gain 1.7 # speed 0.7
# pan (slow 16 sine) # room 0.6
d7 $ stack [
s "feel:2*8" # gain "1",
s "bass:11*8" # gain "1.6" # speed "1.2" # pan "-0.5",
s "hh:4*4" # gain "3" # speed "0.7" # pan "0.5"
] # krush (range 0 2 $ rand)
d8 $ silence
d1 $ ccv "32 15 ~ ~ 69 ~ ~ [15 37]" # ccn "0" # s "midi"
do
d9 $ stack [
s "feel:2*8" # gain "1.4",
s "bass:11*8" # gain "1.6" # speed "1.6" # pan "-0.5",
s "hh:4*4" # gain "3" # speed "1" # pan "0.5"
] # krush (range 0 2 $ rand)
d8 $ s "gnarly:1"
# cut 1 # shape 0.9 # gain 7 # speed 3
do
d2 $ silence
d3 $ silence
d5 $ silence
d9 $ silence
do
once $ s "msam:2"
# gain 5 # legato 4
hush
once $ s "msam:4"
# gain 5
Aaaand here’s our final performance video!!! Hope you guys enjoy it. 🙂
Last but not least, here are some future improvements we want to make and limitations we ran into, from Jiho:
I’ve always struggled with creating impactful beat drops, and I think they’re especially important in rave music because they really shape how the audience responds. Looking back, I feel I could have done a better job and spent more time developing that section. It was a similar experience with the composition project: I kept layering different sound lines because each previous version felt dull or lacking, and most of the added elements ended up contributing more to the buildup than to the drop itself. Personally, I think the buildup ended up being stronger than the actual beat drop. Moreover, while the integration of the first “gnarly” sound worked well, the ending felt too abrupt. That’s partly because the idea of using the “Gnarly” music was added later in the process; if I had built my music code around those beats from the start, the overall transitions would have been smoother and more cohesive. On a similar note, another area I’d like to work on is incorporating external sound files into my compositions. I feel this is where I currently lack creativity, and watching other groups, including Clara’s, really inspired me. They were able to integrate external audio so seamlessly, and it made their pieces feel more dynamic and refined. It’s something I want to explore further to expand the range and depth of my own sound work in the future.
Our concept was inspired by the musical Wicked. We wanted to create a piece that was playful but also showcased our personality. This project does not just encapsulate the skills we learnt across this course; it also shows how we learnt to improvise efficiently and stay on track with our vision.
Ziya and Linh were in charge of the visuals for this performance. Since we were inspired by the musical Wicked, we wanted the visuals to carry a similar theme, including images from Wicked itself and a broader theme of witches. As the audio starts at a simple, slow pace, we used simple patterns with small changes driven by MIDI values to match the sample. From the start, we knew we wanted to weave the musical itself into our performance because we were big fans of it, but we did not want to make it entirely Wicked-centered. So we thought of telling the story of Wicked through campus cats: we would take pictures and videos and edit them to relate to the Wicked theme. However, while executing this, we quickly grew sick of those images and decided to draw images directly from the musical itself.
One of our prominent challenges was the transitions, particularly the parts where we switched between two images. We found that we had to be very careful about which functions and details to include and when. We also had to listen carefully to the sound to ensure that the beat drop and the overall transitions stayed in sync. One strategy we used was layering the first and second visuals on top of each other during the transition; then we could simply fade out the first visual to reveal our final intention. We also changed the visuals significantly after the drop, since the musical color and timbre change at that point.
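The layer-then-fade transition can be sketched in Hydra like this (a sketch, assuming two sources already hold the images and that the fade amount comes from the same MIDI `cc` array used elsewhere in our code):

```javascript
// sketch: crossfade between two image sources in Hydra
// cc[0] rising from 0 to 1 gradually reveals the second visual
src(s0)
  .blend(src(s1), () => cc[0])
  .out(o0)
```

Because both visuals are composited every frame, the switch never pops; the fade speed is entirely controlled by how fast the CC value ramps.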
Another challenge was syncing with the sound so that the audience could see the relationship between the changes in the visuals and the audio. As Linh did not have extensive experience in music, she could not tell by ear when the audio was changing, so we asked Luke and Rashed to signal us when they were moving to a new section so that the visuals could adapt.
Staying on theme was important to us, and we had two: Brat and Wicked. Somewhere in a corner of TikTok this crossover subculture exists, so we decided to bring it to the stage at NYUAD in our performance. This came through in the sounds we used, such as “365,” sampled from Charli XCX, the originator of Brat. Brat also fit well with Wicked; both are prominent shades of green, neon green for Brat and a darker green for Wicked. Colour was just as important to us, and Wicked has two main colours, green and pink, representing the opposing sides of Elphaba and Glinda. Hence, throughout our entire performance we referenced these two colours, hopefully in a manner that did not seem too repetitive.
voronoi(100, 0.15) //shape(2,0.15)
.thresh(0.8)
.modulateRotate(osc(10), 0.4, () => cc[0]*50) // cc
.thresh(0.5)
.diff(src(o0).scale(1.8))
.modulateScale(osc(10) // cc
.modulateRotate(o0, 0.74))
.diff(src(o0))
.mult(osc(()=>cc[0], 0.1, 3))
.out()
hush()
// VIDEO SECTION
s0.initVideo("https://blog.livecoding.nyuadim.com/wp-content/uploads/the-bratty-vid.mp4")
p5 = new P5()
vid = p5.createVideo("https://blog.livecoding.nyuadim.com/wp-content/uploads/the-bratty-vid.mp4");
vid.size(window.innerWidth, window.innerHeight);
vid.hide()
p5.draw=()=> {
let img = vid.get();
p5.image(img, 0, 0, p5.width, p5.height); // redraws the video frame by frame in p5
}
s0.init({src: p5.canvas})
vid.play()
src(s0).out()
s5.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/wckd-scaled.png")
src(s5)
//.modulateRotate(osc(10), 0.4, () => cc[0]*50) // cc
.scale(0.6,() => cc[0]*1)
.scrollX(2, 1)
.out()
hush()
s0.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/witch-hat.png")
s1.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/oz-img.png")
s2.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/witch-kingdom.png")
s3.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/nessarose-cat.png")
// -- HAT SECTION
src(o0)
.layer(src(s0)
.add(o1)
.scale(()=>0.5 + cc[2])
)
.out(o1)
render(o1)
hush()
render()
// o3 -> o0 -> scale -> pixelate -> ccActual
src(s2)
.diff(src(s1).diff(src(o3).scale(()=>cc[0])))
.diff(src(o1))
// .blend(src(s1), ()=>ccActual[4])
// .diff(src(o0))
// .modulateRotate(o0)
// .scale(() => cc[0]*2)
.out(o3)
render(o3)
hush()
s0.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/boq-img.png")
s3.initImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/nessarose-cat.png")
// look glinda pt 2
src(s3)
.scale(()=>cc[5]/2)
.blend(
src(s0).invert().luma(0.3).invert().scale(0.5)
.rotate(()=> (cc[2] - 0.5)* 50 * 0.02)
.scale(()=>cc[3]*0.5)
//.modulateScale(osc(5, 0.1), () => cc[0])
, ()=>cc[6])
.out()
src(o2)
.layer(src(o0))
.out(o1)
render(o1)
render()
hush()
////////////////////////////////
let p5 = new P5()
let lastCountdown = null;
let ellipses = [];
p5.hide();
s4.init({src: p5.canvas})
p5.hide();
p5.noFill()
p5.strokeWeight(20);
p5.stroke(255);
p5.draw = () => {
p5.background(0);
p5.fill(255);
p5.textAlign(p5.CENTER, p5.CENTER);
p5.textSize(200);
// Get the current CC value
let ccValue = 1; // or cc[0] if it's from another source
// Decide which text to display based on the CC value
if (ccValue == 1) {
p5.text("wicked",cc[0]*p5.width, p5.noise(cc[0]*2)*p5.height);
}
}
src(s4).mult(osc(10,0,3)).modulate(voronoi(10, 0.5, 2))
.luma(0.1)
.repeat(()=>cc[2]*10, ()=>cc[2]*10)
.out(o4)
render(o4)
/// NEW PROPOSED VISUALS
a.show()
a.setBins(8)
a.setSmooth(0.8)
solid(1, 0, 1) // pink
.mask(
shape(999, 0.5, 0.5)
.scale(() => a.fft[1] + 0.2)
.scrollX(-0.3)
)
.layer(
solid(0, 1, 0.5) // green
.mask(
shape(4, 0.5, 0.5)
.scale(() => a.fft[1] + 0.2)
// .scrollX(0.3)
)
)
// .modulate(voronoi(999,3),0.8)
// .modulatePixelate(noise(55,0.5))
// .modulate(noise(0.9, 0.1))
.out()
hush()
hush()
In terms of the audio, we initially wanted to combine the idea of campus cats with telling the story of Act 1 of Wicked the musical. After further experimenting, we realized we would have to split our performance into nine different sections (one for each song in Act 1), so we abandoned the campus cats and put our energy into three main songs from Act 1: “The Wizard and I,” “What Is This Feeling?”, and “Defying Gravity.” However, Rashed could not handle making the performance about one singular thing, because he likes, as he says, “mixing things together that don’t really make sense but they somehow also do make sense,” so he suggested adding Brat. The thing is, we did not know how we would add that element, because it is an entirely different concept. Then Rashed found a clip of Abby Lee Miller stating “Oh, that sounded really bratty,” and after further discussion we decided that clip would be the perfect transition from Wicked to Brat. But that was not enough: Rashed wanted to add more, and he brought up the idea of using “Crazy” by Le Sserafim, because he also likes voguing and that song would reach the K-pop lovers and the gaming community, since it has been used in various games and edits. So we added it. Rashed then suggested adding one more thing from a Nicki Minaj song, but he wanted to make it Wicked-themed, to which we said yes, provided he censor the one curse word, to which he agreed. After further discussion, we decided that Rashed and Ziya would say/sing the first part of “What Is This Feeling?” to add a humorous aspect, and Rashed would deliver the Nicki Minaj part and the ending war cry, a reference to Cynthia Erivo’s Target commercial.
When we first approached the composition, we were not sure how it would sound. The musical numbers from Wicked are already very theatrical, professionally composed and sung; what we came up with was our interpretation, our own twist on the music. We worked on the composition section by section: intro, build-up, drop, bridge, final ending, and, I would say, interlude. Starting out, we let every idea that crossed our minds be realized in code. In our first attempt, we ended up with a composition that ran roughly 7–8 minutes. After many more rehearsals, we realized that every section seemed to exist on its own terms and didn’t connect much to the previous or next section, which was a little frustrating given that each section sounded so good on its own. We worked a lot on the transitions between sections. After multiple rehearsal attempts, we realized the main issue with the transitions was the sonic palette itself: we were using too many different samples, with almost every pattern using a distinct sample. We figured a good way to fix the problem was to narrow down the number of samples used, so we replaced the samples in a few patterns with samples we already had, or reused a few of them. Specifically, putting patterns that share the same sample next to each other helped a lot with smoothing out the transitions. Moving forward, the lesson learned is that simplicity beats complexity. At first, each section had a lot of sounds stuffed together, but as we progressed we had to cut down the sounds and patterns, either combining a few or removing some completely. Critical thinking and feedback from the visual team, our classmates, and Professor Aaron helped us reflect on the composition. Cleaning up and tightening the composition took a lot of time because we had to rearrange, add, and remove patterns here and there.
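The sample-narrowing fix can be sketched in TidalCycles. This is a minimal illustration rather than our performance code; the sample names and patterns here are made up:

```haskell
-- Illustrative sketch: two adjacent sections share the "bd" and "hh"
-- samples, so the transition only swaps rhythm and effects, not the
-- whole sonic palette. Evaluate Section A first, then Section B.

-- Section A
d1 $ s "bd*4" # gain 1.1
d2 $ s "hh*8" # gain 0.8

-- Section B: same samples, new patterns and effects
d1 $ s "bd(5,8)" # gain 1.1 # lpf 800
d2 $ swingBy (1/3) 4 $ s "hh*8" # speed 1.2
```

Because both sections draw on the same sounds, the ear hears continuity even as the rhythms change.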
In addition to cleaning up, we also had to keep the filters and effects consistent. We ran into a few issues, but things eventually got sorted out thanks to Professor Aaron’s help. One final thing: even though it is live coding, we also had a good time composing the music with code and mixing the sounds together. I wish we had more time to develop our live-composing skills.
Overall, we are proud of our final composition and of how we executed our idea in a unique yet smooth manner, to the extent that the audience enjoyed it too.
Tidal code:
setcps(135/60/4)
do
once "loath"
p "background" $ slow 2 $ ccv ((segment 10 (range 0 127 saw))) # ccn "0" # s "midi"
once "loath:5"
do
resetCycles
d11 $ loopAt 4 $ s "360b:3"
p "360b visuals" $ ccv "20 90 40" # ccn "0" # s "midi"
hush
--blonde--
do -- evaluate the hat section
d1 $ s "gra" # legato 1 -- add in scrollX
p "hat" $ ccv "50 65" # ccn "2" # s "midi"
d2 $ "hh*8"
--
do
d3 $ fast 2 $ n "1*2" # s "bd" # amp 1 -- comment out diff
p "background" $ fast 4 $ ccv ((segment 10 (range 0 127 saw))) # ccn "0" # s "midi" -- change voronoi to shape
do
d3 $ fast 2 $ n "0 1*2 2 1*2" # s "bd" # amp 7
p "switch" $ ccv "0 1" # ccn "4" # s "midi" -- comment out the blend
d4 $ s "909(5,16)"
do -- do the shape NOT BLEND -- EVALUATE PIXELATE SECTION
d5 $ s "bass1:11*4" # speed "2" # gain 1 # cutoff "70"
--p "pixelate" $ ccv ((segment 4 (range 0 100 saw))) # ccn "0" # s "midi"
p "pixelate" $ ccv "80 100 120 127" # ccn "0" # s "midi"
--
d6 $ swingBy (1/3) 4 $ sound "hh:13*4" # speed "0.5" # hcutoff "7000" # gain 1
d7 $ jux rev $ fast 0.5 $ s "crzy:6" # gain 0.6 # legato 1
hush
d1 silence
d2 silence
d3 silence
d4 silence
d5 silence
d6 silence
d7 silence
d8 silence
hush
-- build-up and beatdrop --
lookGlinda = do
d1 $ qtrigger $ filterWhen (>=0) $
seqP
[ (0, 1, s "bd*4" # room 0.3)
, (1, 2, s "bd*8" # room 0.3)
, (2, 3, s "bd*16" # room 0.3)
, (3, 4, s "bd*32" # room 0.3)
]
# hpf (range 100 1000 $ slow 4 saw)
# speed (range 1 4 $ slow 4 saw)
# gain 1.2
# legato 0.5
p "popkick visual" $ qtrigger $ filterWhen (>=0) $
seqP
[ (0, 1, ccv "30 60 90 120" # ccn "0" # s "midi")
, (1, 2, ccv "15 30 45 60 75 90 120" # ccn "0" # s "midi")
, (2, 3, ccv "30 10 20 40 50 60 70 80 90 100 110 120 10 30 60" # ccn "0" # s "midi")
, (3, 4, ccv ((segment 32 (range 0 127 saw))) # ccn "0" # s "midi")
]
d2 $ qtrigger $ filterWhen (>=0) $
seqP
[ (0, 4, stack
[ s "~ cp" # room 0.9
, fast 2 $ s "hh*4 ~ hh*2 <superchip*2>"
])
]
# room 0.4
# legato 1
# gain (range 1 6 rand)
# speed (range 1 2 $ slow 4 saw)
d3 $ qtrigger $ filterWhen (>=0) $
seqP
[ (4, 5, s "boq:1" # room 0.3 # gain 2) ]
lookGlinda
-- LOOK GLINDA P2 --
do
lookGlinda
p "disappear" $ qtrigger $ filterWhen (>=0) $
seqP
[ (0, 4, ccv "0" # ccn "6" # s "midi") ]
p "disappearcat" $ qtrigger $ filterWhen (>=0) $
seqP
[ (0, 4, ccv "0" # ccn "5" # s "midi") ]
p "me" $ qtrigger $ filterWhen (>=0) $
seqP
[ (4, 5, ccv "[0 40 80 100 120] ~" # ccn "5" # s "midi") ]
p "boq" $ qtrigger $ filterWhen (>=0) $
seqP
[ (4, 5, ccv "0 127" # ccn "6" # s "midi") ]
once $ "boq:1"
d11 silence
-- beat drop --
hush
-- change to :1 --
do -- uncomment the rotate
d1 $ fast 1 $ s "crzy" # legato 1 # gain 1.5
p "rotate" $ ccv "100 20" # ccn "2" # s "midi"
d1 silence
-- Act like witches, dress like crazy --
d5 $ fast 2 $ sound "bd:13 [~ bd] sd:2 bd:13" # krush "4" # gain 2
do
setcps(120/60/4)
resetCycles
d5 silence
d1 $ loopAt 4 $ s "crzy:4" # gain 1.2
p "rotate" $ ccv "100 20" # ccn "2" # s "midi"
hush
d1 silence
d2 silence
d3 silence
hush
-- LOOK AT GLINDA PART 2
do --comment out scale
d4 $ slice 32 "2" $ sound "twai:1" # gain 1
p "scale" $ slow 2 $ ccv "100 20" # ccn "3" # s "midi"
d6 $ striate 8 $ s "ykw" # legato 1 # gain 1.2
do
d7 $ fast 0.5 $ s "oiia" # gain 1.1 # speed 0.8
p "shape-popular" $ fast 1 $ ccv "40 120 40" # ccn "0" # s "midi"
-------------------------------------------------------------------------
hush
do
d10 $ sound "bd:13 [~ bd] sd:2 bd:13" # krush "4" # gain 1.8
p "d10 sound" $ ccv "124*3 [~ 10] 10*2 30*13" # ccn "2" # s "midi"
d10 silence
d12 $ s "gra"
hush
d1 silence
d2 silence
d3 silence
d4 silence
d5 silence
d6 silence
d7 silence
d8 silence
d9 silence
d10 silence
hush
--ENDING VORONOI--
do
d12 silence
once $ s "chun"
-- defying gravity --
once $ s "defy:2" # gain 1.5
d1 $ loopAt 1.2 $ "defy:6"
hush
d2 $ fast 2 $ sometimes (|+ n 12) $ scramble 4 $ n "<af5 ef6 df6 f5> df5 ef5 _" # s "superpiano" # legato 2
# pitch2 4
--change 4 to 8
# pitch3 2
# voice 0
# orbit 2
# room 0.1
# size 0.7
# slide 0
# speed 1
# gain 1.2
# accelerate 0
# cutoff 200
d3 $ slow 4 $ n "af5 ~ ef5 ~ df5 ~ f5 ~"
# s "supersaw"
# gain 0.6
# attack 0.2
# sustain 2
# release 3
# cutoff 800
# room 0.9
# size 0.8
d4 $ n "<[0 ~ 1 ~][~ 0 1 ~]>" # s "tink:4"
do
d5 $ slow 2 $ sound "superpiano:2" <| up "af5 af5 ef6 df6 ~*4 f5 af5 ~*24 df5 ~*2 f5 ef5 ~*12" # gain "0.6" # room "0.9"
p "endingpiano" $ slow 2 $ ccv "30 30 40 35 ~*4 50 50 ~*24 30 ~*2 50 40 ~*12" # ccn "0" # s "midi"
--d5 $ sound "superpiano:2" <| up "g5 f6 ~ [e6 c6]"
d5 $ slow 2 $ sound "superpiano:2" <| up "af5 af5 ef6 df6 ~*4 f5 af5 ~*12 df5 ~*2 f5 ef5 ~*8" # gain "0.6" # room "0.9"
hush
once $ s "defy:5"
In our group, Mike was in charge of the music, Ruiqi worked on the visuals, and Rebecca worked on both and also controlled the MIDI values and the ASCII text.
Visual
Personally, I’ve started thinking of Hydra more as a post-processing tool than a starting point for visuals. I’ve gotten a bit tired of its typical abstract look, but I still love how effortlessly it adds texture and glitchy effects to existing visuals. That’s why I chose to build the base of the visuals in Blender and TouchDesigner, then bring them into Hydra to add that extra edge.
As always, I’m drawn to a black, white, and red aesthetic—creepy and dark visuals are totally my thing. I pulled inspiration from a previous 3D animation I made, focusing on the human body, shape, and brain. In the beginning, I didn’t have a solid concept. I was just exploring faces, masks, bodies—seeing what looked “cool.” Then I started bringing some renders into Hydra and tried syncing up with what Mike was creating. We quickly realized that working separately made our pieces feel disconnected, so we adjusted things a bit to make the whole thing feel more cohesive.
At one point, I found myself overusing modulatePixelate() and colorama()—literally slapping them on everything. That’s when I knew I needed to change things up, so I went into TouchDesigner and used instancing to build a rotating visual with a box, which gave the piece a nice shift in rhythm and form.
In the end, I’m proud of what I made. The visuals really reflect my style, and it felt great combining tools I’ve picked up along the way—it made me feel like a real multimedia artist. I’m also super thankful for my teammates. Everyone put in so much effort, and even though some issues popped up during the final performance, it didn’t really matter. We knew we had given it our all. Big love to the whole cyber brain scanners crew.
Here are some images and videos we made in Blender and TouchDesigner for the performance:
Audio
For the whole performance, we tried to build on several keywords: space, cyberpunk, and heavy distortion. I drew inspiration from Chicago house, glitch, and industrial music to make the sounds raw and wild, corresponding to the sketches for the visuals.
In the early iterations of the performance, our theme was a space odyssey for cyborgs, so I thought a continuous beeping sound from a robot would be a fitting way to start the performance. Though we later built something slightly different, we still agree this intro is effective at grabbing the audience’s attention, so we chose to keep it.
For the build-up, I really liked the idea of using human voices as a transition into the second part. To echo the theme, I picked a recording of crews aboard Discovery, a NASA space shuttle orbiter, testing the communication system.
The aesthetic of the visuals reminded me to keep the audio minimalistic. Instead of layering more and more tracks as the performance progressed, I used different variants of the main melody, adding effects like lpf, crush, and chop. The original sample for the main melody is a one-shot synth, and these effects helped make it sound intense, creepy, and distorted.
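As a rough sketch of that approach in TidalCycles (the `synth` sample name and the note pattern are invented for illustration), each variant reuses the same melody and only changes the effect chain:

```haskell
-- Illustrative sketch: one melody, several effect variants.
-- Evaluate one d1 line at a time as the performance progresses.
let melody = n "0 3 5 7" # s "synth"

d1 $ melody             -- plain statement of the theme
d1 $ melody # lpf 600   -- low-pass filtered: darker, muffled
d1 $ melody # crush 4   -- bit-crushed: harsh, distorted
d1 $ chop 8 $ melody    -- chopped into grains: glitchy texture
```

Keeping the melody constant while swapping effects is one way to get variety without expanding the sonic palette.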
In the second part, we wanted the audience to feel hyped, so I focused more on the sound design for the drums. The snare created depth in the sound, and the clap invites the audience to interact with us. The glitch sample was chosen to match the pixel-noise cross from the visuals.
It’s really amazing to see how we have evolved as a group since the very first drum circle project, and it is a pleasure to work together and exchange ideas to make everything better.
Communication with the audience
To do live coding as a performance, we decided to use some extra methods to communicate with the audience. Typically, in a performance, the performer might communicate with the audience directly via microphone, which might undermine the consistency of the audio we are creating. Live coders might also type something in comments, which takes advantage of the nature of live coding, but the comments might be too small compared to the visual effects, and it might be hard for the audience to notice them.
Finally, we came up with the idea of creating ASCII art. ASCII art has been part of the coding community for a long time, especially in live coding: in Sonic Pi, one of the best-known live-coding platforms, users are greeted by an ASCII art title of the software. We wanted to hype up the audience by adding some ASCII art to our flok panel, which also makes use of the flok layout and draws the attention of those who don’t read code to the code panel.
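In a Tidal/flok panel, the art can simply live in comments, so it shows up on screen without ever being evaluated (this banner is purely illustrative, not the one from our performance):

```haskell
-- Purely illustrative: ASCII art in Haskell comments renders in
-- the flok code panel but is ignored by the interpreter.
--  _______________________________
-- |                               |
-- |    THANK YOU, AUDIENCE  <3    |
-- |_______________________________|
```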
We really managed to hype up the audience and express our big thanks and love to the community that has been supporting us throughout this semester.
Reading Artist-Musicians, Musician-Artists made me think about how blurry the line between disciplines really is, and maybe always has been. Looking up Paul Klee’s work was also interesting, as he literally structured his paintings like musical compositions. It reminded me of how we use TidalCycles and Hydra, where coding becomes a tool to create a hybrid performance, a balance between live, rhythmic, and visual elements. The part about intensity over virtuosity also stood out. It made me think of how, in live coding, it’s not about being super polished; it’s about being present and responsive. Mistakes, randomness, and improvisation are part of the experience, and sometimes even enhance it. Sometimes in Tidal, we throw in randomness just to see what the system gives back. That unpredictability feels exciting, like giving up some control and letting the tool collaborate with you. What I found especially interesting was how often artists, like Cornelia Schleime, shifted between disciplines because they had to, whether due to censorship, economics, or needing a new form to express something. It made me realize that interdisciplinary practice isn’t always just an aesthetic choice; it often carries a sense of urgency or necessity. Are labels like artist, musician, or performer even useful anymore? Or are they just there for institutions and funding applications? When we do live coding, these lines feel less and less relevant.