Process

Our process began with a clear vision of the environment we wanted to create: an abstract narrative that subtly tells the story of a group of friends watching TV and embarking on a surreal, psychedelic trip. While we didn’t want to portray this explicitly, the goal was to evoke the strange sensations and shifting experiences they go through, using a mix of visual cues and atmospheric design. With the concept in place, we knew from the start that the project would need creative, unusual visuals, paired with immersive psytrance or liquid drum and bass audio to match the tone and energy of the story.

Once we had a solid sense of the visual and sonic direction, we dedicated ourselves to an intense 12-hour live coding jam session (with a couple of runs to the Baqala for snacks), which we streamed on Instagram. This session became a space of spontaneous experimentation and rapid development, where we started shaping the core of the experience. Although we made significant progress during the jam, the following days, especially after Tuesday, revealed lingering technical and timing issues that still needed to be resolved. These challenges became the focus of our attention as we worked toward polishing the final piece.

Audio

Aadil and I mostly handled the audio. We focused on using a few effective voice samples to bring out parts of the performance we thought needed more attention, pulling from our favourite songs (e.g., “Everything In Its Right Place” by Radiohead) and favourite genres (techno, hardgroove). The buildup layered ambient textures and chopped samples with increasing intensity to simulate anticipation: we manipulated legato ambience, gradually intensified techno patterns, and used MIDI CC values to sync modulation effects like low-pass filters and crush.
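The CC sync works roughly like this: Tidal's `ccv` patterns send values in 0–127 over MIDI, and on the Hydra side they arrive normalized to 0–1 (as `cc[n]`). A minimal plain-JavaScript sketch of that mapping, with helper names that are ours, not Hydra's:

```javascript
// Normalize a raw MIDI CC value (0..127) to the 0..1 range Hydra exposes as cc[n].
function normalizeCC(ccActual) {
  return ccActual / 127;
}

// Map a normalized CC value onto a parameter range, e.g. a low-pass cutoff in Hz.
function ccToRange(ccNorm, lo, hi) {
  return lo + ccNorm * (hi - lo);
}

console.log(ccToRange(normalizeCC(127), 200, 400)); // 400
console.log(ccToRange(normalizeCC(0), 200, 400));   // 200
```

This is the same shape as the `lpf (range 200 400 sine)` patterns in the Tidal code below, just viewed from the receiving end.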

In the midsections, we used heavy 808 kicks, distorted jungle breaks, and glitchy acid lines (like the “303” patterns) to keep up the tension and energy. For the end sections, we wanted to have some big piano synths that brought home the feeling of a comedown. The tonal shift was meant to echo that feeling of emotional release, what it feels like when a trip starts to settle.

Visuals

The goal was to mirror the full arc of a trip, with visuals locked to every change in the track. Mo Seif first sketched a concept for each moment, then built the look layer‑by‑layer, checking each draft against the audio until they matched perfectly. We had seven primary sections, each tied to a distinct musical cue.

1. Intro – “Sofa & Tabs”

We’re slouched on a couch, half-watching TV, when we decide to take the journey of our lifetimes; the trip timer starts.

2. Onset – “TV Gets Wavy”

The first tingle hits. The TV image begins to undulate – colors drifting, lines bending. A slow warp effect hints that reality is about to buckle.

3. First Peak – “Nixon + Rectangles”

Audio: vintage Nixon sample followed by drum drop.

Visuals: explosion of rectangle‑shaped, ultra‑psychedelic patterns that sync to each snare hit. The crowd pops; everything feels bigger, faster, weirder.

4. Chiller Section 

A short breather featuring three “curated” GIFs:

Lo Siento, Wilson – pure goofy laughter.

Sassy the Sasquatch – laughter + tripping out 

Pikachu tripping – those paranoid, deep‑thought vibes.

Together they nail the mood‑swings of a trip.

5. Meta Moment – “Pikachu Breaks the 4th Wall”

Pikachu dissolves into a live shot of the same GIF playing on my laptop in the dorm while we’re editing. Filming ourselves finishing the piece made it hilariously meta; syncing it to the beat was a nightmare, but it clicked.
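The beat-sync itself boils down to quantizing a cut to the next cycle boundary. A minimal sketch of that arithmetic (a hypothetical standalone helper, not our actual Hydra code), assuming a known cps value:

```javascript
// Given the current time in seconds and Tidal's cycles-per-second (cps),
// return the time of the next cycle boundary, so a source swap lands on the beat.
function nextCycleBoundary(nowSeconds, cps) {
  const cycleLen = 1 / cps; // one cycle's duration in seconds
  return Math.ceil(nowSeconds / cycleLen) * cycleLen;
}

console.log(nextCycleBoundary(10.3, 0.5)); // 12 (cps 0.5 means 2-second cycles)
```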

6. Street‑Fighter Segment – “Choose Your Fighter”

Inspiration: I was playing GTA once and saw myself in the game as Trevor. We wanted to recreate that feeling and put ourselves in a video game.

Build: we took 4‑5 photos of each of us, turned them into looping GIFs, and dropped them onto the classic character‑select screen with p5.js.
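To lock the looping GIFs to the music, the frame-picking math just scales a 0–1 cc value into a frame index. This mirrors the `setFrame` lines left commented out in our p5 sketch; the helper below is a hypothetical standalone version:

```javascript
// Pick a GIF frame from a normalized cc value (0..1).
// Clamped so cc = 1 doesn't index one past the last frame.
function frameForCC(ccNorm, numFrames) {
  const idx = Math.floor(ccNorm * numFrames);
  return Math.min(idx, numFrames - 1);
}

console.log(frameForCC(0, 12));   // 0
console.log(frameForCC(0.5, 12)); // 6
console.log(frameForCC(1, 12));   // 11
```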

Plot Twist: Mo’s fighter “dies” (tongue out), then a smash-cut to an Attack on Titan GIF – as if he resurrected.

7. Final Drop & Comedown – “Hard‑Groove + CC Sync”

The last drop pivots to a hard‑groove techno feel. Every strobe and colour hit is driven by MIDI CC values mapped to the track. We fade back to the original couch shot: the three of us staring straight into the lens, coming down – sweaty, wired, grinning.

Here are the GIFs we made for the Street Fighter visuals:

Here is the code we used:
Tidal:

-- Let's watch some TV + mini build-up + mini drop (SEC 1: hey guys, why don't we watch some TV)

d1 $ chop 2 $ loopAt 16 $  s "ambience" # legato 3 # gain 1 #lpf (range 200 400 sine)
   
once $ "our:3" # up "-5"
     
d10 $ fast 2 $ "our:4" # up "-6"

d2 $ slow 1 $ s "techno2:2*4" # gain 0.9 # room 0.1 

--cc (eval separately)
d16 $ fast 16 $ ccv "<0 100 80 20 0 100 80 20>" # ccn "2" # s "midi"


d4 $ ghost $ slow 2 $ s "rm*16" # gain 0.75 # crush 2 # lpf 2500 # lpq(range 0.4 0.6 sine)

d5 $ stack [
    n "0 ~ 0 ~ 0 ~ 0 ~" # s "house",
    n "11 ~ 11 ~ 11 ~ 11 ~" # s "808bd" # speed 1 # squiz 0 # nudge 0.01 # release 0.4 # gain 0.3,
    slow 1 $ n "8 ~ 8 8 ~ 8 ~ 8" # s "jungle"
]


d6
  $ linger 1
      $ n "[d3@2 d3 _ d3 _ d3 _ _ c3 _]/1"
     -- $ n "[d3 d3 c3 d3 d3 d3 c3 d3 f3 _ _ f3 _ _ c3]/2"
     -- $ n "[f3 _ _ g3 _ _ g3 _]*2"
  # s "supergong" # gain 1.2 #lpf 100 # lpq 0.5 # attack 0.04 # hold 2 # release 0.1 


  d7 $ stack [randslice 8 $ loopAt 8 $ slow 2 $ jux (rev) $ off 0.125 (|+| n "<12 7 5>") $ off 0.0625 (|+| n "<5 3>") $ cat [
  n "0 0 0 0",
  n "5 5 5 5",
  n "4 4 4 4",
  n "1 1 1 1"
  ]] # s "303" # gain 0.9 # legato 1 # cut 2 # krush 2

d10 $ fast 2 $ ccn "0*128" # ccv (range 200 400 $ sine) # s "midi"


--at the end of first visual
d1 silence
d2 silence
d5 silence
d6 silence
d7 silence


-- drugs r enemy (before the drop)
once $ s "sample:2" # gain 1.2

-- THE drums (the drop)
d11 $ stack [fast 2 $ s "[bd*2, hh*4, ~ cp]"] # gain 1.2


--after drums 
d9 $ stack [
    slow 1 $ s "techno2:2*4" # gain 0.9 # room 0.1,
   stack [
      n "0 ~ 0 ~ 0 ~ 0 ~" # s "house",
      n "11 ~ 11 ~ 11 ~ 11 ~" # s "808bd" # speed 1 # squiz 0 # nudge 0.01 # release 0.4 # gain 0.3,
      slow 1 $ n "8 ~ 8 8 ~ 8 ~ 8" # s "jungle"
  ],
     linger 1
        $ n "[d3@2 d3 _ d3 _ d3 _ _ c3 _]/1"
       -- $ n "[d3 d3 c3 d3 d3 d3 c3 d3 f3 _ _ f3 _ _ c3]/2"
       -- $ n "[f3 _ _ g3 _ _ g3 _]*2"
    # s "supertron" # gain 0.8 #lpf 100 # lpq 0.5 # attack 0.04 # hold 2 # release 0.1 ,
   stack [randslice 8 $ loopAt 8 $ slow 2 $ jux (rev) $ off 0.125 (|+| n "<12 7 5>") $ off 0.0625 (|+| n "<5 3>") $ cat [
    n "0 0 0 0",
    n "5 5 5 5",
    n "4 4 4 4",
    n "1 1 1 1"
    ]] # s "303" # gain 0.9 # legato 1 # cut 2 # krush 2, 
    fast 16 $ ccv "<0 100 80 20 0 100 80 20>" # ccn "2" # s "midi"
] # gain 0

d11 silence
d9 silence 

-- START XFADE WHEN READY FOR GIF MUSIC

d10
$ whenmod 16 4 (|+| 3)
$ jux (rev . (# s "arpy") . chunk 4 (iter 4))
$ off 0.125 (|+| 12)
$ off 0.25 (|+| 7)
$ n "[d1(3,8) f1(3,8) e1(3,8,2) a1(3,8,2)]/2" # s "arpy"
# room 0.5 # size 0.6 # lpf (range 200 8000 $ slow 2 $ sine)
# resonance (range 0.03 0.6 $ slow 2.3 $ sine)
# pan (range 0.1 0.9 $ rand)
# gain 0.6 

-- -->7


d16 $ fast 16 $ ccv "<0 100 80 20 0 100 80 20>" # ccn "2" # s "midi"



-- GIF section: sudden drop from the mini drop, chill background matching the music + sample audio for the gifs (SEC 2.1)
-- GIF SECTION MUSIC

-- 1) DONNY 

-- 2) WILSONNNNNN
once $ s "wilson"  # gain 1.4

-- 3) PIKAPIKA
once $ s "pikapika" # gain 1.4 

-- background silence WHEN ICE SPICE
d10 silence -- aadil


-- Ice Spice Queen (SEC 2.2)
once $ "our:5" #gain 2.5 --j


-- Start boss music LOUD, reverse drop off glitchy into us fighting (SEC 3)
d1 $ fast 2 $ s "techno2:2*4" # gain 1.2 # room 0.1
   --SELECT UR FIGHTER CC VALUES 
d10 $ fast 4 $ ccn "0*128" # ccv (range 200 400 $ sine) # s "midi"
-- d16 $ slow 1 $ ccn "0*128" # ccv (range 0.9 1.2 $ slow 2 $ rand) # s "midi"
-----------------------------------------------------------

--cc for street fight
--d16 $ fast 16 $ ccv "0 60 0 70" # ccn "0" # s "midi"
-- 

-- WHEN STREET FIGHT MO VS AADIL
d2 $ stack [
  sometimesBy 0.25 (|*| up "<2 5>") $
  sometimesBy 0.2 (|-| up "<2 1>")
  $ jux (rev) $
  n "[a4 b4 c4 a4]*4" # s "superhammond" # cut 4 # distort 0.3 # up "-9"
  # lpf (range 200 7000 $ slow 2 $ sine) # resonance ( range 0.03 0.5 $ slow 3 $ cosine) # octave (choose [4, 5, 6, 3]),
  sometimesBy 0.15 (degradeBy 0.125) $
  s "reverbkick*16" # n (irand(8)) # distort 0 # speed (range 0.9 1.2 $ slow 2 $ rand) # gain 0.9
] # room 0.5 # size 0.5 # pan (range 0.2 0.8 $ slow 2 $ sine) #gain 0.9


d5 $ fast 2 $ (|+| n "12")$ slowcat [ 
n "0 ~ 0 2 5 ~ 4 ~", 
n "2 ~ 0 2 ~ 4 7 ~", 
n "0 ~ 0 2 5 ~ 4 ~", 
n "2 ~ 0 2 ~ 4 7 ~", 
n "12 11 0 2 5 ~ 4 ~", 
n "2 ~ 0 2 ~ 4 7 ~", 
n "0 ~ 0 2 5 ~ 4 ~", 
n "2 ~ 0 2 ~ 4 ~ 2"
] # s "supertron" # release 0.7 # distort 10 # krush 10 # room 0.5 #hpf 8000 # gain 0.7

-- silence d2 when the next DO starts playing
d2 silence

-- Quiet as in im dead, mini build up, mini drop after lick (SEC 4)

do {
  d5 $ qtrigger $ filterWhen (>=0) silence;
  d4 $ qtrigger $ filterWhen (>=0) $ stack[
    s "hammermood ~" # room 0.5 # gain 1.8 # up "8",
    fast 2 $ s "jvbass*2 jvbass*2 jvbass*2 <jvbass*6 [jvbass*2]!3>" # krush 9 # room 0.7
  ] # speed (slow 4 (range 1 2 saw));
  d3 $ qtrigger $ filterWhen (>=8) $ seqP [
    (0, 1, s "808bd:2*4"),
    (1,2, s "808bd:2*8"),
    (2,3, s "808bd:2*16"),
    (3,4, s "808bd:2*32")
  ] # room 0.3 # hpf (slow 4 (100*saw + 100)) # speed (fast 4 (range 1 2 saw)) # gain 0.8;
}

d1 $ "[reverbkick(3,8), jvbass(3,8)]" # room 0.5 # krush 6 # up "-9" # gain 1

--

drop_deez = do
{
  d5 $ qtrigger $ filterWhen (>=0) $ fast 4 $ chop 2 $ loopAt 8 $ s "drumz:1" # gain 1.8  # legato 3 # cut 3;
  d6 $ qtrigger $ filterWhen (>=4) $ s "jvbass" # gain 1 # room 4.5;
  d10 $ qtrigger $ filterWhen (>=6) $ loopAt 4 $ s "acapella" # legato 3 # gain (range 0.7 1.5 saw);
  d7 $ qtrigger $ filterWhen (>=0) silence;
  d8 $ qtrigger $ filterWhen (>=0) silence;
  d2 $ qtrigger $ filterWhen (>=0) silence;
  d3 $ qtrigger $ filterWhen (>=0) silence;
  d4 $ qtrigger $ filterWhen (>=0) silence;
  d9 $ qtrigger $ filterWhen (>=8) $ s "amencutup*16"  # n (irand(8))  # speed "2 1" # gain 1.8 # up "-2"
 }

d11 $ slow 1 $ ccn "0*128" # ccv (range 1 62 saw) # s "midi" 

drop_deez 

--after drop settles
do
  d1 silence
  d6 silence

-- Quieting down with the couches, drug bad sample (SEC 5)

do {
  d2 $ qtrigger $ filterWhen (>=6) silence;
  d6 $ qtrigger $ filterWhen (>=4) silence;
  d9 $ qtrigger $ filterWhen (>=0) silence;
  d10 $ qtrigger $ filterWhen (>=0) silence;
  d5 $ qtrigger $ filterWhen (>=8) silence;
  d1 $ sound "our:1" # cut 4 # gain 1.5;
  d12 $ qtrigger $ filterWhen (>=0) $ slow 4.1 $ ccv "10 5 4 3" # ccn "0" # s "midi";
}


d1 silence

once $ s "drugsrbad" # gain 1.4

hush

Here is the hydra code:

//SEC 1 

s0.initImage("https://i.imgur.com/enIHg2L.jpeg"); //start scene
s1.initImage("https://i.imgur.com/SRNmgQR.jpeg"); //tv

src(s0).scale(.95).out()

// Synced with Radiohead sample
inc=1;
startZoom = 1;
update = ()=>{
  if(startZoom){
    inc+=0.01;
  }
  if (inc >= 3.4)
  {
    inc = 3.4; // clamp at max zoom
  }
  console.log(inc);
}
//tv zoom
src(s1) 
  //.scale(()=>inc)
   //.scale(3.4)
   //.modulate(noise(()=>cc[2],1))
   //.modulate(s1,()=>cc[2]*5)
   //.colorama(()=>cc[2]*0.1)
   //.modulateKaleid(osc(()=>cc[2]**2,()=>ccActual[2],20),0.01) //changed to 0.1 to 0.01
  .out() 


//first visual
src(o0)
	.hue("tan(st.x+st.y)")
	.colorama("pow(tan(st.x),tan(st.y))")
	.posterize("sin(st.x)*10.0", "cos(st.y)*10.0")
    .luma(()=>cc[2],()=>cc[2])
	.modulatePixelate(src(o0)
		.shift("cos(st.x)", "sin(st.y)")
		.scale(1.01), () => Math.sin(time / 10) * 10, () => Math.cos(time / 10) * 10)
	.layer(osc(1, [0, 2].reverse()
			.smooth(1 / Math.PI)
      .ease('easeInOutQuad')
			.fit(1 / Math.E)
			.offset(1 / 5)
			.fast(1 / 6), 300)
		.mask(shape(4, 0, 1)
			.colorama([0, 1].ease(() => Math.tan(time / 10))
				.fit(1 / 9)
				.offset(1 / 8)
				.fast(1 / 7))))
	.blend(o0, [1 / 100, 1 - (1 / 100)].reverse()
		.smooth(1 / 2)
		.fit(1 / 3)
		.offset(1 / 4)
		.fast(1 / 5))
	.out()

//SEC 2.1 (GIFS)

//gorilla
s2.initVideo("https://i.imgur.com/YMApoXd.mp4");
src(s2).out() 

//wilson
s2.initVideo("https://i.imgur.com/lEVy8F2.mp4");
src(s2).luma(0.1).colorama(0.1).out()

//pikachu
s0.initVideo("https://i.imgur.com/TS10M9l.mp4"); //pikachu
src(s0).out()

//load next clips
s1.initImage("https://i.imgur.com/mW62iXz.jpeg"); //still pikachu

//still pikachu
src(s1).out()

//SEC 2.2 (QUEEN)

//gang
s2.initVideo("https://i.imgur.com/i7msTNY.mp4"); //GANG
src(s2)
  .modulate(s0,0.05)
  .colorama(0)
  .out()

//SEC 3

//choose sf
s2.initImage("https://i.imgur.com/E4SPvyb.jpeg"); //choose your player
src(s2).colorama(()=>ccActual[0]*0.001).modulate(noise(0.2,()=>cc[0]*0.001)).out()

//gang street fighter
let sf1 = new P5();
s3.init({src: sf1.canvas})
sf1.out
sf1.hide()
let mo = sf1.loadImage("https://media0.giphy.com/media/v1.Y2lkPTc5MGI3NjExMDc2NGU0aGEwZmJwZXQ0ZWttZng5aGNqdWp6azlja2ZmeXBlMGs0aiZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9cw/3I1aZCTDfNCd0Fclr4/giphy.gif")
let aadil = sf1.loadImage("https://media.giphy.com/media/gQFjPwZm0pMqW8yoUY/giphy.gif")
let back = sf1.loadImage("https://blog.livecoding.nyuadim.com/wp-content/uploads/background.jpg")
mo.play();
aadil.play();
sf1.scale(0.9);
sf1.fill(255);
sf1.draw = ()=>{
  //sf1.fill(255)
  //et img = ba.get();
  sf1.image(back, 0, 0, sf1.width, sf1.height);
  //sf1.video(ba,0,0 sf1.width sf1.height);
  // sf1.image(mo, 250, sf1.height - mo.height); // bottom-left
  //sf1.image(aadil, sf1.width - aadil.width-100, sf1.height+5 - aadil.height); // bottom-right
//sf1.height/1.3 - mo.height
  sf1.image(mo, sf1.width/6, sf1.height/1.3 - mo.height, sf1.height/1.5, sf1.width/2.5); // bottom-left
  sf1.image(aadil, (sf1.width/1.5) - aadil.width, sf1.height/1.3- aadil.height, sf1.height/1.5, sf1.width/2.5); //
  sf1.fill(255);
  // uncomment below to sync the gifs with tidal
  // mo.setFrame(Math.floor(cc[0]*(mo.numFrames()-1)));
  // aadil.setFrame(Math.floor(cc[0]*(aadil.numFrames()-1)));
}
src(s3).out()


//SEC 4

//load clips
s0.initImage("https://i.imgur.com/dPIOZnu.jpeg"); // mo dead
s1.initImage("https://i.imgur.com/Ukz54v3.png"); //mo really dead


//mo dead
src(s0).out()

//mo really dead
inc=1;
startZoom = 1;
update = ()=>{
  if(startZoom){
    inc+=0.005;
  }
  if (inc >= 2.1)
  {
    inc = 2.1; // clamp at max zoom
  }
  console.log(inc);
}
src(s1)
  //.scale(()=>inc)
  .out()

//lickkkk
s2.initVideo("https://i.imgur.com/hvNHjEg.mp4"); //lick
src(s2).scale(1.3)
  //.diff(s2,0.005)
  //.colorama(1)
  //.modulate(s2,0.05)
  .out()

//post lick
osc(()=>cc[1]*100, 0.003, 1.6)
.modulateScale(osc(cc[1]*10, 0.7
  , 1.1)
      .kaleid(2.7))
  .repeat(cc[1]*3, cc[1]*3)
.modulate(o0, 0.05)
  .modulateKaleid(shape(4.2, cc[1], 0.8))
  //.add(o0,cc[1])
  .out(o0);

//SEC 5

//couch
s3.initImage("https://i.imgur.com/dGW1rt5.png"); //couch
inc=1;
startZoom = 1;
update = ()=>{
  if(startZoom){
    inc+=0.01;
  }
  if (inc >= 9)
  {
    inc = 999.5;
    return; //fix this
  }
  console.log(inc);
}
src(s3)
.repeat(()=>ccActual[0],()=>ccActual[0])
  //.scale(()=>inc)
  .out()

Here is a frame from our livestream:

I think at one point we had two drops and we just couldn’t proceed creatively from there. But we went back through all of our previous blog posts and realized that we could think of almost anything and we already had the tools to do it.

Thank you Aaron for all the help and I think I speak for all three of us when I say that we really needed the fun we had in this class as graduating seniors.

Synesthesia is very commonly seen in contemporary art, and I think it plays an even bigger role in the context of live coding. In the narrative of the artist-musician/musician-artist, being able to interact with one sense as an artist and another as a musician becomes particularly potent when the tools themselves facilitate this blend.

Abstract motifs in music and visuals, and especially the idea of counterpoint both visually and musically, are something I would look into as tools for expression, as artists sought a universal language beyond representation. Just as musical counterpoint involves the interplay of independent melodic lines, visual counterpoint can be read in the juxtaposition and interaction of distinct visual elements like colour, form, or rhythm within a composition, as in Hans Richter’s work.

The programming personality that moved into music is something we see a great deal in the live coding community. Going back to Orca and Devine Lu Linvega from earlier in the semester, we see many individuals who are really good programmers creating tools that help them realize the musical ideas and motifs they inherit. The ability to wield coding languages as instruments for their artistic voice has made these individuals really good musicians and performers too. So, in the scope of live coding, it goes beyond just touching on the two disciplines; each ability really nourishes the other.

In the very wide scope of topics that the article covers, art as expression and art as a tool for fun are both explored. In Fiorucci Made Me Hardcore (1999), there is one section where the dance looks entirely performative, and while this may be beside the point, I think it's very cool to see the performative side of the English club scene, in the same way that there are YouTube tutorials on how to dance at a rave.

For the performance we wanted to go in with a theme. We decided to center our performance around a drum and bass vibe; however, when we met up and started our first jam session, it all deviated and became more of a robo-core or breakcore type of sound, and we decided to just let our creative juices flow.

Mohamed was in charge of the Hydra and P5 visuals, while Aadil and I handled the Tidal parts. We decided that this was probably the best workflow as a group, because going back and forth between coding in Haskell and JavaScript led to a lot of runtime errors, and keeping track of the syntax was quite difficult.

We tried to play and experiment with cutting up whole samples to create some sharp and distinct sounds that we thought would go well with a DnB energy. For the visuals, we tried to flow into something more distorted and bring out a sense of entropy.

Here is our final code from the last Jam:

setcps(0.75)

d9 $ ccv "127 30 60 5" # ccn "0" # s "midi"

d10 $ fast 1 $ ccn "1*128" # ccv (range 200 400 sine)  # s "midi"

d1 $ splice 27 ("8 3 4!1 2*2 4*6 4!4") $ s "ade"
   # lpf (range 200 400 sine)
  # pan "<1 ,-1>"
# gain 1

d2 $ slow 2 $ "jvbass" <| n (run 16) # lpf (range 200 400 sine)

d4 $ s "hh27"<| n (run 8) # lpf 1000

d5 $ sometimes (off 0.125 (# speed 2))
   $ jux (# nudge 0.03)
    $ s "superhoover(<3 5>, 8, <0 1 0 0 3>)"
    # gain 0.8 # hpf 200 # n "<0 1 2 3 4>" # speed 2 
  # krush 1 # lpf (range 200 800 sine) # amp 0.7

d6 
$ sometimesBy 0.15 (chop 50)
$ sometimesBy 0.3 (jux rev)
$ every 4 rev
$ every 4 (#pan 0) 
$ every 5 (# speed (smooth "1 1 1  0.98 0.96 0.93 1.4 1.9")) 
$ s "[[~ notes:28*16 | ~ amencutup*6], [notes:2(5,8) | notes:7*8]]"
# speed "[0.5 1 1 1.26 2 0.25 1]/5"

d7 $ jux rev $ loopAt 16 $ chop 128 $ s "bev:1" # room 0.5
   # gain 1.2 # legato 2 

d5 silence
d6 silence
d3 silence
d1 silence
d2 silence
d4 silence

--cc
d9 $ fast 4 $ 
  ccn "0*16" 
  # ccv "[0 127]*8"
  # s "midi"

--dnb
d7 $ palindrome $ sound "[<amencutup:0 amencutup:1*4> <amencutup:2*2 amencutup:3>] [<amencutup:1*4 amencutup:7> <amencutup:6 amencutup:5*2>] [amencutup:2*4 <amencutup:4 amencutup:3*2>] [<amencutup:5*2 amencutup:4*2> <amencutup:6*2 amencutup:1>]" # speed 2 # release 0.1 # lpf (range 200 600 saw)
// hydra: visual driven by the cc pattern above
shape(4).color(0.8,0.3,0.3).rotate(()=>cc[1]).scale(()=>cc[0]).modulate(noise(3,0.1)).diff(o0,cc[1]).out()

hush()

let p5 = new P5()
p5.hide() 
s0.init({ src: p5.canvas }) 
p5.frameRate(30);
p5.pixelDensity(1);
let glitchDensity = 0; 
let glitchInstability = 0; 
let primaryHue = 180; 
p5.draw = () => {
  glitchDensity = cc[0]
  glitchInstability = cc[1];
  p5.background(0, 0, 0, p5.map(glitchInstability, 0, 1, 30, 10)); 
  p5.noFill();
  p5.strokeWeight(p5.map(glitchInstability, 0, 1, 1, 3)); // go from 1 to 3
  let numElements = p5.floor(p5.map(glitchDensity, 0, 1, 2, 150)); // go from 2 to 150
  for (let i = 0; i < numElements; i++) {
    let x = p5.random(p5.width);
    let y = p5.random(p5.height);
    let w = p5.random(5, 50) * (1 + glitchDensity);
    let h = p5.random(5, 50) * (1 + glitchDensity);
    let angle = p5.random(p5.TWO_PI) * glitchInstability; 
    let hueShift = p5.map(p5.sin(p5.frameCount * 0.05 + i * 0.1), -1, 1, -30, 30) * glitchInstability;
    let currentHue = (primaryHue + hueShift) % 360;
    let saturation = p5.map(glitchInstability, 0, 1, 50, 100);
    let brightness = p5.map(glitchDensity, 0, 1, 70, 100);
    let alpha = p5.map(glitchDensity, 0, 1, 150, 250);
    p5.push();
    p5.translate(x+p5.random(-10,10)*glitchInstability, y+p5.random(-10,10)*glitchInstability);
    p5.rotate(angle);
    p5.stroke(currentHue, saturation, brightness, alpha);
    p5.rect(0,0,w,h);
    p5.pop();
  }
}
src(s0)
  //.pixelate(()=> 5 + cc[1]*20 + cc[0]*30 , ()=> 5 + cc[1]*20 + cc[0]*30 )
  //.kaleid(()=> 1 + Math.floor(cc[0]*6))
  //.modulate(o0, ()=> ccActual[1]*0.05 )
  //.colorama(()=> 0.1 + ccActual[1]*0.3)
  .out() 


hush()

Here is the link to the video:

The first Deadmau5 song I heard was “Strobe”, and it was quite a while after the song’s release. What really captivated me was the flow of the composition and the almost intricate story the song tries to convey, something hard to find in mainstream EDM. Looking at the current landscape of EDM performance, it’s quite evident that artists who are talented at making good drops and build-ups in a sonic sense dominate the scene (e.g., Sara Landry or Oguz in the dark techno genre). These artists also play almost identical sets at different venues, and as the reading suggests, the logistical advantage of a predefined set at a large venue is an attractive upside.

I feel that while DJs create an atmosphere that attracts audiences with pre-planned sets, they can use strong track selection and manipulation of the tracks to make a genuinely compositional performance. Almost like an opera singer, who may sing the same piece every night while different aspects of each performance express the singer’s identity. For example, some mainstream DJs like Patrick Mason incorporate extreme dance moves (for a DJ, at least) to bring a different dimension to a performance that most likely is prerecorded.

I feel that live coding, due to its unique position as a field that is not yet mainstream but has a rich culture and community, positions itself in the middle, between “free” improvisation as seen with Derek Bailey and mainstream DJs who just press play.

Live coding provides the performer with a larger range of tools, through the computer, to manipulate what might be a pre-prepared work. A performer can come in with basically a vibe or a raw idiomatic motive (for example, “I want everyone here at the club to feel euphoric today because I remembered a nice memory from my childhood”) that they want to share with the audience, and then write a few lines of code, or change the current preparation drastically with a few modifications, to steer the expression of the performance.

Design P_2_2_1_01 (Herman Schmidt et al., 2018) from the generative design book caught my eye, and I decided it would be a good base for the work I was planning to make with p5. I felt that abstract patterns like the one seen below were a great start to the project. I took the algorithm for how the patterns were drawn and then customized the code to respond to MIDI events instead of mouse position.
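Swapping mouse control for MIDI control comes down to re-mapping a 0–127 CC value into the same pixel range the sketch originally read from mouseX/mouseY. A minimal standalone sketch of that idea (`lerpMap` is our stand-in for p5's `map()`; the numbers are illustrative):

```javascript
// Linearly map a value from one range to another, like p5's map().
function lerpMap(value, inLo, inHi, outLo, outHi) {
  return outLo + ((value - inLo) / (inHi - inLo)) * (outHi - outLo);
}

// e.g. a 0..127 CC value standing in for mouseX on an 800px-wide canvas:
const fakeMouseX = lerpMap(64, 0, 127, 0, 800);
console.log(fakeMouseX); // ~403.15
```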

There are four patterns in Tidal. One is a pattern of higher sounds, and the other is a bit lower-sounding pattern where I experimented with distortion using the someCycles function. The other two patterns control the MIDI output.

Here is the demo of the visuals controlled by tidalCycles:

I drew inspiration from Hans Zimmer’s work in Dune, with its industrial-but-desert sounds and deep bass. The sounds for the evil side of the track were also inspired by the Sardaukar chant from the epilogue of each of the movies. The hero side draws on the aesthetics of the hero soundtracks in games I played growing up.

The story being told is of a battle between evil and good, where good wins after the beat drops (hopefully on time..). I have attached a video of the composition.

Looking at the video one more time, I realise that you can see the cursor visible in the middle of the screen. This is evidence that the p5.js visualization was shown as a video, and that no live p5 instance was created during the performance.

The background visual is made with p5.js by taking Muse Mind Monitor EEG readings of me playing chess and taking time-series data for each type of brain wave (alpha, delta, theta), which are correlated with different types of emotions. The wave values lost a lot of information and context in the .csv export; the color of the particles is an aggregation of the wave types, and the size is dictated purely by the delta waves associated with high stress. The ring in the middle pulses to the heart-rate data from the headband.

The following code was used to generate the p5.js video:

let brainData = [];
let headers = [];
let particles = [];
let currentIndex = 0;
let validData = [];
let heartBeatTimer = 0;
let lastHeartRate = 70; //default heart rate

class Particle {
  constructor(x, y, delta, theta, alpha) {
    this.pos = createVector(x, y);
    this.vel = p5.Vector.random2D().mult(map(delta, -2, 2, 0.5, 3));
    this.acc = createVector(0, 0);

    this.r = map(delta, -2, 2, 50, 255);
    this.g = map(theta, -2, 2, 50, 255);
    this.b = map(alpha, -2, 2, 50, 255);

    this.size = map(abs(delta), 0, 2, 2, 10);
    this.lifespan = 255;
  }

  update() {
    this.vel.add(this.acc);
    this.pos.add(this.vel);
    this.acc.mult(0);
    this.vel.add(p5.Vector.random2D().mult(0.1));
    this.lifespan -= 2;
  }

  applyForce(force) {
    this.acc.add(force);
  }

  display() {
    noStroke();
    fill(this.r, this.g, this.b, this.lifespan);
    ellipse(this.pos.x, this.pos.y, this.size, this.size);
  }

  isDead() {
    return this.lifespan < 0;
  }
}

function preload() {
  brainData = loadStrings("mindMonitor_2025-02-26--20-59-08.csv");
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  colorMode(RGB, 255, 255, 255, 255);

  headers = brainData[0].split(",");
// filtering dataset for valid rows
  validData = brainData.slice(1).filter((row) => {
    let cells = row.split(",");
    return cells.some((cell) => !isNaN(parseFloat(cell)));
  });

  console.log("Total valid data rows:", validData.length);
}

function draw() {
  //trailing effect for particles
  background(0, 20);

  if (validData.length === 0) {
    console.error("No valid data found!");
    noLoop();
    return;
  }

  let currentLine = validData[currentIndex].split(",");

  let hrIndex = headers.indexOf("Heart_Rate");
  let heartRate = parseFloat(currentLine[hrIndex]);

  if (!isNaN(heartRate) && heartRate > 0) {
    lastHeartRate = heartRate;
  }

  let beatInterval = 60000 / lastHeartRate;

  heartBeatTimer += deltaTime;

  if (heartBeatTimer >= beatInterval) {
    heartBeatTimer = 0;

    let deltaIndex = headers.indexOf("Delta");
    let thetaIndex = headers.indexOf("Theta");
    let alphaIndex = headers.indexOf("Alpha");

    let delta = parseFloat(currentLine[deltaIndex]) || 0;
    let theta = parseFloat(currentLine[thetaIndex]) || 0;
    let alpha = parseFloat(currentLine[alphaIndex]) || 0;

    let particleCount = map(abs(delta), 0, 2, 5, 30);
    for (let i = 0; i < particleCount; i++) {
      let p = new Particle(
        width / 2 + random(-100, 100),
        height / 2 + random(-100, 100),
        delta,
        theta,
        alpha
      );
      particles.push(p);
    }

    currentIndex = (currentIndex + 1) % validData.length;
  }

  push();
  noFill();
  stroke(255, 0, 0, 150);
  strokeWeight(3);
  let pulseSize = map(heartBeatTimer, 0, beatInterval, 100, 50);
  ellipse(width / 2, height / 2, pulseSize, pulseSize);
  pop();

  for (let i = particles.length - 1; i >= 0; i--) {
    particles[i].update();
    particles[i].display();

    if (particles[i].isDead()) {
      particles.splice(i, 1);
    }
  }
}

function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
}

I encapsulated the Tidal code so that I would only have to execute a few lines during the performance, which didn’t go the way I wanted :( . In a way, I feel that when it comes to live coding performance, having a few refactored functions ready while working with live code blocks gives a better setup for performing and debugging on the fly.

Below is the tidal code:


-----------------hehe----

setcps (135/60/4)

-- harkonnen type beat
evilPad = slow 8
$ n (cat [
"0 3 7 10",
"3 6 10 13",
"5 8 12 15",
"7 14 17 21"
])
# s "sax"
# room 0.8
# size 0.9
# gain (range 0.7 0.9 $ slow 4 $ sine)

--shardukar type beat
evilBass = slow 8
$ sometimesBy 0.4 (rev)
$ n "0 5 2 7"
# s "sax:2"
# orbit 1
# room 0.7
# gain 0.75
# lpf 800
# shape 0.3

--geidi prime type beat
evilAtmosphere = slow 16
$ sometimes (|+ n 12)
$ n "14 ~ 19 ~"
# s "sax"
# gain 0.8
# room 0.9
# size 0.95
# orbit 2

--shardukar chant type pattern chopped
evilPercussion = slow 4
$ striate "<8 4>"
$ n (segment 16 $ range 6 8 $ slow 8 $ perlin)
# s "speechless"
# legato 2
# gain 1.2
# krush 4

--shardukar chant type pattern 2 chopped
evilVoice = slow 4
$ density "<1 1 2 4>/8"
$ striate "<2 4 8 16>/4"
$ n (segment 32 $ range 0 6 $ slow 8 $ sine)
# s "speech"
# legato (segment 8 $ range 2 3 $ slow 16 $ sine)
# gain (segment 32 $ range 0.8 1.2 $ slow 4 $ sine)
# pan (range 0.3 0.7 $ rand)
# crush 8

hush

evilRhythm = stack [
s "~ arp ~ ~" # room 0.5 # krush 9 # gain 1.2,
fast 2 $ s "moog2 moog2 moog2 moog3 moog:1 moog2" # room 0.7 # krush 5,
fast 4 $ s "wobble8" # gain 0.8 # lpf 1200
]

d1 $ evilRhythm
hush

-- Build-up to drop
evilBuildUp = do {
d1 $ qtrigger $ filterWhen (>=0) $ seqP [
(0, 1, s "moog:1*4"), (1, 2, s "moog:1*8"),
(2, 3, s "moog:1*16"), (3, 4, s "moog:1*32")
] # room 0.3 # krush 9 # lpf (slow 4 (3000*saw + 200));
d2 $ qtrigger $ filterWhen (>=0) $ seqP [
(0, 1, s "bass1:7*4"), (1, 2, s "bass1:8*8"),
(2, 3, s "bass1:9*16"),
(3, 4, s "bass1:9*32")
] # room 0.3 # lpf (slow 4 (1000*saw + 100)) # speed (slow 4 (range 1 4 saw)) # gain 1.3;
d3 $ qtrigger $ filterWhen (>=0) $ seqP [
(0, 4, evilVoice # gain (slow 4 (range 0.8 1.5 saw)))
];
d4 $ qtrigger $ filterWhen (>=0) $ seqP [
(3, 4, s "crash:2*16" # gain (slow 1 (range 0.7 1.3 saw)) # room 0.9)
]
}

d1 $ silence
d2 $ silence
d3 silence
d4 silence

theDrop = do {
d1 silence;
d2 silence;
d3 silence;
d4 silence
}

heroBassDrum = s "[808bd:3(3,8), 808bd:4(5,16)]" # gain 1.3 # room 0.4 # shape 0.4 #krush 9

heroSnare = s "~ sd:2 ~ sd:2" # room 0.6 # gain 1.1 # squiz 0.8 #krush 6

heroHiHats = fast 2 $ s "hh*4" # gain (range 0.8 1.0 $ rand) # pan (range 0.2 0.8 $ slow 3 $ sine) #krush 5

heroTom = s "~ ~ ~ [~ lt:1 lt:2 lt:3*2]" # gain 1.0 # room 0.5 # speed 0.9 #krush 9

heroCymbal = s "[~ ~ ~ crash:4]" # gain 0.95 # room 0.7 # size 0.8

heroicFill = s "[feel:1*4, [~ ~ ~ feel:6*4], [~ ~ ~ ~ crash]]"
# gain 1.2
# room 0.7
# speed (range 1.5 0.9 $ saw)
# crush 8

dramaticEntrance = do {
d1 $ s "808bd:3 ~ ~ ~ ~ ~ ~ ~" # gain 1.4 # room 0.9 # size 0.9;
d2 $ s "~ ~ ~ crash:4" # gain 1.3 # room 0.9;
d3 silence;
d4 silence
}

heroPattern = do {
d1 $ stack [
heroBassDrum,
heroSnare,
heroHiHats
];
d2 $ stack [
heroTom,
heroCymbal
] # shape 0.3;
d3 $ s "sine:3" >| note (scale "mixolydian" ("0,4,7") + "c4")
# gain 0.65 # room 0.7;
d4 silence
}

heroExpanded = do {
d1 $ stack [
heroBassDrum # gain 1.4,
heroSnare # gain 1.2,
heroHiHats # gain 1.1
];
d2 $ stack [
heroTom,
heroCymbal # gain 1.1
] # shape 0.3;
d3 $ s "sine:3" >| note (scale "mixolydian" ("0,4,7") + "")
# gain 0.75 # room 0.7 # lpf 3000;
d4 $ s "[~ ~ ~ [feel:6 feel:7]]" # gain 0.9 # room 0.6
}

heroEpic = do {
d1 $ stack [
s "[808bd:3(3,8), 808bd:4(5,16)]" # gain 1.4 # room 0.4,
s "~ sd:2 ~ [sd:2 sd:4]" # room 0.6 # gain 1.2,
fast 2 $ s "hh*4" # gain (range 0.9 1.1 $ rand)
];
d2 $ stack [
s "~ ~ mt:1 [~ lt:1 lt:2 lt:3*2]" # gain 1.1 # room 0.5,
s "[~ ~ ~ crash:4]" # gain 1.0 # room 0.7
] # shape 0.4;
d3 $ s "sine:3" >| note (scale "mixolydian" ("0,4,7,9") + "")
# gain 0.8 # room 0.8 # lpf 4000;
d4 $ s "feel:6(3,8)" # gain 0.9 # room 0.6 # speed 1.2
}

evilIntro = do {
d1 $ evilPad;
d2 silence;
d3 silence;
d4 silence
}

evilBuilds = do {
d1 $ evilPad;
d2 $ evilBass;
d3 $ evilAtmosphere;
d4 silence
}

evilIntensifies = do {
d1 $ evilPad;
d2 $ evilBass;
d3 $ evilAtmosphere;
d4 $ evilPercussion
}

d9 $ ccv "0 40 60 120" # ccn "0" # s "midi"

evilFullPower = do {
d1 $ evilPad # gain 0.6;
d3 $ evilVoice;
d4 $ evilPercussion # gain 1.3
}

--for evil full power
d6 $ s "~ arp ~ ~" # room 0.5 # krush 9 # gain 1.2
d7 $ fast 2 $ s "moog2 moog2 moog2 moog3 moog:1 moog2" # room 0.7 # krush 5
d8 $ fast 4 $ s "wobble8" # gain 0.8 # lpf 1200

evilIntro
evilBuilds
evilIntensifies
evilFullPower

evilBuildUp
theDrop

dramaticEntrance
heroPattern
heroExpanded
heroEpic

hush

Below is the hydra code:

s0.initVideo("/Users/janindu/Documents/liveCoding/p5_demo/p5_js_eegViz.mp4")

src(s0)
  .blend(o1, () => cc[1])
  .out()

  src(o0)
    .blend(src(o0).diff(s0).scale(.99),1.1)
    .modulatePixelate(noise(2,0.8).pixelate(160,160),1024)
    .out(o1)



  src(o0)
    .blend(
      src(o0)
        .diff(s0)
        .scale(() => 0.99 + 0.1 * Math.sin(time * 0.1))
        .rotate(() => cc[1] * 0.1),
      () => 1.1 + cc[1] * 0.5
    )
    .modulatePixelate(
      noise(() => 2 + cc[1] * 3, () => 0.01 + cc[1] * 0.05)
        .pixelate(() => 16 - cc[1] * 8, () => 16 - cc[1] * 8),
      () => 1024 * (1 - cc[1] * 0.5)
    )
    .modulate(
      o0,
      () => cc[1] * 0.3
    )
    .kaleid(() => Math.floor(cc[1] * 4) + 1)
    .colorama(() => cc[1] * 0.1)
    .out(o1)


//---
    hush()

I’m really happy something came out of the EEG pipeline, but I honestly feel that that time could have been better spent relying on noise to generate visuals.

But nevertheless, I’m happy I could (pun intended..) leave my heart and soul on the screen.

Also, this is how it looks from p5.js: