I started this project by focusing on the sound first and then creating visuals to match. I had a bunch of ideas—I wanted some acid 303s, amen break drums, and to incorporate elements from my first demo in class. But I also wanted it to have a techno feel. So, I built all these sounds and then started stitching them together into a composition.

The biggest challenge was that some of the sounds didn’t really fit together. To fix that, I made some ambient sounds and drums that suited the composition better, which ended up making the track slower than I originally planned; I was aiming for an ambient techno vibe, but a faster one. I also wanted to use the amen break throughout the whole track, but it didn’t quite fit, so I only bring it in right before the chorus.

For me, the defining sound of this piece is the 303s—I’m a huge fan of them. They have this raw, chaotic energy, which is what I love about them. That’s also why I wanted the visuals to feel chaotic. The visuals have three sections, all messy and intense, which was exactly what I was going for. I usually focus more on sound, but this time, I actually had more fun working on the visuals.

Overall, I am very happy with the composition. As for the sound, the drop right now feels a little too “stiff” (if that makes sense), but I find it to be a good transition to the superhoover.

Here is the code:

Tidal

setcps(120/60/4) -- 0.5 cps = 120 BPM with four beats per cycle


-- ambient pad: feelfx pitched into a chord, swept by a slow low-pass filter
start = fast 2 $ s "feelfx"
  >| note (scale "<indian>" ("[<-7 -3 -1>,0,2]*<1>") + "[c1,c2]")
  # legato 1
  # end 0.3
  # lpf (segment (100/0.75) $ fast (8/0.75) $ range 200 400 $ sine) # lpq 0.3 # hpf 200

-- dbs roll: dense retriggers with swing and a patterned gain accent
start2 = slow 4 $ swingBy (1/10) 8 $ s "<dbs*198 dbs*386 dbs*780>" # gain "[1 0.8 1.3 0.8]*4" # lpf (segment (100/0.75) $ fast (8/0.75) $ range 200 400 $ sine) # amp 2
hush
--drums
sdrums = stack[struct ("1 1 1 [1 0 <0 0 1 0> <0 1>]") $ s "dbs:3" # gain 1.2 #room 2, whenmod 16 15 (stut 4 1 0.065) $ n "~ 12 ~ 10" # s "dbs:2" # legato (choose[0.2,1]) # gain 1] # gain 1.2

-- sfx: layered kick, glitch and hat patterns, bit-crushed and high-passed
sfx = stack[sound "808bd [bd [bd dbs:1]] <dbs:1 dbs:1:2> [~ ~ <~ bd> bd]" # n "1" # speed "<1 1 1 0.8>",sound "~ glitch ~ [dbs:3 [glitch glitch]]" # n "2 2 2 4 " # speed "1.3" # pan sine,sound "[~ glitch ~ glitch] [~ hc] [~ [hc hc]] [glitch hc] " # n "[0 <9 7> 0 <8 9>] 0 0 [1 3] ",sound "glitch(<9 5 12>,16)" # n "0 1 2 3 4 5 6 7" # speed "<3 1> 2 1 1 <5 2>" # pan saw] # shape 0.2 # hpf 2000 # krush 4

-- acid: sliced acid samples with conditional stutter, reverse and reverb throws
acidfx = whenmod 24 20 (# speed "1.2") $ every 2 (rev) $ whenmod 8 6 (stut' 6 0.065 (|* speed 0.2)) $ sometimesBy 0.1 (# room 0.9) $ struct ("1(5,16,<0 1 2>)") $ randslice 2 $ s "<acid:1 acid:0 acid:2>" |+ n "5!6 <14 0>" # legato (choose[0.6,0.8]) # sz 0.3 # room 0.3 # hpf 400 # gain 1


--ambient (d10 streams cc 0 to the Hydra visuals; d11-d13 below do the same for cc 1-3)
do{
d1 $ start;
d10 $ slow 12 $ ccn "0*128" # ccv (segment(100/0.75) $ fast (8/0.75) $ range 200 400 $ sine) # s "midi"
}

hush

--dbs
do{
d2 $ start2;
d11 $ slow 64 $ ccn "1*128" # ccv (segment(100/0.75) $ fast (8/0.75) $ range 200 400 $ saw) # s "midi"
}

d2 silence

--DRUMS
do{
d3 $ sdrums;
d12 $ fast 4 $ ccn "2*128" # ccv "<100 0 8 0 10 0>" # s "midi"
}

hush

--acid fx
do{
d4 $ acidfx;
d13 $ ccn "3*128"  # ccv "< 50 127 <100 127> 10 100 12 45 25 90 >"  # s "midi"
}

--sfx
 d5 $ sfx

-- takeout drums
do{
d3 $ silence;
d12 $ silence
}

--takeout sfx
d5 $ silence

-- takeout  dbs
do{
d2 $ silence;
d11 $ silence
}

--takeout acid (not before riser)
do{
d4 $ silence;
d13 $ silence
}

-- superhoover (last)
 d15 $ silence

--takeout acid + riser
do{
  d1 $ silence; 
  d9 $ qtrigger $ filterWhen (>=0) $ seqP [
    (0, 1, randslice 2 $striate 8 $ s "acid:0"),
    (1, 2, randslice 2 $striate 8 $ s "acid:0"),
    (2, 3, randslice 2 $striate 8 $ s "acid:0*2"),
    (3, 4, randslice 2 $striate 8 $ s "acid:0*4"),
    (4, 5, randslice 2 $striate 8 $ s "acid:0*8"),
    (5, 7, randslice 2 $striate 8 $ s "acid:0*16" #pan saw),
    (7,18, slow 2 $ s "amenbrother:1" #gain 1.2)
  ]  #cut 5;
}

-- This is drop part I
 d15 $ s "superhoover" >| note ((scale "major" "<2 4 0 6>" + "f4")) #distort 1
 #lpf(range 200 600 $ sine) #hpq 1


-- Drop part II
do{
  d10 $ slow 12 $ ccn "0*128" # ccv (segment(100/0.75) $ fast (8/0.75) $ range 200 400 $ sine) # s "midi";
  --dbs
  --d2 $ start2;
  d11 $ slow 64 $ ccn "1*128" # ccv (segment(100/0.75) $ fast (8/0.75) $ range 200 400 $ saw) # s "midi";
  --DRUMS
  d3 $ sdrums;
  d12 $ fast 4 $ ccn "2*128" # ccv "<100 0 8 0 10 0>" # s "midi";
  --acid fx
  d4 $ acidfx;
  d13 $ ccn "3*128"  # ccv "< 50 127 <100 127> 10 100 12 45 25 90 >"  # s "midi";
  --sfx
   d5 $ sfx;
}

d1 $ silence


hush

Hydra



//start without dbs
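// cc[n] / ccActual[n] below are the MIDI CC values streamed in from the Tidal d10-d13 patterns above (ccn 0-3)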
speed = 0.5
shape(2,0.2,0.9)
.color(10,0,()=>cc[0],100)
.scale(0.4)
.repeat(1.0009)
.modulateRotate(o0, ()=>cc[0]*-5)
.scale(0.9).modulate(noise(()=>cc[0],2))
.rotate(3) // DBS ->>>.diff(src(o0).scale(0.9).mask(shape(4,0.9,0.01)).rotate(()=>cc[1]**2))
.out(o0)

hush()

//DRUMS
s0.initImage('C:/Users/Zakarya/Downloads/LARRY.gif')
src(s0).blend(src(s0).diff(o1).scale(1.01),1.0005)
.layer(
  src(s0)
  .scale(()=> 0.09 + ccActual[2]*0.01)
  .luma(.2)
  .invert()
  .contrast(2)
  .scrollX(.1, -0.01)
  //.modulatePixelate(src(s0), [250, 500, 10001])
)
.out(o0);

hush()

//ACID
speed = 0.5
shape(2,0.02)
.modulate(noise(1, 10))
.color(10,0,()=>Math.random(),100)
.scale(0.09) //interesting (0.09) & scale (100)
.repeat(1.9)
.modulateRotate(s0,()=>ccActual[3]*-50)
.scale(10).modulate(noise(()=>cc[3]**1001,1))
.rotate(1)
.diff(src(o0).scale(0.9).mask(shape(2,0.999,0.01)).rotate(()=>cc[3]*0.01))
//.pixelate(1,1000)
.out(o0)


hush()

The reading prompted me to explore some of Ryoichi Kurokawa’s work, and I found “Re-Assembli” at the ETERNAL Art Space Exhibition really interesting. What’s cool about it is his approach to de-naturing: transforming familiar landscapes of trees and buildings by altering their settings to black and white or inverting their colors, then presenting these transformed scenes through striking, unconventional camera views. As the images move, they often blink rapidly in sync with industrial-like sounds, creating an uncanny, almost synesthetic experience. This synthesis of audio and visuals not only deconstructs traditional notions of nature but also immerses the viewer in a unique sensory journey.

Another aspect of “Re-Assembli” that resonated with me was the juxtaposition created by the two screens placed side by side. On one screen, visuals of trees and nature were played, while on the other, he played images of buildings and interior spaces. This contrast was particularly fascinating because it accentuated the tension between the organic and the constructed, inviting viewers to reflect on how nature and human-made environments coexist and interact. By deliberately placing these two narratives in parallel, Kurokawa challenges our conventional perceptions and encourages us to consider the impact of urbanization and technological intervention on the natural world.

Another thing I found really cool about Kurokawa’s approach is his choice to work without internet in his studio, even though he uses technology as a tool for his art. This detail made me wonder if he deliberately avoids the internet to minimize distractions or to protect his originality from being influenced by the endless stream of external ideas. I was recently discussing with a friend how ChatGPT can generate creative suggestions that might, paradoxically, lead to a decrease in overall creativity by making us less inclined to think of new ideas on our own. In this light, Kurokawa’s decision to forgo internet access might be a conscious effort to create a focused, unmediated space for artistic exploration, where his creative process remains untouched by the constant influx of digital information.

Also, his indifference toward both old media and the latest innovations highlights his focus on the essence of creativity itself. By working in a fluid, adaptable manner, much like the gradual evolution of nature, he ensures that his artistic process remains open to new ideas and free from the constraints of technological trends. This philosophy not only protects his originality but also allows his work to develop at its own pace, echoing the natural, unpredictable progression of life.

For the research project, the live coding platform that I picked is Motifn. Motifn enables users to make music using JavaScript. It has two modes: a DAW mode and a fun mode. The DAW mode lets users connect their digital audio workstation, like Logic, to the platform, so they can orchestrate synths in their DAW using JS. The fun mode, on the other hand, lets you start producing music in the browser right away. I used the fun mode for the project.

The coolest feature of Motifn is that it visualises the music for you. Similar to how we see the selected notes in a MIDI region in Logic, Motifn lays out all the different tracks, along with their notes, underneath the code. This helps the user understand the song structure and is an intuitive way to lay out the song, which makes the platform user-friendly.

To get started, I read through the examples on the platform. There is a long list of them right next to the coding section on the website, and all of them are interactive, which makes it easy to experiment with different things. Because the examples sit right beside the editor, there was no need to open different tabs to consult the documentation, so trying out lots of them was convenient. Having interactive, short, and to-the-point documentation let me explore the different things Motifn has to offer.

After playing around with it for a while, I discovered that the platform lets you decide the structure of the song before you even finish coding the song itself. So, using let ss = songStructure({}), I laid out a song structure up front.

Motifn has a lot of synth options (some of them built using Tone.js), and I am a huge fan of synths, so I started my song with them. I followed that with bass in the first bridge; synth + bass + notes in the second chorus; bass + hi-hats in the second bridge; and kicks + snare + hi-hats + bass + chords in the first and second verse. I then remove the drums in the third chorus and bring them back in the next one. After that I take out the instruments one by one and the song finishes.

Here is the demo of what I made on Motifn.

There isn’t a lot of information about Motifn online; I was unable to find the year it was developed or even who founded it. I would place the platform somewhere in the middle between live coding and DAW music production: it felt less flexible for experimenting and making music on the fly than TidalCycles, and it seems more structured and intentional. But there are a lot of cool sounds and controls on the platform, like adding groove to an instrument so it plays either behind (like we read about in class) or ahead of the beat by a few milliseconds, or modulating the harmonicity of a synth over time. Its use of JavaScript for music composition makes it accessible to a broad range of users, which reflects the live coding community’s values of openness and innovation. Overall, it is a fun platform to use and I am happy with the demo song that I made with it.

In techno, the drum components, particularly the kick drum and hi-hats, are arguably the most foundational elements. They act as a constant pulse, driving the rhythm forward and maintaining momentum.

Groove is highly subjective, but in my opinion, adding microtiming or varying the velocity of kick drums or hi-hats makes a track sound much better. It introduces subtle variations that keep the beat from feeling too stiff or mechanical. Robotic rhythms aren’t inherently bad, but over time they can become predictable or monotonous.
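As a minimal TidalCycles sketch of what I mean (my own illustration, not taken from the track above): swingBy pushes alternate hi-hats off the grid, a patterned gain varies their velocity, and nudge delays them by a few milliseconds.

-- straight hats on a rigid grid
d1 $ stack [s "bd*4", s "hh*8"]

-- swung hats with a velocity (gain) accent pattern, nudged ~5 ms behind the beat
d1 $ stack [
  s "bd*4",
  swingBy (1/3) 4 $ s "hh*8" # gain "[1 0.8 1.1 0.85]*2" # nudge 0.005
]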

However, microtiming isn’t the only thing that gives electronic music its soul. Another major factor is the emotion it evokes in listeners and the culture it comes from: Berlin-style underground techno will sound very different from Detroit-style underground techno.

For example:

Ambient techno has a different kind of soul—it’s deep, introspective, and atmospheric.
Hardstyle has an intense, energetic soul, built around distortion and high-energy kicks.
Hardgroove brings a driving, hypnotic pulse that feels more tribal and raw.


Each subgenre carries its own emotional weight, and that emotional impact is just as important as rhythmic complexity. While microtiming can enhance the feel of a track, other elements like sound design, harmonic progression, and energy levels also contribute to the overall experience of the record.

Electronic music doesn’t need microtiming to have soul, but it benefits from it—especially in genres where groove is key.

As a computer science student and a DJ, I find live coding very intriguing because it lets me do something creative around my passion for electronic music with code, a tool rarely associated with creative fields like music production.

Live coding allows anyone to see and understand the process of creating music through code. Unlike traditional DJing, where music is often mixed from pre-recorded tracks, live coding enables real-time composition, making each performance unique and dynamic. This improvisatory nature mirrors the spontaneity of live music while using the precision and power of programming.
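A minimal TidalCycles illustration of what that looks like in practice (my own example, not from any specific performance): you evaluate a line to start a pattern, then re-evaluate a modified version, and the running pattern is swapped out while everything else keeps playing.

d1 $ s "bd*4"           -- evaluate: a kick starts looping
d1 $ s "bd*4" # lpf 800 -- re-evaluate with a tweak: the pattern is replaced live, mid-performance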

What appeals to me the most is the deeply human aspect of all this. The algorave scene, where people come together to dance to music generated in real time through live coding, is a perfect example of how tech can serve us rather than the other way around. It’s not just about writing code—it’s about using that code to create shared experiences, to bring people together, and to foster a sense of connection. Seeing live coding facilitate something communal through algoraves, subreddits, and GitHub pages reinforces the idea that code isn’t just about logic, structure, and money. It can also be a powerful tool for expression, emotion, and collective joy.