Originally, I had gone for one of the visuals from the website that was shared with us. However, Pulsar crashed towards the end, so I decided to use a simple visual I had made during my Intro to IM class. It’s a little over 2 mins (sorry :/ )

https://youtu.be/XYRRBaNS35w

Here is my Tidal code!

d2 $ struct "<t(3,8) t(5,8)>" $ s "casio" # n (run 8)

d4 $ struct "<t(3,8) t(5,8)>" $ ccv "<169 109 120 127>" -- note: MIDI CC values range 0-127, so 169 falls outside the valid range
  # ccn "0" 
  # s "midi"


d1 $ n ("e4 d c a b a b c7" |+ "<2 2 7 12>") # s "[superpiano, cp, bd, arpy, bd]"  # room 1

d3 $ struct "<t(3,8) t(5,8)>" $ ccv "<127 115 66 107>" 
  # ccn "0" 
  # s "midi"

d2 silence


hush
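(A quick aside on the `t(3,8)` / `t(5,8)` patterns above: this is Tidal’s Euclidean rhythm notation, which spreads 3 or 5 onsets as evenly as possible across 8 steps. Here is a tiny JavaScript sketch of the even-spacing idea — my own illustrative helper, not Tidal’s actual Bjorklund implementation, which can yield a rotation of the same pattern:)

```javascript
// Euclidean rhythm sketch: place `pulses` onsets as evenly as possible
// across `steps` slots (hypothetical helper, not Tidal's real algorithm).
function euclid(pulses, steps) {
  const pattern = new Array(steps).fill(false);
  for (let i = 0; i < pulses; i++) {
    pattern[Math.floor((i * steps) / pulses)] = true;
  }
  return pattern;
}

// t(3,8) -> onsets at steps 0, 2, 5: a rotation of the classic tresillo x..x..x.
const tresillo = euclid(3, 8).map(on => (on ? 'x' : '.')).join('');
console.log(tresillo); // "x.x..x.."
```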

Hydra:

let p5 = new P5(); 
s0.init({ src: p5.canvas }); 
src(s0).out(); 

p5.hide(); 

let bubbles = [];

p5.draw = () => {
  // first frame only: create the canvas and seed the initial bubbles
  if (bubbles.length === 0) {
    p5.createCanvas(window.innerWidth, window.innerHeight);
    for (let i = 0; i < 30; i++) {
      bubbles.push(new Bubble(p5.random(p5.width), p5.random(p5.height), p5.random(20, 100)));
    }
  }
  
  p5.background(137, 207, 240, 50);

  for (let i = 0; i < bubbles.length; i++) {
    bubbles[i].move();
    bubbles[i].display();
  }

  // every 15 frames, release a fresh bubble from the bottom edge
  if (p5.frameCount % 15 === 0) {
    bubbles.push(new Bubble(p5.random(p5.width), p5.height, p5.random(20, 100)));
  }
};

class Bubble {
  constructor(x, y, r) {
    this.x = x;
    this.y = y;
    this.r = r;
    this.speed = p5.map(this.r, 20, 100, 2, 0.5);
    this.color = p5.color(p5.random(100, 255), p5.random(100, 255), p5.random(255), p5.random(100, 200));
  }

  move() {
    this.y -= this.speed;
    this.x += p5.random(-1, 1);
  }

  display() {
    p5.fill(this.color);
    p5.noStroke();
    p5.ellipse(this.x, this.y, this.r);
  }
}
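The inverse size-to-speed relation above (`p5.map(this.r, 20, 100, 2, 0.5)`) is just a linear remap of a value from one range to another, so big bubbles drift slowly and small ones rise fast. A standalone version of that formula (my own sketch of the arithmetic, not p5’s source):

```javascript
// Linear remap, same idea as p5.map(value, inLo, inHi, outLo, outHi):
// radius 20 maps to speed 2 (fast), radius 100 maps to speed 0.5 (slow).
function remap(value, inLo, inHi, outLo, outHi) {
  return outLo + ((value - inLo) / (inHi - inLo)) * (outHi - outLo);
}

console.log(remap(20, 20, 100, 2, 0.5));  // 2    -> smallest bubble, fastest
console.log(remap(100, 20, 100, 2, 0.5)); // 0.5  -> biggest bubble, slowest
console.log(remap(60, 20, 100, 2, 0.5));  // 1.25 -> midpoint
```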

src(s0)
    .mult(osc(2, () => cc[0] * 2, 3))
    .modulate(noise(() => cc[1] * 0.5))  
    .rotate( () => cc[0] * 0.5 )        
    .colorama(() => cc[0] * 1)       
    .out();

src(o2)
  .modulate(
    src(o1)
      .modulate(noise(() => cc[1] * 0.05))
      .rotate(() => cc[2] * 0.2)
  )
  .colorama(() => cc[0] * 2)
  .blend(src(o0))
  .out(o2)

  
render(o2)

hush()

Apologies for the late submission; it slipped my mind to post one, though I had recorded my video prior to the class.

All in all, I had no idea what to expect with the composition. I had no idea what my personal stylistic choices were, so I struggled at the start to find a concept, and simply began by crafting a funky, upbeat, solid rhythm. I took my time becoming familiar with the visuals and spent quite a bit of time experimenting with them, but not many of my results felt like they aligned with the direction my piece was heading. Then I thought to bring in a personal sound sample to spice things up, and went with the first thing that came to mind: Pingu. I included the Noot Noot sample as I find Pingu to be the perfect embodiment of chaos while also being a playful character (and one of my favourite characters to exist).

I wanted to ensure the visuals were in sync with the sound. I struggled with this at the start, especially with finding the right sort of ccv values, but through a brute-force session of iterative trial and error I found a neat balance. I had started with a more subtle approach; however, I found it was quite challenging to recognise, and I was worried that given the time limit during the demos I would not be able to execute it properly. Therefore, I went for bolder visuals with simpler beats. I noted that in class you said the sync between the visuals and the audio was not as evident, so I hope that in this video you are able to find a more distinguishable link between them.
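For anyone curious how the sync actually works: Tidal’s `ccv` sends MIDI CC values in the 0–127 range, and on the Hydra side each `cc[n]` is read as a value normalised to 0–1, which is why the visual patterns multiply it by small factors. A tiny sketch of that normalisation step (my own illustration, not Hydra’s actual MIDI handler):

```javascript
// MIDI CC bytes are 0-127; visual parameters usually want 0-1.
// Hydra-style setups typically divide by 127 (illustrative helper).
const cc = new Array(128).fill(0);

function onMidiCC(controllerNumber, rawValue) {
  cc[controllerNumber] = rawValue / 127; // normalise 0-127 -> 0-1
}

onMidiCC(0, 127); // ccv 127 in Tidal...
console.log(cc[0]); // ...reads as 1 in Hydra-style code
onMidiCC(0, 64);
console.log(cc[0].toFixed(2)); // roughly mid-range
```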

From the 0:27 mark, I introduce a new melody, which I wanted to represent with squiggly lines to indicate its playful nature. This is then followed by even funkier, more playful beats such as casio and blip. Once I had found an interesting synchrony between casio and blip, I understood how I wanted to go ahead, as this made it easy for me to create something that reflects a feeling of lightheartedness with a tinge of a spirited and lively approach. However, since I had Pingu in my vision, around the end of my video (4:00) I began to truly mess with the visuals and create something quite disorderly in nature, despite it staying in sync with my sound.

I hope that you enjoyed!

Here is my code! (It’s a bit different from the video since it is from the class demo.)

Tidal

--- FINAL CODE

hush
d1 $ s "{808bd:5(3,4) 808sd:2(2,6)} " # gain 2 # room 0.3

d1 silence
d2 $ struct "{t(3,4) t(2,6) t(2,4)}" $ ccv ((segment 128 (range 127 0 saw))) # ccn "0" # s "midi"
hush
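(The `segment 128 (range 127 0 saw)` idiom above samples a falling sawtooth into 128 discrete CC values per cycle. A rough JavaScript sketch of what that sampling does, assuming one cycle runs from phase 0 to 1 — my own illustration, not Tidal’s code:)

```javascript
// Sample a descending ramp (127 -> 0) at n evenly spaced points per cycle,
// mimicking Tidal's `segment n (range 127 0 saw)` (illustrative only).
function segmentSaw(n) {
  const values = [];
  for (let i = 0; i < n; i++) {
    const phase = i / n;                        // position within the cycle, 0..1
    values.push(Math.round(127 * (1 - phase))); // range 127 -> 0
  }
  return values;
}

const ramp = segmentSaw(128);
console.log(ramp[0]);  // 127 at the start of the cycle
console.log(ramp[64]); // 64, halfway through the fall
```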

d3 $ fast 2 $ s "pluck" <| n (run 4) # gain 1 # krush 2
d2 $ ccv "0 20 64 127" # ccn "0" # s "midi"

d4 $ s "glasstap" <| n (run 4) # gain 1.5

d5 $ slow 2 $ s "arpy" <| up "c d e f g a b c6" # gain 1.5
d2 $ ccv " 9 19 36 99 80 87 45 100" # ccn "0" # s "midi"

d6 $ fast 2 $ s "casio" <| n (run 4) # gain 2
d3 $ qtrigger $ filterWhen (>=0) $ seqP [
  (0, 1, s "blip:1*4"),
  (1,2, s "blip:1*8"),
  (2,3, s "blip:1*12"),
  (3,4, s "blip:1*16")
] # room 0.3

d4 silence
hush
nooty = once $ sound "nootnoot:Noot" # squiz 1 # up "-2" # room 1.2 # krush 2
nooty
-- PART 2

d5 $ s "blip"  <| n (run 4)
  # krush 3
  # gain 1

d2 $ ccv "30 80 120 60" # ccn "0" # s "midi"
d6 silence

hush

d6 $ fast 2 $ s "control" <| n (run 2)
d7 $ fast 2 $ s "casio" <| n (run 4) # gain 0.9



d8 $ s "{arpy:5(3,4) 808sd:2(2,4)}" # gain 1

d2 $ struct "{t(3,4) t(2,4) t(2,4)}" $ ccv ((segment 128 (range 127 0 saw))) # ccn "0" # s "midi"
nootynooty = once $ sound "nootnoot:Noot" # legato 0.2 # squiz 1 # up "-2" # room 1.2 # krush 2

d6 silence
d10 $ qtrigger $ filterWhen (>=0) $ seqP [
  (0, 1, s "control:1*4"),
  (1,2, s "control:1*8"),
  (2,3, s "control:1*12"),
  (3,4, s "control:1*16")
] # room 0.3
nooty

hush

Hydra


//SHAPE ONE 

osc(20, 0.4, 1)
  .color(0.3, 1.2, 1.2)
  .rotate(() => cc[0] * 0.9 * 0.8)
  .kaleid(10)
  .modulateRotate(noise(() => (cc[0]) * 0.7, 0.6))
  .rotate(() => cc[0] * 1.1 * 1.8)
  .kaleid(30)
  .modulateRotate(noise(() => (cc[0]) * 0.9, 0.6))
  .out()
hush()

//SHAPE TWO 
osc(20, 0.3, 3)
  .color(1.3, 1.8, 2.9)
  .modulate(noise(() => (cc[0] + cc[1]) * 3, 1.4))
  .layer(
    osc(70, 0, 1)
      .luma(0.5, 0.1)
      .kaleid(10)
      .modulate(noise(() => (cc[0] + cc[1]) * 2, 0.4))
  )
  .out(o0)
hush()
//SHAPE THREE
shape(10, 0.5)
  .scale(1, 1, 2)
  .repeat(30, 9)
  .modulate(noise(() => (cc[0] + cc[1]) * 9, 0.9))
  .out()

solid().out()
//SHAPE IV
osc(15, 2.6, 1.8)
  .color(1.2, 1.4, 1.2)
  .rotate(() => cc[0] * 0.9 * 0.5)
  .kaleid(20)
  .modulateRotate(noise(() => (cc[0]) * 1.2))
  .out()

hush()
//SHAPE V
osc(10, 30, 10)
.kaleid(99)
.modulate(noise(() => (cc[0] + cc[1]) * 1.9, 0.2))
.out(o0)

// noot
hush()

“Nature is disorder. I like to use nature to create order and show another side of it. I like to denature.” Kurokawa bends over the iMac, clicks through examples of his work on a hard drive, and digs out a new concert piece that uses NASA topographic data to generate a video rendering of the Earth’s surface. Peaks and troughs dance over a geometric chunk on the black screen, light years from the cabbie’s satnav. “The surface is abstract, but inside it’s governed by natural laws,” he says.

I find Kurokawa’s perspective on nature as disorder, and his desire to “denature” it, very interesting, particularly how it resonates with the tension between chaos and order that exists in both art and science. His use of natural data, such as NASA’s topographic information, to create a structured, perhaps even surreal, presentation of the Earth highlights the duality between the organic and the artificial. It suggests that while nature may appear unpredictable, it operates within a framework of fundamental laws that can be harnessed and reshaped through human interpretation.

Kurokawa ultimately challenges our perception of what is ‘natural’ and what is ‘artificial.’ His work demonstrates that the act of imposing order on nature does not necessarily strip it of its essence but rather reveals another dimension of its beauty—one that we might not perceive in its raw, untamed state.  

For my research project, I chose to experiment with the platform LiveCodeLab 2.5.

LiveCodeLab 2.5 is an interactive, web-based programming environment designed for creative coding and live performances. It allows users to create 3D visuals and generate sounds in real time as they type their code. The platform is particularly suited to live coding visuals, as the range of sound samples on offer is rather limited.

LiveCodeLab does, however, offer many samples to work with, which makes it an excellent introduction for younger audiences or those just beginning their journey with live coding.

Unfortunately, while I had been looking forward to experimenting with sound manipulation, I found that this platform works mainly with manipulating and editing visuals. Therefore, I decided to expand my range and start polishing my skills at live coding visuals.

https://drive.google.com/file/d/1YrtH6dgI-Y8YJtzzENxbCvzYfVMTkSlP/view?usp=sharing

What Is Live Coding? 

From the reading, I managed to gain an insightful understanding of what live coding is. From my own perspective, I would say it is a practice of improvisatory live performance through the use of code. Ultimately, we use code to connect ourselves to our artistic desires and visions, and doing it in real time means there is a level of improvisation that live coders indulge in. I therefore agree with Ogborn’s resistance to defining live coding, as a definition gives it a fixed state and fails to acknowledge its flexible nature.

Live coding removes the curtain between the audience and the performer: by projecting the code from the screen, the performer lets the audience connect with them and visualise how the programmer thinks in real time. Thus the act of writing in public adds an element of interactivity, honesty, and even creativity, all of which are pillars of the process of live coding.

This article explores a nuanced perspective on the nature of rhythms and patterns in African American music. However, I found the exploration of technology’s role in music production particularly thought-provoking. It made me realise that the evolution from early drum machines to sophisticated sampling techniques reflects a fascinating interplay between technological advancement and the desire to capture human-like expressiveness. Technology in music can be a tool to sharpen, elevate, or simply ease the process of composition. The examples of artists like Miya Masaoka and Laetitia Sonami, who blur the lines between acoustic and electronic sounds, demonstrate how technology can extend rather than replace human creativity.

I find that there is an ongoing dialogue between technology and human expression in music, one that continues to challenge our understanding of creativity and embodiment whilst also dangerously pushing the boundaries, if those exist at all. For example, electronic music often plays with the tension between human and machine rhythms, creating a continuum between bodily presence and electronic rhythm.