https://drive.google.com/file/d/1ZPple_A4kVd4Ttdrza2H5VuWWIC72h00/view?usp=sharing

Visuals (Linh & Ziya)

We wanted to start with a simple pattern: a circle. Beginning from something simple, we added more layers on top of it. Starting with one circle, we made simple changes such as scale and modulateScale. We also wanted to add feedback to the visual, so we layered the output buffer o0 back onto the screen. Finally, because two people worked on the visuals, we each created different visuals and used mult to blend them into the final image. Ziya decided to work with the ccn and ccv values to add a more dynamic feel to the performance, but overall our aim was to synthesize all our parts into one performance.

shape(200, 0.5, 1.5)
  .scale(0.5,0.5)
  .color([0.5, 2].smooth(1),()=>cc[1], ()=>cc[0])
  .repeat(2,2)
  .modulateScale(osc(3,0.5), -0.6)
  .add(o0, 0.5)
  .scale(0.9)
  .out()

osc(5, 1, 90)
.kaleid(99)
.modulate(noise(1.9, 0.1))
.color(0.8,0.9,0.9)
.brightness(0.5)
.out(o1)

render()

render(o2)

voronoi(100, 0.15)
  .modulateScale(osc(8).rotate(Math.sin(time)), .5)
  .thresh(0.8)
  .modulateRotate(osc(7), 0.4)
  .thresh(0.)
  .diff(src(o0).scale(1.8))
  .modulateScale(osc(2).modulateRotate(o0, 0.74))
  .diff(src(o0).rotate([-0.012, 0.01, -0.002, 0]).scrollY(0,[-1/19980, 0].fast(0.7)))
  .brightness([-0.02, -0.17].smooth().fast(0.5))
  .out()
hush()
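The mult step described above doesn't appear in the snippet we saved, so here is a minimal, hypothetical Hydra sketch of the idea (the buffer assignments and patterns are illustrative, not our actual patch):

```javascript
// Hypothetical sketch: two performers render to separate buffers,
// then the buffers are multiplied together into one combined output.
osc(10, 0.1, 1).out(o0)        // first performer's layer
shape(4, 0.4).out(o1)          // second performer's layer
src(o0).mult(src(o1)).out(o2)  // blend by multiplication
render(o2)
```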

Sound (Rashed & Luke)

Rashed:

When I started working on the audio, I wanted to use a sample. I really like music that starts with a very angelic, harmonic vibe and then shifts into chaos. I was listening to a song the other day called GOLDWING, and I decided to sample the first couple of seconds. I then started experimenting with legato along with chop and striate. I decided to play with the vocals instead of having my main sound be a drum or bass sound (inspired by your washing machine). At first, improvising was really hard; live coding felt like a really fixed and concrete method, which is why I really disliked it at first. However, after meetings with my groupmates, I realized that some things are my strong suits and some things are not, which is why I decided to go with audio rather than visuals. I found it really entertaining to experiment with the audio as we were recording. Even for the parts we had previously written, I wanted to experiment with what I could do with what I already had, to the point where I found myself digging through my memories for random samples I had used in previous projects and implementing them without taking a second to think whether they fit the vibe we were going for.
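For anyone reading along, the difference between the two functions (as I understand it): chop cuts each sample event into consecutive slices played back to back, while striate interleaves slices of the sample across successive events. A hypothetical comparison, not from our actual set:

```haskell
-- chop 4: each "gold" event is cut into 4 slices, played in order
d1 $ chop 4 $ s "gold"

-- striate 4: across four events, each event plays the next quarter of the sample
d2 $ striate 4 $ s "gold*4"
```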

Also! I decided to add another sample from one of the best songs ever created, Headlock (hence the sample name) by Imogen Heap, because I found one of the instruments she used in the song very interesting and wanted to somehow incorporate it into the performance. I decided to do this at the last second, but I really pushed for it and thank goodness everyone agreed. Very fun indeed.

Luke:

I was stuck in an endless loop of capstone and all the materials related to it, so I wasn’t able to meet with my groupmates. The music on my end has no script; I instead had to improvise my part during the drum circle performance. I followed Rashed’s cue and tried to figure out how to come in so that it matched the visuals and added to the music; his music led the way for me to think on the spot about how I could accompany it in a real-time environment.

once $ jux rev $ s "gold"

d1 $ s "~ ~ ~ tink*2"
d8 $ ccv "~ ~ ~ 120 60" # ccn "1" # s "midi"
d9 $ ccv "124 0 124 0 120" # ccn "0" # s "midi"
d10 $ ccv ((segment 128 (range 127 0 saw))) # ccn "0" # s "midi"
d1 $ s "tink" >| n (scale "major" ("[4*2 3*4 2*2 1*2]")+"1") 

d2 $ s "bass1" # room 0.9 # legato 1 # gain 1

d3 $ chop 2 $ s "gold*2" # legato 0.4

d4 $ jux rev $ striate 6 $ s "gold ~ ~ bass:2*3" # cut 1

d2 $ s "~ ~ bass1*2" # room 0.9 # legato 1 # gain 1

hush

d3 silence 

d4 silence 
 
d1 silence 

d4 $ jux (striate 8) $ jux rev $ s "gold*2"  # legato 1
d9 $ ccv "120 20" # ccn "0" # s "midi"


d5 $ jux rev $ "hh*12 ~ hh*4 ~ hh*8"

d5 $ "~ ~ ~ hh*2" # room 0.2 # legato 0.5 # gain 1.5

hush
d4 silence 
d6 silence
d7 silence
d5 silence 

d6 $ s "tink" >| n (scale "minor" ("[4 3 2 4 1]")+"0.8")
once $ s "lock"
d7 $ jux rev $ chop 6 $ s "lock*4" # legato 1 


hush

Originally, I had gone for one of the visuals from the website that was shared with us. However, Pulsar crashed towards the end, so I decided to use a simple visual I had made during my Intro to IM class. It’s a little over 2 mins (sorry :/ )

https://youtu.be/XYRRBaNS35w

Here is my Tidal code!

d2 $ struct "<t(3,8) t(5,8)>" $ s "casio" # n (run 8)

d4 $ struct "<t(3,8) t(5,8)>" $ ccv "<169 109 120 127>" 
  # ccn "0" 
  # s "midi"


d1 $ n ("e4 d c a b a b c7" |+ "<2 2 7 12>") # s "[superpiano, cp, bd, arpy, bd]"  # room 1

d3 $ struct "<t(3,8) t(5,8)>" $ ccv "<127 115 66 107>" 
  # ccn "0" 
  # s "midi"

d2 silence


hush

Hydra:

let p5 = new P5(); 
s0.init({ src: p5.canvas }); 
src(s0).out(); 

p5.hide(); 

let bubbles = [];

p5.draw = () => {
  if (bubbles.length === 0) {
    p5.createCanvas(window.innerWidth, window.innerHeight);
    for (let i = 0; i < 30; i++) {
      bubbles.push(new Bubble(p5.random(p5.width), p5.random(p5.height), p5.random(20, 100)));
    }
  }
  
  p5.background(137, 207, 240, 50);

  for (let i = 0; i < bubbles.length; i++) {
    bubbles[i].move();
    bubbles[i].display();
  }

  if (p5.frameCount % 15 === 0) {
    bubbles.push(new Bubble(p5.random(p5.width), p5.height, p5.random(20, 100)));
  }
};

class Bubble {
  constructor(x, y, r) {
    this.x = x;
    this.y = y;
    this.r = r;
    this.speed = p5.map(this.r, 20, 100, 2, 0.5);
    this.color = p5.color(p5.random(100, 255), p5.random(100, 255), p5.random(255), p5.random(100, 200));
  }

  move() {
    this.y -= this.speed;
    this.x += p5.random(-1, 1);
  }

  display() {
    p5.fill(this.color);
    p5.noStroke();
    p5.ellipse(this.x, this.y, this.r);
  }
}

src(s0)
    .mult(osc(2, () => cc[0] * 2, 3))
    .modulate(noise(() => cc[1] * 0.5))  
    .rotate( () => cc[0] * 0.5 )        
    .colorama(() => cc[0] * 1)       
    .out();

src(o2)
  .modulate(src(o1)
  .modulate(noise(() => cc[1] * 0.05))  
  .rotate( () => cc[2] * 0.2 ))
  .colorama(() => cc[0] * 2)       
  .blend(src(o0))
  .out(o2)

  
render(o2)

hush()

Apologies for the late submission, it slipped my mind to post one, though I had recorded my video prior to the class.

All in all, I had no idea what to expect with the composition. I had no idea what my personal stylistic choices were, so I struggled at the start with a concept. Therefore, I simply began by crafting a funky, upbeat, solid rhythm. I took my time becoming more familiar with the visuals, so I spent quite a bit of time experimenting with them, but not many of my results felt like they aligned with the direction my piece was heading. Then I thought to bring in a personal sound sample to spice things up. As a result, I went with the first thing that came to mind: Pingu. I included the Noot Noot sample as I find Pingu to be the perfect embodiment of chaos but also a playful character (and also one of my favourite characters to exist).

I wanted to ensure the visuals were in sync with the sound, and at the start I struggled, especially with finding the right sort of ccv values; however, through a brute-force iterative trial-and-error session, I found a neat balance. I had started with a more subtle approach, but I found it was quite challenging to recognise, and I was worried that, given the time limit during the demos, I would not be able to execute it properly. Therefore, I went for bolder visuals with simpler beats. I noted that in class you said the sync between the visuals and the audio was not as evident, so I hope from this video you are able to find a more distinguishable link between them.
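For context on how the sync works: Tidal's ccv values range from 0 to 127, while Hydra's cc[] array receives them normalized to 0–1 (so ccv 127 arrives as roughly cc[n] = 1.0). A tiny sketch of that mapping (illustrative only; the actual conversion happens in the MIDI bridge between Tidal and Hydra):

```javascript
// Illustrative only: how a MIDI CC value sent from Tidal (ccv, 0-127)
// typically appears in Hydra's cc[] array (normalized to 0.0-1.0).
function normalizeCC(ccv) {
  return ccv / 127;
}

console.log(normalizeCC(127)); // 1
console.log(normalizeCC(0));   // 0
```

This is why the Hydra patches below multiply cc[0] by small constants rather than dividing it down.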

From the 0:27 mark, I introduce a new melody, and I wanted to represent that with squiggly lines to indicate its playful nature. This is then followed by even funkier, more playful beats such as casio and blip. Once I had found an interesting synchrony between casio and blip, I understood how I wanted to proceed, as this made it easy for me to create something that reflects lightheartedness with a tinge of a spirited and lively approach. However, as I had Pingu in my vision, around the end of my video (4:00) I began to truly mess with the visuals and create something quite disorderly in nature, despite it being in sync with my sound.

I hope that you enjoyed!

Here is my code! (It’s a bit changed from the video since it is from the class demo)

Tidal

--- FINAL CODE

hush
d1 $ s "{808bd:5(3,4) 808sd:2(2,6)} " # gain 2 # room 0.3

d1 silence
d2 $ struct "{t(3,4) t(2,6) t(2,4)}" $ ccv ((segment 128 (range 127 0 saw))) # ccn "0" # s "midi"
hush

d3 $ fast 2 $ s "pluck" <| n (run 4) # gain 1 # krush 2
d2 $ ccv "0 20 64 127" # ccn "0" # s "midi"

d4 $ s "glasstap" <| n (run 4) # gain 1.5

d5 $ slow 2 $ s "arpy" <| up "c d e f g a b c6" # gain 1.5
d2 $ ccv " 9 19 36 99 80 87 45 100" # ccn "0" # s "midi"

d6  $ fast 2 $ s "casio" <| n (run 4) # gain 2
d3 $ qtrigger $ filterWhen (>=0) $ seqP [
  (0, 1, s "blip:1*4"),
  (1,2, s "blip:1*8"),
  (2,3, s "blip:1*12"),
  (3,4, s "blip:1*16")
] # room 0.3

d4 silence
hush
nooty = once $ sound "nootnoot:Noot" # squiz 1 # up "-2" # room 1.2 # krush 2
nooty
-- PART 2

d5 $ s "blip"  <| n (run 4)
  # krush 3
  # gain 1

d2 $ ccv "30 80 120 60" # ccn "0" # s "midi"
d6 silence

hush

d6 $ fast 2 $ s "control" <| n (run 2)
d7 $ fast 2 $ s "casio" <| n (run 4) # gain 0.9



d8 $ s "{arpy:5(3,4) 808sd:2(2,4)}" # gain 1

d2 $ struct "{t(3,4) t(2,4) t(2,4)}" $ ccv ((segment 128 (range 127 0 saw))) # ccn "0" # s "midi"
nootynooty = once $ sound "nootnoot:Noot" # legato 0.2 # squiz 1 # up "-2" # room 1.2 # krush 2

d6 silence
d10 $ qtrigger $ filterWhen (>=0) $ seqP [
  (0, 1, s "control:1*4"),
  (1,2, s "control:1*8"),
  (2,3, s "control:1*12"),
  (3,4, s "control:1*16")
] # room 0.3
nooty

hush

Hydra


//SHAPE ONE 

osc(20, 0.4, 1)
  .color(0.3, 1.2, 1.2)
  .rotate(() => cc[0] * 0.9 * 0.8)
  .kaleid(10)
  .modulateRotate(noise(() => (cc[0]) * 0.7, 0.6))
  .rotate(() => cc[0] * 1.1 * 1.8)
  .kaleid(30)
  .modulateRotate(noise(() => (cc[0]) * 0.9, 0.6))
  .out()
hush()

//SHAPE TWO 
osc(20, 0.3, 3)
  .color(1.3, 1.8, 2.9)
  .modulate(noise(() => (cc[0] + cc[1]) * 3, 1.4))
  .layer(
    osc(70, 0, 1)
      .luma(0.5, 0.1)
      .kaleid(10)
      .modulate(noise(() => (cc[0] + cc[1]) * 2, 0.4))
  )
  .out(o0)
hush()
//SHAPE THREE
shape(10,0.5).scale(1,1,2).repeat(30,9).modulate(noise(() => (cc[0] + cc[1]) * 9, 0.9)).out()

solid().out()
//SHAPE IV
osc(15, 2.6, 1.8)
  .color(1.2, 1.4, 1.2)
  .rotate(() => cc[0] * 0.9 * 0.5)
  .kaleid(20)
  .modulateRotate(noise(() => (cc[0]) * 1.2))
.out()

hush()
//SHAPE V
osc(10, 30, 10)
.kaleid(99)
.modulate(noise(() => (cc[0] + cc[1]) * 1.9, 0.2))
.out(o0)

// noot
hush()

“Nature is disorder. I like to use nature to create order and show another side of it. I like to denature.” Kurokawa bends over the iMac, clicks through examples of his work on a hard drive, and digs out a new concert piece that uses NASA topographic data to generate a video rendering of the Earth’s surface. Peaks and troughs dance over a geometric chunk on the black screen, light years from the cabbie’s satnav. “The surface is abstract, but inside it’s governed by natural laws,” he says.

I find Kurokawa’s perspective on nature as disorder, and his desire to “denature” it, very interesting, particularly how it resonates with the tension between chaos and order that exists in both art and science. His use of natural data, such as NASA’s topographic information, to create structured, arguably even surreal presentations of the Earth highlights the duality between the organic and the artificial. It suggests that while nature may appear unpredictable, it operates within a framework of fundamental laws that can be harnessed and reshaped through human interpretation.

Kurokawa ultimately challenges our perception of what is ‘natural’ and what is ‘artificial.’ His work demonstrates that the act of imposing order on nature does not necessarily strip it of its essence but rather reveals another dimension of its beauty—one that we might not perceive in its raw, untamed state.  

For my research project, I chose to experiment with the platform LiveCodeLab 2.5.

LiveCodeLab 2.5 is an interactive, web-based programming environment designed for creative coding and live performances. It allows users to create 3D visuals and generate sounds in real time as they type their code. The platform is particularly suited to live coding visuals, as the range of sound samples it offers is limited.

LiveCodeLab does, however, have many examples to work with, which makes it an excellent introduction for younger audiences or those beginning their journey with live coding.
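To give a flavour of the environment, here is a short sketch in LiveCodeLab's own language (written from memory, so the exact syntax may differ slightly; the editor runs whatever you type immediately):

```
// a spinning stack of boxes; rotate with no arguments
// animates automatically over time in LiveCodeLab
background black
scale 0.5
10 times
	rotate
	move 0.5
	box
```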

Unfortunately, while I had been looking forward to experimenting with sound manipulation, I found that this platform works mainly with manipulating and editing visuals. Therefore, I decided to expand my focus and start polishing my skills at live coding visuals.

https://drive.google.com/file/d/1YrtH6dgI-Y8YJtzzENxbCvzYfVMTkSlP/view?usp=sharing

What Is Live Coding? 

From the reading, I managed to gain an insightful understanding of what live coding is. From my own perspective, I would claim that it is a practice of improvisatory live performance through the use of code. Ultimately, we use code to connect ourselves to our artistic desires and visions, and doing it in real time means that there is a level of improvisation that live coders indulge in. Therefore, I agree with Ogborn’s resistance to defining live coding, as a fixed definition does not acknowledge its flexible nature.

Live coding removes the curtain between the audience and the performer: by projecting the code onto the screen, the performer lets the audience connect with them, visualising how the programmer thinks in real time. Thus the act of writing in public adds an element of interactivity, honesty, and even creativity, all of which are pillars of the process of live coding.