Hi, flock-ers! Or should we say, hackers? Happy finals season — enjoy our documentation:

How We Started, aka the Chaotic Evil:

During the first two weeks of practice, we approached the final project with a drum-circle mindset: for every meeting, we would split into Hydra (two people) and Tidal (two people) teams and improvise, creating something new and enjoying it. When it came time to show build-ups and drops, we struggled, because we had a lot of sounds and visuals going on separately, but not in one sequence. One evening, we created a mashup that later turned into the music for our first build-up and drop, though without cohesive visuals or any other connecting tissue.

How We Proceeded, aka Narrowing the Focus:

A week later, Amina and Shreya were still improvising with Tidal, perfecting the first build-up and drop along with composing the second one, while Dania and Thaís were working on visualizing the first build-up and drop. One moment, Shreya was modifying Amina’s code, and a happy accident happened. That turned into our second drop, with a little magic of chance at 11 PM in the IM Lab.

At the same time, we decided to narrow our focus to only certain sounds and visuals, critically choosing the ones that fit our theme rather than sounding disjointed or chaotic.

The Connecting Tissue, aka Blobby and Xavier:

While working on the visuals, we decided to use simple shapes to keep things as engaging and aesthetically pleasing as possible. We narrowed the visuals from our meetings down to two families, circles and lines, which we later named Blobby and Xavier respectively. The choice of growing circles was inspired by our dominant sounds, ‘em’ and ‘tink’, in the first build-up: when we thought of those sounds, Blobby was the visual we imagined. Xavier was given its form the same way. Dania and Thaís came up with the names.

Initially, we wanted to tell the story of the interaction between Blobby and Xavier, but the sounds we had did not quite match the stories we had in mind. From there, we experimented with different ways to convey a story that included both Blobby and Xavier. Since we already had the sound, we started thinking about which visuals looked best with it, and then it all started coming together almost too perfectly.

We had the sounds and visuals for our two build-ups and drops, but we needed some way to connect them. Because Blobby and Xavier had no relationship to each other, we tried different ways to link them so the composition would look cohesive. This is when we decided to stick with one color scheme throughout the entire composition. We chose a very bright and colorful palette because that’s the vibe we got from the sounds we created. To transition from Blobby to Xavier, Dania came up with the idea of filling the entire screen with circles after the first drop. The circles would then overlap and create what looks like an osc(), which we could slowly shrink until it became a line that could then be modulated to get to Xavier. Although this sounded like a wonderful idea, it was a painful one to execute. But in the end, we managed it, and the result was absolutely lovely <3

Guys, let’s…. aka Aaron’s “SO”:

As we were playing around with the story behind Blobby and Xavier, Shreya and Dania came up with an idea: “Guys, what if we add Aaron’s voice into the performance?” Of course we could not resist, especially since Thaís happened to have a few of his famous lines recorded. This idea quickly became the highlight of our performance and gave us a way to transition between the different build-ups and drops we created while also adding some spice and flavour.

The Midi Magic aka How It All Came Together:

We had the sounds and the visuals, but we still needed some way to connect the two. This is where the midi magic happened. Because the music starts slow, we mapped each sound to its own visual element using midi, so that every sound change is accompanied by a visual change and neither the sonic nor the visual side gets monotonous. After the piece builds up, though, we thought it would be too much to have each sound correspond to a separate visual element, so we grouped the sounds into sections: for example, all the drum-like sounds correspond to one specific visual element, so clubkick and soskick both modulate Blobby instead of one modulating it and the other having some other effect. We also thought it would be better to have the dominant sounds produce the biggest visual changes, something inspired by the various artists’ work we saw in class. We applied the same concept to Xavier. While linking the sounds with the visuals, we also put a lot of thought into what the visual effect of each sound should be. We used midi to map the beat of the sounds to the visual changes and to automate the first half of the visuals, and somehow we ended up using around 25 different midi channels (some real midi magic happened there).
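The grouping idea above can be sketched in plain JavaScript. This is a hedged illustration, not our actual channel map: the channel numbers and the routing table are invented for the example, but the principle is the same — several Tidal sounds share one midi cc channel, so they all drive the same visual element.

```javascript
// Hypothetical routing table: drum-like sounds share one channel (-> Blobby),
// dominant melodic sounds each get their own channel for the biggest changes.
const soundToChannel = {
  clubkick: 12, // drums share channel 12
  soskick: 12,
  em: 13,       // dominant sound, own channel
  tink: 14,
};

// cc plays the role of Hydra's array of incoming control values (0..1).
const cc = new Array(16).fill(0);

function onMidi(sound, value) {
  const ch = soundToChannel[sound];
  if (ch !== undefined) cc[ch] = value;
}

// Both kicks end up moving the same visual parameter:
onMidi("clubkick", 0.8);
console.log(cc[12]); // set by clubkick
onMidi("soskick", 0.3);
console.log(cc[12]); // overwritten by soskick -> one visual element for all drums
```

In the real performance the values arrive from Tidal over midi rather than from function calls, but the one-channel-per-group idea is what keeps 25 channels manageable.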

one_min_pls(), aka the story of our composition:

Once we had our composition ready, it was time for the performance; after all, it is live coding! So we had to decide how we wanted our composition to look on screen and how to call the functions or evaluate the lines, while also having some story. One thing all of us were super keen on was for it to have a story, and not be a random evaluation of lines. After much discussion, we decided to make the composition a story of the composition itself: how it came to life and how we coded it. To do this, we made the performance a sort of conversation between us, where a function name would sometimes correspond to something we would usually say while triggering that specific block of code (e.g. i_hope_this_works() for p5, because it would usually crash) and other times would be named after what we were saying at the time (e.g. i_can_see_so()). Because the function names were based on our conversations, it was really easy (and fun) to follow and remember; all we had to do was respond to each other as we usually would. It was a grape 🍇 bonding experience.
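The naming convention can be sketched like this. The function names (i_hope_this_works, i_can_see_so) are from our performance; the bodies are illustrative placeholders, since the real blocks trigger p5/Hydra/Tidal code.

```javascript
// Each conversational function name wraps one block of performance code.
const triggered = [];

function i_hope_this_works() {
  // what we'd say right before running the p5 block (it would usually crash)
  triggered.push("p5 visuals");
}

function i_can_see_so() {
  // named after what we were saying at the time
  triggered.push("Aaron's 'SO' sample");
}

// During the performance, evaluating a block is just calling the function
// that matches what we'd naturally say to each other:
i_hope_this_works();
i_can_see_so();
console.log(triggered);
```

Because the cue for each block is a sentence we would say anyway, the "score" of the performance is just the conversation itself.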

Reflection (do we include this?) CAUTION: the amount of cheese here is life-threatening:

Our group was very chaotic most of the time, but that somehow seemed to work perfectly for us, and we’re glad we were able to showcase some of this chaos and cohesiveness through our composition. Through the composition, our own personalities are very prominent. Every time we see a qtrigger, we think of Shreya. A clubkick reminds us of Thaís, and the drop reminds us of how we accidentally made our first drop very late at night and couldn’t stop listening to it. The party effect after the drop reminds us of Amina (we’re not sure why?), and every time we hear our mashup we start dying from laughter. At times we could even see the essence of ourselves through this composition. What we liked most about it is that we would usually get excited to work on it; it didn’t feel like a chore but rather like hanging out with friends and jamming.

P.S: 

The debate has finally been settled by vote from the class, hydra-tidal supremacy hehe

– someone who is obviously wrong

No it hasn’t, I yielded because of Shreya’s mouse, nothing else. 

– the voice of reason

Documentation video of our (live) performance:

Documentation video of our (not SOOO live) performance:

Final Hydra script of our performance:

https://github.com/daniaezz/liveCoding/blob/main/aaaaaaaa.js

Final Tidal start-up code of our performance:

https://github.com/daniaezz/liveCoding/blob/main/finalCompositionFunctions.tidal

The evolution of our code can be seen here: https://github.com/ak7588/liveCoding

Someone asked us for the code of the visuals. The bulk of it is p5; the height map is inspired by Shiffman’s terrain tutorial. The height values of the terrain are multiplied by a factor of the distance from the center (hence the flat part in the middle). The squares’ movement is two sine functions, one for their y positions and the other for their rotation. The sun is the shader from the class tutorial.
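The two tricks just mentioned can be sketched in isolation. This is an illustrative sketch with constants approximating the full code below, not the exact implementation: the terrain height is scaled by a factor of the horizontal distance from the center column (so the middle stays flat), and each box follows one sinusoid for bobbing and one for rotation.

```javascript
// Center-distance factor: ~0 near the middle column, large at the edges.
const cols = 84; // w / scl in the full sketch

function distFactor(x) {
  return Math.pow(x - cols / 2, 2) / 20;
}

// Noise-driven height (0..1 mapped to 0..100) scaled by that factor:
function terrainHeight(noiseVal, x) {
  return noiseVal * 100 * distFactor(x);
}

// Box movement: one sinusoid for vertical bobbing, one for rotation,
// both driven by the same "flying" time variable, offset per box by a phase.
function boxPose(flying, phase) {
  return {
    z: 70 + 40 * Math.cos(flying * 7 - phase), // bobbing height
    rotX: -flying * 3,                         // steady tumble
  };
}

console.log(terrainHeight(0.5, cols / 2)); // flat center: 0
console.log(terrainHeight(0.5, 0));        // large at the edge
```

Multiplying height by the squared distance is what carves the flat "road" down the middle of the terrain.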

Hydra is used for feedback, coloring, and the final transition. The transition is a Voronoi-modulated oscillator.

Inspiration for the composition of the visual was drawn from this video.

Here’s the full code. Steps:

  1. run shader
  2. run p5
  3. run for loop
  4. run draw()
  5. run o1
  6. run o3
  7. run render(o3)
  8. tinker with o3 as commented

// glow circle
setFunction({
  name: 'glowCircle',
  type: 'src',
  inputs: [{
      type: 'float',
      name: 'locX',
      default: 0.,
    },
    {
      type: 'float',
      name: 'locY',
      default: 0.,
    },
    {
      type: 'float',
      name: 'glowAmount',
      default: 50.,
    },
    {
      type: 'float',
      name: 'r',
      default: 0.6,
    },
    {
      type: 'float',
      name: 'g',
      default: 0.3,
    },
    {
      type: 'float',
      name: 'b',
      default: 0.5,
    },
  ],
  glsl: `
  vec2 loc = vec2(locX,locY);
  // loc is in screen spaces, but _st is in normalized space
  float dist = glowAmount/distance(_st*resolution, loc);
  return vec4(r*dist,g*dist,b*dist,0.1);
`
})


p5 = new P5({width: window.innerWidth, height:window.innerHeight, mode: 'WEBGL'})
s0.init({src: p5.canvas})
src(s0).out(0)
p5.hide();
scl = 50;
w = 4200;
h = 3000;
//set m as 300
m = 100;
cols = w / scl;
rows = h / scl
flying = 0
terrain = []
spikes = []
toggle = 0
toggle2 = 0
size = 3;
pink = p5.color(255, 34, 240);
blue = p5.color(23, 200, 255);
neon = p5.color(10, 220, 255);
prv = [0,0,0];
ctr = [0,0,0];

p5.remove()

//make electro sound go up with the other one
for (var x = 0; x < cols; x++) {
    terrain[x] = [];
  	spikes[x] = [];
    for (var y = 0; y < rows; y++) {
      terrain[x][y] = 0; //specify a default value for now
      spikes[x][y] = 0;
    }
  }

p5.draw = ()=> {
  blue = p5.color(1, 6, 40);
  m = 100;
  size = p5.random(2,5);
  fade = 0.8;
  //p5.lights();
  p5.background(blue);
  p5.translate(0, 300, -100);
  p5.rotateX(42*p5.PI/72);
  //p5.rotateZ(time*p5.PI / 3);
  //p5.fill(255*p5.noise(1), 190*p5.noise(1), 150 + 200*p5.noise(1), 255);
  p5.translate(-w/2, -h/2);
  p5.noStroke();
  //p5.stroke(255, 34, 240);
  //GRID
  for (var i = 0; i < cols; i++)
  {
    p5.line(i*scl, 0, i*scl, h);
  }
  for (var i = 0; i < rows; i++)
  {
    p5.line(0, i*scl, w, i*scl);
  }
  //p5.noStroke();
  flying -= 0.03;
  var yoff = flying;
  for (var y = 0; y < rows; y++) {
    var xoff = 0;
    for (var x = 0; x < cols; x++) {
      terrain[x][y] = p5.map(p5.noise(xoff, yoff), 0, 1, 0, m) + spikes[x][y];
      spikes[x][y] *= fade;
      //
      xoff += 0.03;
    }
    yoff += 0.04;
  }
  //big blocks
  let cn = 12;
  if (cc[cn] != toggle){
    toggle = cc[cn];
    x = p5.int(p5.random(0.4, 0.6)*cols);
    y = p5.int(p5.random(1)*rows);
    x = p5.constrain(x, 1, cols-size-2);
    y = p5.constrain(y, 1, rows-size-2);
    //spike it up
    for(let i = 1; i < size; i++)
    {
      for(let j =1; j< size; j++)
      {
        spikes[x+i][y] = ccActual[cn]*55;
        spikes[x+i][y+j] = ccActual[cn]*55;
        spikes[x][y+j] = ccActual[cn]*55;
      }
    }
  }
  //sharp spikes
  let cn2 = 10;
  if (cc[cn2] != toggle2){
    toggle2 = cc[cn2];
    x = p5.int(p5.random(0.4, 0.6)*cols);
    y = p5.int(p5.random(1)*rows);
    //spike it up
    spikes[x][y] = 105*ccActual[cn2];
  }
  //terrain
  for (var y = 0; y < rows - 1; y++) {
    //left side
    p5.fill(blue);
    //p5.noFill();
    //p5.stroke(pink);
    p5.noStroke();
    p5.beginShape(p5.TRIANGLE_STRIP);
    for (var x = 0; x < cols-1; x++) {
      let dist = p5.pow(x-cols/2,2)/20;
      p5.vertex(x * scl, y * scl, terrain[x][y]*dist);
      p5.vertex(x * scl, (y + 1) * scl, terrain[x][y + 1]*dist);
    }
  	p5.endShape();
    for (var x = 0; x < cols-1; x++) {
      p5.strokeWeight(10);
      p5.stroke(pink);
      if (x%10==0)
      {
        p5.stroke(neon);
      }
      let dist = p5.pow(x-cols/2,2)/20;
      p5.line(x*scl, y*scl, terrain[x][y]*dist, x*scl, (y+1)*scl, terrain[x][y+1]*dist);
      //p5.line(x*scl, y*scl, terrain[x][y]*dist, (x+1)*scl, (y)*scl, terrain[x+1][y]*dist);
    }
  }
  //translate
  p5.strokeWeight(5);
  p5.stroke(neon);
  p5.fill(pink);
  //central box
  p5.push();
  p5.translate(w/2,2300, 70 + 40*p5.cos(flying*7-3));
  p5.rotateX(-flying*3);
  p5.box(50 + ccActual[13]*0);
  prv[0]=cc[12];
  p5.pop();
  //box left
  p5.push();
  p5.strokeWeight(7);
  p5.translate(w/2-100,1700, 100 + 60*p5.cos(flying*7-1));
  p5.rotateX(-flying*3 - 1);
  p5.box(50 + ccActual[13]*0);
  prv[1]=cc[11];
  p5.pop();
  //box right
  p5.strokeWeight(10);
  p5.push();
  p5.translate(w/2-60,100, 80 + 60*p5.cos(flying*7 - 6));
  p5.rotateX(-flying*3);
  p5.box(50 + ccActual[13]*0);
  p5.pop();
  //box left2
  //box center
}

//o0
src(s0).out(o0)

//MY GPU CANTTT
osc(1,2,0).color(10, 10,10).brightness(-100).modulate(noise()).mask(src(o0).sub(osc(0.9, 0.4).modulate(voronoi(20,10), 0.9))).add(src(o0)).invert().out(o1)

//final output [MAIN VISUAL]
//option1: modulate by o0, become crystalline/transparent: v pretty
//option2: blend multiple times o3
//option3: switch source to o1, this is the main function then blend with o3.
src(o0).layer(glowCircle(31*p5.width/70, 7*p5.height/24, ()=> ccActual[13]*100+2500, ()=>0.3, 0.1 ,0.06)).blend(o3).out(o3)

render(o3)

//black screen
//track 1 => automatically switches
//build up => bring the sun out || sun moves
//drop the beat => sun automatically detects that
//on drop => o1
//modulate o(0) blend o3
//o1 source then hush

hush()

Group Members: Eadin, Omar, Sarah

For our drum circle project, we started by getting together and deciding on the kind of feel and impact we were going for. We all agreed that an ambient sound matched with a retro look was our goal. Then we went through our previous projects and found sounds that we already liked or felt would suit what we wanted to do. Finally, we experimented with them and tried new things.

For our workflow, we didn’t particularly divide roles. Instead, we would meet in our usual classroom, jam, and give tasks to each other as we worked. But in general, Sarah focused more on audio and compositional structure, Omar did more with visuals, and Eadin looked more into audio and midi interaction. 

Working in a group this time has been an exciting experience, as it felt like we had more possibilities with three brains focusing and trying to put something together. It was also helpful to learn from each other and see how each of us tackled issues that came up in the code. Having more than one person genuinely helped with debugging: we tend to overlook the problems in our own code, but with flok we had to resolve everything to make sure we had something on the screen. However, improvising from scratch remains a bit of a challenge. We all found ourselves interested in kicking off our performance with something already complex and building on top of it every time we met, which still does not leave much room for improvisation and chance. Hopefully, through the class-wide drum circle, we will see how others jump in and contribute, getting some examples of unplanned improv.

For next week’s assignment, although we already kind of developed a build-up this week, we are thinking of trying something new and playing with new sounds, maybe bringing in our own vocal samples!

Here’s a snippet of the composition code we attempt to live code with:

d8 $ qtrigger 8 $ seqPLoop [
--
(0,9.5, off 0.125 (# squiz 4) $ fast 1 $  s "sid" >| note (scale "minor" ("<[3,5,7] [4,2]>(5,8)"+"<2! 3 4>"+"f")) # gain (range 1.2 1.3 perlin) # room 0.5 # djf (range 0.3 0.4 perlin) #hold 0.1 # size 0.9),
(4,9, loopAt 1 $ sound "breaks125:1(5,8)" # legato 7 # room 0.25 # vowel "o"),
--break // everything off to build tension
--- come back faster / build up-ish
(12,13, s "gab:9(40,8)" # gain (range 0.5 1 saw) # speed (range 0.8 2 saw) # cut 1),
(11,12, s "cp*4" # room 0.6),
(12,13, s "cp*16" # room 0.6),
(13,24,off 0.125 (# squiz 4) $ fast 1 $  s "sid" >| note (scale "minor" ("<[3,5,7] [4,2]>(5,8)"+"<2! 3 4>"+"g")) # gain (range 1.2 1.3 perlin) # room 0.5 # djf (range 0.3 0.4 perlin) #hold 0.1 # size 0.9),
(13,23 , loopAt 1 $ sound "breaks125:1(5,8)" # legato 7 # room 0.25 # gain 0.8),
--still needs a smoother breakdown/slow down--
(24,30, off 0.125 (# squiz 4) $ fast 1 $  s "sid" >| note (scale "minor" ("<[3,5,7] [4,2]>"+"<2! 3 4>"+"f")) # gain (range 1.2 1.3 perlin) # room 0.5 # djf (range 0.3 0.4 perlin) #hold 0.1 # size 0.9)
]

Our project includes multiple visual and audio experiences generated through Tidal, Hydra, and midi communication via a flok server. As outlined in class, the heart of our project is improvisation and active listening to what is happening in the process.

This is how we worked together: we met multiple times and IMPROVISED. For our first session, we started from scratch and just improvised, seeing where it took us. We found a jamming style that worked for our group: splitting into groups of two in the beginning and then mixing it up towards the middle and end. This way, we could carefully listen and observe what was happening and adjust accordingly. Shreya and Dania were on Hydra, while Amina and Thaís were on Tidal. Each of us created simple shapes and beats, which were then used to build up a more complex composition. Once midi was added and the audio-visual experience was past the beginning stage, each of us had the freedom to choose which part to modify, i.e. one could switch to Hydra, Tidal, or continue building on the same part they were working on. We also helped each other with code or parts that were confusing and actively listened to each other to decide what to add and when to trigger it to make a cohesive composition. Each subsequent time we met, we used our past code to experiment with and improvise on (treating it as another team’s code that we would be working with). Every time we practiced, we created a unique composition.

Below are a few pictures from our practice sessions:

The complete set of our works can be found in this repository: https://github.com/ak7588/liveCoding

Our favorite jamming session was the following one, which we will also be presenting in class:

P.S. – This was one of the highlights of our semester (<3<3)!!!!!!! Live coding with others is such an amazing bonding experience: we went in as individuals and came out as a performance group.

And we also came to the conclusion that Tidal-Hydra >>>> Hydra-Tidal. The world came back to its senses. 😉

Group Members: Aalya, Alia, Xinyue, Yeji

Inspiration

Each of us brought a few visuals and sound samples together. Then we tried to adjust the parameters and midi so that the theme of the visuals and sounds matched with each other.
Who did what:

  • Tidal: Aalya and Alia
  • Hydra: Xinyue and Yeji

Project Progress

Tidal Process

  • First, we played around with the different audio samples in the “dirt samples” library and tested out code from the default samples table in the documentation on the official Tidal website.
  • We then mixed and matched several combinations and eliminated everything that didn’t sound as appealing as we wanted.
  • As Yeji and Xinyue were working on the visuals in the same flok, we listened to all of the various soundtracks while taking the visuals into account, and were able to narrow all our ideas down to one singular theme.
  • We had a lot of samples that we thought worked well together, but we were lacking structure in our track, so that was the next step in the project.
  • We broke the composition down into four sections: Intro, Phase 1, Phase 2, and Ending.
  • After that, it was just about layering the audio in a way that sounded appealing and transitioned smoothly. We tried to figure out which sounds would be used when transitioning into a new section and which parts would be silenced. It was all about finding the balance: a matter of putting together tracks that worked well with the visuals and complemented them.


Hydra Process

  • We started with a spinning black-and-white image bounded by a circle shape. Our inspiration came from old-fashioned black-and-white movies.
  • We modified the small circle by adding colorama(), which introduces noise.
  • .luma() introduced a cartoon effect to the circle and reduced the airy outlines.
  • The three other outputs depended on o0 as the source.
  • We added an oscillation effect on top of the original source and added cc to create a slowly moving oscillating effect that goes with the music.
  • Then the scale was reduced for a zoom-out effect as the intensity of the music built up.
  • For the final output, the color was intensified, along with another scale-down effect, before introducing a loophole effect that transitioned back into o0 for the ending scene.

Evaluation and Challenges

  • Some of the challenges we encountered were coordinating a time that worked for everyone and making sure our ideas were communicated clearly so that everyone stayed on the same page.
  • Another issue was that the cc values were not reflected properly, or in the same way for everyone, which resulted in inconsistent visual outputs that weren’t in sync with the audio.
  • Something that worked very well was the distribution of work: every part of the project was taken care of, so we had balanced visual and sound elements.
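One hedged way to tame inconsistent cc values like the ones described above (a sketch of a general technique, not what the group actually ran): normalize raw midi values (0..127) to 0..1 and low-pass them, so spiky or dropped messages produce less visual jitter and everyone's output converges at a similar rate.

```javascript
// Normalize a raw MIDI control value (0..127) to the 0..1 range Hydra expects.
function normalize(raw) {
  return Math.min(Math.max(raw, 0), 127) / 127;
}

// Exponential smoothing: each new reading moves the stored value only part
// of the way toward the target, damping jitter from inconsistent cc streams.
function makeSmoother(alpha) {
  let value = 0;
  return (raw) => {
    value += alpha * (normalize(raw) - value);
    return value;
  };
}

const smooth = makeSmoother(0.5);
console.log(smooth(127)); // 0.5: halfway toward 1
console.log(smooth(127)); // 0.75: converging toward 1
```

A visual could then read the smoothed value instead of the raw cc, e.g. as the argument to a scale or oscillation parameter.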

This is the recorded video of our project:

Louis:

  I basically took charge of the music part of the drum circle project. I knew from experience that GarageBand visualizes the process of building a drum pattern well, which helps the team make a good beat more easily, so I chose to use it. The process was really fun, and I enjoyed listening to my work as I made it.

Eventually, I chose one of the works: 

d2 $ s "hh*8" # gain 1.2

d3 $ fast 2 $ s "feel [~ feel] feel [feel ~]" # gain 1.2

d4 $ fast 4 $ s "~!4 808:3 ~!3"

d5 $ fast 2 $ whenmod 4 2 ( # s "hardcore:11 ~ hardcore:11/2 ~!4") $ fast 4 $ s "[hardcore:11 ~!3]!2" # gain 1.5

  These sounds remind me of the game music in arcades back in the old days. After letting all of the members of our group listen to the samples, Shengyang and I started to recreate them in TidalCycles. Thanks to the visualized beat pattern, it was much easier for us to tell what the rhythm was like, which helped us eventually make the music.

Debbie:

  For the Drum Circle assignment, Louis, Shengyang, and I split the work into three parts: Tidal, Hydra, and P5. I was responsible for creating the P5 visuals, which we decided to use as the base visual that any other Hydra visuals would be added onto.

  I approached this by creating a very simple visual of overlapping circles. Then, I added some noise to each circle so they wouldn’t be static. Finally, in line with the colour scheme (pinks, purples, light greens, and blues), I used the random() function to add variation to the colours of the circles:
  You can find the schematic gif here

Shengyang:

  Based on the p5 visuals made by Debbie, I made some Hydra effects to make the dot matrix flow.

The code for each effect is below:

dot matrix 1

//swirls
src(s1).modulate(noise(()=>cc[16]*10,2).luma(0.1)).out()

//deeper colour
src(s1).modulate(noise().pixelate()).colorama(1).out()

//70s vibe
src(o1).modulatePixelate(noise(()=>cc[17],0.1)).out()

//falling circles
src(s1).diff(noise(1, .5).mult(osc(10, 0, 10)).pixelate(.09)).luma(0.9).out()

//ending visual
src(s1).blend(src(o0).diff(s0).scale(.99),1.1).modulatePixelate(noise(2,0.01)
    .pixelate(16,16),1024).out()

I found this reading very fascinating and relatable. It reminded me of this class I took in New York on Dance, Theatre, and Performance and we practiced a lot of Kinetic Awareness in it. Oliveros’s list reminded me of the lists we had in class:

  1. Feel your breath. Dance to it. With eyes open and closed.
  2. Hum your name. Make it a ritual. React to it.
  3. Hear your thoughts. That is the music you will perform to.
  4. Stand in a circle facing outward. Each of you has to sit, but no two together. Listen to each other. No talking.

The idea of these practices was to “tune our mind and body,” as Oliveros says. We learned in class to listen to ourselves and our surroundings, and to become more aware of and sensitive to the little things that go unnoticed. My teacher said, “You have to listen, listen carefully before you perform; become aware.” It was like practicing mindfulness, and it was therapeutic! So yes, I agree with Oliveros’s saying, “Listening is healing.” She also says, “Listening is directing attention to what is heard, gathering meaning, interpreting and deciding on action.” I jumped when I read this. It is so true; I have tried this. This is what we practiced in our class in New York with those small exercises. We did not just listen; we heard, gathered meaning, and interpreted our surroundings and ourselves before we performed.

What I did not realize was how opening the inner body and mind to the outer world could lead to activism. I was really intrigued by the line “personal is political.” Once you know yourself and your surroundings, you can speak up about what is right and wrong and take a stand. Reading this article was inspiring!