Looking back at our final performance composition, we’re not entirely sure how this thing came together! As with our previous projects, we would bring sounds and visuals we had already played around with into our work sessions and try to make something out of them. Perhaps the defining moment of this performance was the buildup exercise, since that is when the creepy, scratchy visuals first came together with the glowing orb from our drum circle project.
That scratchy, chalkboard-like visual became the starting point for this composition: it inspired us to make something creepy and menacing while keeping a certain grace and elegance.

Sounds
Eadin and Sarah mainly worked together on sound. Although the visuals largely dictated the creepy vibe, we wanted to keep up our interest in ambient sounds, like what you hear at the beginning of the piece. Later on, instead of staying scratchy and heavy, the piece brings in some piano melodies (some of which we spent ages fitting by looking through different scales and picking out specific notes). A lot of the sounds, however, came about through experimentation and live editing. Finally, we made sure the piece also included a more positive, colorful sound to make it feel more complete and wholesome.

Code Snippets

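-- ambient intro: the same low pad pattern on two synths (supercomparator and superfm), plus a CC pattern on ccn 0 driving the visuals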
ambient = stack[
    slow 4 $ n (scale "<minor>" $ "{<g> <fs gs>}" - 5)  # s "supercomparator"  #lpf 500 #shape 0.3 #gain 1.1,
    slow 4 $ n (scale "<minor>" $ "{<g> <fs gs>}" - 5)  # s "superfm"  #lpf 500 #shape 0.3 #gain 1.1,
    slow 4 $ ccv "0 10" # ccn 0 # s "midi"
    ]


-- piano buildup

first_piano_ever = every 2 (fast 2) $ fast 2 $ stack [n (scale "<minor>" $ "<f d <e a> cs>") # s "superpiano" #lpf 600 #shape 0.3 #room 0.3 #sustain 3, ccv "<20 32 <23 44 22 25 50> 60>" # ccn 2 # s "midi"]
first_piano_ever_v1 = every 2 (fast 2) $ fast 2 $ stack [n (scale "<minor>" $ "<f d <e a e*2 a> cs>(2,9)") # s "superpiano" #lpf 600 #shape 0.3 #room 0.3 #sustain 3, fast 2 $ ccv "<20 32 <23 44 22 25 50> 60>(2,9)" # ccn 2 # s "midi"]
first_piano_ever_v2 = every 2 (fast 2) $ fast 2 $ stack [n (scale "<minor>" $ "<f d <e a e*2 a> cs>(5,9)") # s "superpiano" #lpf 900 #shape 0.2 #room 0.2 #sustain 1 #gain 1.1, fast 5 $ ccv "<20 32 <23 44 25*2 50> 60>(5,9)" # ccn 2 # s "midi"]
first_piano_ever_v3 = fast 4 $ stack [n (scale "<minor>" $ "<f d e cs>(4,8)") # s "superpiano" #lpf 900 #shape 0.2 #room 0.2 #sustain 1 #gain 1.1 , fast 4 $ ccv "<20 32 23 60>(4,8)" # ccn 2 # s "midi"]
first_piano_ever_v4 = fast 8 $ stack [n (scale "<minor>" $ "<f d e cs>(2,8)") # s "superpiano" #lpf 900 #shape 0.2 #room 0.2 #sustain 1 #gain 1.3, fast 4 $ ccv "<20 32 23 60>(2,8)" # ccn 2 # s "midi"]


-- drum buildup
d2 $ qtrigger 2 $ seqP [
(0, 4, fast 4 $ ccv "0" # ccn 0 # s "midi"),
(0, 16, fast 1 $ "~ bd [bd ~] bd*2"),
(2, 16, fast 1 $ "[bd, hh, bd, sd]"),
(4, 16, fast 1 $ "[~bd, hh*8, bd(3,8), sd]"),
(0, 8, fast 1 $ ccv "0 0 0 10" # ccn 4  # s "midi"),
(8, 16, fast 2 $ "[~bd, hh*8, bd(6,8,3), sd]"),
(8, 16, fast 2 $ ccv "0 0 0 10" # ccn 4 # s "midi"),
(9, 16, fast 2 $ "[~bd*4]"),
(10, 16, fast 2 $ "[~bd*4(5,8)]"),
(12, 16, fast 2 $ "[~bd*4(5,8), hh*8, bd, sd]"),
(10.5, 11, fast 2 $ n "as a f a" # s "superpiano" # lpf 900 #shape 0.3),
(10.5, 11, fast 2 $ ccv "90 70 50 70" # ccn 2 # s "midi"),
(12, 12.5, fast 2 $ n "as a f a" # s "superpiano" # lpf 900 #shape 0.3),
(12, 12.5, fast 2 $ ccv "90 70 50 70" # ccn 2 # s "midi"),
(13, 15, fast 1 $ n "as a f a as a bs a" # s "superpiano" # lpf 900 #shape 0.7 # gain 1.2),
(15, 16, fast 2 $ n "as a f a as a bs a" # s "superpiano" # lpf 900 #shape 0.9 # gain 1.4),
(13, 15, fast 1 $ ccv "190 170 150 270 190 720 120 270" # ccn 2 # s "midi"),
(15, 16, fast 2 $ ccv "190 170 150 270 190 270 120 270" # ccn 2 # s "midi"),
(12, 16, slow 2 $ s "made:3(50,8)" # gain (slow 2 (range 0 1.25 saw)) # speed (slow 2 (range 0.8 3.5 saw)) # cut 1 # lpf 900 #rel 0.5)
] #shape 0.3 #room 0.3 #size 0.6 #gain 1.4

 

Visuals
The visuals were Omar’s work, and his first creation became the main inspiration for the audio composition. Most of what you see in the performance is derived from that first visual, layered with different elements, including the glowing orb that we used in our drum circle and looked at in class. After Omar put the visuals together, we would meet and test out audiovisual interactions that we felt could be effective for the audience. (Omar here: Eadin and Sarah had my back for a good week and a half while I was working on capstone, so big props to them for making awesome music that inspired and motivated me to make fitting visuals.) The visuals were somewhat inconsistent because we tried to exploit glitches (by not hiding p5) to get a very pleasing image, and some of that visual quality had to be sacrificed for reliability during the performance. Our last visual also didn’t show up in the performance because I had toyed with some variables right before we went on. But it was alright regardless; we learned to improvise.
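For anyone curious how the layering worked, here is a minimal sketch of the wiring. The names s0, o1, and p1 match the snippets below, but the init call and the stand-in orb are assumptions rather than our exact boilerplate: the p5 sketch draws the scratchy texture, its canvas is fed into Hydra as an external source, and the glowing orb lives in its own output buffer so it can be mixed in.

// minimal sketch (assumption: approximate wiring, not our exact boilerplate)
// p1 is the p5 instance that draws the scratchy "tissue" shapes
s0.init({ src: p1.canvas })                   // route the (unhidden) p5 canvas into Hydra as s0
shape(64, 0.1, 0.5).out(o1)                   // stand-in for the glowing orb, rendered to o1
src(o1).mult(src(s0)).blend(o0, 0.3).out(o0)  // mix orb and texture with a bit of feedback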

Code Snippets

lightning = () =>
{
  return(
  src(o1).mult(src(s0).add(noteLine(),0.8).add(noteLineThin(),0.8)).blend(o0,0.3).modulate(o1, 0.03)
    ) //lightning light
}
flash = () =>
{
  return(
  src(o1).diff(o0).mult(src(s0).add(noteLine(),0.8).add(noteLineThin(),0.8)).modulate(o1,0.03) //lightning
    )
}
flashH = () =>
{
  return(
  src(o1).mult(src(s0).add(noteLine(),0.8).add(noteLineThin(),0.8)).modulate(o1,0.03).diff(o0) //lightning with flash
    )
}
flashGH = () =>
{
  return(
    src(o1).mult(src(s0).add(noteLine(),0.0).add(noteLineThin(),0.0)).blend(o0,0.3).modulate(o1, 0.03)
  //lightning with flash HEAVY
    )
}


tissue = () =>
{
  p1.colorMode(p1.RGB);
  p1.noFill();
  p1.stroke(255, 0.1);
  xoff += 0.02;
  yoff = xoff;
  for (let i = 0; i < 10; i++)
  {
    p1.push();
    _x = p1.noise(xoff)*p1.width;
    _y = p1.noise(2*yoff)*p1.height;
    _r = p1.random(3);
    _hyp = p1.sqrt(p1.pow(p1.width/2, 2) + p1.pow(p1.height/2, 2));
    _d = p1.map(p1.dist(_x, _y, p1.width/2, p1.height/2), 0, _hyp, 0.05, 1);
    //only first shape gets bold on drums
    if(i == 0 && p1.frameCount%4==0)
    {
      _s = 50 + 10*ccActual[4] + 50*ccActual[1];
      p1.stroke(255, ccActual[4]/4 + 10*_d);
    }
    else
    { 
      _s = 50;
      p1.strokeWeight(1);
      p1.stroke(255, 10*_d);
    }
    //square
    if (_r < 1)
      p1.rect(_x, _y, _s, _s); 
    //circle
    else if(_r < 2)
      p1.circle(_x, _y, _s); 
    //triangle
    else
    {
      _angle = p1.random(0, 2*p1.PI);
      _angle2 = p1.random(0, 2*p1.PI);
      _angle3 = p1.random(0, 2*p1.PI);
      p1.triangle(_x+_s*p1.cos(_angle), _y+_s*p1.sin(_angle), _x+_s*p1.cos(_angle2), _y+_s*p1.sin(_angle2), _x+_s*p1.cos(_angle3), _y+_s*p1.sin(_angle3))
    }
    p1.pop();
    yoff += 0.05;
  }
}

 

Explaining some Creative Choices
Combining Hydra’s diff with a blend or modulate gives a lasting flash of light on the canvas when the light pops into a random position (a rough sketch follows below). This works better with instantaneous, discrete changes of the light’s position than with continuous movement.
Modulating slightly by the light orb gives a three-dimensional quality to the movement of the light, which translates the drum impact nicely.
The piano tiles were recycled from the composition project. The visual looked a little too round and symmetrical, so we opted for more asymmetry by making the tiles on the two ends of the canvas different sizes. Part of it was also that Sarah liked the smaller tiles and Eadin the bigger ones, so we picked both, and it worked well visually.
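Here is a stripped-down sketch of that flash idea, simplified from the flash functions above. The cc[4] channel matches the drum CC in the Tidal code, but the exact amounts are placeholders rather than our performance values.

// diff lights up wherever the orb on o1 just jumped, and blending with o0 lets that
// flash fade over a few frames instead of vanishing; a slight modulate by o1 adds the
// 3D-ish wobble that follows the drum hits
src(o1)
  .diff(o0)                          // bright where the orb's position just changed
  .blend(o0, 0.3)                    // feedback: keep a decaying trace of the flash
  .modulate(o1, () => cc[4] * 0.05)  // assumption: cc[4] carries the drum hits, as in the seqP above
  .out(o0)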

 

More Notes on Final Performance / Reflection
In terms of the final performance, we didn’t notice that the Tidal code kept disappearing (a.k.a. the zoom-out thing). What we learned is that while performing it’s important to keep an eye on what is actually showing on the main screen. In addition, our pace on stage was a bit slower than in practice, because once you miss the opportunity to trigger something you have to wait for, say, the next 2 or 4 cycles. We think there’s definitely value in improvisation; the point is finding a balance between following a strict plan and improvising, and that is something we experimented with continuously as we practiced for the final performance.

The 8bit Orchestra was an incredible experience; it was so exciting to pool so much creative energy and come up with a piece that stuck with us to the point that we were humming it every day leading up to the performance. Moreover, seeing everyone’s work come together, from our nervous two-minute performances in class to our algorave, was an amazing reminder of our progress and the time we all spent together.

 

 

Below is a video of our performance:

 

<3

Group Members: Eadin, Omar, Sarah

For our drum circle project, we started by getting together and deciding on the kind of feel and impact we were going for. We all agreed that an ambient sound matched with a retro look was our goal. Then we went through our previous projects, found sounds that we already liked or felt would suit what we wanted to do, experimented with them, and tried new things.

For our workflow, we didn’t particularly divide roles. Instead, we would meet in our usual classroom, jam, and hand tasks to each other as we worked. In general, though, Sarah focused more on audio and compositional structure, Omar did more with visuals, and Eadin looked more into audio and MIDI interaction.

Working in a group this time has been an exciting experience, as it felt like we had more possibilities with three brains focused on putting something together. It was also helpful to learn from each other and see how each of us tackles issues that come up in the code. Having more than one person genuinely helped with debugging: we tend to overlook the problems in our own code, but with Flok we had to resolve everything to make sure we had something on the screen. However, improvising from scratch remains a bit of a challenge. We all found ourselves wanting to kick off our performance with something already complex and to build on top of it every time we met, which still doesn’t leave much room for improvisation and chance. Hopefully, through the class-wide drum circle, we’ll see how others jump in and contribute, and pick up some examples of unplanned improv.

For next week’s assignment, although we already developed something of a build-up this week, we are thinking of trying something new and playing with new sounds, maybe bringing in our own vocal samples!

Here’s a snippet of the composition code we attempted to live code with:

d8 $ qtrigger 8 $ seqPLoop[
--
(0,9.5, off 0.125 (# squiz 4) $ fast 1 $  s "sid" >| note (scale "minor" ("<[3,5,7] [4,2]>(5,8)"+"<2! 3 4>"+"f")) # gain (range 1.2 1.3 perlin) # room 0.5 # djf (range 0.3 0.4 perlin) #hold 0.1 # size 0.9),
(4,9, loopAt 1 $ sound "breaks125:1(5,8)" # legato 7 # room 0.25 # vowel "o"),
--break // everything off to build tension
--- come back faster / build up-ish
(12,13, s "gab:9(40,8)" # gain (range 0.5 1 saw) # speed (range 0.8 2 saw) # cut 1),
(11,12, s "cp*4" # room 0.6),
(12,13, s "cp*16" # room 0.6) ,
(13,24,off 0.125 (# squiz 4) $ fast 1 $  s "sid" >| note (scale "minor" ("<[3,5,7] [4,2]>(5,8)"+"<2! 3 4>"+"g")) # gain (range 1.2 1.3 perlin) # room 0.5 # djf (range 0.3 0.4 perlin) #hold 0.1 # size 0.9),
(13,23 , loopAt 1 $ sound "breaks125:1(5,8)" # legato 7 # room 0.25 # gain 0.8),
--still needs a smoother breakdown/slow down--
(24,30, off 0.125 (# squiz 4) $ fast 1 $  s "sid" >| note (scale "minor" ("<[3,5,7] [4,2]>"+"<2! 3 4>"+"f")) # gain (range 1.2 1.3 perlin) # room 0.5 # djf (range 0.3 0.4 perlin) #hold 0.1 # size 0.9)
]

I really, really enjoyed this reading, particularly because it contextualized the work of many artists I already like from across disciplines, such as Nam June Paik and Sonic Youth! I love how liberating the idea of finding the “means of expression for a particular idea, to test concepts in another field, or simply to extend one’s own radius of effect” is. An artist does not have to be defined by a specific practice.

Another thing that really stood out to me in the reading was the usage of the expression “media transgressor” in:

“Another media transgressor is Tony Conrad, who in the 1960s implemented minimalist concepts both in musical and visual form and at the same time explored the materiality of each respective media from its fringe.”

I guess in the context of this reading, media transgressors are those who are not confined to a specific medium or concept: multidisciplinary practitioners who use whatever means they find suitable to actualize their ideas. Nowadays, when multidisciplinarity seems to be the default (this could be debated, probably), I wonder who a media transgressor would be. Could it be those who actually reject new media? Those who go back to traditional mediums and stick to one? I’m not sure!

General Overview

This assignment was quite a challenge! Composition is tough, especially in code, which felt less intuitive to me. The way I tried to tackle the task was by consuming as much music and as many visuals that inspire me as I could, then trying to recreate them and play around with the results. For music, I realized that I’m not a big fan of typical EDM-ish build-ups and find myself gravitating towards more experimental ambient music, so the two main songs inspiring this project were ATM by Billy Lemos and An Encounter by The 1975. As for visuals, I wanted to steer away from coding shapes from scratch; instead, I wanted to experiment with layering the same video or source image.

Music

As mentioned above, I wanted to experiment with creating ambient, almost nostalgic sounds, but before all that I went back to music theory and the piano and picked a scale that I thought would complement the feel I was going for: the C minor scale. I went through a lot of different samples and approaches to building the composition, and almost nothing felt like it made sense or was cohesive. The final structure I went with is pretty experimental: a piece in two parts with the sound of children playing in the background. It’s meant to be both ominous and nostalgic, two feelings I attempted to achieve mostly by using room, sz, and orbit.

Visuals

Before starting, I knew I wanted to experiment with red-blue layered visuals with an anaglyph 3D effect. I played around with gifs and different figures, then felt that a picture of an eye could be both visually appealing and match the vibe of the music. Throughout the whole piece, the eye stays consistent as the source.
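In practice, the anaglyph effect comes down to layering a red-tinted copy of the eye slightly offset from a cooler base layer. Here is a stripped-down sketch of that idea; the real chains, with the cc-driven scaling and pixelation, are in the code further down, and the exact tints and offsets here are placeholders.

// stripped-down version of the red/blue anaglyph layering: the same eye image is drawn
// twice, with the top copy tinted red and nudged sideways so the offset reads as depth
s2.initImage("https://i.pinimg.com/originals/b1/7b/6e/b17b6e0ba062a3217ecd873634093864.png")
src(s2)
  .color(0, 0.6, 1)                             // cooler base layer
  .layer(src(s2).color(1, 0, 0).scrollX(0.01))  // red copy, shifted slightly
  .out(o0)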

Difficulties

The biggest difficulties I faced were with compositional structure and audiovisual interaction. For compositional structure, as I mentioned, I struggled to find an alternative to the build-up structure, since I felt it did not resonate with the feel I was trying to achieve. As for audiovisual interaction, I think I just need to practice and experiment with it more until it becomes more precise and understandable to me.

---- start
d14 $ s "children" #gain 0.5

d13 $ qtrigger 13 $ seqPLoop[
  (0, 4,  note "[[ef5'maj] [g5'min] [bf5'maj] ~ ~]" # sound "superfork" # room 0.1 # gain 0.7 # legato 1.5),
  (4, 8, note "[[ef5'maj] [g5'min] [c6'min] ~ ~]" # sound "superfork" # room 0.1 # gain 0.7 # legato 1.5)
  ]

d7 $ ccv "0 50 64 0 0" # ccn "0" # s "midi"

d7 silence



d1 $ s "coins" # gain 0.8



d5 $ slow 2 $ sound "superfork" >| note "[c2'min]? [bf3'min]?"  # room 0.1 # gain 0.9 -- bg chord


-- introducing more rhythm + ambient element
hush

d3 $ qtrigger 3 $ seqPLoop[
  (0, 12, fast 2 $  sound "808bd(1,4)"  # gain 1.2),
  (4, 12,slow 2 $ sound " ~ [future:4(3,5)]  ~ ~"  # gain 1.5 # room 0.4)
  ]

  d11 $ ccv (segment 128 (range 127 0 saw)) # ccn "1" # s "midi"
  d9  $ struct "<~ t(3,5) t>" $ ccv ((segment 128 (range 127 0 saw))) # ccn "2" # s "midi"


d4 $ fast 2 $ s "ade:3"  |> note "<f5_>" # cut 1  # vowel "o"  # gain "<1? 0.9 0.2 0.7>"   #room 0.4


-- chaos & tension here w/ distort & adding beats

bassDrum = d14 $ fast 2 $ sound "808bd" # gain 1.2

bassDrum
d3 $ slow 2 $ sound "[future:4*3] ~ ~ ~"  # gain 1.5 # room 0.4 # distort 0.2
d5  $ struct "<t(3,5) t>" $ ccv ((segment 128 (range 127 0 saw))) # ccn "3" # s "midi"

d4 $ fast 2 $ s "ade:3*6"  |> note "<f5_>" # cut 1  # vowel "o"  # gain "<1? 0.9 0.2 0.7>"   #room 0.4

--- hush, instead of a drop for ominous vibe
hush

------ second bit, normal then degrade
d4 $ degradeBy 0.8 $ slow 2 $ s "armora:5" # room 0.4 # sz 0.6 # orbit 1 # gain 0.8
d1 $ s "<coins(1,4)>" # gain 0.95

d5  $  ccv " 127 0 0 0 0" # ccn "3" # s "midi"


d4 silence

d10 $ qtrigger 10 $ seqP[
  (0, 8, s "ade:3" |> note "c2"  # cut 1  # vowel "a"  # orbit 1 # room 0.7 ),
  (8, 14, s "ade:3" |> note "c3"  # cut 1  # vowel "a"  # orbit 1 # room 0.7),
  (14, 20, s  "ade:3" |> note "c4"  # cut 1  # vowel "a"  # orbit 1 # room 0.7)
  ]

d1 silence
hush
s2.initImage("https://i.pinimg.com/originals/b1/7b/6e/b17b6e0ba062a3217ecd873634093864.png")
s3.initImage("https://i.pinimg.com/originals/b1/7b/6e/b17b6e0ba062a3217ecd873634093864.png")


src(s2).scale(() => cc[0]*0.5,4,8).pixelate(600,600).scrollX(0.3,0.01).out(o0) // start, one eye layer

//() => cc[0]*0.5

src(s2).scale(0.02,4,8).color(1,0,0).scrollX(0.01,0.1).layer(src(s2).scale( () => cc[0]*0.1,4,8).pixelate(600,600).scrollX(0.2,0.1)).out(o0)

// .layer().out(o0)

src(s2).scale( () => cc[0]*0.1,4,8).pixelate(600,600).scrollX(0.7,0.1).layer(src(s2).scale(()=>cc[1]/6,4,8).color(1,0,0).scrollX(0.,0.1)).rotate(() => cc[2]).out(o0)  // () => cc[2] //rotation n scaling for red

src(s2).scale( () => cc[0]*0.01,4,8).pixelate(600,600).scrollX(0.7,0.1).layer(src(s2).scale(()=>cc[2]/10,4,8).color(1,0,0).modulate(noise(()=> cc[1]*6)).scrollX(0.,0.1)).out(o0) //modulate noise

src(s2).scale( () => cc[0]*0.01,4,8).pixelate(600,600).scrollX(0.7,0.1).layer(src(s2).scale(()=>cc[2]/10,4,8).color(1,0,0).modulate(noise(()=> cc[1]*2)).pixelate(()=>cc[3]*2).scrollX(0.,0.1)).out(o0) //pixelate



osc(6).color(1,0,0).modulate(src(s2).scale(0.1,4,8),1).blend(osc(6).color(0,0,2).modulate(src(s2).scale(0.1,4,8).scrollX(0.7),1).modulate(noise(()=>cc[3]+0.3))).out(o0) // part 2

osc(6).color(1,0,0).modulate(src(s2).scale(0.1,4,8),1).blend(osc(6).color(0,0,2).modulate(src(s2).scale(0.1,4,8).scrollX(0.7),1).brightness(()=> +0.8).modulate(noise(()=>cc[3]))).out(o0) //brightness

For my research project, I chose to look into Alda, which is described as “a music programming language for musicians.” It lets musicians create compositions using a text editor and the command line, which is super straightforward and simple if you’re already familiar with music notation! In terms of where Alda stands within the live coding context, I actually don’t think it is much of a live coding platform. Although it has a “live-ish” mode, its real strength is in simplifying the writing of sheet music without being too focused on the act of notation, which is what its creator, Dave Yarwood, intended as a musician and programmer. But who knows? Maybe the ability to simply notate and write notes for instruments in parallel could allow for live band performances, or for improvisation using more classical instruments and typical notation.
To understand how Alda works, I simply installed it and played around with its live/REPL mode while following the cheat sheet. Afterward, I tried to find online tutorials or performances and only found one, which was enough for me to understand Alda’s potential. I then started breaking down some notation to put together a presentation that conveys that potential to my classmates.

I personally really enjoyed working with Alda and reviving my music theory knowledge. Although I’ve never properly composed a track, I watched a YouTube video and tried to give it a go. Here’s my (very basic) composition:

and here’s the code:

gMajScale = [g a b > c d e f+ g]
gMajChord = [o2 g1/b/>d] (vol 50) #First of the scale
cMajChord = [o3 c1/e/g ] (vol 50)#Fourth of the scale
dMajChord = [o3 d1/f+/a ] (vol 50) #Fifth of the Scale

piano:
V1:
gMajChord | cMajChord | gMajChord | dMajChord #LH: 1-4-1-5 , 1-4-1-1 chord progression.
gMajChord | cMajChord | gMajChord | gMajChord

V2:
g4 a b a | c e c d | g2 (quant 30) > g4 (quant 99) b | d1 #RH (melody): inserting random notes from the scale

g4 a b a | c e c d | g2 (quant 30) > g4 (quant 99) b | < g1

midi-acoustic-bass:
o2 g8 r r b8 r r r r | r e4 c8 r r r | g8 r r b8 r r r r | d8 r r f8 r r r (volume 100) #played around with notes from the scale and note lengths
o2 g8 r r b8 r r r r | r e4 c8 r r r | g8 r r b8 r r r r | g8 r r b8 r r r r (volume 100)

percussion:
[o2 c8 r r c c r r ]*8 #experimented until something worked(?)
o3 c+

I grew up playing classical piano, and my father, who plays the guitar and is a math enthusiast, always told me that music theory is just pure math. My experience in this class further proves his point. In this week’s reading, Spiegel breaks down the concept of information theory and how it can be used in music. I found her explanation of choosing sounds “vulnerable to corruption” particularly informative, as I usually just experiment with random values and introduce noise, but now I can do it with a bit more intentionality.

Spiegel also brings up the question of whether there even is such a process as composition, and I’ve been thinking about this for a while! After transitioning from classical music to different genres, I joined a band, and I found the times when we were brainstorming lyrics or melodies really challenging because nothing ever felt original. For whatever lyric I wrote, I could pinpoint a source song or artist it was directly influenced by or borrowed from. We always hear from older generations that there is no longer music like what they had in their time. I still wonder: are we past the point of originality and novelty in music, and if we are not, how can we ensure that what we are making is a new composition? Or does that even matter, as long as the music speaks to someone and is enjoyed by audiences?

For our performance this week, one of the ways I tried to brainstorm was by listening to specific songs and deconstructing them, trying to understand the layers that make up each song and how I could use them as inspiration for my own performance. One of the main elements of my live coding performance, for example, was this bass drum beat:

d1 $ sound "{808bd:12 [808bd:73]}" # room "0.03"

I came to this beat after listening to Fred Again..’s song Marnie (wish i had u) and trying to understand how he gradually constructs and puts together the relatively simple layers and elements that make up the song. I eventually couldn’t escape this way of thinking while shuffling my playlist and thought of putting together a collaborative class playlist where we can drop in tracks that inspire us while we figure out how to live code music. I started a collaborative Spotify playlist that you can check out and add to here. Let me know if you have any thoughts, or if another platform like YouTube would be more accessible! (✨pls add songs, I love exploring and learning what other people listen to✨)