Group Members: Aalya, Alia, Xinyue, Yeji

 

Final Presentation

Video link: https://drive.google.com/file/d/14vo1y2KyfkNQ7snYH1QXRpyS5YRDYc8a/view

 

Inspiration

The initial inspiration for our final project was different for the audio and the visuals. For the audio, we were inspired by our earlier beat drop assignment: we had a small segment that we wanted to reuse for the final, and we built on that. For the visuals, we wanted a colorful yet minimalistic output, so we started with a colorful circle with black squiggly lines and created variations of it.

 

Who did what:

  • Tidal: Aalya and Xinyue
  • Hydra: Alia and Yeji

 

Project Progress

Tidal Process

For this project, we developed our sounds using a similar approach to our earlier work. We began by laying out a few different audio samples and listening to them, then pieced together the sounds we thought fit best; that eventually grew into a specific theme for our performance. Once the foundation was set, it was easier to break the performance down into a few main parts:

  1. Intro
  2. Verse
  3. Pre-Chorus / Build Up
  4. Chorus / Beat Drop
  5. Bridge / Transition to outro
  6. Outro

 

The next step in the music-making process was to figure out the intensity of the beats and how we wanted specific parts to sound. How do we manipulate the sounds for a more effective build-up and beat drop? How do we keep the transitions smooth and the sounds from turning choppy? These were the questions that came up during the brainstorming process; one technique of this kind is sketched below.
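
As a hypothetical sketch in standard Tidal (not our performance code), one way to drive a build-up is to sweep a low-pass filter open while a hi-hat pattern accelerates across cycles:

-- build-up sketch: the filter opens over four cycles while the hats double in speed
d1 $ s "bd*4" # lpf (slow 4 (range 200 4000 saw))
d2 $ fast "<1 2 4>" $ s "hh*8" # gain 0.8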

 

Some of the more prominent melodies in our music included:

 jux rev $ every 1 (slow 2) $ struct "t(3,8) t(5,8)" $ n "<0 1>" # s "supercomparator" # gain 0.9 # krush 1.3 # room 1.1

We also created different variations of it, such as:

jux rev $ every 1 (slow 2) $ struct "t(3,8)" $ n "1" # s "supercomparator" # gain 0.9 # krush 1.3 # room 1.1 # up "-2 1 0" # pan (fast 4 sine) # leslie 8

struct "t*4" $ n "1" # s "supercomparator" # gain 1.3 # krush 1.3 # room 1.1 # up "-2 1 0" # pan (fast 4 sine) # lpf (slow 4 (1000*sine + 100))

n "f'min ~ [df'maj ef'maj] f'min ~ ~ ~ ~" # s "superpiano" # gain 0.8 # room 2

Next, it was a matter of developing the sounds further. For example, there was one melody we particularly wanted to stand out, so to support it we layered sounds and effects onto it until it grew into something we liked, and we treated the other parts the same way. It was all about balance and smooth transitions, forming a cohesive auditory piece that also complemented the visuals.
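
To give a flavor of that layering (an illustrative sketch, not the exact performance code), one way to support a lead melody in Tidal is to stack it with a quieter, detuned copy of itself:

-- layering sketch: the same line an octave down, softer and roomier, underneath the lead
d1 $ stack [
  n "0 ~ 4 7" # s "supercomparator" # gain 0.9,
  n "0 ~ 4 7" # s "supercomparator" # up "-12" # gain 0.6 # room 1.2
]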

 

Hydra Process

We started with a spinning colorful image (based on a color theme we liked) bounded by a circle shape. We then added noise to get moving squiggly lines at the center and decided to use that as our starting visual.
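
A minimal sketch of that starting visual (reconstructed from memory, with placeholder parameters rather than our exact performance values):

// spinning colorful gradient, noise-modulated squiggles in the middle, masked by a circle
osc(10, 0.1, 1.2).rotate(() => time / 4).modulate(noise(3, 0.2)).mult(shape(100, 0.5, 0.01)).out(o0)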

Alia and Yeji then experimented with this visual separately on different outputs, with all screens rendered so we could see them at the same time and decide what we liked and what we didn't.

At first this was purely about the visuals, finding ones we liked, before coordinating with Aalya and Xinyue to match the visuals to the beats in the audio.

We then started taking different outputs as sources, adding functions to them, and playing around to make sure the visuals stayed coherent when changed; the differences had to be subtle enough not to jar, but strong enough to match the beat change. An example of this output-chaining is sketched below.
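
As an illustration of the idea (not the exact code we kept), feeding one output into another with one extra transform keeps each variation recognizably related to the base visual:

// o1 shows a kaleidoscoped, slightly hue-shifted version of whatever is on o0
src(o0).kaleid(4).colorama(0.1).out(o1)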

Next, we went through the separate visuals one by one, deleted the ones we didn't all agree on, and kept the ones we liked. By then, Aalya and Xinyue were more or less done with the audio, so we all had something to work with and put together.

This was largely trial and error: we would improvise as they played the audio, see what matched, and map the two onto each other. Here we also worked with the cc values to add variation, such as increasing the number of circles or blobs on the screen with the beat, or creating pulsing effects, as in the sketch below.
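
A small example of the kind of cc mapping we mean, using the cc/ccActual arrays that appear in the code later in this post (the values here are illustrative):

// the circle radius pulses with cc[0], and ccActual[1] sets how many copies tile the screen
shape(100, () => cc[0] * 0.3 + 0.2, 0.05).repeat(() => ccActual[1] + 1, () => ccActual[1] + 1).out(o2)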

We wanted the visuals to match the build-up of the audio. To do this, we built up the complexity of the visuals from a simple circle to an oval, a kaleid, a fluid pattern, a zoomed-out effect, and a fast butterfly effect, ultimately transitioning into the drop visual. To keep the style consistent throughout the composition, we stuck to the color scheme presented inside the circle from the very beginning. As the beat built up, we shifted to a more intense, saturated version of that scheme, along with cc values that matched the faster beat.
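
One intermediate stage looked roughly like this (a reconstruction, not the performance file), before handing off to the drop function below:

// e.g. the kaleid stage: fold the base circle visual into a kaleidoscope with a cc-driven zoom
src(o0).kaleid(() => ccActual[1] + 2).scale(() => 1 + cc[0] * 0.3).out(o1)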

now_drop = ()=> src(o0).modulate(noise(()=>cc[4]*5,()=>cc[0]*3)).rotate(()=>cc[4]*5+1, 0.9).repeatX(()=>-ccActual[1]+4).repeatY(()=>-ccActual[1]+4).blend(o0).color(()=>cc[4]*10,5,2)

now_drop().out(o1)

render(o1)

For the outro, we toned the saturation back down to its default and silenced the cc values as we silenced each melody and beat, returning the visuals to their original state. We further smoothed out the noise of the circle to signal the approaching ending:

src(o0).mult(osc(0, 0, () => Math.sin(time/1.5) + 2)).diff(noise(2, .4).brightness(0.1).contrast(1).mult(osc(9, 0, 13.95).rotate(() => time/2)).rotate(() => time/2).mult(shape(100, () => cc[2]*1, 0.2).scale(1, .6, 1))).out(o0)

render(o0)

 

Evaluation and Challenges

The main challenge we encountered while developing the performance was coordinating a time that worked for everyone and making sure our ideas were communicated clearly, so that everyone stayed on the same page.

 

Another issue was that the cc values were not reflected properly, or at least not in the same way on everyone's machine, which resulted in inconsistent visual outputs: for some of us they were in sync with the audio, while for others they were not.

 

When Aalya and Xinyue were working on the music, coming up with different beats was quick, but it took some time to fit the pieces together into a complete and cohesive composition.

  

An aspect that worked very well was the distribution of work. Every part of the project was covered, which gave us balanced visual and sound elements.

 

Overall, this project brought us together as a group and fostered creativity. Combining our work to create something new was a rewarding experience, and having everyone on the same page, working toward the same goal, made it even more so. For our final performance, we accomplished something we could all be proud of, and we were satisfied with the progress we had made. Everyone was in good spirits throughout the process, which helped create an atmosphere of trust, collaboration, and creativity. This project allowed us to use our individual strengths for the collective benefit and gave us an opportunity to learn from each other in a fun environment.

The thought of activism taking place in more private, quiet spaces was interesting. The text brings attention to forms of activism that can be less visible but just as impactful – activism that takes place in the form of listening. People often associate activism with voicing your opinions and making them heard, and people who remain silent are, in many instances, shamed for not contributing enough to the movement. The text, however, made me think further about the instances where silence is a necessary component of thoughtful activism. What if silence is the product of active listening and learning, prior to voicing an opinion? What if silence is someone healing emotionally from a traumatic event?

 

The privacy of the meetings allowed the group to listen to sounds of their choice, giving them a sense of control and power over how they engaged their senses in the midst of such political unrest and chaos. I think there is a lot of power in the line that describes

“listening as a necessary pause before thoughtful action”.

Thoughtful action is only possible after one has taken time to think, and listening invites more ideas to fertilize the thoughts preceding an action. By declaring this pause necessary, Olivero frames listening as just as important as taking action. This reading inspired me to become more sensitive to the sounds that surround us and to experiment further with the medium for healing purposes.

 

Reading about the blending of two forms – musical and visual – through common concepts and abstraction made me question how this "blend" takes place purely in the digital space. Conceptual art movements of the past involved works becoming "dematerialized and increasingly independent of materials, techniques, media, and genres". It is easy to imagine this for art that has physical, tangible components – the paint, the canvas, the sculpture, etc. can be decontextualized by the viewer through the abstract concept presented in the work.

 

But what exactly is being decontextualized when we view digital abstract art? Is it the software that contains the art? Will the object always be in reference to our conception of the physical objects we know? How does digital art – in both musical and visual form – become dematerialized from its digital material? What exactly is the digital material – pixels and wavelengths? Or is it the form presented within it? Or our conception of what's recognizable to us? What happens when something becomes completely dematerialized? Is this possible? Is it only possible in the form of a memory, where the perception is no longer perceivable? This reading gave rise to a lot of interesting questions, and I would love to hear your opinions about it!

I was playing around with variations of "notes" from the TidalCycles sample archive and came across this line, which sounded oddly like my phone when it is buzzing intensely at me.

d5 $ slow 4 $ s "notes:2" >| note (scale "major" ("<0>(2,2)") + "f2") # room 2.3 # krush 1.7 # gain (range 1.0 1.4 rand)

Hearing the buzz repeat itself, over and over again, gave me flashbacks to all the times I got notifications when I wasn't supposed to – during a deep nap, in class while trying to focus, on a digital detox, and so on. Phone notifications can be inviting at times – I love getting texts from a best friend, family, or a loved one – but they can also be very irritating. I was inspired to create a composition expressing this very specific type of irritation we sometimes feel towards our digital best friend.

I used a mix of notes:1 and notes:2 for the main melodies. The build-up of the sound was achieved by speeding up the bass drums, snare drums, the bass (auto:5, auto:7), and the three melodies. For the visuals, I tried to find a cartoon character that resembled how I look when I am sleep-deprived. I came across a clip I found really funny, from the scene in SpongeBob where Squidward is upset by the endless incoming calls from SpongeBob and Patrick. Squidward has just finished getting ready for bed and is about to drift off to dreamland; the last thing he wants to hear is the phone buzzing. This scene matched my inspiration perfectly. I wanted to sync the build-up of the music to the anger I assume was building within Squidward. The story was the main focus of the composition, so both the visual and sound code were structured around the storyline, with comments left in Atom for viewers to follow during the performance.

I hope you enjoy 😴

 

 

Tidal:

-- START

do
  d2 $ fast 2 $ s "notes:1" >| note (arp "up" (scale "major" ("[2,3,7,6]"+"<2 3 7 3>") + "f4")) # room 0.4 # gain (range 0.7 1.0 rand)
  d1 $ fast 2 $ s "notes:1" >| note (scale "major" ("[0,<5 <4 3>>](<1 2>,4)") + "f3") # room 0.7 # gain 1.5 # krush 1.2
  d3 $ s "bd bd <sd> bd" # room 0.1
  d14 $ fast 2 $ ccv "<127 50 127 50 >" # ccn "1" # s "midi"

-- CALL INCOMING!!!!!
do
  d5 $ slow 4 $ s "notes:2" >| note (scale "major" ("<0>(2,2)") + "f2") # room 2.3 # krush 1.7 # gain (range 1.0 1.4 rand)
  d14 $ slow 2 $ ccv "<127 50 127 50 >" # ccn "1" # s "midi"

  d5 silence

-- Second melody
d4 $ fast 2 $ s "notes:2" >| note (scale "major" ("<6 <2 4 8 4 2>>(1,3)")) # room 1.2 # gain 1.1

hush

-- build up of anger
do
  d7 $ slow 2 $ s "[auto:5, auto:7]" # gain (1.1) # size (0.9) # room (0.8)
  d8 $ s "[~ sd, hh*2 hh*2 hh*2 <hh*6 [hh*2]!3>]" # room 1 # gain (range 0.8 1.0 rand)
  d9 $ s "sd*4" # krush 2 # room 0.4
  d14 $ fast 16 $ ccv "<127 50 127 50 >" # ccn "1" # s "midi"

hush

-- ANGER BUILDING

d10 $ qtrigger 10 $ seqP [
    (0, 1, s "[sd*4]"),
    (1, 2, s "[bd*8, sd*8]"),
    (2, 3, s "[bd*16, sd*16, auto:1*16]"),
    (3, 4, s "[bd*32, sd*32, auto:1*32]"),
    (4, 5, s "sine*8" # up "g ~ e a [g ~] [~ c] ~ ~")
]

-- REAL ANGRY
do
  d14 $ whenmod 24 16 (# ccn ("0*128"+"<t(7,16) t t(7,16) t>")) $ ccn "0*128" # ccv (slow 2 (range 0 127 saw)) # s "midi"
  d5 $ slow 2 $ s "notes:2" >| note (scale "major" ("<0>(2,2)") + "f2") # room 1.8 # krush 1.2 # gain 1
  d6 $ s "sine*8" # up "g? ~ e a [g ~] [~ c] ~ ~" # room (1) # gain (0.9)
  d7 $ slow 2 $ s "[auto:5, auto:7]" # gain (1) # size (0.9) # room (0.8) # cutoff 5000
  d8 $ s "[~ cp, hh*2 hh*2 hh*2 <hh*6 [hh*2]!3>]" # room 1.1
-- fast

hush

-- OMG

do
  d14 $ fast 16 $ whenmod 24 16 (# ccn ("0*128"+"<t(7,16) t t(7,16) t>")) $ ccn "0*128" # ccv (slow 2 (range 0 127 saw)) # s "midi"
  d5 $ slow 1 $ s "notes:2" >| note (scale "major" ("<0>(2,2)") + "f2") # room 1.8 # krush 1.2 # gain 1
  d6 $ fast 2 $ s "sine*8" # up "g? ~ e a [g ~] [~ c] ~ ~" # room (1) # gain (0.7)
  d7 $ slow 1 $ s "[auto:5, auto:7]" # gain (1) # size (0.9) # room (0.8) # cutoff 5000
  d8 $ fast 2 $ s "[~ cp, hh*2 hh*2 hh*2 <hh*6 [hh*2]!3>]" # room 1

do
  d14 $ fast 4 $ ccv "<127 50 127 50 >" # ccn "1" # s "midi"
  d9 silence
  d8 silence
  d7 silence
  d6 silence
  d4 silence
  d3 silence
  d2 silence
  d1 silence

  d5 silence

-- PEACEFUL TIMES

do
  d14 $ fast 1 $ ccv "<127 50 127 50 >" # ccn "1" # s "midi"
  d13 $ fast 1 $ s "notes:2" >| note (scale "major" ("a3 g3 e3 <b3 c3 e3 f3>")) # room 1.2 # gain 1.1

hush

 

Hydra:

// video control
vid = document.createElement('video')
vid.autoplay = true
vid.loop = true
vid.volume = 0


// BE SURE TO CHANGE THE FOLDER NAME
// BE SURE TO PUT A SLASH AFTER THE FOLDER NAME TOO
basePath = "/Users/yejikwon/Desktop/Fall\ 2022/Live\ Coding/class_examples/composition/video/"
videos = [basePath+"1.mp4", basePath+"2.mp4", basePath+"3.mp4", basePath+"4.mp4", basePath+"5.mp4"]

// choose video source from array
// SLEEP TIME
vid.src = videos[0]
// CALL
vid.src = videos[1]
// HELLO?!
vid.src = videos[2]
// HELLO?!!!!
vid.src = videos[3]
// THE END
vid.src = videos[4]


// use video within hydra
s1.init({src: vid})

render()


// cc[1] will be zero always during the "A" section
// thus making the scale value not change
src(s1).scale(()=>-1*cc[0]*cc[1]+1).out()
render(o0)
let whichVideo = -1;

update = ()=> {
  // only change the video when the number from tidal changes,
  // otherwise the video will keep triggering from the beginning and look like it's not playing
  // cc[2] is for changing the video
  if (cc[2]!=whichVideo){
    vid.src = videos[ccActual[2]]
    whichVideo=cc[2];
  }
}

Kilobeat is a collaborative web-based DSP (digital signal processing) live coding instrument, with aleatoric recording (which applies when a random function is used) and playback. The user interface is shown below; each row represents a connected player.

Kilobeat Main Interface

There was no one in the main server whenever I connected, so I had the entire server to myself during my experimentation. I opened four different browser tabs and tested running different functions in each. There are default functions available (Silence, Noise, Sine, Saw, …), and functions can be combined to produce new sounds: layering (addition), amplitude modulation (multiplication), and function composition (passing one function in as an argument to another). Players can watch the oscilloscope and the spectrum analyzer to visualize their output.
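
In plain JavaScript terms (an illustration of those ideas, not kilobeat's actual built-ins), treating each sound as a function from time to a sample makes the three combinations concrete:

// each sound is a function of time t (seconds) returning a sample in [-1, 1]
const sine = (f) => (t) => Math.sin(2 * Math.PI * f * t);
const noise = (t) => Math.random() * 2 - 1;

const layered = (t) => 0.5 * sine(440)(t) + 0.1 * noise(t);   // layering = addition
const amMod = (t) => sine(440)(t) * sine(2)(t);               // amplitude modulation = multiplication
const composed = (t) => sine(440 + 100 * sine(1)(t))(t);      // composition = one output as another's argument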

I found the output created by kilobeat limiting compared to SuperCollider, and it was also quite difficult to make a piece sound enjoyable to the ear. The strength of the platform seems to lie in offering users an easy collaborative experience on the web, which made me wonder whether there is an option in Atom for real-time online collaboration. If so, although I appreciate the conceptual idea behind kilobeat, I personally would not use the platform again.

The reading states that random corruption can help prevent redundancy and repetition, resulting in less-bored listeners. This inspires me to incorporate more noise into my live coding sessions, in the form of ? and rand functions, to make a piece more unpredictable. The challenge, however, seems to lie in finding the right amount of randomness to use. During the last class, when we were playing around with the ? function, I noticed there was always a point where overusing "?" made the piece sound more off and empty. Hitting the right amount of randomness seems to optimize the sound, but anything below or beyond it seems to do the opposite.
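
For reference, this is the kind of randomness in question (standard Tidal syntax, with illustrative values): each ? randomly drops its events with 50% probability per cycle, while rand continuously jitters the gain:

d1 $ s "bd sd? hh*4? bd" # gain (range 0.8 1.0 rand)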

This led me to question: how much randomness is too much? Is there a point where adding more randomness decreases the musicality of the piece? How can I know where that point is? Is it subjective, or is there a formula for that too?

Yej

"A computer agent will be developed that produces a live coding performance indistinguishable from that of a human live coder."

This idea made me further question: what should a creative software agent look like in the context of live coding? Considering the physical nature of conventional live coding performances, where you see a person in front of their computer tapping at their keyboard, does the computer agent need to embody a human form, not only in software but also in hardware? Is it only under this condition that they are truly indistinguishable from a human coder? Or does the physical component not matter as much?

Spontaneity, an integral part of what makes live coding interesting, is also in question. Can a machine truly be spontaneous and improvise if it is not susceptible to biological conditions and emotions – two major sources of human spontaneity? What other inner impulses could non-feeling computer agents use to show spontaneous behavior similar to that of humans?

“Live coding has far less perfection and the product is more immediate. It allows for improvisation and spontaneity and discourages over-thinking”.

How does a machine discourage itself from over-thinking? Perhaps it can systematically control, or simply dial down, the amount of "thoughts" it has? Does such simplification of the process threaten the quality, or even the validity, of the spontaneity expressed by the agent?