“Take a walk at night. Walk so silently that the bottoms of your feet become ears.”

I think this is one of the most interesting forms of sonic meditation mentioned in the article. Because sounds are vibrations, we can perceive them through touch, like feeling a phone vibrate on a table. I imagine that, walking (barefoot?) at night in a quiet neighbourhood, the vibrations from different sounds could be felt through the feet. Some of these vibrations may even be ones our ears can't hear, which I think is an interesting way to gain perspective on what is happening around us, including the things we don't necessarily hear or see. It is interesting how she combined different senses in her meditations; I think this is also why the line between her musical work and her bodywork was blurred, and why her meditations were so effective.


Oliveros described listening as a necessary pause before thoughtful action: a thoughtful action can only be taken when we fully understand and acknowledge what is happening around us. She considered taking a moment to listen more important than simply acting. Through her meditations, Oliveros was not only able to empower women through music, but also to bring them peace during a difficult time. I found the article inspiring, as there is still much to learn from her sonic and kinetic meditations.

While reading this article, it occurred to me that sound already has a kind of visualization in nature: noise lies on a colour spectrum, and sine waves have a defined shape. While live coding, I usually try to picture how the sounds I generate would look, and I find myself always going back to sound waves for inspiration, or using them as a base to build on. I found the works of Paul Klee particularly refreshing because he managed to capture how sound would feel in a still painting using simple shapes and colours.


One form of sound visualization that I hadn't thought about as a form of art is music videos. Because of how normalized music videos have become, I never thought of them as a form of "art" that combines both sound and visuals. This ties into the author's remark that "the dual profession of artist-musician/musician-artist is no longer anything of note" and makes me question whether we must create something drastically different for the work to be "noteworthy".

Concept

I wanted to create a short 8-bit indie-game type of story, but I also wanted the audience to build the story/game themselves. By naming the functions as I did, I hoped to cue the audience to which part of the game/story they were in. once_upon_a_time signifies the beginning of the game, usually where the story is told. adventure builds on once_upon_a_time and marks a more adventurous part of the game. uh_oh means trouble: the boss fight or a critical moment in the game. By skipping directly to the end credits, I wanted the audience to imagine the ending themselves, whether they won or lost, whether it was a happy or sad ending, open to creativity and interpretation!

Sound

Most of the sounds I used were existing samples and synthesizers available in SuperDirt. The only "original" sound was the piano, as the sounds produced by SuperPiano did not fit the theme or vibe I was going for. I made three sections, each with an idea of what role it would play in the overall story/game experience I was aiming for: the beginning (once_upon_a_time and adventure), the climax (uh_oh), and the ending (end_credits). The biggest difficulty was combining the parts, since they were all drastically different, but I'm happy with how it turned out overall.


Visuals

I built the visuals separately and then adapted them based on how I imagined the sounds would look. Most of my experimenting was figuring out which parameter to sync with which sound so that the two would match as closely as possible.


Reflection 

Although it was frustrating at times, I really enjoyed working on this assignment and trying to think of a way to convey a story by combining sound and visuals. Taking into consideration the feedback I got in class, I would unify the colour scheme to give the piece a more coherent look. Overall, I'm happy with the outcome.

Here’s the link to the recorded performance:


Hydra Code:

//start
// reset the Tidal-driven parameters before starting
cc[2] = 0
cc[8] = 0
cc[1] = 0.5
cc[9] = 0
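// main scene: corner count, repeats and noise follow Tidal's cc values; o1, o2 and o3 are blended in as the story progresses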
shape(()=>cc[2]*4+2,.6,.3).color(20,1,10).modulateRepeat(osc(2),()=>cc[1]*3+1,()=>cc[1]*3+1).modulate(noise(()=>cc[0]*3)).posterize().blend(src(o1), ()=>cc[3]).blend(o2, ()=>cc[8]).blend(o3, ()=>cc[9]).out()


shape(()=>cc[2]*4+2,.6,.3).color(20,1,10).modulateRepeat(osc(2),()=>cc[1]*2+1,()=>cc[1]*2+1).modulate(noise(()=>cc[0]*3)).modulateScale(o0,0.1).colorama(1).out(o1)

// mid: the uh_oh scene; noise and scale of the rings follow cc 4 and cc 5, blended in through cc 8
shape(100,0,3).repeat(2).modulate(noise(1,()=>cc[4]*10)).invert().mult(osc(1, 0.8, 1).color(-3.5, -3.0, -3.0)).posterize(1.1).scale(()=>cc[5]*1.2).modulateScale(o0, 0.5).pixelate(300,300).out(o2)

render()
// end: the credits scene; kaleidoscope spun by cc 6, blended in through cc 9
shape([40,50,60].smooth(1),0.5,0.5).modulateRepeat(noise(1)).mult(osc(1,0.1,5).diff(gradient(2)).modulateScale(osc(5),0.5),0.9).posterize(3).scale(0.5).kaleid(()=>cc[6]*10).out(o3)

hush()


Tidal Code: 

-- beginning: kick, arpeggiated jazz chords and vibes; cc 0 ramps the visuals
once_upon_a_time = do {
  d1 $ s "soskick" >| note "f(3,8,3)" #gain 0.8 #legato 2;
  d3 $ s "jazz:8" >| note (arp "diverge" (scale "maj" ("[2,5,7,1]") + "<c4 c5>")) #gain 1.2 #legato 1 #vowel "o i u" #speed 0.5;
  d2 $ slow 2 $ stack [ s "supervibe" >| note "d'm7 g'7m c? [f c]" #room 0.5 #velocity 0.6,
  struct "t t t? [t t]" $ ccv (segment 128 (range 100 20 isaw)) # ccn "0" # s "midi"];
}
-- d1 $ s "bd"
-- adventure: layers gongs, blips and drums on top of the intro
adventure = do {
  once_upon_a_time;
  d2 $ almostNever (fast 2) $ stack [s "supervibe" >| note "d'm7 g'7m c? [f c]" #room 0.5 #velocity 0.3, struct "t t? t [t t]" $ ccv (segment 128 (range 100 20 isaw)) # ccn "0" # s "midi"];
  d5 $ juxBy 0.2 (#vowel "a") $ almostNever (fast 2)$ stack [
  s "blip:8" >| note "d'm7 g'7m? c [f c]" #room 0.3 ,
  struct "t t t? [t t]" $ ccv (segment 128 (range 128 20 isaw)) # ccn "1" # s "midi"];
  d4 $ someCyclesBy 0.4 (rev) $ stack[note  "<c(5,8) d(3,8)? e(3,8,2) f(5,8,2)>" |< s "supergong", struct "<t(5,8) t(3,8)? t(3,8,2) t(5,8,2)>" $ ccv "64" # ccn "2" # s "midi"];
  d6 $ s "bd(5,8,2)"  #room 0.2;
  d8 $ s "sd:8*4";
}

-- uh_oh: the climax; reversed vibes, denser drums and cc bursts on channel 4
uh_oh = do {
  d10 $ ccv "0" # ccn "3" # s "midi";
  d1 $ stack[ s "sine" >| note "d'm7 g'7m c [f c] d g c'maj d'm7",struct "t t t [t t] t t t" $ ccv (segment 128 (range 128 20 isaw)) # ccn "5" # s "midi"] ;
  d2 $ someCycles (rev) $ fast 2 $ s "supervibe" >| note (arp "diverge" (scale "minor" ("[1, 4, 1, 4]") + "<c5 d5 g5>")) #room 1  #legato 2 #pF "modamp" 2 #velocity 0.1;
  d4 $ s "bass(3,8)" #room 1;
  d5 $ s "sd*4";
  d6 $ s "808bd(5,8,2)" #room 0.5;
  d7 $ someCyclesBy 0.4 (fast 2) $ s "blip:8*8" #room 0.4;
  d8 $ someCyclesBy 0.4 (# ccv "127") $ struct "t*8" $ccv "10" #ccn "4" #s "midi";
}

uh_oh
end_credits
d7 $ silence

-- end_credits: sustained chords and a scrambled piano arpeggio
end_credits = do {
  d10 $ ccv "0" # ccn "8" # s "midi";
  d9 $ s "super808" >| note "<c'sus2 d'9sus4> [g'sus3 f'dom9]? g'sus ~" #vowel "a i o" #gain 1.5;
  d10 $ s "hh*4" #room 0.5;
  d11 $ scramble 8 $ fast 2 $ stack [ s "piano:25" >| note (arp "diverge" (scale "ritusen" ("[2,4,1,7]") + "<c4 c5>")) # room 0.4 #legato 1 #vowel "o i u", struct "[t t t t]" $ ccv (segment 128 (range 128 20 isaw)) # ccn "6" # s "midi"];
}

hush
--------
all id

once_upon_a_time


--

do
  hush
  d2 $ stack [s "supervibe" >| note "d'm7 g'7m c? [f c]" #room 0.5,struct "t t t? [t t]" $ ccv (segment 128 (range 128 20 sine)) # ccn "0" # s "midi"]

--
adventure

do
  d8 $ s "sd*8"
  d6 $ s "808bd(5,8,2)" #room 0.5
  d7 $ someCyclesBy 0.8 (fast 2) $ s "blip:8*8" #room 0.4
  d2 $ almostAlways (fast 2) $ stack [s "supervibe" >| note "d'm7 g'7m c? [f c]" #room 0.5 #velocity 0.4, struct "t t t? [t t]" $ ccv (segment 128 (range 100 20 isaw)) # ccn "0" # s "midi",struct "t t t? [t t]" $ ccv "0 127" # ccn "3" # s "midi" ]


d11 $ ccv "127" # ccn "8" # s "midi"
xfade 2 $ fast 2 $s "sine" >| note "d'm7 g'7m c [f c]"
d7 $ silence

uh_oh
--

all degrade 

hush

do
  d12 $ ccv "127" # ccn "9" # s "midi"
  end_credits

all id

all degrade

hush


This is the code from my performance the other day! I’m not entirely sure if this is the correct way of making functions in Tidal, but it worked pretty well for me. I made functions for both the sound and the visuals in Tidal so that it would be easier to control both at once. In Hydra, I set it up so that the parameters I wanted to change were linked to the functions I made in Tidal.
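In case it helps, this is the smallest version of that link, pulled out of the code above (it assumes the class boot setup: a "midi" target in Tidal, and a snippet on the Hydra side that fills a global cc array with incoming control-change values scaled to 0-1):

-- Tidal: pattern a control-change value on channel 0 (ccv takes 0-127)
d2 $ struct "t*4" $ ccv (segment 128 (range 0 127 sine)) # ccn "0" # s "midi"

// Hydra: cc[0] arrives scaled to 0-1, so map it into a useful range
osc(() => cc[0] * 30 + 10).out()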


Another thing that could be helpful is writing code on multiple lines at once: just press shift and click where you want to type, and voilà! This was especially helpful when I wanted to add effects to the sound and have them reflected in the visuals.
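For example, these two stacked lines from the adventure section sit right next to each other; with one cursor on each, a single keystroke adds the same ? to the note pattern and to its matching ccv pattern, so the sound and the visuals stay in step:

s "supervibe" >| note "d'm7 g'7m c? [f c]"                 -- the ? lands here...
struct "t t t? [t t]" $ ccv "0 127" # ccn "3" # s "midi"   -- ...and here, at the same time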


Hope this helps!

//hydra 

// blobs: noisy oscillator rings; cc 0 drives the noise, cc 1 the posterize, cc 2 the frequency
blobs = ()=> osc(()=> cc[2]*30+15, .01, 1).mult((osc(20, -.1,1)).modulate(noise(()=>(cc[0])*5, cc[0])).rotate(1)).posterize(()=> cc[1]*5).pixelate(200,200)

hush()

blobs().out()


--tidalCycles

--------------------------  functions --------------------------
beep_beep_bop = s "arpy*2 arpy?" # n "c5 <g4 c4> a5" -- ? fast
some_beats = s "bd*4" -- g1.2
more_beats = s "hh*8"
deeeep = s "<bass3(5,8) techno:1(3,8)>"
hehe = note "<g5 e5 c5> [<c4 g4> a4] <g4 g5>? c4" #s "sine"
deep = s "techno:1*2 techno:1?" -- krush8
noisy = s "bleep*4" #speed 0.5 -- fast
genocide = note "<g1 e2 c3> [<c1 g1> a2] <g1 g2>? c2" #s "arpy" #squiz 2 #krush 9
-------------- VISUALS -----------------------------------
amount = struct "t*2 t?" $ ccv ((segment 128 (range 127 0 saw))) # ccn "0" # s "midi" -- cc 0: noise amount in blobs
colour = struct "<t(5,8) t(3,8)>" $ ccv ((segment 128 (range 127 30 saw))) # ccn "1" # s "midi" -- cc 1: posterize level
wobble = struct "t [t*2] t? t" $ ccv ((segment 128 (range 127 30 saw))) # ccn "2" # s "midi" -- cc 2: oscillator frequency
--------------------------  functions end ----------------------

hush

d1 $ every 4 (fast 2) $ beep_beep_bop
d2 $ every 4 (fast 2) $ amount
d3 $ deeeep
d4 $ colour
d5 $ more_beats
d6 $ noisy #gain 0.8
d7 $ hehe
d8 $ wobble


hush


The program I chose for this research project is Vuo.


Vuo was originally modelled after Apple’s Quartz Composer (QC), which was released in 2005. Vuo’s developers felt that QC was no longer growing or improving, and decided to create a program that could carry out the same functionality as QC and more. Vuo was first released in 2014 and has since grown a large community.


Vuo lets users create visual compositions through node-based programming, which makes its GUI very user friendly and means it can be used with little to no prior coding experience. If needed, Vuo also lets users manipulate shader code and add shaders from the web, which makes it suitable for both beginners and professionals. Although Vuo has no way of composing music inside the program, it has some audio-processing capabilities, which makes it a very appealing platform for music-visualization projections and performances. It is also used for projection installations and performances, as it has a built-in function for projecting a composition onto a dome or other surfaces. It’s also worth noting that Vuo can create and manipulate 3D graphics, images, and videos.


There is definitely a lot to unpack in Vuo, but I decided to focus on creating a 2D composition. What I liked most about Vuo is the ability to see how everything connects and what effect each node has on the image. One thing I noticed is that there is a small lag each time a node is connected, which pauses the program for a moment and makes the transitions between effects feel unnatural for live coding.


Final Performance:

https://youtu.be/mJOZnfs2GiI

Nodes used:

“Our sense of anticipation grows as we wait for something more, for change, uncertainty, the unpredictable, the resumption of information”


This sentence in the article was the first thing that caught my attention. Spiegel claims that the same block of music becomes boring the more we listen to it. While I agree, and have experienced this myself, especially while preparing for the in-class performance, I still wonder to what extent it applies. When I consider the music I listen to, I find myself playing the same playlist every time, despite knowing every song in it by heart. This made me wonder: why do we get bored of some music faster than others? If what Spiegel says is true, why do we keep coming back to the same songs over and over despite knowing exactly how they progress?


Spiegel mentions that adding noise to a composition can make it more interesting because it decreases predictability. But as mentioned in class, we still need some sort of rhythm or base for the music to sound good. How do we know how much randomness or noise is too much? How do we find the balance between predictability and randomness to create a piece that is always engaging? Is it even possible to create a piece that never gets boring?
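One way to make that balance concrete in Tidal is that the probability arguments turn "how much noise" into a single number you can raise until the pattern stops feeling like itself. A toy sketch of my own, not from the article:

-- a steady base with two "noise" knobs: raise them and the groove dissolves
d1 $ sometimesBy 0.2 (# speed 2) $ degradeBy 0.1 $ s "bd*4 hh*8"
-- at 0.1 / 0.2 it reads as variation; at 0.8 / 0.9 it reads as noise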

“What is the difference between live coding a piece of music and composing it in the sequencer (live coding an animation and drawing one)? In other words, how does live coding affect the way you produce your work, and how does it affect the end result?” 

While showing my friends some of the examples from class, I was asked this question and wondered to myself, “what makes coding it live so special?” After reading the results of this survey, I realized that, for the live coder, what makes live coding unique is the risk that comes with it. It relies heavily on improvisation (despite the practice that goes into a performance): the live coder could have a new idea while performing and decide to try it out, giving a completely unexpected outcome. But does this risk hinder the live coder’s creativity because of the notion that the performance has to be perfect?

The risk factor associated with live coding is also why I don’t think live coding can become fully computer-generated. That would take out the factor of human error, making it appear the same as any other composition, with the only difference being that the audience can view the code. Moreover, it was mentioned that the code written during a performance is a representation of the live coder’s style; if live coding becomes computer-generated, I think it would lose the “style” that makes each performance unique.