This reading discusses how the breakdown of boundaries between different art forms in the 1960s allowed artists to create art with a wider reach. Painters were picking up instruments, and musicians the paintbrush. However, it did not stop at people becoming interdisciplinary: artist-musicians were combining sounds and visuals in a single performance, much like we do in our Live Coding class. The reading uses the word ‘synesthesia’ to describe this phenomenon. Synesthesia refers to experiencing one sense through another, for example, hearing shapes or tasting sound. It really is a beautiful way to create art. Rather than just painting a picture, why not combine visuals with sounds and paint what you see? In a way, I believe that human beings have been a little bit synesthetic since the beginning of time. What I mean is that when we hear music, we associate certain colours with the kinds of sounds we are listening to. For instance, when I listen to an upbeat pop song, I might envision bright colours alongside it, such as pink, yellow, and orange. If I go to a heavy metal concert, I will expect to see darker visuals.
Synesthesia is an important thing to consider when I create my Live Coding projects, such as what visuals best complement the sounds I am creating. The music and the visuals should not be treated as separate things, but as one new art form.
This reading contextualizes the way we see “musical art” today and how it developed, providing the theories and history behind its evolution as a concept along with examples of works and artists in the field. Several of its main ideas felt relatable on a personal level, not in the sense that they apply to me directly, but through my own observations. Since we started this class, whenever we have a performance or look at documented works, I think of house music and of shows and concerts by artists like Martin Garrix. This, I feel, relates to section 5, which looks at the computer as a universal machine that combines clubs with galleries under a “one-person enterprise”(5). The author discusses how the artist-musician/musician-artist label has been influenced by the spread of electronic music such as techno and house. This led to clubs becoming places that combine forms of expression like music and visuals, among other elements. I would say the same applies to concerts as well. Not only does the performers’ music allow people to vibe, move with the beat, and dance together, but performing it live makes it a thousand times more interesting and captivating, engaging multiple senses at the same time. It creates a kind of immersive experience where you can see the visuals reacting to the audio, sometimes even incorporating fire or smoke that also goes with the beat.
I have been to many concerts in my life, but I had the most fun at a Martin Garrix concert where I was close to the stage and could see all of the colors and visuals changing up close with the music. He also incorporated elements we have learned to use in class, such as images or videos within the visuals, which personalizes the performance even further and perhaps incorporates the “filmic” form discussed in the reading. Another topic I found very interesting was how some of the terminology we still use for visuals was borrowed from music. Referring to a visual work, or the act of producing it, as a “composition, symphony, improvisation, or rhythm”(2) is a relatively recent change, one that emphasizes abstraction in art. However, I believe there is more room for interpretation in what each of these words means when applied to visuals than there is in music.
General Overview
This assignment was quite a challenge! Composition is tough, especially in code, where it felt less intuitive to me. The way I tried to tackle this task was by consuming as much music and visual work that inspires me as I could, then trying to duplicate it and play around with the resulting outcomes. For the music, I realized that I’m not a big fan of typical EDM-ish build-ups and find myself gravitating towards more experimental ambient music, so the two main songs inspiring this project were ATM by Billy Lemos and An Encounter by The 1975. As for the visuals, I wanted to steer away from coding shapes from scratch; instead, I wanted to experiment with layering the same video or source image.
Music
As mentioned above, I wanted to experiment with creating ambient, almost nostalgic sounds. Before anything else, I referred to music theory and the piano and picked a scale that I thought would complement the feel I was going for: the C minor scale. I went through a lot of different samples and approaches to building the composition, and almost nothing felt cohesive or like it made sense. The final structure I went with is pretty experimental: a piece in two parts with the sound of children playing in the background. It is meant to be both ominous and nostalgic, two feelings that I attempted to achieve mostly by using room, sz, and orbit.
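As a side note, the natural minor scale I picked can be written down from its semitone pattern. The sketch below is plain JavaScript rather than Tidal, purely to illustrate which pitch classes C minor contains (the note naming here is a simplification that always spells accidentals as flats):

```javascript
// Natural minor scale = semitone steps [0, 2, 3, 5, 7, 8, 10] above the root.
const NOTE_NAMES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"];
const MINOR_STEPS = [0, 2, 3, 5, 7, 8, 10];

// List the pitch classes of the natural minor scale on a given root.
function minorScale(root) {
  const rootIndex = NOTE_NAMES.indexOf(root);
  return MINOR_STEPS.map(step => NOTE_NAMES[(rootIndex + step) % 12]);
}

console.log(minorScale("C").join(" ")); // → C D Eb F G Ab Bb
```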
Visuals
Before starting, I knew I wanted to experiment with red-blue layered visuals with an anaglyph 3D effect. I played around with gifs and different figures, and then felt like a picture of an eye could be both visually appealing and match the vibe of the music. Throughout the whole piece, the eye is consistent as the source.
Difficulties
The biggest difficulties I faced are with compositional structure and audiovisual interaction. For compositional structure, as I mentioned, I struggled with finding an alternative to the build-up structure, as I felt like it did not resonate with the feel that I was trying to achieve. As for audiovisual interaction, I think I just need to practice it and experiment with it more until it becomes more precise and understandable for me.
---- start
d14 $ s "children" # gain 0.5

d13 $ qtrigger 13 $ seqPLoop [
  (0, 4, note "[[ef5'maj] [g5'min] [bf5'maj] ~ ~]" # sound "superfork" # room 0.1 # gain 0.7 # legato 1.5),
  (4, 8, note "[[ef5'maj] [g5'min] [c6'min] ~ ~]" # sound "superfork" # room 0.1 # gain 0.7 # legato 1.5)
  ]

d7 $ ccv "0 50 64 0 0" # ccn "0" # s "midi"
d7 silence

d1 $ s "coins" # gain 0.8

d5 $ slow 2 $ sound "superfork" >| note "[c2'min]? [bf3'min]?" # room 0.1 # gain 0.9 -- bg chord

-- introducing more rhythm + ambient element
hush

d3 $ qtrigger 3 $ seqPLoop [
  (0, 12, fast 2 $ sound "808bd(1,4)" # gain 1.2),
  (4, 12, slow 2 $ sound "~ [future:4(3,5)] ~ ~" # gain 1.5 # room 0.4)
  ]

d11 $ ccv (segment 128 (range 127 0 saw)) # ccn "1" # s "midi"
d9 $ struct "<~ t(3,5) t>" $ ccv (segment 128 (range 127 0 saw)) # ccn "2" # s "midi"

d4 $ fast 2 $ s "ade:3" |> note "<f5_>" # cut 1 # vowel "o" # gain "<1? 0.9 0.2 0.7>" # room 0.4

-- chaos & tension here w/ distort & adding beats
bassDrum = d14 $ fast 2 $ sound "808bd" # gain 1.2
bassDrum

d3 $ slow 2 $ sound "[future:4*3] ~ ~ ~" # gain 1.5 # room 0.4 # distort 0.2
d5 $ struct "<t(3,5) t>" $ ccv (segment 128 (range 127 0 saw)) # ccn "3" # s "midi"
d4 $ fast 2 $ s "ade:3*6" |> note "<f5_>" # cut 1 # vowel "o" # gain "<1? 0.9 0.2 0.7>" # room 0.4

--- hush, instead of a drop for ominous vibe
hush

------ second bit, normal then degrade
d4 $ degradeBy 0.8 $ slow 2 $ s "armora:5" # room 0.4 # sz 0.6 # orbit 1 # gain 0.8
d1 $ s "<coins(1,4)>" # gain 0.95
d5 $ ccv "127 0 0 0 0" # ccn "3" # s "midi"
d4 silence

d10 $ qtrigger 10 $ seqP [
  (0, 8, s "ade:3" |> note "c2" # cut 1 # vowel "a" # orbit 1 # room 0.7),
  (8, 14, s "ade:3" |> note "c3" # cut 1 # vowel "a" # orbit 1 # room 0.7),
  (14, 20, s "ade:3" |> note "c4" # cut 1 # vowel "a" # orbit 1 # room 0.7)
  ]

d1 silence
hush
s2.initImage("https://i.pinimg.com/originals/b1/7b/6e/b17b6e0ba062a3217ecd873634093864.png")
s3.initImage("https://i.pinimg.com/originals/b1/7b/6e/b17b6e0ba062a3217ecd873634093864.png")

// start, one eye layer
src(s2).scale(() => cc[0]*0.5, 4, 8).pixelate(600, 600).scrollX(0.3, 0.01).out(o0)

// red layer added
src(s2).scale(0.02, 4, 8).color(1, 0, 0).scrollX(0.01, 0.1)
  .layer(src(s2).scale(() => cc[0]*0.1, 4, 8).pixelate(600, 600).scrollX(0.2, 0.1))
  .out(o0)

// rotation n scaling for red
src(s2).scale(() => cc[0]*0.1, 4, 8).pixelate(600, 600).scrollX(0.7, 0.1)
  .layer(src(s2).scale(() => cc[1]/6, 4, 8).color(1, 0, 0).scrollX(0, 0.1))
  .rotate(() => cc[2])
  .out(o0)

// modulate noise
src(s2).scale(() => cc[0]*0.01, 4, 8).pixelate(600, 600).scrollX(0.7, 0.1)
  .layer(src(s2).scale(() => cc[2]/10, 4, 8).color(1, 0, 0).modulate(noise(() => cc[1]*6)).scrollX(0, 0.1))
  .out(o0)

// pixelate
src(s2).scale(() => cc[0]*0.01, 4, 8).pixelate(600, 600).scrollX(0.7, 0.1)
  .layer(src(s2).scale(() => cc[2]/10, 4, 8).color(1, 0, 0).modulate(noise(() => cc[1]*2)).pixelate(() => cc[3]*2).scrollX(0, 0.1))
  .out(o0)

// part 2
osc(6).color(1, 0, 0).modulate(src(s2).scale(0.1, 4, 8), 1)
  .blend(osc(6).color(0, 0, 2).modulate(src(s2).scale(0.1, 4, 8).scrollX(0.7), 1).modulate(noise(() => cc[3]+0.3)))
  .out(o0)

// brightness
osc(6).color(1, 0, 0).modulate(src(s2).scale(0.1, 4, 8), 1)
  .blend(osc(6).color(0, 0, 2).modulate(src(s2).scale(0.1, 4, 8).scrollX(0.7), 1).brightness(0.8).modulate(noise(() => cc[3])))
  .out(o0)
Concept:
For this assignment, I tried so many things that were not coming together. I had some very interesting visuals, and I spent a lot of time crafting melodies (I even used the piano in class to try out new patterns), but the two were not coming together. Then, last week, I had a very deep reflection on water: we usually associate the sounds of water with relaxation, but water can be very dangerous and wild depending on the context. To me, water can be very relaxing but also very anxiety-inducing under certain conditions. It was a deeply personal moment of introspection that inspired me to create water-like visuals and then come up with a soundscape to match them.
I was really interested in exploring my relationship with water, and I came up with a story for that. I decided to use lines of the story as the names of my functions in TidalCycles. I personally really enjoyed this idea and thought it was one of the most appealing aspects of my presentation.
Music:
For the music, I wanted to emphasize the calmness of the ocean and then build it up to a very chaotic place, reminiscent of a storm.
The music was the trickiest part of the assignment. I was very locked into the idea of traditional composition, and even though I was able to come up with very nice melodies, I could not come up with beats that matched them well. I spent a lot of time working on sound elements that I ended up not incorporating into my presentation.
After narrowing down a theme for my performance and coming up with the story that I want to tell, I actually threw away my more traditional composition approach and just started playing around with sounds that I found appealing and how to merge them with the rest of my music.
I think music was where I spent the bulk of my time. It was very hard, but I tried to keep my code organized, and that really helped me tailor the kind of sound experience I wanted to create. I also loved the mask function in TidalCycles. I watched a performance by Dan Gorelick in which he uses mask in a very playful way, and I thought it would be a nice way to live code while keeping the outcome synced to the rest of the music. It was very fun to try mask out and come up with different beats.
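For context, Tidal's mask silences the events of a pattern wherever a boolean pattern is false. Stripped of everything Tidal-specific, the idea can be sketched in plain JavaScript with patterns as plain arrays (this only illustrates the concept, not how Tidal actually implements it):

```javascript
// Toy version of Tidal's mask: keep an event where the boolean
// pattern is true, replace it with a rest ("~") otherwise.
// The boolean pattern cycles if it is shorter than the events.
function mask(boolPattern, events) {
  return events.map((event, i) =>
    boolPattern[i % boolPattern.length] ? event : "~");
}

// Four kick hits masked with "t t t f": the last hit becomes a rest.
console.log(mask([true, true, true, false],
  ["clubkick", "clubkick", "clubkick", "clubkick"]));
// → [ 'clubkick', 'clubkick', 'clubkick', '~' ]
```

Because the mask cycles in step with the rest of the music, whatever you carve out of the pattern stays synced, which is what makes it so playable live.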
I am very happy with the organization of my code. Maybe it won’t make much sense to someone else reading it, but I understand very well how I put things together and how I use variables and functions in the code.
-- The story (each line evaluated during the performance):
by_The_Ocean
lived_A_Man
who_Manned
d9 $ a_lighthouse 1 "t t t t"
but_he_didnt_like
the_Ocean
because_it_was_beautiful
but_dangerous
so_when_the
storm_hits
he_has_to
go_up
_the_lighthouse
and_sound
the_alarm
to_warn_people
when_the_storm_stops
the_man_repeats_this
until_the_ocean_is_silent
hush

-- Evaluate before performance

-- to send cc values
do
  d11 $ ccv "20 40 64 127" # ccn "0" # s "midi"
  d12 $ ccv "0 20 64 127" # ccn "2" # s "midi"

but_he_didnt_like = do
  d1 $ slow 2 $ s "wind" <| note "bf a g d" # room 0.5
  d2 $ slow 2 $ s "newnotes" <| note "bf a g d" # room 0.5
  d3 $ slow 2 $ s "gtr" <| note ("bf a g d" + 12) # room 0.5
  d13 $ ccv "2" # ccn "3" # s "midi"

the_Ocean = do
  d1 $ every 4 (+ "c6 as a as") $ slow 2 $ s "wind" <| note "bf a g d" # room 0.5
  d2 $ every 4 (+ "c6 as a as") $ slow 2 $ s "newnotes" <| note "bf a g d" # room 0.5
  d3 $ every 4 (+ "c6 as a as") $ slow 2 $ s "pluck" <| note ("bf a g d" + 12) # room 0.5
  d4 $ every 4 (+ "c6 as a as") $ slow 2 $ s "pluck" <| note "bf a g d" # room 0.5
  d11 $ ccv "20 40 64 127 [0 10] [70 50] [90] [100]" # ccn "0" # s "midi"
  d13 $ ccv "3" # ccn "3" # s "midi"

because_it_was_beautiful = do
  d5 $ slow 2 $ s "em2" <| note "[e5 d5 ~] [c5 b4 ~] [a4 g4 ~] [f4 e4 ~]" # gain 1.2
  d6 $ slow 2 $ s "wind" <| note "[e5 d5 ~] [c5 b4 ~] [a4 g4 ~] [f4 e4 ~]"
  d7 $ slow 2 $ s "newnotes" <| note ("[e5 d5 ~] [c5 b4 ~] [a4 g4 ~] [f4 e4 ~]" + 12) # gain 1.2
  d13 $ ccv "4" # ccn "3" # s "midi"

but_dangerous = do
  d1 $ n (arpg "'major7 [0,4,7,11]") # sound "wind"
  d2 $ n (arpg "'major7 [0,4,7,11]") # sound "newnotes"
  d3 $ n (arpg "'major7 [0,4,7,11]" - 12) # sound "pluck"
  d4 $ n (arpg "'major7 [0,4,7,11]" - 38) # sound "pluck"
  xfade 5 silence
  xfade 6 silence
  xfade 7 silence
  d13 $ ccv "5" # ccn "3" # s "midi"

-- cuerdas
so_when_the = do
  xfade 3 $ slow 2 $ n (off 0.25 (+12) $ off 0.125 (+7) $ "c(3,8) a(3,8,2) f(3,8) e(3,8,4)") # sound "pluck"
  d7 $ s "hh*8"

-- storm is coming
storm_hits = do
  d1 $ jux rev $ s "sine*8" # note (scale "iwato" "0 .. 8" + "f3") # room 0.9 # gain 0.6

he_has_to = do
  d1 $ n (arpg "'major7 [0,4,7,11]") # sound "trump"
  d13 $ ccv "6" # ccn "3" # s "midi"

-- storm
go_up = do
  d1 $ slow 2 $ s "superpiano" <| note "[e5 d5 ~] [c5 b4 ~] [a4 g4 ~] [f4 e4 ~]"
  d2 $ slow 2 $ s "pluck:2" <| note ("[e5 d5 ~] [c5 b4 ~] [a4 g4 ~] [f4 e4 ~]" + 24) # gain 1.2
  d4 $ slow 2 $ s "gtr" <| note ("[e5 d5 ~] [c5 b4 ~] [a4 g4 ~] [f4 e4 ~]" + 24)
  d5 $ slow 2 $ s "em2" <| note "[e5 d5 ~] [c5 b4 ~] [a4 g4 ~] [f4 e4 ~]" # gain 1.2
  d6 $ slow 2 $ s "wind" <| note "[e5 d5 ~] [c5 b4 ~] [a4 g4 ~] [f4 e4 ~]"
  d7 $ slow 2 $ s "newnotes" <| note ("[e5 d5 ~] [c5 b4 ~] [a4 g4 ~] [f4 e4 ~]" + 12) # gain 1.2
  d8 $ slow 2 $ s "notes" <| note ("[e5 d5 ~] [c5 b4 ~] [a4 g4 ~] [f4 e4 ~]" + 12) # gain 1.2
  d13 $ ccv "7" # ccn "3" # s "midi"

_the_lighthouse = do
  d1 $ fast 2 $ s "hh*2 hh*2 hh*2 <hh*6 [hh*2]!3>" # room 0.7 # gain (range 1 1.2 rand)

and_sound = do
  d8 $ a_lighthouse 1 "t t t t"
  d13 $ ccv "8" # ccn "3" # s "midi"

-- High Hat beat
the_alarm = do
  xfade 5 $ alarm
  d13 $ ccv "9" # ccn "3" # s "midi"

-- maybe the alarm
to_warn_people = do
  xfade 9 $ slow 2 $ n (off 0.25 (+12) $ off 0.125 (+7) $ "c(3,8) a(3,8,2) f(3,8) e(3,8,4)") # sound "trump"

-- After buildup
when_the_storm_stops = do
  d1 silence
  d2 silence
  d3 $ s "sax" # note (arp "updown" (scale "major" ("[0,2,4,6]" + "<0 0 9 8>") + "f5"))
  d4 silence
  d5 silence
  d6 silence
  d7 silence
  d8 silence
  d9 silence
  d13 $ ccv "10" # ccn "3" # s "midi"

alarm = stack [
  s "~ trump" # room 0.5,
  fast 2 $ s "gtr*2 gtr*2 gtr*2" # room 0.7 # gain (range 1 1.2 rand)
  ] # speed (slow 4 (range 1 2 saw))

-- Ending
the_man_repeats_this = do
  xfade 1 $ s "em2:2" <| note "c [d e] c"
  d3 silence
  d13 $ ccv "11" # ccn "3" # s "midi"

until_the_ocean_is_silent = do
  xfade 1 silence
  d13 $ ccv "12" # ccn "3" # s "midi"

-- Ambience
by_The_Ocean = do
  xfade 15 $ slow 4 $ s "sheffield" # gain 0.7 -- ambience
  d14 $ slow 8 $ s "pebbles" # gain 0.7 -- very long, maybe pebbles on a beach
  d3 $ slow 8 $ s "birds" <| n (run 4) # legato 1.8
  d13 $ ccv "0" # ccn "3" # s "midi"

lived_A_Man = do
  d3 silence
  d3 $ qtrigger 3 $ slow 8 $ s "em2:2" <| note "[b ~] [a ~] [g ~] [e ~]"

who_Manned = do
  d1 $ s "bd bd cp ~"
  d13 $ ccv "1" # ccn "3" # s "midi"

a_lighthouse speed t = slow speed $ mask t $ sound "clubkick clubkick clubkick [clubkick clubkick clubkick]"
Visuals:
The visuals were very fun to make. Again, a lot of the things I made were not used, but I did notice a pattern in the types of animations I was creating, which were very similar to the way water behaves. I decided to narrow down my visuals to those that reflected water behaviour (both the calm and the storm).
While I was practising, I struggled a lot with triggering things in Tidal and then switching over to Hydra and vice versa. It was very overwhelming because of how much code I had. So, after trying to optimize my triggering skills, I decided instead to automate the visuals by using the update() function and controlling which visual is shown through one of the cc value arrays (channel 3). This took away the pressure of modifying things in both files: I could just focus on the music, and that would change the visuals without me having to touch the js file.
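Stripped of the Hydra drawing calls, that update() pattern is just a lookup from the number Tidal sends on cc channel 3 to a scene function. A minimal sketch of the idea in plain JavaScript (the scene names and return values here are placeholders, not my actual visuals):

```javascript
// Hypothetical scene table: Tidal sends a scene number on cc channel 3,
// and every frame update() runs the function registered for that number.
const scenes = {
  0: () => "ocean",
  1: () => "sparklingWater",
  2: () => "drop",
};

// Pick and run the scene for the current cc state; unknown
// numbers leave the screen untouched.
function pickScene(ccActual) {
  const scene = scenes[ccActual[3]];
  return scene ? scene() : null;
}

console.log(pickScene([0, 0, 0, 1])); // → sparklingWater
```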
To be honest, I really thought I was being selective with my visuals, because I narrowed them down to a theme and, even then, did not use all of my water-related animations. However, I see now that it was not enough, and I should have trimmed the visuals down further. It would have been better to dedicate more time to how the cc[] values could transform a visual into something completely different.
update = () => {
  if (ccActual[3] == 0) ocean()          // by the ocean
  if (ccActual[3] == 1) sparklingWater() // who Manned
  if (ccActual[3] == 2) drop()           // he didn't like
  if (ccActual[3] == 3) honda()          // ocean
  if (ccActual[3] == 4) waterwheel()     // beautiful
  if (ccActual[3] == 5) psychoWave()     // dangerous
  if (ccActual[3] == 6) wave()           // he has to
  if (ccActual[3] == 7) splash()         // go up
  if (ccActual[3] == 8) storm()          // sound
  if (ccActual[3] == 9) hurricane()      // the alarm
  if (ccActual[3] == 10) dripple()       // sax
  if (ccActual[3] == 11) ocean()         // repeats
  if (ccActual[3] == 12) hush()          // silent
}

// THINGS THAT WORKED

// wavey water
// b = 0
// update = () => b += 0.001 * Math.sin(time)
wave = () => voronoi(() => ccActual[0], 0, 0).color(0.2, 0.3, 0.9)
  .modulateScrollY(osc(10), 0.5, 0)
  .out()

dripple()

// storm
storm = () => shape(2).mask(noise(10, 0.1)).color(1, 0, 0).invert(1).rotate(() => 6 + Math.sin(time)*cc[0]).out()

drop = () => shape(2).mask(noise(cc[0], 0.1)).color(1, 0, 0).invert(1).rotate(() => 6 + Math.sin(time)*cc[0]).out()

hurricane = () => voronoi()
  .add(gradient().invert(1))
  .rotate(({time}) => (time % 360) / 2)
  .modulate(osc(25, 0.1, 0.5)
    .kaleid(50)
    .scale(({time}) => Math.sin(time*cc[0])*0.5 + 1)
    .modulate(noise(0.6, 0.5)), 0.5)
  .out()

splash = () => gradient().color(1, 0, 1).invert(1).saturate(5)
  .add(voronoi(50, 0.3, 0.3)).repeat(10, 10).kaleid([3, 5, 7, 9].fast(cc[0]))
  .modulateScale(osc(4, -0.5, 0), 15, 0).out()

dripple = () => gradient().invert(1).kaleid([3, 5, 7, 9].fast(0.5))
  .modulateScale(osc(cc[2], -0.5, 1).kaleid(50).scale(0.5), 10, 0)
  .out()

drippleActual = () => gradient().invert(1).kaleid([3, 5, 7, 9].fast(0.5))
  .modulateScale(osc(() => ccActual[2], -0.5, 1).kaleid(50).scale(ccActual[0]), 10, 0)
  .out()

// wavy pattern
psychoWave = () => shape(4, 0.95)
  .mult(osc(0.75, 1, 0.65))
  .hue(0.9)
  .modulateRepeatX(osc(10, 5.0, ({time}) => Math.sin(time*cc[0])))
  .scale(10, 0.5, 0.05) // first value could be a cc
  .out()

// honda
honda = () => osc(10, -0.25, 1)
  .color(0, 0.1, 0.9)
  .saturate(8).hue(0.9)
  .kaleid(50)
  .mask(noise(25, 2).modulateScale(noise(0.25, 0.05)))
  .modulateScale(osc(6, -0.5, () => cc[2]).kaleid(50))
  .scale(() => cc[2], 0.5, 0.75)
  .out()

// background of house
waterwheel = () => noise(10, 0.1).color(1, 0.5, 0.5).invert([1]).add(shape(2, 0.8).kaleid(() => 6 + Math.sin(time)*4)).out()

sparklingWater = () => noise(25, 2).invert(0).color(0.7, 0.9, 1).saturate(4).modulateScale(noise(() => cc[2], 0.05)).out()

ocean = () => osc(1, 1, 0).color(0, 0, 1).hue(0.9).modulatePixelate(noise(5, 1), () => cc[0]).out()
Performance:
Link to Performance
My computer slows down a lot when I try to record the audio from SuperCollider, so I decided to just screen record.
Reflection:
I feel like this assignment made me really grasp how things work in TidalCycles. I spent a lot of time going through the documentation and trying things out. Hydra was more about experimenting, but I liked the idea of using update() as a way to automate certain things so I would not have to worry about triggering things in both Tidal and Hydra. My understanding of both was really reinforced and expanded.
However, I do think I could have been more selective with the visuals and should perhaps have focused more on transitions. Because I tried out so many things that I ended up not using, I assumed that as long as the visuals were coherent with each other in theme and palette, my performance would be good. I was a bit blinded by how much I had done versus how much I should have been doing for the presentation.
I really liked some of the visuals I created and I would have liked to maybe do more in the interaction of the visual with the cc values, not just triggering things and scaling but also transforming the visual. I also thought that automating certain things was something smart that I was doing to help myself but I think it ended up downgrading the overall experience of the performance.
I practised a lot for this assignment, which shaped how I judged the performance against my previous versions of it. Now I see that just because the performance was better than my earlier practice runs does not mean it was good overall. There was a lot I was lacking because I was worried about the wrong things. Looking back on the performance more clearly, I can definitely see that my concept was good, but the execution was not as well planned as I had thought, and the result was not what I expected.
I think I should have put more of myself out there in the performance; although the topic was very much mine, the execution could have been better. I really like the idea of combining my own artwork with Hydra for the visuals of my performances, and I really want to explore that in this second half of the semester.
I was inspired by the music box gif on the Giphy website. I wanted the main melody to be To Alice. The composition has three main components. The first is the verse, which in context plays when the music box is wound up and turned on, producing beautiful tunes. The verse is then followed by distortion, which I introduced by varying the speed of the melody so that it sounds like the music is being rewound. To go with the music, I found a gif on Giphy to show the audience the music box being played.
Conveniently, the gif is already a little creepy, which facilitates my chorus, where I portray the increasingly broken music box. Composition-wise, I added a lower-end melody using different samples from the original ones, as well as slightly changing the beats to give the piece more variety. As for the visuals, I introduced noise into the gif to simulate chaos. In the second half of the chorus, I use a heavy electric guitar sound to push the music to a peak; at the same time, the visuals become even more chaotic, and by using feedback I was able to convey a feeling of fear.
Next is the bridge. To contrast with the chorus, I slowed the music down and added some bass. In the visuals, the chaos is gone; instead, there is a slow, smooth distortion I created using the blend function.
I think the performance could be improved by a more cohesive use of visuals, sounds, and MIDI. This would help me tell the story, since the audience would not be distracted by incoherent movement between different parts of the music or visuals. I have pasted the YouTube video here:
I was playing around with variations of “notes” from the TidalCycles archive and came across this line, which sounded oddly like my phone when it is buzzing intensely at me.
d5 $ slow 4 $ s "notes:2" >| note (scale "major" ("<0>(2,2)") + "f2") # room 2.3 # krush 1.7 # gain (range 1.0 1.4 rand)
Hearing the buzz repeat itself, over and over again, gave me flashbacks to all the times I got notifications when I wasn’t supposed to – during a deep nap, in class trying to focus, on a digital detox, and so on. Phone notifications can be inviting at times – I love getting texts from a best friend, family, or a loved one – but they can also be very irritating. I was inspired to create a composition expressing this very specific type of irritation we sometimes feel towards our digital best friend.
I used a mix of notes:1 and notes:2 for the main melodies. The build-up was achieved by speeding up the bass drums, snare drums, the bass (auto:5, auto:7), and the three melodies. For the visuals, I tried to find a cartoon character that resembled how I look when I am sleep deprived. I came across a clip I found really funny, from the scene in Spongebob where Squidward is very upset by the endless incoming calls from Spongebob and Patrick. Squidward has just finished getting ready for bed and is about to drift off to dreamland; the last thing he wants to hear is the phone buzzing. This scene matched my inspiration perfectly. I wanted to sync the build-up of the music to the anger I assume was building within Squidward. The story was the main focus of the composition, so both the visual and the sound code were structured around the storyline, with comments left in Atom for the viewers to follow during the performance.
I hope you enjoy 😴
Tidal:
-- START
do
  d2 $ fast 2 $ s "notes:1" >| note (arp "up" (scale "major" ("[2,3,7,6]" + "<2 3 7 3>") + "f4")) # room 0.4 # gain (range 0.7 1.0 rand)
  d1 $ fast 2 $ s "notes:1" >| note (scale "major" ("[0,<5 <4 3>>](<1 2>,4)") + "f3") # room 0.7 # gain 1.5 # krush 1.2
  d3 $ s "bd bd <sd> bd" # room 0.1
  d14 $ fast 2 $ ccv "<127 50 127 50>" # ccn "1" # s "midi"

-- CALL INCOMING!!!!!
do
  d5 $ slow 4 $ s "notes:2" >| note (scale "major" ("<0>(2,2)") + "f2") # room 2.3 # krush 1.7 # gain (range 1.0 1.4 rand)
  d14 $ slow 2 $ ccv "<127 50 127 50>" # ccn "1" # s "midi"

d5 silence

-- Second melody
d4 $ fast 2 $ s "notes:2" >| note (scale "major" ("<6 <2 4 8 4 2>>(1,3)")) # room 1.2 # gain 1.1

hush

-- build up of anger
do
  d7 $ slow 2 $ s "[auto:5, auto:7]" # gain (1.1) # size (0.9) # room (0.8)
  d8 $ s "[~ sd, hh*2 hh*2 hh*2 <hh*6 [hh*2]!3>]" # room 1 # gain (range 0.8 1.0 rand)
  d9 $ s "sd*4" # krush 2 # room 0.4
  d14 $ fast 16 $ ccv "<127 50 127 50>" # ccn "1" # s "midi"

hush

-- ANGER BUILDING
d10 $ qtrigger 10 $ seqP [
  (0, 1, s "[sd*4]"),
  (1, 2, s "[bd*8, sd*8]"),
  (2, 3, s "[bd*16, sd*16, auto:1*16]"),
  (3, 4, s "[bd*32, sd*32, auto:1*32]"),
  (4, 5, s "sine*8" # up "g ~ e a [g ~] [~c] ~ ~")
  ]

-- REAL ANGRY
do
  d14 $ whenmod 24 16 (# ccn ("0*128" + "<t(7,16) t t(7,16) t>")) $ ccn "0*128" # ccv (slow 2 (range 0 127 saw)) # s "midi"
  d5 $ slow 2 $ s "notes:2" >| note (scale "major" ("<0>(2,2)") + "f2") # room 1.8 # krush 1.2 # gain 1
  d6 $ s "sine*8" # up "g? ~ e a [g ~] [~c] ~ ~" # room (1) # gain (0.9)
  d7 $ slow 2 $ s "[auto:5, auto:7]" # gain (1) # size (0.9) # room (0.8) # cutoff 5000
  d8 $ s "[~ cp, hh*2 hh*2 hh*2 <hh*6 [hh*2]!3>]" # room 1.1 -- fast

hush

-- OMG
do
  d14 $ fast 16 $ whenmod 24 16 (# ccn ("0*128" + "<t(7,16) t t(7,16) t>")) $ ccn "0*128" # ccv (slow 2 (range 0 127 saw)) # s "midi"
  d5 $ slow 1 $ s "notes:2" >| note (scale "major" ("<0>(2,2)") + "f2") # room 1.8 # krush 1.2 # gain 1
  d6 $ fast 2 $ s "sine*8" # up "g? ~ e a [g ~] [~c] ~ ~" # room (1) # gain (0.7)
  d7 $ slow 1 $ s "[auto:5, auto:7]" # gain (1) # size (0.9) # room (0.8) # cutoff 5000
  d8 $ fast 2 $ s "[~ cp, hh*2 hh*2 hh*2 <hh*6 [hh*2]!3>]" # room 1

do
  d14 $ fast 4 $ ccv "<127 50 127 50>" # ccn "1" # s "midi"
  d9 silence
  d8 silence
  d7 silence
  d6 silence
  d4 silence
  d3 silence
  d2 silence
  d1 silence
  d5 silence

-- PEACEFUL TIMES
do
  d14 $ fast 1 $ ccv "<127 50 127 50>" # ccn "1" # s "midi"
  d13 $ fast 1 $ s "notes:2" >| note (scale "major" ("a3 g3 e3 <b3 c3 e3 f3>")) # room 1.2 # gain 1.1

hush
Hydra:
// video control
vid = document.createElement('video')
vid.autoplay = true
vid.loop = true
vid.volume = 0

// BE SURE TO CHANGE THE FOLDER NAME
// BE SURE TO PUT A SLASH AFTER THE FOLDER NAME TOO
basePath = "/Users/yejikwon/Desktop/Fall\ 2022/Live\ Coding/class_examples/composition/video/"
videos = [basePath+"1.mp4", basePath+"2.mp4", basePath+"3.mp4", basePath+"4.mp4", basePath+"5.mp4"]

// choose video source from array
// SLEEP TIME
vid.src = videos[0]
// CALL
vid.src = videos[1]
// HELLO?!
vid.src = videos[2]
// HELLO?!!!!
vid.src = videos[3]
// THE END
vid.src = videos[4]

// use video within hydra
s1.init({src: vid})

// cc[1] will be zero always during the "A" section
// thus making the scale value not change
src(s1).scale(() => -1*cc[0]*cc[1] + 1).out()
render(o0)

let whichVideo = -1;
update = () => {
  // only change the video when the number from tidal changes,
  // otherwise the video will keep triggering from the beginning
  // and look like it's not playing
  // cc[2] is for changing the video
  if (cc[2] != whichVideo) {
    vid.src = videos[ccActual[2]]
    whichVideo = cc[2];
  }
}
Visuals:
I really liked the mask code examples and how they interact with basic geometric shapes, like encasing chaos in one small, orderly container. So I chose to use triangles with very chaotic fillings (noise, Voronoi, a screen recording). The base shapes of the containers (triangles!) kept discrete sounds readable, while the more fluid cc values added complexity by affecting the noise and the other chaotic textures inside them.
Edit: After the first day of performances, I changed a lot of my visuals to be more minimalistic and readable in the ways that they interact with the music. I also had to dump many of my previous visuals without looking back, simply because they did not fit the mood of the musical score.
Music:
I think this was the hardest part for me. I have never learned to play an instrument or done anything music-related, so I felt very behind on everything related to TidalCycles. To make up for that, I have been going to the piano room often and trying to replicate the sounds I make on the keyboard in TidalCycles. It has been fun and very helpful because it gave me an intuition for what I wanted to go for. The idea was to make a creepy sound, something that inspires anxiety, and contrast it with sounds that bring a soft, almost hopeful joy. Eiden had a very similar project in that respect, and her performance inspired many of the changes I made to mine.
A lot of the time, I had to borrow other people's ears for the music: asking what was wrong with the parts that bothered me and learning to give those problems names (that is how I learned about dissonant notes). I also listened to some soundtracks that reflected my feelings and what I wanted this piece to become.