Process

As someone who is very indecisive, I often find it difficult to find a starting point, especially when there are no limitations or specific guidelines, but I had to start somewhere. I looked over my previous assignments, pasted in some Hydra code, and started experimenting. I started with the visuals first, as I wanted to get an idea of how fast or slow I wanted my audio to be.

Visuals

For the visuals in this project, I decided to take a sleeker and simpler approach relative to my previous assignments. I noticed that in all of my previous assignments I would create visuals that take up the whole screen with colors all over the place, so I decided to make something different and focus on the use of shapes. I incorporated my personality into the theme by choosing a color palette consisting of my favorite colors, pink and blue, with some transitional colors and shades. I made at least 10 different mini-performances with the visuals, left them, and started working on the audio.

Audio

Regarding the audio, I also wanted to step out of my comfort zone and work with my own samples rather than the ones offered by Tidal. The longer audio files were more difficult to incorporate, as I had to figure out how many cycles to loop them over and add effects to make them sound like the original sample file. I got Aaron's help with one of them and applied the same logic to the rest. As usual, the audio was harder for me to put together; I feel that I don't have a musical ear, so it takes me some time to find sounds that go together, and I asked a few of my friends whether they sounded good. Similar to the visuals, I put together maybe seven mini-performances until I was satisfied with the results. I experimented with different scales for structure; however, since I ended up choosing performances that included longer audio files, I felt the scales didn't go along with them.

Composition

Next, I had to put the visuals and audio together, so I went through them all and deleted the performances that I either didn't like or felt didn't go with the rest. By mixing and matching different visuals with different audio, I reached two of each that I really liked and decided to combine. I separated the performance into two phases: fading in and out of the first one, merging into the second one, and lastly fading out with the second one. I connected the visuals and audio through MIDI, testing out several output patterns, and organized everything into a thought-out order of events to put the composition together.
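For reference, the MIDI link between Tidal and Hydra follows the usual pattern of sending control-change messages from Tidal, which Hydra then reads as cc values. A minimal sketch, where the device name "midi" and the controller number are assumptions rather than my exact setup:

```haskell
-- send controller 0 values from Tidal
-- (assumes a SuperDirt MIDI output named "midi" has been configured)
d8 $ ccv "0 32 64 127" # ccn "0" # s "midi"
-- on the Hydra side, a WebMIDI listener exposes these as cc[0],
-- typically normalized to the 0–1 range used in the sketches below
```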

Reflection and Outcome

Out of all of the assignments we have done since the beginning of the semester, I would say I enjoyed the composition project the most. It gave me the opportunity to explore and experiment, but with the goal of producing a well-put-together composition that takes the sounds as well as the visuals into account. Ironically, screen recording was a challenge for me, as every time I record, something goes wrong: the laptop lags, the audio doesn't play, I mess up, the audio doesn't record, etc. I had to record over 20 times until I finally had a proper recording, as can be seen below:

Initially I was just exploring different visual and audio possibilities, until I had a dream about swimming. My later intention, therefore, was to create a dreamlike sense of sinking into the water and discovering and experiencing a different fantasy. I used an ABCB structure, though the C part is mainly a variation of existing material.

What I wish I had done better is the interaction between audio and visuals. For example, there aren't many visual changes going on during the buildup to increase the intensity. But overall I'm happy with what I have done. 😀

Update: I intended to create a strong visual contrast from B&W to a colorful environment with the help of the audio. However, it turned out a bit abrupt for some viewers, and I can see their point. So I think it's good to show things to people and get feedback, which can differ, but which provides insights that we as creators couldn't sense otherwise.

My Code:

Hydra

osc(10,0.1,0).modulate(noise(()=>cc[0]*1,0.04),0.5).thresh().out()
// ()=>cc[0]*1,0.04
// thresh

shape(3).scale(0.4).scale(()=>cc[0]*4).out(o0)

shape(3).scale(()=>cc[0]*1.2).repeat(6,4).out(o0)

//tin
shape(3).scale(()=>cc[0]*1).rotate().repeat(5).scrollX(()=>(-cc[2])*10).scrollY(()=>(-cc[2])*10).out(o0)

shape(4).scale(()=>cc[0]*1).rotate().repeat(5).scrollX(()=>cc[2]*10).scrollY(()=>cc[2]*10).out()

hush()

s1 = osc(10,0.1,0.3).hue(()=>cc[0]*0.3).modulate(noise(()=>cc[0],0.04),0.5).colorama(0.5)
s2 = osc(10,0.1,1).hue(()=>cc[6]*2).modulate(noise(()=>cc[6],0.04),0.5).colorama(1)

//drop
s2.out()

shape(3).scale(0.5).scale(()=>cc[0]*4).repeat(6,4).out(o0)
s1.out(o1)
src(o2).modulate(src(o1).add(solid(1,1),-0.5),0.001).blend(o0,0.1).out(o2)
solid().layer(src(o2).mult(o1),0.5).out(o3)
render(o3)

//ending
tri = shape(3).kaleid(3).scale(0.6)
tri.scale(()=>cc[5]).rotate(()=>(cc[5])).repeat(5).out(o0)
osc(10,0.1,1).hue(.8).modulate(noise(2,0.04),0.5).colorama(2).out(o1)
src(o2).modulate(src(o1).add(solid(1,1),-0.5),0.001).blend(o0,0.1).out(o2)
solid(0.2,()=>cc[1]*0.2,()=>cc[1]*0.2).layer(src(o2).mult(o1).luma(0.01),0.1).out(o3)
render(o3)

hush()

TidalCycles:


For me, this project is by far the most challenging one, and again, the one I prepared for most carefully. I must say that I put more effort into making the music than the visual, and I am relatively more satisfied with the music part as well.

For the music part, I used the Lupin III theme as the main motive. I would describe my musical pattern as:

Intro

Main part I (with motive)

Main part II (with buildup + motive)

Bridge (saxophone solo)

End (with buildup + motive)

For the structure of the music, I was inspired by “Theme from Lupin III 2021”.

I would also like to share some details of my music part. First, I used the classic bossa nova drum pattern in the intro and main part I, which contains an eight-beat hihat, a bass drum, and a side drum. Secondly, I used the eight-beat hihat as an instrument throughout the song to serve as a link between the different parts (I think it works quite well). Thirdly, I deliberately increased the contrast between the parts: in some (namely, main part II), I used multiple fast-paced percussion lines and heavy metal guitars to create a passionate atmosphere, while I used only the piano for the transition into the ending. I hope this sense of contrast conveys the ups and downs of Lupin's adventure story.
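As a rough illustration of the layering described above (a sketch only; the sample names and exact rhythms are assumptions, not the code from the piece), the bossa-nova-style pattern could look like this in Tidal:

```haskell
-- a sketch of the described drum layering, not the exact code from the piece
d1 $ stack [
  s "hh*8" # gain 0.9,       -- the eight-beat hihat linking the parts
  s "bd ~ ~ bd ~ ~ bd ~",    -- bass ("bottom") drum
  s "~ ~ rim ~ ~ ~ rim ~"    -- side drum (rim click)
]
```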

For my visual part, honestly speaking, I didn't study the transitions between the images that carefully. The result is that while I did make some interesting graphics, there are times when “too many things are happening at the same time”. So if I had more time, this is the part I would want to improve.

Here is the link to my code: https://github.com/AvatarLouisLi/Live-Coding/tree/main/Composition%20Project

Here is my work:

For this project, I found it really overwhelming trying to juggle the visual and audio aspects. To simplify things for myself, I decided to divide the two and start by thinking of them as separate entities. I started with the visuals first and created a few different, completely unrelated designs, just to get things going. Then, from the base designs I had, I manipulated each of them and created variations on the same patterns. Next, I set the visuals aside and started working on the audio. For some reason I found this step more complex, and it took me much longer than the visuals. I think a big part of it is that audio involves a lot of layering, with many different components and elements that need to be put together to create a good sequence. Regardless, I was eventually able to get the audio track done by breaking it into sections and slowly building up to what I wanted it to be. Finally came the part of syncing the audio and visuals. I struggled to connect the pieces together; I felt the visuals were lacking in comparison to the audio, but after some tweaking and some help from the Professor, I was able to get on track with what I wanted to do. All of this then led to the final result that you see below! I hope you enjoy…


The audio may be slightly out of sync with the visuals, so I apologize! I could not get the editing quite right…


Hi everyone!

This is the documentation for my Composition Performance (linked is a recorded draft — the actual piece to be performed with minor improvements in class tomorrow!)

Intro

The theme of my performance is visualizing and sound-narrating a gameplay experience. My intention was to depict the energy, intensity, and old-fashioned vibe of Mario-like 2D games. This is done through heavy use of pixelated graphics, noise, and vibrant colors in Hydra, and through electronic sounds/chords (mostly from the ‘arpy’ library) as well as some self-recorded samples in TidalCycles.

Audio

The overall intention behind the audio is to take the audience from the start of the game, through some progression as score points are earned, culminating in reaching level 2, and lastly to “losing” the game as the piece ends.

The audio elements that I recorded are the ones saying “level two”, “danger!”, “game over!”, “score”, and “welcome to the game!”:

-- level 2
d1 $ fast 0.5 $ s "amina:2" # room 1
-- danger
d1 $ fast 0.5 $ s "amina:0" # room 1
-- game over
d1 $ fast 0.5 $ s "amina:1" # room 1
-- score
d1 $ fast 0.5 $ s "amina:3" # room 1
-- welcome to the game
d1 $ fast 0.5 $ s "amina:4" # room 1


Learning from class examples of musical composition with chords, rhythms, and beat drops, I experimented with different notes, musical elements, and distortions. My piece starts off with a distorted welcome message that I recorded and edited:

d1 $ every 2 ("<0.25 0.125 0.5>" <~) $ s "amina:4"
  # squiz "<1 2.5 2>"
  # room (slow 4 $ range 0 0.2 saw)
  # gain 1.3
  # sz 0.5
  # orbit 1.9

Building upon various game-like sounds, I create different compositions throughout the performance. For example:

-- some sort of electronic noisy beginning
-- i want to add "welcome to the game"
d1 $ fast 0.5 $ s "superhoover" >| note (arp "updown" (scale "minor" ("<5,2,4,6>"+"[0 0 2 5]") + "f4")) # room 0.3 # gain 0.8 -- change to 0.8

-- narration of a game without a game
-- builds up slightly
d3 $ stack [
  s "sine" >| note (scale "major" ("[<2,3>,0,2](3,8)") + "g5") # room 0.4,
  fast 2 $ s "hh*2 hh*2 hh*2 <hh*6 [hh*2]!3>" # room 0.4 # gain (range 1 1.2 rand)
]

Visuals

My visuals rely on the MIDI output generated in TidalCycles and build upon each other. What starts as black-and-white modulated and pixelated Voronoi noise gains color and grows into playful animations throughout.

For example:

voronoi(10,1,5).brightness(()=>Math.random()*0.15)
  .modulatePixelate(noise(()=>cc[1]+20,()=>cc[0]),100)
  .color(()=>cc[0],2.5,3.4).contrast(1.4)
  .out(o0)

// colored
voronoi(10,1,5).brightness(()=>Math.random()*0.15)
  .modulatePixelate(noise(25,()=>cc[0]),100)
  .color(()=>cc[0],2.5,3.4).contrast(1.4)
  .out(o0)

voronoi(10,1,5).brightness(()=>Math.random()*0.15)
  .modulatePixelate(noise(()=>cc[1]+20,()=>cc[0]),100)
  .color(()=>cc[0],()=>cc[1]-1.5,3.4).contrast(0.4)
  .out(o0)

// when adding score
voronoi(10,1,5).brightness(()=>Math.random()*0.15)
  .modulatePixelate(noise(()=>cc[2]*20,()=>cc[0]),100)
  .color(()=>cc[0],5,3.4).contrast(1.4)
  .add(shape(7),[cc[0],cc[1]*0.25,0.5,0.75,1])
  .out(o0)

// when dropping the beat
voronoi(10,1,5).brightness(()=>Math.random()*0.15)
  .modulatePixelate(noise(cc[0]+cc[1],0.5),100)
  .color(()=>cc[0],0.5,0.4).contrast(1.4)
  .out(o0)
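The cc values used in the sketches above come from MIDI control-change messages, which the usual Hydra MIDI setup scripts normalize from the 0–127 byte range down to the 0–1 range Hydra reads. A minimal sketch of that listener logic (names and structure are my own, not the exact script used here):

```javascript
// hypothetical sketch of the normalization done by common Hydra MIDI setup scripts
const cc = Array(16).fill(0);

function onMidiMessage(data) {
  const [status, num, val] = data;   // [status byte, controller number, value]
  if ((status & 0xf0) === 0xb0) {    // 0xb0–0xbf: control change on any channel
    cc[num] = val / 127;             // normalize 0–127 to the 0–1 range Hydra reads
  }
}

onMidiMessage([0xb0, 0, 127]);
console.log(cc[0]); // 1
```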


Challenges

When curating the whole experience, I first started by creating the visuals. When I found something that I liked (distorted Voronoi noise), my intention was to add pixel-like sound, and the idea of games came up. This is how I decided to record audio and connect it to the visuals. What I did not anticipate is how challenging it would be to link transitions on both sides and time them appropriately.

Final reflections and future improvements

I put a lot of effort into making the sounds cohesive and creating the story that I wanted to tell with the composition. For the next assignment, I think I will try to make the transitions smoother and make the visuals more powerful in the overall storytelling.

Most importantly, I think I am finally starting to get the hang of live coding. This is my first fully “intentional” performance — although many elements will be done with improv when performing live, this project feels less chaotic in terms of execution and the thought process put behind the experience.

The full code can be found here.

Heyy guys! Some of you asked me how I did the visuals for my previous weekly assignment. Here is the code:

src(o0).modulateHue(src(o0).scale(1.01),1).layer(osc(10,.1,2).mask(osc(200).modulate(noise(3)).thresh(1,0.1))).out()

I used the above code to transition from my previous visual. The one below is connected to MIDI:

osc(10,.1,2).mask(osc(200).modulate(noise(()=>cc[0]*3+1)).thresh(()=>(cc[0]),0.1)).rotate().out()

Altering the thresh values and noise level made interesting patterns!

Also, I wondered what the difference was between layer and mask. While coming up with the above, I discovered that layer and mask can be used interchangeably in the following way:

// the following two lines produce the same output
osc(10,.1,2).layer(osc(200).luma(0.5,0.8).color(0,0,0,1)).out()
osc(10,.1,2).mask(osc(200)).out()

The above 2 lines of code produce the same output. Hope this was helpful! 🙂

I find the role of improvisation in our digital culture, as Jurgenson discusses it, very inspiring. Improvising is a key part of how we create and interact with digital media for several reasons: digital media are often designed to be flexible and adaptable, we often use them in unexpected ways, and improvisation can help us create new meanings and experiences.

This piece resonated with me on a personal level. As someone who does a lot of work in web design, I am constantly faced with the need to improvise, whether it's making small changes to an existing design or completely redesigning a website from scratch; the work requires me to be creative and flexible. And while it can be challenging at times, I find that improvising helps me come up with better solutions than if I were following a set plan.

It also made me think about how important improvisation is in our everyday lives. We might not always think about it, but many of our interactions with others are improvised – whether we’re having a conversation or simply exchanging glances across a room. It allows us to respond spontaneously to what others are saying or doing, which can lead to more meaningful interactions, and sometimes even funny ones.

Another concept that got me thinking is “the digitization of information”. It made me consider how my personal relationship with technology has changed over time as more things become digitized; for example, instead of buying physical CDs or DVDs containing my favorite movies or TV shows, I now just stream them online on Netflix or YouTube. There's definitely something lost in terms of tangibility when things go digital like this, as you no longer have an “original” copy per se, but there are also advantages gained, at least in terms of ease and convenience.