Initially I was just exploring different visual and audio possibilities, until I had a dream about swimming. From that, my intention became to create a dreamlike sense of sinking into the water, discovering and experiencing a different fantasy. I loosely followed an ABCB structure, though the C part is mainly a variation of materials.

What I wish I had done better is the interaction between audio and visuals. For example, I don’t have many visual changes going on during the buildup to increase the intensity. But overall I’m happy with what I have done. 😀

Update: I intended to create a strong visual contrast from black-and-white to a colorful environment with the help of the audio. However, it came across as a bit abrupt to some viewers, and I can see their point. So I think it’s good to show things to people and get feedback; opinions may differ, but they provide insights that we as creators couldn’t sense otherwise.

My Code:

Hydra

osc(10,0.1,0).modulate(noise(()=>cc[0]*1,0.04),0.5).thresh().out()
// the noise scale follows MIDI cc[0]
// thresh() reduces the pattern to black and white

shape(3).scale(0.4).scale(()=>cc[0]*4).out(o0)

shape(3).scale(()=>cc[0]*1.2).repeat(6,4).out(o0)

//tin
shape(3).scale(()=>cc[0]*1).rotate().repeat(5).scrollX(()=>(-cc[2])*10).scrollY(()=>(-cc[2])*10).out(o0)

shape(4).scale(()=>cc[0]*1).rotate().repeat(5).scrollX(()=>cc[2]*10).scrollY(()=>cc[2]*10).out()

hush()

s1 = osc(10,0.1,0.3).hue(()=>cc[0]*0.3).modulate(noise(()=>cc[0],0.04),0.5).colorama(0.5)
s2 = osc(10,0.1,1).hue(()=>cc[6]*2).modulate(noise(()=>cc[6],0.04),0.5).colorama(1)

//drop
s2.out()

shape(3).scale(0.5).scale(()=>cc[0]*4).repeat(6,4).out(o0)
s1.out(o1)
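// feedback loop: o2 re-reads itself, modulated by o1 and lightly blended with o0, then mixed with o1 onto o3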
src(o2).modulate(src(o1).add(solid(1,1),-0.5),0.001).blend(o0,0.1).out(o2)
solid().layer(src(o2).mult(o1),0.5).out(o3)
render(o3)

//ending
tri = shape(3).kaleid(3).scale(0.6)
tri.scale(()=>cc[5]).rotate(()=>(cc[5])).repeat(5).out(o0)
osc(10,0.1,1).hue(.8).modulate(noise(2,0.04),0.5).colorama(2).out(o1)
src(o2).modulate(src(o1).add(solid(1,1),-0.5),0.001).blend(o0,0.1).out(o2)
solid(0.2,()=>cc[1]*0.2,()=>cc[1]*0.2).layer(src(o2).mult(o1).luma(0.01),0.1).out(o3)
render(o3)

hush()

TidalCycles:


For me, this project is by far the most challenging one, and again, the one I prepared for most carefully. I must say that I put more effort into making the music than the visuals, and I am relatively more satisfied with the music part as well.

For the music part, I used the Lupin III theme as the main motif. My musical structure is as follows:

Intro

Main part I (with motif)

Main part II (with buildup + motif)

Bridge (saxophone solo)

Ending (with buildup + motif)

For the structure of the music, I was inspired by “Theme from Lupin III 2021”.

I would also like to share some details of my music: First, I used the classic bossa nova drum pattern in the intro and main part I, which consists of an eighth-note hi-hat, a bass drum, and a side drum. Secondly, I use that eighth-note hi-hat throughout the song as a link between the different parts (I think it works quite well). Thirdly, I deliberately increased the contrast between the parts: in some, I used multiple fast-paced percussion lines and heavy metal guitars (namely, main part II) to create a passionate atmosphere, while I used only the piano for the transition into the ending. I hope this sense of contrast conveys Lupin’s up-and-down adventure story.
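As a rough illustration (a minimal sketch using standard SuperDirt samples, not the actual project code), a bossa-style groove along these lines could be written in TidalCycles like this:

-- bossa-style sketch: steady eighth-note hi-hat, syncopated bass drum,
-- and a clave-like accent (clap used as a stand-in for the side drum)
d1 $ stack [
  s "hh*8" # gain 0.9,
  s "bd ~ ~ bd bd ~ ~ bd",
  s "~ ~ cp ~ ~ cp ~ ~"
]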

For my visual part, honestly speaking, I didn’t study the transitions between the images that carefully. The result is that while I did make some interesting graphics, there are times when “too many things are happening at the same time”. So if I had more time, this is the part I would want to improve.

Here is the link to my code: https://github.com/AvatarLouisLi/Live-Coding/tree/main/Composition%20Project

Here is my work:

For this project, I found it really overwhelming trying to juggle the visual and audio aspects. To simplify things for myself, I decided to divide the two and start by thinking of them as separate entities. I started with the visuals first and created a few different, completely unrelated designs, just to get things going. Then, from the base designs I had, I manipulated each of them and created variations on the same patterns.

Next, I set the visuals aside and started working on the audio. For some reason I found this step more complex, and it took me far longer than the visuals. I think a big part of it has to do with the fact that with audio there is a lot of layering, and many different components and elements need to be put together in order to create a good sequence. Regardless, I was eventually able to get the audio track done by breaking it down into sections and slowly building up to what I wanted it to be.

Finally came the part of syncing the audio and visuals. I struggled with trying to connect the pieces together. I felt like the visuals were lacking in comparison to the audio, but after some tweaking and some help from the Professor, I was able to get on track with what I wanted to do. All of this then led to the final result that you see below! I hope you enjoy…

 

The audio may be slightly out of sync with the visuals, so I apologize! I could not get the editing quite right…

 

Hi everyone!

This is the documentation for my Composition Performance (linked is a recorded draft — the actual piece to be performed with minor improvements in class tomorrow!)

Intro

The theme of my performance is visualizing and sound-narrating a gameplay experience. My intention was to depict the energy, intensity, and old-fashioned vibe of Mario-like 2D games. This is done through heavy use of pixelated graphics, noise, and vibrant colors in Hydra, and through electronic sounds/chords (mostly from the ‘arpy’ sample set) as well as some self-recorded samples in TidalCycles.
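For reference, the simplest kind of arpy line assumed above can be sketched in TidalCycles as follows (an illustrative one-liner, not taken from the piece), where n indexes the pitched samples in the arpy set:

-- minimal sketch: a pitched line from the arpy sample set
d1 $ n "0 2 4 7" # s "arpy" # room 0.3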

Audio

The overall intention behind the audio is to take the audience from the start of the game, through some progression while earning score points, then culminating in reaching level 2, and lastly to “losing” the game and the piece ending.

The audio elements that I recorded are the ones saying “level two”, “danger!”, “game over!”, “score”, and “welcome to the game!”:

-- level 2
d1 $ fast 0.5 $ s "amina:2" # room 1
-- danger
d1 $ fast 0.5 $ s "amina:0" # room 1
-- game over
d1 $ fast 0.5 $ s "amina:1" # room 1
-- score
d1 $ fast 0.5 $ s "amina:3" # room 1
-- welcome to the game
d1 $ fast 0.5 $ s "amina:4" # room 1

 

Learning from class examples of musical composition with chords, rhythms, and beat drops, I experimented with different notes, musical elements, and distortions. My piece starts off with a distorted welcome message that I recorded and edited:

d1 $ every 2 ("<0.25 0.125 0.5>" <~) $ s "amina:4"
  # squiz "<1 2.5 2>"
  # room (slow 4 $ range 0 0.2 saw)
  # gain 1.3
  # sz 0.5
  # orbit 1 -- orbit takes a whole-number bus index

Building upon various game-like sounds, I create different compositions throughout the performance. For example:

-- some sort of electronic noisy beginning
-- i want to add "welcome to the game"
d1 $ fast 0.5 $ s "superhoover" >| note (arp "updown" (scale "minor" ("<5,2,4,6>"+"[0 0 2 5]") + "f4")) # room 0.3 # gain 0.8 -- change to 0.8

-- narration of a game without a game
-- builds up slightly
d3 $ stack [
  s "sine" >| note (scale "major" ("[<2,3>,0,2](3,8)") + "g5") # room 0.4,
  fast 2 $ s "hh*2 hh*2 hh*2 <hh*6 [hh*2]!3>" # room 0.4 # gain (range 1 1.2 rand)
]

Visuals

My visuals rely on the MIDI output generated in TidalCycles and build upon each other. What starts as black-and-white, modulated, pixelated Voronoi noise gains color and grows into playful animations throughout.
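For context, the cc values referenced below are MIDI control-change values made available to the Hydra page. On the TidalCycles side, such values can be emitted with ccv and ccn, roughly as in this minimal sketch (assuming a SuperDirt MIDI target named “midi” has been configured):

-- minimal sketch: send a control-change curve on controller 0,
-- which the Hydra side can then read as cc[0]
d8 $ ccv (segment 16 $ range 0 127 sine) # ccn "0" # s "midi"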

For example:

voronoi(10,1,5).brightness(()=>Math.random()*0.15)
  .modulatePixelate(noise(()=>cc[1]+20,()=>cc[0]),100)
  .color(()=>cc[0],2.5,3.4).contrast(1.4)
  .out(o0)

// colored
voronoi(10,1,5).brightness(()=>Math.random()*0.15)
  .modulatePixelate(noise(25,()=>cc[0]),100)
  .color(()=>cc[0],2.5,3.4).contrast(1.4)
  .out(o0)

voronoi(10,1,5).brightness(()=>Math.random()*0.15)
  .modulatePixelate(noise(()=>cc[1]+20,()=>cc[0]),100)
  .color(()=>cc[0],()=>cc[1]-1.5,3.4).contrast(0.4)
  .out(o0)

// when adding score
voronoi(10,1,5).brightness(()=>Math.random()*0.15)
  .modulatePixelate(noise(()=>cc[2]*20,()=>cc[0]),100)
  .color(()=>cc[0],5,3.4).contrast(1.4)
  .add(shape(7),[cc[0],cc[1]*0.25,0.5,0.75,1])
  .out(o0)

// when dropping the beat
voronoi(10,1,5).brightness(()=>Math.random()*0.15)
  .modulatePixelate(noise(cc[0]+cc[1],0.5),100)
  .color(()=>cc[0],0.5,0.4).contrast(1.4)
  .out(o0)

 

Challenges

When curating the whole experience, I started by creating the visuals. When I found something that I liked (distorted Voronoi noise), my intention was to add pixel-like sound, and the idea of games came up. This is how I decided to record audio and connect it to the visuals. What I did not anticipate is how challenging it would be to link the transitions on both sides and time them appropriately.

Final reflections and future improvements

I put a lot of effort into making the sounds cohesive and creating the story that I wanted to tell with the composition. When doing the next assignment in class, I think I will try to make the transitions smoother and make the visuals more powerful in the overall storytelling.

Most importantly, I think I am finally starting to get the hang of live coding. This is my first fully “intentional” performance: although many elements will still be improvised when performing live, this project feels less chaotic in terms of execution and the thought process behind the experience.

The full code can be found here.

Heyy guys! Some of you asked me how I did the visuals for my previous weekly assignment. Here is the code:

src(o0).modulateHue(src(o0).scale(1.01),1).layer(osc(10,.1,2).mask(osc(200).modulate(noise(3)).thresh(1,0.1))).out()

I used the code above to transition from my previous visual. The one below is connected to MIDI:

osc(10,.1,2).mask(osc(200).modulate(noise(()=>cc[0]*3+1)).thresh(()=>(cc[0]),0.1)).rotate().out()

Altering the thresh values and noise level made interesting patterns!

Also, I wondered what the difference was between layer and mask. While coming up with the above, I discovered that layer and mask can be used interchangeably in the following way:

// the following two lines produce the same output
osc(10,.1,2).layer(osc(200).luma(0.5,0.8).color(0,0,0,1)).out()
osc(10,.1,2).mask(osc(200)).out()

The above two lines of code produce the same output. Hope this was helpful! 🙂

I find the role of improvisation in our digital culture, as Jurgenson discusses it, very inspiring. Improvising is a key part of how we create and interact with digital media for several reasons: digital media are often designed to be flexible and adaptable, we often use them in unexpected ways, and improvisation can help us create new meanings and experiences.

This piece resonated with me on a personal level. As someone who does a lot of work in the field of web design, I am constantly faced with the need to improvise. Whether it’s making small changes to an existing design or completely redesigning a website from scratch, the work requires me to be creative and flexible. And while it can be challenging at times, I find that improvising helps me come up with better solutions than if I were following a set plan.

It also made me think about how important improvisation is in our everyday lives. We might not always think about it, but many of our interactions with others are improvised – whether we’re having a conversation or simply exchanging glances across a room. It allows us to respond spontaneously to what others are saying or doing, which can lead to more meaningful interactions, and sometimes even funny ones.

Another concept that got me thinking is “the digitization of information”. It made me consider how my personal relationship with technology has changed over time as more things become digitized; for example, instead of buying physical CDs or DVDs containing my favorite movies or TV shows, now I just stream them online on Netflix or YouTube. There’s definitely something lost in terms of tangibility when things go digital like this – you no longer have an “original” copy per se – but there are also advantages gained, at least in terms of ease and convenience.

The conversation suggests that improvisation treats music as information. But then, what is not information?

It seems the conversation never gave a clear definition of improvisation, music, or the digital community. As I understand it, it broadens the concept of improvisation to human experience and to our reaction to digital developments. It actually makes more sense to me if we use the word “reaction” to describe this ubiquitous improvisation: we live in and react to a massive world of information, and therefore “[improvisation is] about navigating an informational landscape.”

Therefore, as I understand it, music, as a kind of information and a product of human activity, reacts to the digital environment and society. That makes sense, because when we are live coding, we travel between countless possibilities and land on things we like. I think visual creation stands in a similar position as well.

I also wonder how we can feel the digital community in everyday life. And besides people, are the tools and media used in human actions also part of the digital community? As they said, the digital community is about a sense of networks. Apart from live performances, we as a class form a community, and the forums and reference documentation for TidalCycles and Hydra can be communities too. These community exchanges are large in quantity. In live sessions, whether people know each other or not, the exchange of information may have more intensity because multiple senses are hyper-engaged. I agree that the live moment brings people back to the recognition that music is human action, and that it is human action in the digital world.