For this project, I found it really overwhelming trying to juggle the visual and audio aspects at the same time. To make things easier on myself, I decided to divide the two and start by treating them as separate entities. I began with the visuals and created a few different, completely unrelated designs, just to get things going. Then, from the base design I had, I manipulated each of them and created variations on the same patterns.

Next, I set the visuals aside and started working on the audio. For some reason I found this step more complex, and it took me much longer than the visuals. I think a big part of it is that audio involves a lot of layering: many different components and elements need to be put together to create a good sequence. Regardless, I was eventually able to get the audio track done by breaking it down into sections and slowly building up to what I wanted it to be.

Finally came the part of syncing the audio and visuals. I struggled with connecting the pieces; I felt the visuals were lacking in comparison to the audio I had, but after some tweaking and some help from the Professor, I was able to get on track with what I wanted to do. All of this then led to the final result that you see below! I hope you enjoy…

The audio may be slightly out of sync with the visuals, so I apologize! I could not get the editing quite right…


Hi everyone!

This is the documentation for my Composition Performance (linked is a recorded draft — the actual piece to be performed with minor improvements in class tomorrow!)

Intro

The theme of my performance is visualizing and sound-narrating a gameplay experience. My intention was to depict the energy, intensity, and old-school vibe of Mario-like 2D games. This is done through heavy use of pixelated graphics, noise, and vibrant colors in Hydra, and through electronic sounds/chords (mostly from the ‘arpy’ sample bank) as well as some self-recorded samples in TidalCycles.

Audio

The overall intention behind the audio is to take the audience from the start of the game, through some progression as score points are earned, culminating in reaching level 2, and finally to “losing” the game as the piece ends.

The audio elements that I recorded myself are the ones saying “level two”, “danger!”, “game over!”, “score”, and “welcome to the game!” (they play from a custom “amina” sample folder that I loaded into SuperDirt):

-- level 2
d1 $ fast 0.5 $ s "amina:2" # room 1
-- danger
d1 $ fast 0.5 $ s "amina:0" # room 1
-- game over
d1 $ fast 0.5 $ s "amina:1" # room 1
-- score
d1 $ fast 0.5 $ s "amina:3" # room 1
-- welcome to the game
d1 $ fast 0.5 $ s "amina:4" # room 1


Learning from class examples of musical composition with chords, rhythms, and beat drops, I experimented with different notes, musical elements, and distortions. My piece starts off with a distorted welcome message that I recorded and edited:

d1 $ every 2 ("<0.25 0.125 0.5>" <~) $ s "amina:4" -- shift the pattern by a varying amount every other cycle
  # squiz "<1 2.5 2>"                 -- pitch-mangling distortion
  # room (slow 4 $ range 0 0.2 saw)   -- slowly swelling reverb
  # gain 1.3
  # sz 0.5                            -- reverb room size
  # orbit 1.9                         -- route to a separate effect bus

Building upon various game-like sounds, I create different compositions throughout the performance. For example:

-- some sort of electronic noisy beginning
-- i want to add "welcome to the game"
d1 $ fast 0.5 $ s "superhoover"
  >| note (arp "updown" (scale "minor" ("<5,2,4,6>"+"[0 0 2 5]") + "f4"))
  # room 0.3
  # gain 0.8 -- change to 0.8

-- narration of a game without a game
-- builds up slightly
d3 $ stack [
  s "sine" >| note (scale "major" ("[<2,3>,0,2](3,8)") + "g5") # room 0.4,
  fast 2 $ s "hh*2 hh*2 hh*2 <hh*6 [hh*2]!3>" # room 0.4 # gain (range 1 1.2 rand)
]

Visuals

My visuals rely on the MIDI output generated in TidalCycles and build upon each other. What starts as black-and-white, modulated, pixelated Voronoi noise gains color and grows into playful animations throughout.
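For anyone wondering where the cc values in the snippets below come from: I am not pasting my exact setup here, but a minimal sketch of the usual Hydra-side WebMIDI bridge (adapted from the standard Hydra MIDI example, so the details may differ from my own) looks roughly like this. The global cc array is what my patches read via ()=>cc[n]; on the Tidal side, ccv/ccn patterns sent to a MIDI target drive these values.

// minimal sketch of the WebMIDI bridge (assumption: adapted from the
// standard Hydra MIDI example, not my verbatim setup)
cc = Array(128).fill(0) // normalized controller values, read as ()=>cc[n]

navigator.requestMIDIAccess().then((midiAccess) => {
  midiAccess.inputs.forEach((input) => {
    input.onmidimessage = (msg) => {
      const [status, index, value] = msg.data
      // 0xB0-0xBF are control change messages
      if ((status & 0xf0) === 0xb0) {
        cc[index] = value / 127 // map 0-127 to 0-1
      }
    }
  })
})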

For example:

voronoi(10,1,5).brightness(()=>Math.random()*0.15)
  .modulatePixelate(noise(()=>cc[1]+20,()=>cc[0]),100)
  .color(()=>cc[0],2.5,3.4).contrast(1.4)
  .out(o0)

// colored
voronoi(10,1,5).brightness(()=>Math.random()*0.15)
  .modulatePixelate(noise(25,()=>cc[0]),100)
  .color(()=>cc[0],2.5,3.4).contrast(1.4)
  .out(o0)

voronoi(10,1,5).brightness(()=>Math.random()*0.15)
  .modulatePixelate(noise(()=>cc[1]+20,()=>cc[0]),100)
  .color(()=>cc[0],()=>cc[1]-1.5,3.4).contrast(0.4)
  .out(o0)

// when adding score
voronoi(10,1,5).brightness(()=>Math.random()*0.15)
  .modulatePixelate(noise(()=>cc[2]*20,()=>cc[0]),100)
  .color(()=>cc[0],5,3.4).contrast(1.4)
  .add(shape(7),[cc[0],cc[1]*0.25,0.5,0.75,1])
  .out(o0)

// when dropping the beat
voronoi(10,1,5).brightness(()=>Math.random()*0.15)
  .modulatePixelate(noise(cc[0]+cc[1],0.5),100)
  .color(()=>cc[0],0.5,0.4).contrast(1.4)
  .out(o0)


Challenges

When curating the whole experience, I first started by creating the visuals. When I found something that I liked (distorted Voronoi noise), my intention was to add pixel-like sound, and the idea of games came up. This is how I decided to record audio and connect it to the visuals. What I did not anticipate is how challenging it would be to link the transitions on both sides and time them appropriately.
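For anyone attempting something similar: one generic way to line transitions up (a sketch of the idea, not necessarily what I ended up doing) is to let a MIDI value crossfade between two Hydra buffers, so a Tidal pattern can drive the visual transition directly:

// hypothetical transition helper: cc[3] (an assumed mapping) crossfades from o0 to o1
src(o0).blend(src(o1), ()=>cc[3]).out(o2)
render(o2)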

Final reflections and future improvements

I put a lot of effort into making the sounds cohesive and building the story I wanted to tell with the composition. For the next assignment, I will try to make the transitions smoother and make the visuals more powerful in the overall storytelling.

Most importantly, I think I am finally starting to get the hang of live coding. This is my first fully “intentional” performance: although many elements will be improvised when performing live, this project feels less chaotic in its execution and in the thought process behind the experience.

The full code can be found here.

Heyy guys! Some of you asked me how I did the visuals for my previous weekly assignment. Here is the code:

src(o0).modulateHue(src(o0).scale(1.01),1)
  .layer(osc(10,.1,2).mask(osc(200).modulate(noise(3)).thresh(1,0.1)))
  .out()

I used the above code to transition from my previous visual. The one below is connected to MIDI:

osc(10,.1,2)
  .mask(osc(200).modulate(noise(()=>cc[0]*3+1)).thresh(()=>cc[0],0.1))
  .rotate()
  .out()

Altering the thresh values and noise level made interesting patterns!
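For instance, a variant along these lines (the numbers are just ones I would try, nothing canonical) gives sparser, glitchier stripes:

// hypothetical variation: higher threshold, wilder noise
osc(10,.1,2)
  .mask(osc(200).modulate(noise(()=>cc[0]*8+2)).thresh(0.6,0.3))
  .rotate()
  .out()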

Also, I wondered what the difference was between layer and mask. While coming up with the above, I discovered that, at least in this case, layer and mask can be used interchangeably in the following way:

// the following two snippets produce the same output
osc(10,.1,2).layer(osc(200).luma(0.5,0.8).color(0,0,0,1)).out()
osc(10,.1,2).mask(osc(200)).out()

The above 2 lines of code produce the same output. Hope this was helpful! 🙂

I find the role of improvisation in our digital culture, as Jurgenson discusses it, very inspiring. Improvising is a key part of how we create and interact with digital media: digital media are often designed to be flexible and adaptable, we often use them in unexpected ways, and improvisation can help us create new meanings and experiences.

This piece resonated with me on a personal level. As someone who does a lot of work in web design, I am constantly faced with the need to improvise. Whether it’s making small changes to an existing design or completely redesigning a website from scratch, the work requires me to be creative and flexible. And while it can be challenging at times, I find that improvising helps me come up with better solutions than if I were following a set plan.

It also made me think about how important improvisation is in our everyday lives. We might not always think about it, but many of our interactions with others are improvised – whether we’re having a conversation or simply exchanging glances across a room. It allows us to respond spontaneously to what others are saying or doing, which can lead to more meaningful interactions, and sometimes even funny ones.

Another concept that got me thinking is “the digitization of information”. It made me consider how my personal relationship with technology has changed over time as more things become digitized; for example, instead of buying physical CDs or DVDs of my favorite movies or TV shows, I now just stream them on Netflix or YouTube. There is definitely something lost in terms of tangibility when things go digital like this (you no longer have an “original” copy per se), but there are also advantages gained, at least in terms of ease and convenience.

The reading indicates that improvisation regards music as information. But what is not information?

It seems like the conversation never gives a clear definition of improvisation, music, or digital community. As I understand it, it broadens the concept of improvisation to human experience and to our reaction to digital development. It actually makes more sense to me if we use the word “reaction” to describe this ubiquitous improvisation. We live in and react to a massive world of information, and therefore “[improvisation is] about navigating an informational landscape.”

Therefore, as I understand it, music, as a kind of information and a product of human activity, reacts to the digital environment and society. That makes sense: when we live code, we travel between countless possibilities and land on things we like. I think visual creation stands in a similar position as well.

I also wonder how we can feel the digital community in everyday life. And other than people, are the tools and media used in human actions also part of the digital community? As the speakers say, the digital community is about a sense of networks. Apart from live performances, we as a class form a community. The forums and reference docs for TidalCycles or Hydra can be communities. These community exchanges are large in quantity. In live sessions, whether people know each other or not, the exchange of information might have more intensity because multiple senses are hyper-engaged. I agree that the live moment brings people back to the recognition that music is human action, and that it is human action in the digital world.

What does it take to improvise? What makes a good improvisation in the context of a performance? I found these questions answered in this week’s reading by Paul D. Miller (aka DJ Spooky) and Vijay Iyer.

An interesting motif that both speakers outline in their discussion is jazz. Starting from orchestrated French cinema (their example of improvisation in new media), they draw a connection to music.

There is another new media figure, Jaron Lanier, often called the father of VR, who also talks about jazz and cinema in his work. Namely, he defines VR through jazz:

A twenty-first century art form that will weave together the three great twentieth-century arts: cinema, jazz, and programming (Lanier).

It seems interesting to me how the concepts of programming in both live coding and VR refer to “jazz”: what do they really mean? Is it because of the similar effect of expectancy that comes in? Miller and Iyer describe it as moments when “the audience [isn’t] quite sure how to respond” and as “navigating an informational landscape”. Connected to what the authors say, the following quote actually starts to make more sense in the context of new realities, and especially of VR:

Improvisation is embedded in reality. It’s embedded in real time, in an actual historical circumstance, and contextualized in a way that is, in some degree, inseparable from reality (Miller and Iyer).

Another observation I drew from the text is the rooting of jazz in its original cultural context. Jazz developed from African-American music. Much like hip hop, it was born in marginalized communities and later became widely adopted worldwide. Given that context, improvisation is much more than just creating randomized art; it is about standing for who you are and your identity.

What I didn’t quite understand in the discussion of cultural context was the following excerpt:

Paul Miller: Yes, there’s an infamous Brian Eno statement in Wired magazine a couple of years ago where he said, “the problem with computers is that there’s not enough Africa in them.” I think it’s the opposite; there’s actually a lot of Africa in them. Everyone’s beginning to exchange and trade freeware. [You can] rip, mix and burn. That’s a different kind of improvisation. What I think Vijay is intimating is that we’re in an unbalanced moment in which so much of the stuff is made by commercial [companies]. My laptop is not my culture. The software that I use—whether it’s Protools, Sonar, or Audiologic—[is not my culture per se]. But, it is part of the culture because it’s a production tool.

What does Miller mean by saying “there’s not enough Africa in them…it’s the opposite; there’s actually a lot of Africa in them”? Is he referring to an argument about remixing to the point where there is no originality left, and then responding that the roots will always be there no matter how many times the original source has been edited? Perhaps someone in the class discussion can elaborate on or explain this part further.

I will end this discussion with the quote that I enjoyed most from the reading:

[What] I was just sort of cautioning against was the idea that “we are our playlists.” I just didn’t want [our discussion] to become like, “because we listen to these things, this is who we are,” especially at a moment when the privileged can listen to anything.

I agree: we are so many things, and we should embrace that through our most important individual improv, life itself!

Some people asked me how to make the ‘misty’ effect. This is my code, and you can kinda figure it out from here. I also swapped the fixed hue() value for a cc value, and the effect is really cool.

shape(3).kaleid(3).scale(0.4).out(o0) // base kaleidoscope shape
osc(10,0.1,0.7).hue(0.6).modulate(noise(2,0.01),0.5).out(o1) // drifting colored "mist"
src(o2).modulate(src(o1).add(solid(1,1),-0.5),0.007).blend(o0,0.1).out(o2) // feedback loop that slowly smears o0 around
src(o2).mult(o1).out(o3) // tint the feedback with the mist layer
render(o3)
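And here is the cc-driven variant I mentioned; the only change is the hue (I am assuming cc[0] here, use whichever controller index you have mapped):

// same mist layer, but hue follows a MIDI controller value instead of being fixed
osc(10,0.1,0.7).hue(()=>cc[0]).modulate(noise(2,0.01),0.5).out(o1)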