When a person picks up an instrument to play, they don’t always intend to produce the sounds that come out. Sometimes they play a single random note, more with the intent to play than to create something pleasant, and another one follows, and another; initial chaos that eventually becomes an orderly symphony. How can we reproduce this process through code? How can we make randomly pleasant sounds? True randomness is hard to define in reality, and I’m sure the author would argue that the very first notes played by the person in my example aren’t truly random. She’d argue that they are rather the result of meaningful noise that loses its meaning in a musical context. A ‘random’ function, and even the seemingly natural ‘noise’, are based on randomly occurring physical phenomena like cosmic radiation or radioactive decay. The thought that some other phenomenon with a random frequency that resonates with music could exist is fascinating, and I would love to see its applications.


In the meantime, anything generated randomly on Tidal (even with noise) sounds awful lol.

This reading reiterates various concepts that we have discussed in class over the past few weeks. Most of the ideas were touched upon in one way or another, and a point that I, and I assume everyone else, resonate with is that as we listen to music, “we wait for something more, for change, uncertainty, the unpredictable”. Once any of these are encountered, the music becomes more interesting, or at least less boring. The change could be anything from introducing new instruments, breaking a cycle, speeding up the sounds, or adding a beat drop, which we have also discussed in class, since we are always anticipating something of this sort. The unpredictability aspect, I feel, is often hit or miss. Considering live coding, for example: if we had been put on the spot to perform with TidalCycles, with no prior sense of what the different samples sound like, and gone straight into an unpredictable track, there is a chance it would have been too noisy. However, there is always the chance that it could also lead to a happy accident.


The “noise” I am bringing up is not the same as what the author mentions when she suggests introducing noise to make a piece more interesting. She discusses it in the sense that uncertainty results in more engaging sounds, for example by incorporating random corruption. This is a point that Aaron has made several times in class: adding ‘?’ or ‘degradeBy’ to drop random notes. This adds some spice to the music that we’re creating, avoiding repetition and possible boredom for the listener. From making these changes live together in class, and from testing and experimenting for the live performance earlier this week, I would definitely agree that these small changes actually have a significant impact on the sound. However, this leads to the same question that the author poses: is it more musical?
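
For reference, a minimal sketch of those two tricks in Tidal (the sample names and probabilities here are just placeholders I picked):

d1 $ sound "bd sn? hh*4 cp?"          -- '?' gives each marked event a 50% chance of playing
d2 $ degradeBy 0.3 $ sound "arpy*8"   -- degradeBy drops roughly 30% of the notes at random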

Complexity does not ensure the musical value of a composition. A piece can be totally creative and random yet not hold much significance in a musical sense. The very nature of music makes live coding music more difficult than creating visuals. That is to say, it requires more deliberation and more experimentation with sound from us. Not only do we need to put in ten times the effort we spend experimenting with images, but we also need to know in what direction we are making that effort. In other words, we need to know how to improve the musical value of a composition.

Namely, a good musical piece always has development, evolution, and form. Music carries more emotion, and so takes on a more important role in conveying it. For example, it is easier to make people feel happy by feeding them an upbeat melody than by presenting them with visual patterns. Admittedly, we can also use certain types of images and visuals to signal emotions, such as slow waves for tranquility, a dark color theme for horror, or messy lines for anxiety. But in my opinion, visuals are not as powerful as sounds. Therefore, the design of sound comes to the front.

Looping the music helps increase the musicality, but only to a certain extent. As time goes by, the entropy decreases and the piece is not that attractive to the audience anymore. This is where we want to kick it up a notch by introducing noise, the random factor. At first this seemed a little counterintuitive to me, because good music needs to retain a certain form. But noise brings information. It all depends on how well you use the noise. A successful introduction of noise, one that takes into account how vulnerable the material is to corruption, the nature of the noise, and so on, helps us achieve a balance between entropy (information) and musicality.
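
As a hedged sketch of that balance in Tidal (my own example, with arbitrary values): keep the looping pattern fixed so the form survives, and let the randomness touch only a couple of parameters.

d1 $ sometimesBy 0.25 (# speed 2)   -- roughly a quarter of the events get retuned at random
   $ sound "bd*4 hh*8"              -- the underlying loop itself never changes
   # pan rand                       -- continuous random panning as a low-level source of noise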

“Put simply, information theory is a mathematical theory of how to optimize a signal for communication in a noisy channel and of how communication degrades in such a medium.”

The starting line of this article immediately caught my attention. There is something about how precise the definition is, and how confidently the author presents it. As a reader, it draws me in and intrigues me, making me want to explore what exactly they mean by the phrase above. As I continued reading, I quite enjoyed the way the author slowly built up to their point, the same way a musician would slowly build up to the final composition of a song. Instead of introducing their point directly, they broke it down into smaller pieces which, when put together, came together to form the whole. It felt familiar in a way, and it is something I quite enjoyed.

Another thing that stood out to me was when the author asks the question “Is it musical?”. Although mentioned in different contexts, it is a phrase that appears quite a few times in the text. It felt as though they were echoing their own thoughts and ideas, and as a reader I felt included in the process. Not only that, but I felt validated, in the sense that I was not the only person with these questions in their head. When it comes to composition, you are relying on your own sound. What makes this so difficult and intimidating is that every person has a sound that is unique to them. That is why, when it comes to sharing your work, you become intimidated by the possibility that your work may not resonate with people, because it may not meet a certain standard. We even saw this during our latest live coding performance in class: although the content we learn is the same, the way each person approached their project was completely different. To link it back to the main point, by saying “is it musical?”, the author is in a way confiding in the readers, showing that while each person has a different process, there is comfort in knowing that there is a shared and common struggle when it comes to composing a piece. I am not quite sure how relevant it is to the reading, but it is a correlation that immediately came to mind, and one I wanted to share. That is why, when the author casually and consistently raises this question and outwardly expresses their thought process, I feel even more connected to the work.

I grew up playing classical piano, and my father, who plays the guitar and is a math enthusiast, always told me that music theory is just pure math. My experience in this class is further proving his point. In this week’s reading, Spiegel breaks down the concept of information theory and how it can be used in music. I found her explanation of choosing sounds “vulnerable to corruption” particularly informative, as I usually just experiment with random values and introduce noise, but now I can do it with a bit more intentionality.
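
My loose reading of “vulnerable to corruption” in Tidal terms (my own sketch, not something from the reading, with arbitrary sample names): corruption is only worth introducing where the material is regular enough for the damage to be heard.

-- a steady, predictable hi-hat line: every randomly dropped hit is clearly audible
d1 $ degradeBy 0.2 $ sound "hh*16"

-- the same corruption applied to a line that is already sparse and randomized,
-- where the extra drops are much harder to notice
d2 $ degradeBy 0.2 $ degradeBy 0.5 $ sound "hh*16"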

Spiegel also brings up the question of whether there is even such a process as composition, and I’ve been thinking about this for a while! After transitioning from classical music to other genres, I joined a band, and I found the times when we were brainstorming lyrics or melodies really challenging because nothing ever felt original. Whatever lyric I wrote, I could pinpoint a source song or artist that it was directly influenced by or borrowed from. We always hear from older generations that there is no longer music like what they had in their time. I still wonder: are we past the point of originality and novelty in music, and if we’re not, how can we ensure that what we’re making is a new composition? Or does that even matter, if the music speaks to someone and is enjoyed by audiences?

“[I]nformation theory is a mathematical theory of how to optimize a signal for communication in a noisy channel and of how communication degrades in such a medium.” Okay, math is not my cup of tea, but the application of mathematical theory to composing seems interesting and promising. My experience with music comes mainly from traditional instrument practice (flute). It was by playing in the school’s orchestra that I learned more theory (in English), but I’m still quite limited when it comes to musical accomplishment and composing. At the same time, I do know that music and composing have come to involve more and more math and programming as new media emerge. That, in turn, stimulates our thoughts on computational creativity.


Laurie Spiegel thinks that random noise can be a meaningful signal in other contexts. That makes sense considering, say, the noise of road repairs. However, I’m not sure how to position purely numerical randomness in digital music. For d1 $ sound "hh*8" # gain (range 0.8 1.5 rand), what context makes it meaningful and what context does not? Furthermore, Spiegel claims that the essence of auditory imagination is just the re-generation/transformation of previous materials in our perceptual and cognitive systems. In that case, computational creativity might not be about creating something completely new, but about new “permutations and combinations” that can trigger emotional reactions in certain environments.


What I didn’t see discussed is the subjectivity of creation (composing). Yes, noise works. The entropy variable works. The outcome does affect the audience’s emotions, but what about the composer? Creating compositional models should be quite different from traditional composing in terms of subjective experience and reward. To make an analogy with “art” versus “design”, this process seems more like design: using principles to create things for people.


Written in 1997, the article already looks quite advanced to me. I wonder what has changed, and what new theories or models have appeared between then and now.

For our performance this week, one of the ways I tried to brainstorm was by listening to specific songs and deconstructing them: trying to understand the layers that make up each song, and how I could use that as inspiration for my own performance. One of the main elements of my live coding performance, for example, was this bass drum beat:

d1 $ sound "{808bd:12 [808bd:73]}" # room "0.03"   -- two 808 kick samples (12 and 73) with a touch of room reverb

I came to this beat after listening to Fred Again..’s song Marnie (wish i had u) and trying to understand how he gradually constructs and puts together the relatively simple layers and elements that make up the song. I eventually couldn’t escape this way of thinking while shuffling my playlist, and thought of putting together a collaborative class playlist where we can drop in tracks that inspire us while we’re figuring out how to live code music. I started a collaborative Spotify playlist that you can check out and add to here. Let me know if you have any thoughts, or if another platform like YouTube could be more accessible to people! (✨pls add songs, I love exploring and learning what other people listen to✨)