In this text, the author discusses the role and impact of notation in live coding. It was interesting to read how literature from music and computer science is intertwined with the history of live coding. One of the parts most interesting to me was that different live coding platforms have their own perspective on live performance. The author compares switching between live coding languages to switching between natural languages, suggesting that it can change how we shape our thoughts as coders. When I used Sonic Pi for the Research Project, I felt a similar transition in my way of writing code compared to TidalCycles. In Sonic Pi, you need to re-run the whole program when you change the code, while in TidalCycles you can choose to run a single block. Because of this difference, I got the impression that I need to think from a broader perspective when using Sonic Pi, considering the big picture as I start a performance. I wonder if it would also be a different experience to try another live coding platform. Another part I found interesting is how the author connects pattern-based music representations to computer algorithms and to textile weaving and braiding. Here, the author refers to another article by composer Laurie Spiegel, whose writing we read previously in class, about categories of pattern manipulation in computer music. I find it really interesting that some of these pattern manipulation techniques map directly onto basic computer operations; rotation, for example, corresponds to a circular bit shift (‘<<’ and ‘>>’).
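
To make the rotation-as-bit-shift connection concrete, here is a small sketch of my own (not from the text): an 8-step rhythm stored as a bitmask, rotated with the shift operators mentioned above. The step count and the example rhythm are arbitrary choices for illustration.

```python
STEPS = 8
MASK = (1 << STEPS) - 1  # keep only the lowest 8 bits

def rotate_left(pattern: int, n: int) -> int:
    """Circularly shift an 8-step rhythm pattern left by n steps.

    Bits shifted off the top are wrapped back around to the bottom,
    which is exactly the 'rotation' pattern manipulation.
    """
    n %= STEPS
    return ((pattern << n) | (pattern >> (STEPS - n))) & MASK

rhythm = 0b10010010            # hits on steps 7, 4, and 1
shifted = rotate_left(rhythm, 2)  # -> 0b01001010
```

Because the pattern wraps around, rotating by the full cycle length returns the original rhythm, just as rotating a musical loop by one full cycle leaves it unchanged.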

I found the author’s comparison between computer languages and human languages intriguing, as it highlights how different languages influence our expression, molding our thoughts and personalities. The author suggests that switching to another language can even alter our physical gestures while speaking. In live coding, these effects are particularly pronounced, given that the languages are typically high-level and often crafted with particular visual or musical styles in mind, thereby imposing creative constraints. This point made me wonder how the notation systems of TidalCycles and Hydra might be affecting our class’s live coding output. At the beginning of the class, I found myself sticking to highly vibrant, psychedelic-ish visuals. However, going through the Hydra documentation and seeing my classmates’ performances showed me that a variety of visual styles can be achieved depending on the approach taken.

The text also highlights the evolving nature of live coding culture, particularly regarding its stance on commercialization and consumption. As documentation becomes more prevalent, the once anti-commercialization ethos of live coding seems to be shifting. However, this shift is not necessarily negative; rather, it fosters an open-source community where knowledge-sharing and collaboration thrive. In my case, I often find myself resorting to online examples to explore the possibilities of languages I’m attempting to learn. Observing someone implement a specific function or utilize an unusual notation sparks ideas for me to experiment with those techniques in my own projects. By saving and sharing code, practitioners contribute to a pool of resources that enriches the community and promotes collective learning and inspiration.

I found the reading’s discussion on algorithms to be quite interesting, especially when applied to something artistic such as music. My intuition is to associate algorithms with numbers, but through our class, it’s evident that there are multiple ways for us to interact with sound algorithmically. When applying principles of computing to the arts, it becomes clear that there is a strong connection between how composers design repeating patterns and how we interact with TidalCycles. However, we have the added benefit of being able to connect to other digital mediums too. I’m quite interested to see how our Live Coding performances can pull from other digital sources, such as web-based interactions or some creative interpretation of large datasets.

The text invites us to consider how live coding, as a blend of computational algorithms and artistic expression, challenges our traditional understandings of music and performance. It implies a dynamic interplay between structure and spontaneity, where the act of coding live doesn’t just create music but also questions the nature of creativity itself. This nuanced dance between the coder’s intent and the system’s capabilities reflects broader themes of control, collaboration, and the unexpected outcomes inherent in merging technology with art. How does this interplay affect our perception of authorship and authenticity in digital art forms? And what does this say about the future of artistic expression in an increasingly digitized world?

The text delves into the distinctions between stylism, traditionalism, and restructuralism in music, highlighting how notation influences music’s evolution into tradition or style. It’s fascinating to think about how notation serves not just as a record of music but as a dynamic language for real-time creation and modification. Notation in live coding isn’t fixed; it’s fluid, adaptable, and, importantly, executable. This adaptability raises questions about the permanence and reproducibility of musical works, blurring the lines between composition, performance, and improvisation. This conversation around notation could be enriched by looking at John Cage’s work with indeterminate compositions, where the notation serves more as a set of possibilities than as definitive instructions, inviting a reevaluation of the role of notation in defining the boundaries of a musical work.

In the world of live coding, notation is like the script for an impromptu play where you’re both the director and the lead actor, communicating with your computer to shape music in real time. This back-and-forth turns traditional music creation on its head, making the process vibrant and ephemeral, akin to sketching on water where each performance is unique, never to be repeated in the same way. This form of notation isn’t just about documenting; it’s about exploring, experimenting, and experiencing the joy of creation as it happens.

We have experienced firsthand that much of live coding consists of playing around with parameters and reacting to the results that emerge from such adjustments. It is then interesting to think about the context behind the available parameters—parameters that are actively chosen by the authors of the system. Hydra and TidalCycles sport the capabilities and features they do because their creators made the active decision to sculpt the systems in that particular way. Hence neither the resulting space nor the actors (live coders) that perform within it are neutral—as described in the excerpt, “A live coding language is an environment in which live coders express themselves, and it is never neutral.” The contours of the chosen space are sculpted with intention, and the live coder molds what is available with further personal intent.

I also took interest in the discussion regarding the ephemeral nature of live coding near the end of the excerpt. Describing live coding as “an example of oral culture,” the authors write that “The transience of the code they [live coders] write is an essential aspect of being in the moment.” Live coding is highly dependent on the immediate spatiotemporal context of each moment and the wider setting of the performance itself. It is the process of spontaneity and recursive interaction that is most important. As such, notation in live coding is a means to enable the continuation of this process, to take the next step (as opposed to defining and achieving a certain outcome).

I really liked this chapter on notation. It begins with the classification of music practices as stylism, traditionalism, and restructuralism, and mentions how notation determines whether music becomes a tradition or a style. The chapter then proceeds to critically explore the concept of music. Is it just patterns or something more? Notation in live coding allows us to play with parameters; how do any of these alter the practice of music? I was quite surprised to read that these parameters of pattern manipulation have been classified by Laurie Spiegel. I wonder if they all have a different impact on music practice. They probably do, but can we isolate and identify these effects? Could I, for example, claim that a combination of patterns will lead to a new style rather than a tradition? This is still a bit confusing to me but very interesting.

1) transposition, 2) reversal, 3) rotation, 4) phase offset, 5) rescaling, 6) interpolation, 7) extrapolation, 8) fragmentation, 9) substitution, 10) combination, 11) sequencing, and 12) repetition
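
A few of these operations are simple enough to sketch on a plain list of pitches. This is my own illustration, not code from the text, and the MIDI-style pitch numbers are values I chose for the example.

```python
def transpose(pattern, interval):
    """1) transposition: shift every pitch by a fixed interval."""
    return [p + interval for p in pattern]

def reverse(pattern):
    """2) reversal: play the sequence backwards."""
    return pattern[::-1]

def rotate(pattern, n):
    """3) rotation: move the starting point while keeping the cycle."""
    n %= len(pattern)
    return pattern[n:] + pattern[:n]

def repeat(pattern, times):
    """12) repetition: loop the sequence."""
    return pattern * times

motif = [60, 62, 64, 67]          # C, D, E, G as MIDI notes
up_a_fifth = transpose(motif, 7)  # [67, 69, 71, 74]
backwards = reverse(motif)        # [67, 64, 62, 60]
shifted = rotate(motif, 1)        # [62, 64, 67, 60]
```

Seen this way, each category is just a function from pattern to pattern, which is presumably why they translate so naturally into code.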

This chapter also talks about how notation can “be seen as threefold: it is the syntactic structure read by the language interpreter that executes the program, it is the action or movement of the performer that is projected to the live audience, and it is the artistic (e.g., choreographic, musical, visual) output of the process that is notated and manipulated by the live coder.”

This concept is fascinating and helps me understand the symbolic depth of live coding. When performing, I am interacting with notation through the code I am writing, the code that is running the software, and the output as well. This analysis is taken one step further when the author mentions how different live coding software is constructed based on what the artist wants to achieve; thus the artistic limits of these constructions reflect and are shaped by their vision. I wonder what one could find out by applying this thought to all languages. Say, could we invent a new language that facilitates certain behavior? Perhaps this is too broad a perspective, but I am fascinated by this idea as presented by the author. I also think about incommensurability, or the inability to translate certain concepts. I had never thought that this was also the case across programming languages. In retrospect it makes sense, but it is not something I usually think about. I apply this to Mercury, the live coding platform made in Max that I researched for my presentation. It is truly different from TidalCycles, as it prioritizes accessibility. How does this affect the music I create?

The more and more I learn about Live Coding the more I understand its groundbreaking nature, and the more I like it. I am eager to keep exploring this practice, and perhaps incorporate it in some way or another into my career. 

I liked the comparison of different live coding platforms to languages, and I agreed with the author that the platform/language you work with will heavily dictate how you approach things. When I was working with Cascade for my research project, I found myself leaning towards making simple HTML shapes, as that was what Cascade afforded me on the surface. With Hydra, I found myself making heavy use of the oscillators, as that was what was readily available. Could I make anything I visualized in my imagination on both platforms? Probably! But it would take so much work, and in a way, being limited by the constraints/affordances of Cascade/Hydra allowed me to think more about making the most of what I was given, instead of being forced to answer a question as abstract as ‘make anything you want’.

I found it funny how the author emphasized the ephemeral nature of live coding, especially the “live coders [that] celebrate not saving their code”. In our class, I found myself and many of my classmates “pre-gramming” (as called by the author) as much as we could to make the performance as smooth as possible. Perhaps this is just a side-effect of being new to the platform, or having little knowledge, but I’m still fearful of doing the ‘live’ part of live coding, and would rather trust the pre-gramming approach.

As the author compares computers to instruments, I wonder whether a syntax error in a live coding performance could be treated like playing a wrong note. However, I think the comparison is not entirely fair to either side. I find live coding platforms like TidalCycles to be an abstraction layer between the creator and the music itself. With a guitar or piano, there is no such layer, and you can easily map your physical sense of self (your fingers and hands) onto the music being produced. There is a big physical disconnect with live coding platforms, as they depend heavily on visuals to confirm that you’re playing the right notes. You have to look at the code, process that information with your eyes, and then further process what that block of code will sound like. Live coding loses access to one of the best features of being human: proprioception, the sense that lets me make music by feel with my eyes closed if I play an instrument well enough. I suppose you could argue that you can type with your eyes closed, but that feels like a bit of a stretch for making music.