I found the reading’s discussion on algorithms to be quite interesting, especially when applied to something artistic such as music. My intuition is to associate algorithms with numbers, but through our class, it’s evident that there are multiple ways for us to interact with sound algorithmically. When applying principles of computing to the arts, it becomes clear that there is a strong connection between how composers design repeating patterns and how we interact with TidalCycles. However, we have the added benefit of being able to connect to other digital mediums too. I’m quite interested to see how our Live Coding performances can pull from other digital sources, such as web-based interactions or some creative interpretation of large datasets.

The text invites us to consider how live coding, as a blend of computational algorithms and artistic expression, challenges our traditional understandings of music and performance. It implies a dynamic interplay between structure and spontaneity, where the act of coding live doesn’t just create music but also questions the nature of creativity itself. This nuanced dance between the coder’s intent and the system’s capabilities reflects broader themes of control, collaboration, and the unexpected outcomes inherent in merging technology with art. How does this interplay affect our perception of authorship and authenticity in digital art forms? And what does this say about the future of artistic expression in an increasingly digitized world?

The text delves into the distinctions between stylism, traditionalism, and restructuralism in music, highlighting how notation influences music’s evolution into tradition or style. It’s fascinating to think about how notation serves not just as a record of music but as a dynamic language for real-time creation and modification. Notation in live coding isn’t fixed; it’s fluid, adaptable, and, importantly, executable. This adaptability raises questions about the permanence and reproducibility of musical works, blurring the lines between composition, performance, and improvisation. This conversation around notation could be enriched by looking at John Cage’s work with indeterminate compositions, where the notation serves more as a set of possibilities than as definitive instructions, inviting a reevaluation of the role of notation in defining the boundaries of a musical work.

In the world of live coding, notation is like the script for an impromptu play where you’re both the director and the lead actor, communicating with your computer to shape music in real time. This back-and-forth turns traditional music creation on its head, making the process vibrant and ephemeral, akin to sketching on water where each performance is unique, never to be repeated in the same way. This form of notation isn’t just about documenting; it’s about exploring, experimenting, and experiencing the joy of creation as it happens.

We have experienced firsthand that much of live coding consists of playing around with parameters and reacting to the results that emerge from such adjustments. It is then interesting to think about the context behind the available parameters—parameters that are actively chosen by the authors of the system. Hydra and TidalCycles sport the capabilities and features they do because their creators made the active decision to sculpt the systems in that particular way. Hence neither the resulting space nor the actors (live coders) who perform within it are neutral—as described in the excerpt, “A live coding language is an environment in which live coders express themselves, and it is never neutral.” The contours of the chosen space are sculpted with intention, and the live coder molds what is available with further personal intent.
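For instance (a toy sketch of my own, not an example from the excerpt), much of what we do in TidalCycles amounts to nudging the parameters its authors chose to expose and listening to what comes back:

d1 $ s "bd sn bd sn"                        -- a plain beat
d1 $ s "bd sn bd sn" # speed 2              -- the same beat, samples pitched up
d1 $ s "bd sn bd sn" # room 0.6 # gain 0.9  -- add reverb, pull the level back
d1 $ jux rev $ s "bd sn bd sn" # squiz 1.5  -- reverse one channel, squash the timbre

None of these knobs is natural or inevitable; speed, room, and squiz are available because someone decided they should be.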

I also took interest in the discussion regarding the ephemeral nature of live coding near the end of the excerpt. Describing live coding as “an example of oral culture,” the authors write that “The transience of the code they [live coders] write is an essential aspect of being in the moment.” Live coding is highly dependent on the immediate spatiotemporal context of each moment and the wider setting of the performance itself. It is the process of spontaneity and recursive interaction that is most important. As such, notation in live coding is a means to enable the continuation of this process, to take the next step (as opposed to defining and achieving a certain outcome).

I really liked this chapter on notation. It begins with the classification of music practices as stylism, traditionalism, and restructuralism, and notes how notation determines whether music becomes a tradition or a style. The chapter then proceeds to critically explore the concept of music. Is it just patterns or something more? Notation in live coding allows us to play with parameters; how do any of these alter the practice of music? I was quite surprised to read that these parameters of pattern manipulation had already been classified by Laurie Spiegel. I wonder if they each have a different impact on music practice. They probably do; can we isolate and identify these effects? Could I, for example, claim that the combination of patterns will lead to a new style rather than a tradition? This is still a bit confusing to me, but very interesting. Spiegel’s twelve manipulations, with a rough TidalCycles sketch after the list, are:

1) transposition, 2) reversal, 3) rotation, 4) phase offset, 5) rescaling, 6) interpolation, 7) extrapolation, 8) fragmentation, 9) substitution, 10) combination, 11) sequencing, and 12) repetition
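The sketch below is my own guess at TidalCycles analogues for a few of these, not a mapping the chapter prescribes:

-- rough TidalCycles analogues for some of Spiegel's manipulations
d1 $ n "0 2 4 7" # s "arpy"                      -- a base pattern
d1 $ n ("0 2 4 7" + 12) # s "arpy"               -- transposition: everything up an octave
d1 $ rev $ n "0 2 4 7" # s "arpy"                -- reversal
d1 $ (0.25 <~) $ n "0 2 4 7" # s "arpy"          -- rotation / phase offset by a quarter cycle
d1 $ slow 2 $ n "0 2 4 7" # s "arpy"             -- rescaling: the same contour at half speed
d1 $ cat [n "0 2 4 7" # s "arpy", rev $ n "0 2 4 7" # s "arpy"]  -- sequencing
d1 $ overlay (n "0 2 4 7" # s "arpy") (s "bd*4") -- combination: two patterns layered

Playing with even this little sketch, I suspect the manipulations do leave different fingerprints: reversal and rotation keep the pitch set but change the gesture, while transposition keeps the gesture entirely.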

This chapter also talks about how notation can “be seen as threefold: it is the syntactic structure read by the language interpreter that executes the program, it is the action or movement of the performer that is projected to the live audience, and it is the artistic (e.g., choreographic, musical, visual) output of the process that is notated and manipulated by the live coder.”

This concept is fascinating and helps me understand the symbolic depth of live coding. When performing, I am interacting with notation at three levels: the code I am writing, the code running the software, and the output itself. This analysis is taken one step further when the author mentions how different live coding software is constructed based on what the artist wants to achieve; the artistic limits of these constructions are reflected and shaped by their creators’ visions. I wonder what one could find out by applying this thought to all languages. Say, could we invent a new language that facilitates certain behavior? Perhaps this is too broad a perspective, but I am fascinated by this idea as presented by the author. I also think about incommensurability, the inability to translate certain concepts. I had never thought that this was also the case across programming languages. In retrospect it makes sense, but it is not something I usually think about… I apply this to Mercury, the live coding platform built in Max that I researched for my presentation. It is truly different from TidalCycles, as it prioritizes accessibility. How does this affect the music I create?

The more I learn about live coding, the more I understand its groundbreaking nature, and the more I like it. I am eager to keep exploring this practice, and perhaps to incorporate it in some way or another into my career.

I liked the comparison of different live coding platforms to languages, and I agreed with the author that the platform/language you work with will heavily dictate how you approach things. When I was working with Cascade for my research project, I found myself leaning towards making simple HTML shapes, as that was what Cascade afforded me on the surface. With Hydra, I found myself making heavy use of the oscillators, as that was what was available. Could I make anything I visualized in my imagination in both platforms? Probably! But it would take so much work, and in a way, being limited by the constraints/affordances of Cascade/Hydra let me think more about making the most of what I was given, instead of being forced to answer a question as abstract as ‘make anything you want’.

I found it funny how the author emphasized the ephemeral nature of live coding, especially the “live coders [that] celebrate not saving their code”. In our class, I found myself and many of my classmates “pre-gramming” (as called by the author) as much as we could to make the performance as smooth as possible. Perhaps this is just a side-effect of being new to the platform, or having little knowledge, but I’m still fearful of doing the ‘live’ part of live coding, and would rather trust the pre-gramming approach.

As the author compares computers to instruments, I wonder whether a syntax error in a live coding performance could be treated like playing a wrong note. However, I don’t think the comparison is entirely fair to either side. I find live coding platforms like TidalCycles to be an abstraction layer between the creator and the music itself. With a guitar or piano, there is no such layer between the creator and the music, and you can easily map your physical sense of self (of your fingers/hands) to the music being produced. There is a big physical disconnect with live coding platforms, as they depend heavily on visuals to make sure that you’re playing the right notes. You have to look at the code, process that information with your eyes, and then further process what that block of code will sound like. Live coding loses access to one of the best features of being human: proprioception, the sense that lets me make music by feel, eyes closed, if I play an instrument well enough. I suppose you could argue that you can type with your eyes closed, but that feels like a bit of a stretch for making music…

The author’s insights into musical notation highlight how, when translated onto computers, our expression gets distilled into numerical data, as is evident in grid-based music built on the MIDI standard. What’s really interesting is the comparison of live coding languages to spoken languages, suggesting that these languages aren’t neutral media for expression. Language design significantly shapes users’ creative decisions and the output they ultimately produce.

This got me thinking about how different tools and constraints influence my own expression as an artist dabbling in various mediums. I wonder if other multidisciplinary artists embrace or resist these influences and whether it benefits their creative process.

The influence of language designers on creative outcomes in live coding and visual programming showcases the intricate decisions artists face within these systems. Instead of a one-size-fits-all approach, we’ve seen a rise in diverse, personalized systems, each reflecting the unique vision of its creator and offering unique pathways for artistic exploration.

What’s particularly captivating about this decentralized setup is how creative tech software ecosystems keep evolving. With every new software release, we not only get the core platform but also a bunch of additional packages and plugins created by enthusiasts. These additions often stretch the boundaries of what the original creators had in mind, opening up new possibilities for artists.

Sure, it might seem overwhelming at first for newcomers to navigate this sea of options. But in the end, it all adds to the richness and diversity of artistic practice. Thanks to the collective efforts of enthusiasts, algorithmic artists aren’t confined to the limitations of a single software package. Instead, they have a wide array of tools and resources they can tailor to their specific artistic visions.

<How it started>

There was no end goal at first. I was trying combinations of different sounds, thinking about how they would sound if they worked as my intro, and so on. Playing around, I came up with two combinations that I liked. They gave a vibe of somebody maybe being chased, or maybe being high. Then a game I had watched a video of years ago popped into my head, so I decided to try making a visual similar to it. (It’s an educational game, and this project, too, is far from promoting drug abuse.)

<Performance>

For the performance, I realized that it would be chaotic to go back and forth between the music and the visuals. At the same time, I wanted some aspect of live coding to be there. To get around this, I made the music as a complete piece, allowing myself to focus on the visuals and evaluate the code in time with the preset music.

I could not do any kind of screen recording because whenever I tried, the Hydra code lagged so much that it completely stopped the background video and froze the screen. Because of that, some of the sounds in the music are a little different in the video above.

<Tidal Cycles Code>


intro = stack [slow 2 $ s "[ ~ cp]*4", slow 2 $ s "mash*4"]
baseOne = stack [s "bd bd ht lt cp cp sd:2 sd:2", struct "<t(4,8) t(3,8,1)>" $ s "ifdrums" # note "c'maj f'min" # room 0.4]
baseTwo = stack [slow 2 $ s "cosmicg:3 silence silence silence" # gain 0.7, slow 2 $ s "gab" <| n (run 10)]
baseThree = stack [s "haw" <| n (run 4), s "blip" <| n (run 13) # gain (range 0.9 1.2 rand)]
buildHigh = palindrome $ s "arpy*8" # note ((scale "augmented" "7 3 1 6 5") + 6) # room 0.4
buildMidOne = sound "bd bd ht lt cp cp sd:2 sd:2"
buildMidTwo = stack [s "<feel(5,8,1)>" # room 0.95 # gain 1.3 # up "1", s "feel" # room 0.95 # gain 1.3 # up "-2"]
buildLow = s "f" # stretch 2 # gain 0.75
explosion = slow 2 $ s "sundance:1" # gain 1.2
mainHigh = s "hh*2!4"
mainMid = stack [fast 1.5 $ s "bd hh [cp ~] [~ mt]", fast 1.5 $ s "<mash(3,8) mash(5,8,1)>" # speed "2 1" # squiz 1.1, s "circus:1 ~ ~ ~"]
endBeep = s "cosmicg:3 silence silence silence" # gain 0.7

-- midi
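-- ccv sets the control value and ccn the controller number on the "midi" target;
-- these reach Hydra as cc[0]..cc[3] (via the WebMIDI bridge in my setup) and
-- drive the saturate/scale/rotate parameters in the visuals below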
midiIntro = ccv "2 4 -2 1" # ccn 0 # s "midi"
midiEnd = ccv "2" # ccn 0 # s "midi"
midiBase = ccv "127 0 0 64 127 0" # ccn 1 # s "midi"
midiBuild = ccv "40 10" # ccn 2 # s "midi"
midiMain = ccv "2" # ccn 2 # s "midi"
midiSlow = ccv "20 16 12 8 4" # ccn 3 # s "midi"

playMusic = do {
  d2 $ qtrigger $ filterWhen (>=0) $ seqP [
    (0, 6, intro),
    (0, 42, fast 4 midiIntro),
    (6, 12, intro # gain 0.8)
  ];
  d3 $ qtrigger $ filterWhen (>=0) $ seqP [
    (6, 50, midiBase),
    (6, 42, baseOne),
    (12, 42, baseTwo),
    (18, 22, baseThree),
    (22, 26, baseThree # up 4),
    (26, 30, baseThree),
    (30, 34, baseThree # up 4),
    (34, 42, degradeBy 0.5 baseThree # gain 0.8),
    (42, 46, degradeBy 0.5 baseThree # gain 0.65),
    (46, 50, degradeBy 0.5 baseThree # gain 0.45)
  ];
  d4 $ qtrigger $ filterWhen (>=0) $ seqP [
    (42, 58, buildHigh),
    (46, 58, buildMidOne # gain 1.1),
    (50, 58, buildMidTwo),
    (50, 60, fast 6 midiBuild),
    (50, 58, buildLow),
    (58, 59, explosion)
  ];
  d5 $ qtrigger $ filterWhen (>=0) $ seqP [
    (60, 62, mainHigh),
    (60, 86, midiEnd),
    (60, 86, midiMain),
    (60, 86, midiSlow),
    (62, 84, mainMid)
  ];
  d6 $ qtrigger $ filterWhen (>=0) $ seqP [
    (68, 76, baseOne # gain 0.5),
    (68, 80, baseTwo # gain 0.5),
    (68, 68, baseThree # gain 0.5),
    (76, 86, midiEnd),
    (76, 78, slow 2 endBeep),
    (78, 82, degradeBy 0.7 $ slow 2 endBeep # gain 0.6),
    (82, 86, degradeBy 0.5 $ slow 3 endBeep # gain 0.5)
  ]
}

playMusic

hush

<Hydra Code>


s0.initVideo("/Users/mayalee/Desktop/Spring\ 2024/Live\ Coding/classPerformance/composition/runningVideo.mp4")

src(s0)
  .out(o0)
render(o0)
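// the cc[0]..cc[3] values used below come from the Tidal ccv/ccn patterns above,
// arriving through a WebMIDI bridge (my setup; the bridge code isn't shown here)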

src(s0)
  .add(
    osc(200,1,0.9)
      .saturate(()=>cc[0])
      .modulate(noise(()=>Math.sin(cc[1] / 100 * time) * 2 + 1))
      .add(shape(30).scale(()=>cc[2]*20).rotate(2))
  )
  .out(o0)
render(o0)

osc(200,1,0.9)
  .rotate(1, 0.1)
  .saturate(()=>cc[0]*5)
  .modulate(noise(()=>Math.sin(0.01 * time) * 2 + 1))
  .add(shape(30).scale(2.5).rotate(2))
  .out(o0)
osc(0.001, 900, 0.8)
  .diff(o0)
  .scale(()=>cc[1] / 3000 - 100)
  .modulate(o1, 0.1)
  .out(o2)
render(o2)

osc()
  .shift(0.1,0.9,0.3)
  .out(o2)
render(o2)

osc(200,1,0.9)
  .rotate(1, 0.1)
  .saturate(1)
  .modulate(noise(()=>Math.sin(0.001 * time) * 2 + 1))
  .add(shape(30).scale(2.5).rotate(2))
  .out(o2)
osc(0.001, 900, 0.8)
  .diff(o2)
  .rotate(()=>Math.sin(0.1 * time) * 2 + 1, ()=>cc[3] / 10)
  .scale(0.3)
  .modulate(o1, 0.1)
  .out(o1)
render(o1)

hush()

<Future Improvements>

When I started working on the visuals, I actually thought it would be a good idea to have the beat of the visuals slightly off from the music to create that psycho mood. Looking at the end result, I’m not sure it was implemented the way I intended. I think the idea was okay, but making something off-beat in a harmonized way was not such an easy job. I also think there could have been more filler visuals. There is a part or two where the visuals stay repetitive for a while; making more visuals for those parts, I think, would have made the piece more intense to watch.