Visuals (Aadhar & Chenxuan)

The idea was to combine Hydra visuals with an animation overlaid on top. Aadhar drew a character animation of a guy falling, which was used on top of the visuals to drive the story and the sounds. Blender was used to draw the frames and render the animation.

The first issue that came up with overlaying the animation was making the background of the animated video transparent. We first tried to export a video with a transparent background and load it into Hydra, but no matter what we did, the background rendered as black. Instead, we turned the background transparent within Hydra itself, using its luma function, which turned out to be relatively easy.
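A minimal sketch of the luma approach, assuming the animation renders with a black background and is loaded into Hydra’s external source s0 (the file name and threshold values here are hypothetical):

s0.initVideo("fall_animation.mp4")

osc(8, 0.05, 1.2) // background shader underneath
  .layer(
    src(s0)
      .luma(0.1, 0.05) // pixels darker than the threshold become transparent
  )
  .out(o0)

Since the unwanted background rendered as black, luma() keys it out while keeping the brighter pixels of the drawn character.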

Then there was the length of the video: at two minutes, we couldn’t fit all of it into Hydra, which apparently only accepts clips of around 15 seconds. We had to chop the video into eight 15-second pieces and trigger each clip at the right time to keep the animation flowing. This wasn’t as smooth as we thought it would be. Even after a lot of rehearsals, we never fully got used to triggering the videos at the right time, and clips would loop before we could trigger the next one (which we tried our best to cover, and the final performance really reflected it). Other than the animation itself, different shaders were used to create the background.
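One way to picture the triggering, as a rough sketch (the clip names are hypothetical; in the actual performance each switch was issued live at the right moment):

const clips = [1, 2, 3, 4, 5, 6, 7, 8].map(i => `fall_part${i}.mp4`)

function playClip(i) {
  s0.initVideo(clips[i]) // reload s0 with the next 15-second segment
}

playClip(0) // then playClip(1), playClip(2), ... at each 15-second mark
src(s0).luma(0.1, 0.05).out(o0)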

Chenxuan was responsible for this part of the project. We created a total of six shaders. Notably, the shader featuring the colorama effect appears more two-dimensional, which aligns better with the animation style. This is crucial because it ensures that the characters and the background seem to exist within the same layer, maintaining visual coherence.
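For reference, a colorama-based background in Hydra might look something like this (the parameters are illustrative, not the ones we performed with):

osc(10, 0.1, 1.4)
  .colorama(0.03) // cycles the palette, flattening depth cues for a more 2D look
  .layer(src(s0).luma(0.1, 0.05)) // animation layered on top, as above
  .out(o0)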

However, we encountered several issues with the shaders, primarily a variety of errors during loading; each shader seemed to manifest its own unique problem. For example, some shaders had data type conflicts between integers and floats, while others declared the ‘x’ or ‘y’ variables multiple times, causing conflicts within the code.
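To illustrate the kinds of fixes these errors called for (the effect below is made up, but the error patterns are the ones we hit), custom GLSL is registered in Hydra through setFunction, and the GLSL body is where the conflicts surface:

setFunction({
  name: 'flatten', // hypothetical effect name
  type: 'color',
  inputs: [{ type: 'float', name: 'amount', default: 0.1 }],
  glsl: `
    // GLSL is strictly typed: write 1.0 rather than 1 where a float is expected
    float shift = _c0.r * 1.0;
    // and declare each variable only once; a second "float shift = ..." here
    // would raise the same multiple-declaration error we kept hitting with x and y
    return vec4(fract(_c0.rgb + amount + shift), 1.0);
  `
})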

Additionally, the shaders displayed inconsistently across platforms. On Pulsar they performed as expected, but on Flok the display split into four distinct windows, which complicated our testing and development process.

Code for the visuals can be found here.

Audio (Ian)

The audio is divided into three main parts: one before the animation, one during the animation, and one after the animation. The part before the animation features a classic TidalCycles buildup—the drums stack up, a simple melody using synths comes in, and variation is given by switching up the instruments and effects. This part lasts for roughly a minute, and its end is marked by a sample (Transcendence by Nujabes) quietly coming in. Other instruments fade out as the sample takes center stage. This is when the animation starts to come in, and the performance transitions to the second part.

The animation starts by focusing on the main character’s closed eyes. The sample, sounding far away at first, grows louder and more pronounced as the character opens their eyes and begins to fall. This is the first of six identifiable sections within this part, and it continues until the character appears to become emboldened with determination, at which point a different sample (Sunbeams by J Dilla) comes in. This second section continues until the first punch, with short samples (voice lines from Parappa the Rapper) adding to the conversation from that point onwards.

Much of the animation features the character falling through the sky, punching through obstacles on the way down. We thought the moments where these punches occur would be great for emphasizing the connection between the audio and the visuals. After some discussion, we decided to achieve this by switching both the main sample and the visuals (using shaders) with each punch, and each punch is also made audible through a punching and crashing sound effect. As there are three punches in total, the audio changes three times after the aforementioned second section; these make up the third through fifth sections (one sample from Inspiration of My Life by Citations, two samples from 15 by pH-1).
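On the visual side, the switch at each punch can be pictured as something like the following Hydra sketch (the shader chains here are stand-ins, not our actual six):

const shaders = [
  () => osc(10, 0.1, 1.4).colorama(0.03),
  () => noise(3, 0.2).color(0.9, 0.4, 0.6),
  () => voronoi(8, 0.4).thresh(0.5, 0.2),
]

let punch = 0
function onPunch() {
  shaders[punch % shaders.length]()
    .layer(src(s0).luma(0.1, 0.05)) // keep the animation on top
    .out(o0)
  punch += 1
}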

The character eventually falls to the ground, upon which the animation rewinds quickly and the character falls back upwards. A record scratch sound effect is used to convey the rewind, and a fast-paced, upbeat sample (Galactic Funk by Casiopea) is used to match the sped-up footage. This is the sixth and final section of this part. The animation ends by focusing back on the character’s closed eyes, and everything fades out to allow for the final part to come in.

The final part seems to feature another buildup. A simple beat is made using the 808sd and 808lt instruments. A short vocal(ish?) sample is then played a few times with varying effects, as if to signal something left to be said—and indeed there is.

Code for the audio and the lyrics can be found here.

The spirit of Live Coding:

Embrace the spontaneity

Today’s me might fail me

But there’s always tomorrow

나도 기다려져 내일이! (I can’t wait for tomorrow either!)

In what sense is live coding “live”? Live coding is live in that it is an active conversation between the coder, the machine, the audience, and everything else that surrounds and permeates the setting at hand; live coding is live in that it proceeds in a constant state of spontaneity and is characterized by its resistance to being defined and boxed. Live coding is an amorphous creature, very much alive in that it is never stagnant and craves change. It materializes not just as lines of code or musical notes; live coding is a representation of the relationships that manifest between everything that is present and around the performance itself.

As described in the excerpt, live coding by nature resists a singular definition—and in turn, a singular liveness. It thus demands a nuanced understanding that takes an array of perspectives into account: “The interdisciplinary nature of live coding . . . requires that its very liveness be understood from more than one epistemological and ontological perspective” (159). Live coding is unique in that it transcends spatiotemporal conventions. A performance may exist and be experienced in the present, but certain sequences and samples may be prescripted and prerecorded. Then there is the concept of the undetermined future, which live coding embraces in all its uncertainty. Live coding also creates a platform upon which the physical and digital coexist; it lives in the in-between. 

What emerges from a live coding performance is vastly different for each individual that experiences it, whether they be the live coder on the stage or a member of the audience. Live coding is thus an indeterminate manticore that assumes a different form to all and can only be defined through this ambiguity. Live coding is alive, thriving and pulsating in the bricolage of the predetermined and the yet-unborn.

The reading inspired much thought concerning the boundaries drawn between and around different disciplines of art. While the reading focuses most on the emergence of a specific blend of fields (the artist-musician), it discusses the convergence of artistic fields at large. I took particular interest in how sentiments seeking to challenge such boundaries culminated in modern movements “in which works were dematerialized and became increasingly independent of materials, techniques, media, and genres.” This sort of deconstruction seems particularly prevalent and relevant in today’s age of digital media and art. Characterized by an unprecedented scope of accessibility, variety, and volatility, the digital landscape innately carries an ethos that questions traditional definitions and lines that have made up the art world. This ethos also muddles the edges of non-digital art, as everything that exists in the physical world comes to exist in relation to a digital counterpart and context.

As I made my way through the reading, I was also struck by how topical this discussion was to our class in particular. In the context of live coding, the fogginess of this distinction between artistic fields becomes all the more apparent. As we live code, we are engaging with a mode of composition that sculpts both music and visuals (hence the live coder as the artist-musician/musician-artist) through the language of code and patterns—it is a composite of various art forms, an active conversation between the coder and the computer, and a performance that situates it all in front of a live audience. Live coding embodies and simulates the feeling of synesthesia. Stimuli bounce around and riff off of each other in what is a dynamic art form that defies the possibility of a singular definition and—hearkening back to the previous reading—even challenges the basic conventions of how we understand knowledge.

We have experienced firsthand that much of live coding consists of playing around with parameters and reacting to the results that emerge from such adjustments. It is then interesting to think about the context behind the available parameters—parameters that are actively chosen by the authors of the system. Hydra and TidalCycles sport the capabilities and features they do because their creators made the active decision to sculpt the systems in that particular way. Hence, neither the resulting space nor the actors (live coders) that perform within it are neutral—as described in the excerpt, “A live coding language is an environment in which live coders express themselves, and it is never neutral.” The contours of the chosen space are sculpted with intention, and the live coder molds what is available with further personal intent.

I also took interest in the discussion regarding the ephemeral nature of live coding near the end of the excerpt. Describing live coding as “an example of oral culture,” the authors write that “The transience of the code they [live coders] write is an essential aspect of being in the moment.” Live coding is highly dependent on the immediate spatiotemporal context of each moment and the wider setting of the performance itself. It is the process of spontaneity and recursive interaction that is most important. As such, notation in live coding is a means to enable the continuation of this process, to take the next step (as opposed to defining and achieving a certain outcome).

This excerpt from Live Coding: A User’s Manual describes live coding as a unique hybrid art form that questions traditional assumptions regarding knowledge and challenges the categorical divisions of epistemology. Indeed, live coding is a multidisciplinary practice that draws on various modes of knowledge and the expression of said knowledge. If one is to envision knowledge as a spectrum ranging from the tacit and unconscious to the structured and explicitly taught, live coding is a practice that wills the practitioner to utilize both ends of the spectrum. And in doing so, live coding “demonstrates the coexistence, cooperation, and even complementarity between seemingly divergent knowledge paradigms” (219).

I have always felt that much of the appeal of live coding lies in its highly experimental nature. The philosophy of live coding encourages experimentation and contingency—in fact, it is very much defined by this acceptance of indeterminacy. Though the practice may demand some form of background competence (in computing and art, for example), the knowledge required most by live coding is that which enables one to interact with uncertainty. Live coding is thus an art form that is fueled by no-how rather than know-how; there is no set methodology or script that it abides by. It was fascinating to engage with the thought that the very existence of live coding, in embracing indeterminacy as a core tenet, prompts conversation regarding the subsequent need for alternative ways of and terms for understanding knowledge at large.

Alda is a “text-based music composition programming language” developed by Dave Yarwood in 2012. A software engineer with a background in classical music, Yarwood sought a way to link his interests and create an accessible, general-purpose audio programming language. The result was Alda, which he designed to provide both musicians without programming knowledge and programmers without musical knowledge with a unique and simple method of composition.

Alda allows users to write and play music using just the command line and a text editor. The interactive Alda REPL gives immediate feedback as the user enters lines of code in the command line. Longer scores can be written in a text file, which can then be fed to and played by Alda (also via the command line). Staying true to this purpose of accessibility, musical notation is rendered simple: notes are represented by their respective letters (a~g), octaves are set and switched using o1~o7, rests are simply r, + and - denote sharps and flats, and so forth.

There is a noticeable lack of a “live” component in Alda compared to the other coding platforms on the list. Alda is only able to output exactly what it is given, and it does so in a finalized form (the output cannot be edited in real time). The REPL is perhaps a little more interactive in this aspect, as the focus is on maintaining a quick exchange of input and output. Alda does support writing music programmatically, in that Alda code can be generated by programs written in other languages. Even so, Alda itself has no innate functions for algorithmic composition or live coding.
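As a quick illustration of that programmatic route (a sketch, not part of the project; the alda CLI flag may vary by version), a short Node.js script could generate a score and hand it to Alda:

const { execSync } = require("child_process")
const { writeFileSync } = require("fs")

// Build a random eight-note phrase of eighth notes in octave 4
const letters = "cdefgab"
const phrase = Array.from({ length: 8 }, () =>
  letters[Math.floor(Math.random() * letters.length)] + "8"
).join(" ")

writeFileSync("generated.alda", `piano:\no4 ${phrase}`)
execSync("alda play --file generated.alda", { stdio: "inherit" })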

I personally had a lot of fun experimenting with Alda. Following the tutorial on the Alda website, I was able to mess around with the REPL with ease—the simple code and instant output make it easy to test things out and understand how they work. With a grasp of all the commands and methods gained from that session, I was able to put together longer and more complex scores using multiple instruments through text files.

I should note that I do have some classical training from my years of playing cello and am familiar with sheet music and musical notation. However, I still struggle with letter notation (A-G), as I learned music through the solmization syllable system (do re mi). This complicated my efforts to make things sound the way I wanted them to and made me feel that one would have to have at least some knowledge of music theory and notation to really use Alda to its full potential. Regardless, there is no denying that Alda makes composition much less daunting in both appearance and experience to people who might not be used to reading/writing sheet music.

For my mini project with Alda, I attempted to recreate the starting portion of the beautiful instrumental for Where the Flower Blooms, a track by rapper and producer Tyler, the Creator. The text file that I wrote up is as follows:

(key-sig! '(a major))

violin "violin-1":
(pp) o5 f1 | f | f | f |
f | f | f | f |

violin "violin-2":
o4 r8 a+ > f < a+ > g < a+ > f < a+ | r a+ > f < a+ > g < a+ > f < a+ |
r g > e < g > f < g > e < g | r f > c < f > d+ < f > c < f |
o5 b8 f e d+ b f e d+ | b f d < c > b f d < c |
a g f c a g f c | a g f c a g f c |

viola:
o4 r8 f b f > c < f b f | r f b f > c < f b f | e a e b e a e | d+ a d+ b d+ a d+ |

cello:
(pp) o3 c1 | c- | b6 d b d b d | b d b d b d |
c1 | c- | >b | b |

piano "piano-1":

(p) o3 r1 | f/>c-/f | <f/a/>f | f/>c-/f |
(mf) d+4./f/b/>d+  d+8/f/b/>d+ d+ <b4. | e/g-/b e/g-/b | c/f/a/>c <c/f/a/>c | c/f/a/>c <c/f/a/>c |
o4 e/g


piano "piano-2":

r1 | r1 | r1 | r1 |
(mf) o3 c2 c4. >c8 | <c-2 c- | <b/>f r8 f4 <b8 | b2/>f <b/>f |
o3 c1

And here is a short video of what it sounds like:

Spiegel’s writing prompts the reader to reflect upon the process of composition through the lens of information theory. This means that one is encouraged to think of composition as an act of piecing together sequences with the calculated introduction of noise/randomness; I found this idea of sequences to be particularly resonant with the way we had been using TidalCycles, as running loops and adjusting said loops to vary with each cycle meshes nicely with this idea of music as varied sequences. Similarly, Spiegel’s portrayal of music as “a communications medium in a noisy world” works nicely with TidalCycles’s representation of music in the form of code (as opposed to musical notation), which makes it easier to perceive music in such a way.

Also fascinating was Spiegel’s description of the relativity of randomness: “any signal, no matter how internally consistent or meaningful it is within its own context, may be perceived as random noise relative to some other coherent signal.” Randomness is thus not a hard value in that it cannot be inextricably assigned to a certain signal, but rather a quality hugely dependent on the context that surrounds said signal. Questions then arise: When might something be too random—can something even be “too” random if randomness is contextual? Different people will experience and interpret the randomness of a certain signal differently—how can one account for this “felt” randomness? Can anything be 100% random, or more importantly, 0% random? I was accordingly left to wonder about the relationship between perceived randomness and phenomenology. How does one experience the random?

One last idea that led me to further thought was Spiegel’s suggestion that “what we interpret as spontaneous generation may be just the transformation of previously experienced material as it moves within the human perceptual and cognitive systems.” I found this thought to be hugely interesting—everything is a remix of a remix of a remix, but to what degree? There is something poetic in this idea that we as humans express ourselves in ways that are individually unique yet inevitably affected by how others have expressed themselves (plus, “the noise of our many coexistent memories and thoughts” is a beautiful phrase). We humans wish to communicate in a noisy world—and we sing and dance and cry as composite creatures, shaped by those we have listened to and shaping those who listen to us. All amid a noisy, noisy world.