To define something is to stake a claim to its future, to make a claim about what it should be or become. This makes me hesitate to define live coding.
This quote caught my eye as soon as I started reading the text; it resonated with me because it captures the fluid, dynamic nature of live coding. It is fascinating to see live coding not just as a technical practice but as an artistic, creative approach that challenges programming norms. Before this class, I associated the phrase “live coding” with writing efficient code in real time for work-related tasks. Now, however, I think of it as an improv performance, something similar to rap freestyles, which has made me rethink my rigid perception of programming.
I found it intriguing how live coding embraces transparency, exposing the thought process behind the code and allowing the audience to witness the creative journey in real time. Of course, this puts extra pressure on the programmer/artist, because everything must be done live and mistakes can make the performance a bit awkward. This visibility challenges the traditional black-box approach in tech, where software operates behind the scenes.
Live coding changed my notion of coding: now I see coding as an entity that adapts and evolves with me as a potential performer. The more I practice, the more I will be able to create and express through code. It is safe to say that I have started to appreciate coding not just as a tool but as a medium of self-expression.
As a new form of performance built around creating audio and visual effects in text editors, live coding is both a primitive and an advanced form of human-computer interaction. Performers edit the performance directly, controlling the effects character by character; this offers precision and ever more room to experiment as the technology of shaders and audio tools keeps evolving.
I remember someone saying this during ICLC 2024 to describe the experience of live coding: “Everything is possible, but nothing is easy.” In theory, coding can produce countless combinations of effects, but an advanced performance takes a great deal of time and energy in experimentation and research. Live coding is realized through programming and requires substantial experience with both programming and composing, so it can be very complicated for those unfamiliar with computers; practicing it is also unlike practicing a traditional instrument by following existing scores. Still, it has a significance that other improvisational performances, such as DJ or jazz sessions, lack: the coding itself is part of the performance, and the audience can observe the complete process of it being written.
This primitive nature lets live coders bring all kinds of JavaScript libraries into their text editors, which gives them great freedom of self-expression. They can type their feelings into the editor, add 3D models, or even add their real-time drawings to the canvas, which, from my point of view, is a groundbreaking development for performance.
As computer science keeps evolving, I’m thrilled to witness how the form of art can be changed, and the emergence of live coding is a good showcase.
This particular quote from the reading was especially captivating, as it illuminates the expansive possibilities of coding that go beyond the rigid, structured nature many associate with programming. It reframes coding as a more fluid, flexible, and expressive act akin to playing a musical instrument. It highlights programming not just as a behind-the-scenes tool for solving problems but as a dynamic, performative practice that blurs the line between technical precision and artistic expression.
In this sense, live coding is impactful because it challenges our ‘traditional’, one-dimensional understanding of programming. It deconstructs the widely perpetuated understanding of coding as a linear act of instructing machines to produce specific, predetermined outcomes, and instead advances a multidimensional perspective on what programming can be. This is why I think Live Coding as a class perfectly embodies the essence of Interactive Media as a major: it showcases its interdisciplinary nature as a blend of the sciences and arts.
These paragraphs explore the concept of live coding and why it attracts people. As an interdisciplinary practice combining coding, art, and improvised performance, live coding appeals to both technicians and artists. It provides a unique medium to appreciate the beauty of coding and the artistic aspects often hidden within what is typically seen as a highly technical and inaccessible field.
I encountered live coding for the first time while working as a student staff member at ICLC2024. These performances gave me a basic understanding of live coding as a layperson. Reading this article later deepened my perspective and sparked new thoughts.
The article describes live coding as a way for artists to interact with the world and each other in real time through code. Watching live coding performances, I initially assumed artists focused entirely on their work, treating the performance as self-contained and unaffected by external factors. However, I may have overlooked the role of the audience, the venue, and the environment in inspiring the artists and adding new layers to the improvisation. As someone who loves live performances, I now see live coding as another form where interaction between the artists and their surroundings is crucial.
The article also mentions how projecting code on the screen as the main visual makes the performance more transparent and accessible. While I agree with this, it also raises a concern. A friend unfamiliar with live coding once referred to it as a “nerd party,” commenting that it’s less danceable than traditional DJ performances and difficult for non-coders—or even coders unfamiliar with live coding languages—to follow. I wonder if this limits the audience’s ability to understand and fully appreciate the performance or the essence of the art form. Although this may not be a significant issue, it’s something I’m curious about.
The idea was to combine Hydra visuals with an animation overlaid on top. Aadhar drew a character animation of a guy falling, which was used on top to drive the story and the sounds. Blender was used to draw the frames and render the animation.
The first issue with overlaying the animation was making the background of the animated video transparent. We tried hard to get a video with a transparent background into Hydra, but the background always rendered as black no matter what we did. Instead, we used Hydra code itself to key out the background with its luma function, which turned out to be relatively easy.
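A minimal sketch of this luma-keying approach, for anyone curious — this runs inside a Hydra environment, not standalone, and the file name, thresholds, and background pattern are illustrative, not our actual performance code:

```javascript
// Hydra sketch (runs in a Hydra environment, not standalone Node).
// Load one animation clip into external source s0 — path is illustrative.
s0.initVideo("animation-part1.mp4")

// luma() keys out dark pixels: anything below the threshold (0.1 here)
// becomes transparent, so the black background drops away and the
// underlying visuals show through the layered clip.
osc(10, 0.1, 1.4)                 // stand-in background pattern
  .layer(src(s0).luma(0.1, 0.1))  // animation on top, black keyed out
  .out(o0)
```

Because luma keys on brightness, this works best when the animation's background is much darker than the character; very dark parts of the character can get keyed out too.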
Next, because the video was two minutes long, we couldn't get all of it into Hydra: apparently, Hydra only accepts clips of around 15 seconds. So we chopped the video into eight 15-second pieces and triggered each one at the right time to make the animation flow. It wasn't as smooth as we expected. It took many rehearsals to get used to triggering the videos at the right moment, and even then it never fully clicked; the clips would loop before we could trigger the next one (which we did our best to cover, and the final performance reflected that effort). Aside from the animation itself, different shaders were used to create the background.
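One way to reduce the manual-triggering burden — a sketch of an idea, not something we used in the performance — is to compute which 15-second clip should be active from the elapsed time, so the switch can be scripted rather than hand-triggered:

```javascript
// Map elapsed seconds to the index of the 15-second clip that should
// be playing, for a 2-minute animation chopped into 8 clips.
const CLIP_LENGTH = 15; // seconds per chopped clip
const CLIP_COUNT = 8;   // 8 * 15s = 120s total

function activeClip(elapsedSeconds) {
  // Clamp to the last clip so the sequence does not loop past the end.
  const index = Math.floor(elapsedSeconds / CLIP_LENGTH);
  return Math.min(index, CLIP_COUNT - 1);
}

console.log(activeClip(0));   // → 0 (first clip)
console.log(activeClip(44));  // → 2 (third clip)
console.log(activeClip(130)); // → 7 (clamped to final clip)
```

In a performance loop, this index would pick which pre-loaded Hydra source to route to the output each frame.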
Chenxuan was responsible for this part of the project. We created a total of six shaders. Notably, the shader featuring the colorama effect appears more two-dimensional, which aligns better with the animation style. This is crucial because it ensures that the characters and the background seem to exist within the same layer, maintaining visual coherence.
However, we encountered several issues with the shaders, primarily a variety of errors during loading. Each shader seemed to manifest its own unique problem. For example, some shaders had data-type conflicts between integers and floats; others declared the ‘x’ or ‘y’ variables more than once, causing conflicts within the code.
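The two classes of error looked roughly like this — an illustrative GLSL excerpt, not our actual shader code:

```glsl
// Illustrative fragment-shader excerpt, not the actual performance shaders.
void main() {
    // Type conflict: GLSL does not implicitly convert int to float.
    // float x = 1;     // ERROR: assigning an int literal to a float
    float x = 1.0;      // fix: use a float literal

    // Redeclaration: declaring x (or y) twice in one scope is an error.
    // float x = 2.0;   // ERROR: 'x' redeclared in this scope
    x = 2.0;            // fix: assign instead of redeclaring

    gl_FragColor = vec4(x, 0.0, 0.0, 1.0);
}
```

Desktop GLSL compilers are often lenient about the int/float case, which is one reason the same shader can load in one environment and fail in another.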
Additionally, the shaders display inconsistently across different platforms. On Pulsar, they perform as expected, but on Flok, the display splits into four distinct windows, which complicates our testing and development process.
The audio is divided into three main parts: one before the animation, one during the animation, and one after the animation. The part before the animation features a classic TidalCycles buildup—the drums stack up, a simple melody using synths comes in, and variation is given by switching up the instruments and effects. This part lasts for roughly a minute, and its end is marked by a sample (Transcendence by Nujabes) quietly coming in. Other instruments fade out as the sample takes center stage. This is when the animation starts to come in, and the performance transitions to the second part.
The animation starts by focusing on the main character’s closed eyes. The sample, sounding faraway at first, grows louder and more pronounced as the character opens their eyes and begins to fall. This is the first out of six identifiable sections within this part, and this continues until a moment in which the character appears to become emboldened with determination—a different sample (Sunbeams by J Dilla) comes in here. This second part continues until the first punch, with short samples (voice lines from Parappa the Rapper) adding to the conversation from this point onwards.
Much of the animation features the character falling through the sky, punching through obstacles on the way down. We thought the moments where these punches occur would be great for emphasizing the connection between the audio and the visuals. After some discussion, we decided that we would achieve this by switching both the main sample and the visuals (using shaders) with each punch. Each punch is also made audible through a punching and crashing sound effect. As there are three punches total, the audio changes three times from the aforementioned second part. These are the third to fifth sections (one sample from Inspiration of My Life by Citations, two samples from 15 by pH-1).
The character eventually falls to the ground, upon which the animation rewinds quickly and the character falls back upwards. A record scratch sound effect is used to convey the rewind, and a fast-paced, upbeat sample (Galactic Funk by Casiopea) is used to match the sped-up footage. This is the sixth and final section of this part. The animation ends by focusing back on the character’s closed eyes, and everything fades out to allow for the final part to come in.
The final part seems to feature another buildup. A simple beat is made using the 808sd and 808lt instruments. A short vocal(ish?) sample is then played a few times with varying effects, as if to signal something left to be said—and indeed there is.
Code for the audio and the lyrics can be found here.
Breakdown: Noah and Aakarsh worked mainly on the music; Aakarsh made parts 2, 3, 6, 7, and 8, while Noah made parts 4 and 5. Nicholas made all the visual effects, and the group mostly chose the videos and text to be displayed together.
The music is inspired by various hyperpop, dariacore, digicore, and other internet-originated microgenres. The albums Dariacore, Dariacore 2, and Dariacore 3 by the artist Leroy were particular inspirations. People on the internet jokingly describe dariacore as maxxed-out plunderphonics, and the genre's ADHD-esque hyper-intensity, coupled with its meme-culture-infused pop sampling, was what particularly attracted me and Noah. While it originally started as a dariacore project, this 8-track project eventually ended up spanning multiple genres to provide narrative arcs and various downtempo-uptempo sections. This concept is inspired by Machine Girl’s うずまき (Uzumaki), a song that erratically cuts between starkly different music genres and emotional feels. We wanted our piece to combine that song’s compositional approach with the structure of a DJ set. Here’s a description of the various sections:
For the visuals, we wanted to incorporate pop culture references and find the border between insanity and craziness. We use a combination of real people, anime, and NYUAD references to keep the viewer guessing what’ll come next. I tried to get around Hydra’s restrictions when it comes to videos by developing my own FloatingVideo class that enabled us to play videos in p5 that we could put over our visuals. I also found a lot of use in the blend and layer functions that allowed us to combine different videos and sources onto the canvas.
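For readers wondering what a FloatingVideo-style helper might look like, here is a minimal sketch — the class name comes from the project, but every method and property here is my guess at its shape, not the actual implementation; it runs in a browser with p5.js loaded, not standalone:

```javascript
// p5.js sketch (browser with p5 loaded, not standalone Node).
// A guess at the shape of a FloatingVideo-style helper; the real
// class in the project may differ.
class FloatingVideo {
  constructor(path, x, y, w, h) {
    this.video = createVideo(path); // p5's createVideo returns a p5.MediaElement
    this.video.hide();              // hide the default DOM element
    this.x = x; this.y = y; this.w = w; this.h = h;
  }
  play() { this.video.loop(); }
  draw() {
    // Draw the current video frame onto the canvas, over whatever
    // visuals were rendered before this call.
    image(this.video, this.x, this.y, this.w, this.h);
  }
}

let fv;
function setup() {
  createCanvas(640, 480);
  fv = new FloatingVideo("clip.mp4", 100, 100, 200, 150); // path illustrative
  fv.play();
}
function draw() {
  background(0); // base visuals would be drawn here first
  fv.draw();     // then the video floats on top
}
```

Because `image()` draws after the base visuals, the video simply composites over them — the same layering idea the blend and layer functions provide inside Hydra.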