This excerpt from Live Coding: A User’s Manual describes live coding as a unique hybrid art form that questions traditional assumptions regarding knowledge and challenges the categorical divisions of epistemology. Indeed, live coding is a multidisciplinary practice that draws on various modes of knowledge and the expression of said knowledge. If one is to envision knowledge as a spectrum ranging from the tacit and unconscious to the structured and explicitly taught, live coding is a practice that calls on the practitioner to draw on both ends of that spectrum. And in doing so, live coding “demonstrates the coexistence, cooperation, and even complementarity between seemingly divergent knowledge paradigms” (219).

I have always felt that much of the appeal of live coding lies in its highly experimental nature. The philosophy of live coding encourages experimentation and contingency—in fact, it is very much defined by this acceptance of indeterminacy. Though the practice may demand some form of background competence (in computing and art, for example), the knowledge required most by live coding is that which enables one to interact with uncertainty. Live coding is thus an art form that is fueled by no-how rather than know-how; there is no set methodology or script that it abides by. It was fascinating to engage with the thought that the very existence of live coding, in embracing indeterminacy as a core tenet, prompts conversation regarding the subsequent need for alternative ways of and terms for understanding knowledge at large.

The epistemological survey of live coding through the lens of artistic research was a particularly eye-opening exploration of this medium for me. I appreciate these readings, as they help us place live coding, a relatively new practice, in broader contexts and conversations regarding media, art, technology, music, and culture. That live coding exists as a paradigm and a way of thinking at the intersection of all of these highlights the novel experimentalism the author describes. The discussions of improvisation, “making mistakes,” and play during live coding as forms of experimentation help me start seeing how we might approach this practice as a live band improv/open jam rather than a carefully rehearsed orchestra performance (the mindset I had been approaching it with).

The author explores the intersection of live coding with artistic research, delving into its fusion of technical expertise and intuitive, craft-based approaches. This blend fosters a dynamic interplay between problem-solving and the generation of obstacles. Viewing live coding as a discipline of trial and error, akin to “feeling one’s way,” highlights its reliance on technical computational knowledge while embracing a more intuitive process. In the realm of creative coding, the author often employs a technique of constant questioning, applying logic to resolve challenges in a perpetual cycle of doing and undoing, ultimately culminating in something beautiful. Yet, as in art, determining the endpoint of this process proves elusive, as completion remains subjective.

Live coding, performed within a defined timeframe, prompts reflection on the nature of performance art: whether it is an ongoing process or a singular event. The author draws inspiration from Paolo Virno’s assertion that performing artists’ actions lack extrinsic goals, existing solely for their own occurrence. This concept reminds me of Chafa Ghaddar’s fresco currently exhibited at the NYUAD Art Gallery. The fresco’s creation was partly performative, completed within the span of just a week, and the temporal constraints it faced prompt contemplation regarding its ultimate completion. Ghaddar’s acceptance of presenting an unfinished piece reflects the essence of live coding, embracing real-time creation over finality.

Contemplating the eventual loss of Ghaddar’s fresco underscores the question of impermanence. The act of producing and subsequently losing artwork prompts deeper reflection on the essence of art itself, its ability to provoke thought, and its intrinsic value beyond material existence.

Gibber is a live coding environment for audiovisual performance that combines music synthesis and sequencing with ray-marched 3D graphics. Gibber was created by Charlie Roberts, a researcher and artist interested in live coding, computer music, and interactive systems. Gibber programs are written in JavaScript. To start with Gibber, there’s no need to install anything; you can begin coding directly in your web browser. There are lots of amazing live coding systems out there; a few things that make Gibber different include:

  • Novel annotations and visualizations within the code editing environment.
  • Unified semantics for sequencing audio and visuals.
  • Support for coding with external audiovisual libraries, such as p5.js and hydra.
  • Support for networked ensemble performances.

One of the strengths of Gibber is its intuitive interface and approach to visualizing and managing sequences for each channel.
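As a rough sketch of what sequencing looks like in practice (the instrument names Kick and Synth('bleep') are recalled from Gibber’s examples, so treat the exact spellings as assumptions and check the in-browser reference), sequencing in Gibber attaches .seq() calls to instrument properties:

kick = Kick()
kick.trigger.seq( 1, 1/4 )          // trigger the kick drum on every quarter note

bass = Synth( 'bleep' )
bass.note.seq( [0, 3, 5, 7], 1/8 )  // step through scale degrees every eighth note

The same .seq() idiom is what the “unified semantics” bullet above refers to: graphics properties can be sequenced with the same calls as audio properties.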

Gibber also integrates with visuals. It allows visuals to be manipulated in real time alongside sound, offering a cohesive audiovisual performance. Visuals can react to the music, providing a more immersive and engaging experience for both the performer and the audience.

To sum up, Gibber provides an intuitive platform for users to explore musical concepts, programming, and audiovisual integration through immediate feedback and visual representation of sequences (even if you don’t know music theory!).

Here’s the link to my presentation:

https://www.canva.com/design/DAF8yNswVM8/asdv-QO-gv_DvemEwHOCgg/view?utm_content=DAF8yNswVM8&utm_campaign=designshare&utm_medium=link&utm_source=editor

SuperCollider is an environment and programming language designed for real-time audio synthesis and algorithmic composition. It provides an extensive framework for sound exploration, music composition, and interactive performance.
The application consists of three parts: the audio server (referred to as scsynth); the language interpreter (referred to as sclang), which also acts as a client to scsynth; and the IDE (referred to as scide). The IDE is built in, while the server and the client (language interpreter) are two completely autonomous programs.

What is the Interpreter, and what is the Server?

SuperCollider is made of two distinct applications: the server and the language.
To summarize in simpler terms:

Everything you type in SuperCollider is in the SuperCollider language (the client): that’s where you write and execute commands, and see results in the Post window.
Everything that makes sound in SuperCollider is coming from the server—the “sound engine”— controlled by you through the SuperCollider language.
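To make the division concrete, here is a minimal sketch of that round trip: the first line is typed and evaluated in sclang (the client), which boots the scsynth audio server; the second asks the server to play a simple tone.

s.boot;  // evaluated in the language; boots the scsynth audio server

// the function is written in sclang, but the resulting sound is produced by the server
{ SinOsc.ar(440, 0, 0.1) ! 2 }.play;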

SuperCollider for real-time audio synthesis: SC is optimized for the synthesis of real-time audio signals. This makes it ideal for use in live performance, as well as in sound installation and event contexts.
SuperCollider for algorithmic composition: One of the strengths of SC is that it combines two complementary, and at times antagonistic, approaches to audio synthesis. On one hand, it makes it possible to carry out low-level signal processing operations. On the other hand, it enables composers to express themselves through higher-level abstractions that are more relevant to the composition of music (e.g., scales, rhythmic patterns, etc.).
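As a small illustrative sketch of that higher-level side (using only built-in pattern classes and the default instrument, so treat it as an assumption about style rather than a quote from the docs):

(
Pbind(
    \scale, Scale.minor,                 // map degrees onto a minor scale
    \degree, Pseq([0, 2, 4, 7], inf),    // an arpeggio-like degree pattern
    \dur, Pseq([0.25, 0.25, 0.5], inf),  // a repeating rhythmic cell
    \amp, 0.2
).play;  // the language schedules the events; the server makes the sound
)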
SuperCollider as a programming language: SC is also a programming language in its own right, belonging to the broader family of object-oriented languages; sclang is the interpreter for it. The language is based on Smalltalk and C, and it has a very strong set of Collection classes such as Arrays.
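For example, a minimal sketch of plain sclang, independent of any synthesis:

// everything in sclang is an object; Collections like Array have rich methods
~notes = Array.fill(8, { |i| 60 + (i * 2) });  // a whole-tone row as MIDI note numbers
~notes.collect(_.midicps).postln;              // convert each note to a frequency in Hz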

Some code snippets from the demo:

For drum roll sounds:
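(The original demo snippet isn’t reproduced here; as a hedged stand-in, with the SynthDef name \roll being my own placeholder rather than the demo’s, one common way to sketch a drum roll in SuperCollider is to shorten the gap between percussive hits.)

(
SynthDef(\roll, { |amp = 0.3|
    var sig = WhiteNoise.ar(amp) * EnvGen.kr(Env.perc(0.001, 0.08), doneAction: 2);
    Out.ar(0, sig ! 2);  // a short noise burst, sent to both channels
}).add;
)

(
Pbind(
    \instrument, \roll,
    \dur, Pgeom(0.2, 0.85, 24),                    // each hit arrives 15% sooner than the last
    \amp, Pseq(Array.interpolation(24, 0.1, 0.4))  // gradually louder
).play;
)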

An interesting example of 60Hz Gabber Rave 1995 that I took from the internet:

Here’s a recording of some small sound clips made with SuperCollider (shown in class):

https://drive.google.com/file/d/1I_HxymG_OLzdw_rirGJ9iYXLxzyK8n_k/view?usp=drive_link

Cascade

Self-described as “a live coding parasite to Cascading Style Sheets”, this live coding platform is built on top of CSS, one of the main elements of any webpage. Since CSS is a styling language and Cascade uses CSS as the input for creating music, a very nice side effect emerges: you get the visuals as part of creating the audio.


How it works

How is CSS used to create music? Simply put, some key CSS properties were chosen to be interfaced with sound. Those specific properties were chosen because they have strong graphical effects and few dependencies on other CSS properties. For example, background-color impacts the visuals heavily, so it was chosen; font-size is not a good candidate because it can depend on parent elements (em/rem font sizes) and does not affect the visuals much.
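As a loose sketch of what that means in practice (the selector and the width property here are my own placeholders for illustration; the authoritative property-to-sound mapping is the one summarized in the documentation picture below), a Cascade piece is written as ordinary CSS:

/* ordinary CSS that Cascade treats as both style and sound;
   background-color is one of the mapped properties because of its strong, self-contained visual effect */
div {
  background-color: rgb(255, 51, 102);
  width: 25%; /* hypothetical second property, purely for illustration */
}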

A summary picture of the documentation is attached below.



Limitations

At first use, I enjoyed how easily the audio and visuals tied into each other. Unlike what we had to do in Hydra/TidalCycles, there is no need to create each element separately. Just by creating a beat that sounded decent, I got the bonus of the visuals already being created for me, synced with the music. This was both a boon and a limitation. Because the audio is directly dependent on the visuals, you are limited in what you can express and tied to the options available in Cascade. And since Cascade uses CSS as its interface, creating complex visuals means performing complicated DOM manipulations on your own to achieve something that other live coding platforms offer in a single function.

Slides (accessible only to NYU emails!)

Why I chose FoxDot:

I honestly started with clive because I’m a bit interested in the C language and wanted to explore something with it. However, there was a lack of information on clive: the documentation on installing it and composing music with it was not well done, making it hard to explore. So I started looking at other options and found FoxDot, which is quite similar to TidalCycles. It also had better documentation to help beginners get started. (Although I had to change from clive to FoxDot, I still think this exploration was meaningful, as it highlights the importance of documentation, especially in less common fields such as live coding.) Also, FoxDot uses SuperCollider to output sound, which made it more familiar for me.

What is FoxDot:

FoxDot is an open-source environment designed specifically for live coding. It’s written in Python, making it more intuitive for users. (At the same time, after looking at other environments our classmates have been exploring, I think there are many options that are easier and more intuitive.)

Many live coders liked FoxDot largely because it is written in Python. However, it has been explicitly announced that there will be no further development or updates: only minor changes when issues are found.

Advantages and Disadvantages of FoxDot:

Personally, I think live coding environments that involve more specialized technology and more closely mirror physical music composition are harder for people like me (who don’t have experience composing music) to understand and play around with. As somebody with a background in computer science, environments like FoxDot, where composition can be done in a more code-like manner, are much more approachable. While I was not able to get to that level of using it, FoxDot can be used along with Sonic Pi, making it more attractive for live coders.

The downside is that, since it’s no longer being updated, there are many issues with installing and running it. There were many clashes with Python versions, and I had to manually edit multiple files to make it run. Also, there is a limited number of sounds in FoxDot’s sound library, which may limit the mood and style of music live coders can create.

Example Performance:

Here is a video of a live coding performance with FoxDot done at the Momentum Tech Conference 2019 in Karachi.

Short Performance:

Below is the code I prepared to show for our in-class performance. I left out some parts by mistake, so hopefully this version sounds better.

# drums: "x" = kick, "o" = snare, "-" = hi-hat; [--] squeezes two hits into one step
p1 >> play("x-o[--]")

# sustained bass drone underneath
p2 >> jbass(dur=3, sus=1)

# chord-tone degrees and a rhythmic cell for the bassline
progression = [0, 7, 2, 4]
rhythm = P[0.5, 0.5, 0.25, 0.25, 0.25, 0.25]
bassline = P[progression].stutter(len(rhythm))

# two layers of the same bassline, the second with echo and reverb
d1 >> pluck(bassline, dur=rhythm, amp=0.5)

d2 >> pluck(bassline, dur=rhythm, amp=0.8, echo=0.5, room=0.5)

# simple melody played on pads across two octaves
melody = P[0,2,4,5,7]

d3 >> pads(melody, dur=0.5, oct=(4,5))

# extra texture layers
d4 >> scatter(melody)

d5 >> arpy(bassline, dur=rhythm)

# stop all players at the end of the performance
p1.stop()
p2.stop()
d1.stop()
d2.stop()
d3.stop()
d4.stop()
d5.stop()