Mercury is a beginner-friendly, minimalist, and highly readable language designed specifically for live-coding music performances. It was first developed in 2018 by Timo Hoogland, a faculty member at HKU University of the Arts Utrecht.

Mercury is structurally similar to JavaScript and Ruby, but written at a much higher level of abstraction. The audio engine and OpenGL visual engine are built on Cycling ’74’s Max 8, using Max/MSP for real-time audio synthesis while Jitter and Node for Max handle the live visuals. Additionally, a web-based version of Mercury leverages the Web Audio API and Tone.js, making it more accessible.

How Mercury Works

The Mercury language is rooted in serialism, a style of musical composition in which parameters such as pitch, rhythm, and dynamics are expressed as series of values (called lists in Mercury) that adjust an instrument's state over time.

In Mercury, code is executed sequentially from top to bottom, and a variable must be declared before it is used in an instrument instantiation that relies on it. The functionality is divided into three categories. A list of values is defined with the list command. An instrument—such as a sampler or synthesizer—is instantiated with the new command to generate sound. Global and per-instrument settings, such as the tempo, are adjusted with the set command.
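To make this concrete, a short sketch along the lines below defines a rhythm as a list and uses it to drive a sampler alongside a simple synth. The sample name kick_909 and the exact function names are my recollection of Mercury's defaults, so treat it as an approximation rather than a verified program:

set tempo 120
// a list of ones and zeros describing a rhythm
list aBeat [1 0 0 1 0 1 1 0]
// a sampler stepping through the list every 16th note
new sample kick_909 time(1/16) play(aBeat)
// a sawtooth synth playing a root note every 8th note
new synth saw note(0 1) time(1/8) shape(1 80)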

Mercury in the Context of Live Coding

Mercury stands out for the following highlights:

  1. A minimal, readable, and highly abstracted language: it fosters greater clarity and transparency between performers and audiences. By breaking down barriers in live coding, it enhances the immediacy of artistic expression. True to its name, Mercury embodies fluidity and quick thinking, enabling artists to translate their mental processes into sound and visuals effortlessly.
  2. Code length limit and creativity: early versions of Mercury limited the performer to a maximum of 30 lines of code, which encourages innovation and constant change by pushing iteration on the existing code rather than writing ever-longer stretches of new code.

Demo

Below is a demo of me playing around in the Mercury Web Editor Playground:

Slides

The article discusses the importance of human embodied presence in music, emphasizing how both intentional and unintentional “imperfections”—along with the physical movements of musicians—play a crucial role in shaping the “soul” of music.

At first, I found myself wondering the same thing: Does electronic music really have a soul? The perfection in electronic music—precise timing, flawless pitch, and speeds that human musicians cannot physically achieve—often creates a robotic and somewhat inaccessible quality. This seems to contrast with the warmth and expressiveness of human-performed music.

However, as I reflected on my own experiences with techno and electronic music, I realized that we are actually drawn to its cold, half-human, and futuristic aesthetic. As the author describes, it embodies a “disembodied, techno-fetishistic, futuristic ideal.” In this sense, electronic music’s unique identity is not about replicating human imperfection but about embracing a different kind of artistic expression.

The evolution of electronic music challenges us to rethink the essence of musical “soul.” Does music require human musicians physically playing instruments to be considered soulful? Ultimately, both electronic sounds and traditional instruments are merely mediums for artistic expression. Defining musical soul solely based on the medium—whether electronic or acoustic—seems arbitrary. If digital music is purely “cold,” does that mean instrumental music is purely “warm”?

Even when electronic music fully embraces mechanical perfection, it can still be deeply expressive, depending on how the artist uses it. As I mentioned earlier, techno and other electronic genres transform cold precision into something deeply moving. The soul of music does not come from imperfection alone, but from the wild and imaginative ideas of the artist. Rather than rigidly defining musical soul based on how “human” a sound is, we should recognize that it is the artist’s vision that gives music its depth, emotion, and meaning.


P.S. When I was reading the “backbeat” part and the line about the snare drum being played microscopically later than the midpoint between two consecutive pulses, I tried to replicate the rhythm in Tidal (not sure if it’s right, but it sounds close). Then I searched YouTube for drummers playing a backbeat. Unfortunately, I wasn’t able to hear the tiny time difference between the two. Maybe I need more listening training for this :)

d1 $ stack [
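    -- snare lands on beats 2 and 4 (the backbeat), layered over eighth-note hi-hats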
    s "bd ~ sn ~ bd bd sn ~",
    s "hh*8"
] # room 0.3

These paragraphs explore the concept of live coding and why it attracts people. As an interdisciplinary practice combining coding, art, and improvised performance, live coding appeals to both technicians and artists. It provides a unique medium to appreciate the beauty of coding and the artistic aspects often hidden within what is typically seen as a highly technical and inaccessible field.

I encountered live coding for the first time while working as a student staff member at ICLC2024. These performances gave me a basic understanding of live coding as a layperson. Reading this article later deepened my perspective and sparked new thoughts.

The article describes live coding as a way for artists to interact with the world and each other in real-time through code. Watching live coding performances, I initially assumed artists focused entirely on their work, treating the performance as self-contained and unaffected by external factors. However, I may have overlooked the role of the audience, the venue, and the environment in inspiring the artists and adding new layers to the improvisation. As someone who loves live performances, I now see live coding as another form where interaction between the artists and their surroundings is crucial.

The article also mentions how projecting code on the screen as the main visual makes the performance more transparent and accessible. While I agree with this, it also raises a concern. A friend unfamiliar with live coding once referred to it as a “nerd party,” commenting that it’s less danceable than traditional DJ performances and difficult for non-coders—or even coders unfamiliar with live coding languages—to follow. I wonder if this limits the audience’s ability to understand and fully appreciate the performance or the essence of the art form. Although this may not be a significant issue, it’s something I’m curious about.