In my research project, I explored FoxDot, a live coding platform built on Python and SuperCollider. Like Tidal Cycles, it is designed for improvising and composing music. What distinguishes FoxDot is its Python-driven environment, which is the main reason I chose this platform: I am more familiar with Python's syntax.

One Fun Fact Discovered While Researching:

The creator of FoxDot, Ryan Kirkbride, attributes his inspiration for building a live coding platform to Alex McLean, the creator of Tidal Cycles and of the term algorave. It turns out that McLean mentored Kirkbride in computer music while Kirkbride was pursuing his master's degree.

Why Python?

With FoxDot I wanted to create an application that bridged the gap between software engineering and Live Coding so that users who were entry level to programming, composition, or both would still be able to grasp the concepts and make music.

What makes Python stand out? First is its object-oriented programming. The object-oriented design makes it easier to trace the current state of different variables, as the image below shows:

The changes applied to p1 can easily be tracked from the output shown in the terminal. Python's clean syntax helps both live coders and audiences follow the code. A wide range of third-party libraries greatly expands what live coding can achieve: for example, the tkinter package can build a customized GUI, and extensions add support for Sonic Pi, voice rendering, and more.
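
To make this concrete, here is a minimal FoxDot sketch (assuming a standard FoxDot installation with SuperCollider running; the notes and durations are arbitrary) showing how a player object such as p1 can be inspected and updated like an ordinary Python object:

```python
from FoxDot import *

Clock.bpm = 120                      # tempo is an attribute of the global Clock

p1 >> pluck([0, 2, 4, 7], dur=1/2)   # p1 is a predefined Player object

print(p1.degree)                     # inspect the current note pattern
print(p1.dur)                        # inspect the current durations

p1.amp = 0.5                         # change a single attribute on the fly
p1.stop()                            # silence the player
```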

From a beginner’s perspective, the rapid auto-completion feature significantly improves coding fluency. This responsiveness is a notable advantage over platforms like Tidal Cycles, which can be laggy in practice.

Cultural Impact

Broad compatibility expands creative expression. FoxDot works well with many libraries and software tools, allowing artists to mix different digital resources and create unique audiovisual experiences.

User-friendly language lowers the learning curve. The simplicity of Python as the programming language makes it accessible to beginners and non-programmers, lowering the barrier to entry and encouraging more people to engage in live coding.

Accessibility builds a large live coding community. By making live coding more approachable, FoxDot has built a sizable and active community. This community supports idea sharing and collaboration, drawing a diverse range of artists and enthusiasts into the live coding culture.

Demonstration of Performance

The following video shows me playing with some functions and effects in FoxDot:

Overview of Sonic Pi

Released in 2012, Sonic Pi is a live coding environment based on the programming language Ruby. It was initially designed by Sam Aaron at the University of Cambridge Computer Laboratory for teaching computing lessons in schools. It gives users up to 10 buffers for creating audio and can drive visual effects via other platforms like p5.js and Hydra.

As Sam Aaron wrote in the Sonic Pi tutorial, the software encourages users to learn about both computing and music through play and experimentation. It provides instant feedback for students, and because it produces music instead of typical text output, it is more engaging than traditional coding environments like Java or Python. It also allows users to connect computers to instruments and make remixes within the software.

Interface of Sonic Pi

The interface of Sonic Pi can be divided into nine parts:

  1. Play Controls
    • Play control buttons start and stop sounds. Clicking Run executes the code in the current buffer, and clicking Stop halts all running code.
    • The Record button lets users save the audio played in Sonic Pi as a high-fidelity recording.
    • The Save and Load buttons let users save the current code as a .rb file and load .rb files from their computers.
  2. Code Editor
    • Users write code here to compose or perform music (a short example follows this list).
  3. Scope Viewer
    • The scope viewer lets users see the waveforms of the sound playing on both channels.
  4. Log Viewer
    • Displays messages and updates from the running program.
  5. Cue Viewer
    • All internal and external events (called cues in Sonic Pi) are automatically logged in the Cue Viewer.
  6. Buffer Switch
    • Lets the user switch between 10 buffers provided by the software.
  7. Link Metronome, BPM scrubber, and time warp setter
    • Link Metronome lets users link the software to external metronomes and synchronize the BPM with them.
    • The Tap button lets users tap out a tempo manually; Sonic Pi measures the taps and adjusts its BPM automatically.
    • The BPM scrubber shows the current BPM and lets users modify it.
    • The time warp setter lets the user shift every sound to trigger slightly earlier or later.
  8. and 9. Help system
    • Displays the tutorial for Sonic Pi. Users can browse all documentation and preview the samples via the help system.
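
To tie these parts together, below is a minimal sketch of the kind of code one might type into the Code Editor and launch with the Run button (the sample and scale names are standard built-ins, but the pattern itself is only an illustration):

```ruby
# Paste into any buffer and press Run; press Stop to halt all loops.
use_bpm 120

live_loop :drums do
  sample :bd_haus          # built-in kick drum sample
  sleep 0.5                # wait half a beat
end

live_loop :melody do
  play scale(:e3, :minor_pentatonic).choose, release: 0.2
  sleep 0.25
end
```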

Performance Demo

Reflection: Pros and Cons of Sonic Pi

As educational software, Sonic Pi does a great job of embedding detailed tutorials and documentation directly in the application. Its large collection of samples and synthesizers lets users make all kinds of music. However, the quality of the samples is uneven, and producing high-quality music takes considerable learning and adjustment.

Mercury is a beginner-friendly, minimalist, and highly readable language designed specifically for live-coding music performances. It was first developed in 2018 by Timo Hoogland, a faculty member at HKU University of the Arts Utrecht.

Mercury is structured similarly to JavaScript and Ruby and is written at a high level of abstraction. The audio engine and OpenGL visual engine are built on Cycling ’74 Max 8, using Max/MSP for real-time audio synthesis while Jitter and Node4Max handle live visuals. Additionally, a web-based version of Mercury leverages WebAudio and Tone.js, making it more accessible.

How Mercury Works

The Mercury language is rooted in serialism, a style of musical composition in which parameters such as pitch, rhythm, and dynamics are expressed as series of values (called lists in Mercury) that adjust an instrument's state over time.

In Mercury, code is executed sequentially from top to bottom, and a variable must be declared before it is used in an instrument instantiation that relies on it. The functionality is divided into three categories: the list command defines a list of values, the new command instantiates a sound-generating instrument such as a sampler or synthesizer, and the set command adjusts global or per-instrument settings such as the tempo.
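
As a rough sketch of how these commands combine (based on the patterns in the Mercury documentation; the specific sample name, synth, and parameter values here are my own assumptions):

```
// set a global parameter, then declare a list before the instruments that use it
set tempo 110

list melody [0 3 7 12]

new sample kick_909 time(1/4)
new synth saw note(melody 1) time(1/8)
```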

Mercury in the Context of Live Coding

What makes Mercury stand out are the following highlights:

  1. The minimal, readable and highly abstract language: It fosters greater clarity and transparency between performers and audiences. By breaking down barriers in live coding, it enhances the immediacy of artistic expression. True to its name, Mercury embodies fluidity and quick thinking, enabling artists to translate their mental processes into sound and visuals effortlessly.
  2. Code Length Limit and Creativity: Early versions of Mercury limited the performer to 30 lines of code, which encourages innovation and constant change by pushing performers to iterate on existing code rather than write another long block of code.

Demo

Below is a performance demo of me playing around in the Mercury Web Editor Playground:

Slides

What Is Live Coding? 

From the reading, I managed to gain an insightful understanding of what live coding is. From my own perspective, I would claim that it is a practice of improvisatory live performance through the use of code. Ultimately, we use code to connect ourselves to our artistic desires and visions, and doing it in real time means that there is a level of improvisation that live coders indulge in. Therefore I agree with Ogborn's resistance to defining live coding, as a definition gives it a fixed state and does not acknowledge its flexible nature.

Live coding removes the curtain between the audience and the performer: when the code is projected on a screen, the audience can connect with the performer by visualising how the programmer thinks in real time. The act of writing in public thus adds an element of interactivity, honesty, and even creativity, all of which are pillars of the process of live coding.

Reading about microtiming made me think differently about rhythm and how much subtle timing variations shape the way we experience music. I’ve always felt that some songs just “hit different,” but I never really considered how small delays in a drumbeat or a slightly rushed note could create that feeling. The discussion on African and African-American musical traditions, especially how groove emerges from microtiming, reminded me of songs that make me want to move even if I’m just sitting still. It’s fascinating how something so precise—down to milliseconds—can make music feel more human.

The idea of being “in the pocket” stood out to me, especially in relation to genres like funk, hip-hop, and R&B, where rhythm feels alive and interactive. I’ve noticed that in a lot of my favorite songs, the backbeat isn’t rigid but slightly laid back, creating that smooth, effortless vibe. It also makes me think about live performances versus studio recordings—sometimes, a live version feels more engaging because it has those natural imperfections that quantized beats remove. This makes me appreciate how rhythm isn’t just about keeping time but about shaping emotion and energy.

This chapter also made me reflect on how technology influences our sense of groove. With so much music being produced digitally, there’s a balance between precision and feel. Some tracks use quantization to sound perfect, but others intentionally keep human-like imperfections to maintain a groove. I’ve noticed how producers add swing to drum patterns in genres like lo-fi hip-hop, recreating the organic feel of live drumming. It’s interesting to see how microtiming isn’t just a technical detail but a crucial part of musical expression, bridging tradition and innovation in ways I hadn’t fully appreciated before.

When I was in middle school, during one of his rare visits, my father showed me an Aphex Twin song from Syro. At that point, having grown accustomed to Ariana Grande and Justin Bieber radio hits, I had said, “It’s just noise.” He responded, “You’ll be able to see the patterns–the music–someday.” My brother and I would sit in front of our battered Bluetooth speaker and listen to Aphex Twin songs in order to understand what our father possibly saw in these strange metal-like songs. We would point out a sound when it arrived at a moment that at first seemed too early or too late. We would gape at sounds that surprised us because they arrived and repeated in ways we hadn’t expected them to, and gradually, we began to love this sense of musically organized disintegration. Ariana Grande and Justin Bieber no longer interested us––they bored us. My brother eventually became a jazz drummer and obsessed over the likes of Miles Davis and John Coltrane and is now majoring in music. I trace his interest back to this anecdote.

Aphex Twin is inseparable from experimental electronic music, and yet, despite being so electronic, his work is undeniably intimate and human. I believe one of the reasons for this is his genius knack for variation and timing, something he learned, as the reading pointed out, from African rhythms. Stockhausen wasn’t the biggest fan of Aphex Twin’s work because he claimed he should “stop with all these post-African repetitions” and should “look for changing tempi and changing rhythms.” But Aphex Twin’s ability to continuously repeat interlocking rhythms with genius variations is why people have come to love and idolize him so much. Not everyone can do so much with so little. Another artist that this reading reminded me of was Tune-Yards, who also draws from African rhythms to create human-like complexities and variations in her work. She has a lyric that goes: “I use my white woman’s voice to tell stories of African men…” As we go full throttle into a tactile-less, screen, technocratic world, I believe prioritizing the HUMAN in music is more important than ever. Or, not even important, but the kind of music we will increasingly seek out because we need it. I was never a fan of dubstep and EDM because they were too saccharine and simple. I want imperfection. I want the human body, with all its limitations and attempted breaking of those limitations. I really enjoyed this reading and the insight I was able to glean into the music I love through it.

This reading made me realize how much emotion hides in tiny timing details! I never thought a snare drum hit slightly late could create that “laid-back” groove feeling. It’s wild how West African traditions—like stomping/clapping in the ring shout—evolved into modern drumset backbeats.

After reading this paper, the idea that “soul” comes from human imperfection stuck with me. Even drum machines today try to fake those micro-delays to sound more “human.” But when tech goes too far, music feels robotic—like it’s missing a body. On the flip side, artists like George Lewis use computers to add new layers of creativity, blending human and machine in improvised jazz. I also found the link between body movement and rhythm fascinating. Bass drum = foot, snare = hand. That connection to dance and ritual explains why groove feels so physical. It’s not just sound; it’s like the music is a body moving.

This made me listen differently. Now I notice how tiny delays or “mistakes” give music its heartbeat. Even pop stars like Madonna try to inject “soul” into electronic beats—but maybe the real magic is already in those micro-moments we feel but don’t always hear.