Alda is a text-based, open-source programming language designed for musicians to compose music in a text editor without needing complex graphical user interface (GUI) software.
Alda was created by Dave Yarwood in 2012. Interestingly, he was a classically trained musician long before he became a competent programmer.
Why Alda?
Compared with the complex GUI applications available at the time, Dave Yarwood found that composing pieces of music in a text editor was a pleasantly distraction-free experience.
How it Works
The process is beautifully simple:
Write the notes in a text file using Alda’s syntax
Run the file through the Alda interpreter
Hear the music come to life
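For example, a minimal sketch of that workflow might look like this (the file name is made up, and the exact CLI flags can vary between Alda versions):

# hello.alda
piano: c8 d e f g2

# then, from a terminal:
alda play --file hello.alda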
Key Features
Alda uses the General MIDI sound set — giving you access to over 100 instruments.
Basic Syntax
Pitch: The letter represents the pitch. c is C, d is D, e is E, and so on.
Duration: The number indicates the note length: 4 is a quarter note, 2 is a half note, 1 is a whole note. In Alda, smaller numbers mean longer notes, which is backwards from how we normally think!
Octave: An octave command tells Alda how high or low to play: o4 puts you in the octave of middle C, o5 is one octave higher, o3 is one octave lower.
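Putting those three pieces together, here is a hedged sketch of a short Alda part (the piano part and the melody itself are just an illustration):

piano:
  o4 c4 d4 e4 f4 g2 g2
  o5 c1

The first line picks the instrument, the second plays a rising quarter-note figure and two half notes in the octave of middle C, and the third holds a whole-note C one octave higher.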
Handy Shortcuts
> moves you up one octave
< moves you down one octave
Chords and Rests
The forward slash / is your chord maker. It tells Alda to play notes at the exact same time.
Rests use r instead of a note name. The same duration rules apply.
Visual Aid
The vertical bar | does absolutely nothing to the sound. It’s just there to help you read the music more easily.
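Here is a hedged sketch that combines these ideas, again with notes chosen purely for illustration:

piano:
  o4 c4 e4 g4 > c4 | < c2/e2/g2 r2

The > before the last note of the first bar jumps up an octave and < brings the part back down, c2/e2/g2 plays a C major chord as three half notes sounding together, r2 is a half-note rest, and the | changes nothing about the playback.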
Alda represents a beautiful intersection between programming and musicianship—proof that sometimes the simplest tools can inspire the most creative work.
This project explores Glicol, a graph-oriented live coding language that runs directly in the browser. Unlike traditional music software, Glicol allows users to build sound by connecting small units called nodes. Each node performs a simple function such as generating a wave, shaping volume, filtering sound, or adding effects. By chaining these nodes together, complex sound structures can be created in real time.
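As a rough illustration of that node-chaining idea, here is a tiny, hedged sketch in Glicol-style syntax (the node names and argument formats are from memory and may not match the current Glicol reference exactly):

~wobble: sin 0.5 >> mul 0.3 >> add 0.5
out: saw 110 >> lpf 800.0 1.0 >> mul ~wobble

One chain (~wobble) turns a slow sine wave into a control signal, and the other runs a sawtooth oscillator through a low-pass filter and uses ~wobble to shape its volume, all editable while the sound is running.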
In my live demo, I demonstrated oscillators, sequencing patterns, envelope shaping, frequency modulation, filtering, and delay effects. I also showed how small code changes immediately affect the output, which makes Glicol powerful for experimentation and performance.
What makes Glicol interesting for interactive media is its accessibility. It requires no installation and combines code, sound, and creativity in a very direct way. This makes it suitable for beginners while still allowing advanced exploration.
Max, also known as Max/MSP/Jitter, is a visual programming language for music and multimedia developed and maintained by the software company Cycling '74. Over its more than thirty-year history, it has been used by composers, performers, software designers, researchers, and artists to create recordings, performances, and installations. It offers a real-time multimedia programming environment where you build programs by connecting objects into a running graph, so time, signal flow, and interaction are visible in the structure of the patch. Miller Puckette began work on Max in 1985 to provide composers with a graphical interface for creating interactive computer music. Cycling '74's first Max release, in 1997, was derived partly from Puckette's work on Pure Data. Called Max/MSP ("Max Signal Processing," or the initials of Miller Smith Puckette), it remains the most notable of Max's many extensions and incarnations: it made Max capable of manipulating real-time digital audio signals without dedicated DSP hardware. This meant that composers could now create their own complex synthesizers and effects processors using only a general-purpose computer like the Macintosh.
The basic language of Max is that of a data-flow system: Max programs (named patches) are made by arranging and connecting building-blocks of objects within a patcher, or visual canvas. These objects act as self-contained programs (in reality, they are dynamically linked libraries), each of which may receive input (through one or more visual inlets), generate output (through visual outlets), or both. Objects pass messages from their outlets to the inlets of connected objects. Max is typically learned through acquiring a vocabulary of objects and how they function within a patcher; for example, the metro object functions as a simple metronome, and the random object generates random integers. Most objects are non-graphical, consisting only of an object's name and several arguments and attributes (in essence, class properties) typed into an object box. Other objects are graphical, including sliders, number boxes, dials, table editors, pull-down menus, buttons, and other objects for running the program interactively. Max/MSP/Jitter comes with about 600 of these objects as the standard package; extensions to the program can be written by third-party developers as Max patchers (e.g. by encapsulating some of the functionality of a patcher into a sub-program that is itself a Max patch), or as objects written in C, C++, Java, or JavaScript.
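Since extensions can be written in JavaScript, here is a hedged sketch of what a tiny custom object might look like using Max's built-in js object (the file name and the behavior are my own invention, not a shipped example):

// randomnote.js -- load it in a patcher with a [js randomnote.js] object
inlets = 1;   // one inlet, e.g. to receive bangs from a metro
outlets = 1;  // one outlet to pass numbers onward

function bang() {
    // on every bang, send out a random MIDI pitch, much like [random 128]
    outlet(0, Math.floor(Math.random() * 128));
}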
Max is a live performance environment whose real power comes from combining fast timing, real time audio processing, and easy connections to the outside world like MIDI, OSC, sensors, and hardware. Max is modular, with many routines implemented as shared libraries, and it includes an API that lets third party developers create new capabilities as external objects and packages. That extensibility is exactly what produced a large community: people can invent tools, share them, and build and remix each other’s work, so the software keeps expanding beyond what the core program ships with. In practice, Max can function as an instrument, an effects processor, a controller brain, an installation engine, or a full audiovisual system, and it is often described as a common language for interactive music performance software.
Cycling ’74 formalized video as a core part of Max when they released Jitter alongside Max 4 in 2003, adding real time video, OpenGL based graphics, and matrix style processing so artists could treat images like signals and build custom audiovisual effects. Later, Max4Live pushed this ecosystem into a much larger music community by embedding Max/MSP directly inside Ableton Live Suite, so producers could design their own instruments and effects, automate and modulate parameters, and even integrate hardware control, all while working inside a mainstream performance and production workflow.
Max/MSP and Max/Jitter helped normalize the idea that you can build and modify a performance instrument while it is running, not just “play” a finished instrument. This is the ethos of live coding – to “show us your screens” and treat computers and programs as mutable instruments.
Live Coding + Cultural Context
With the increased integration of laptop computers into live music performance (in electronic music and elsewhere), Max/MSP and Max/Jitter received attention as a development environment for those serious about laptop music/video performance. Max is now commonly used for real-time audio and video synthesis and processing, whether that means customizing MIDI controls, creating new synths or sampler devices (especially with Max4Live), or creating entire generative performances within Max/MSP itself. Live reports and interviews repeatedly framed shows around laptops running Max/MSP, which helped cement the idea that algorithmic structure and real-time patch behavior can be the performance. In short, the laptop became a serious techno performance instrument. To document the evolution of Max/MSP/Jitter in popular culture, I compiled a series of notable live performances and projects where Max was a central part of the setup, inspiring fans to use Max for their own music-making performances and practices. Max became a shared programming language among artists who performed on the stage and in the basement, uniting an entire community around live computer music.
“This is a live recording, captured at Ego club in Duesseldorf, June 5 1999. The music has been created with a self written step sequencer, the PX-18, controlling a basic sample player and effects engine, all done in Max MSP, running on a Powerbook G3. The step sequencer had some unique features, e.g. the ability to switch patterns independently in each track, which later became an important part of a certain music software” from RobertHenke.com.
“Flint’s was premiered San Francisco Electronic Music Festival 2000. Created and performed on a Mac SE-30 using Max/MSP with samples prepped in Sound Designer II software. A somewhat different version of the piece appeared on the 2007 release Al-Noor on the Intone label” from “Unseen Worlds.”
He used Max to create live music from Nike shoes, and generally talked about how he likes using Max MSP and Jitter to map real time physical oscillations (like grooves and needles – or the bending of shoes) into live audiovisual outcomes.
MaxMSP allows for collective performance formats, such as laptop orchestras and networked ensembles. The Princeton Laptop Orchestra lists MaxMSP among the core environments used to build the meta-instruments played in the ensemble, turning patching into a social musical practice, not just an individual studio practice.
By highlighting MaxMSP/Jitter's centrality within a wider canon of musicians and subcultures, and linking examples of its development through time, I hope I have demonstrated how Max has contributed to community building around live computer music. Both MaxMSP and live coding express the same underlying artistic proposition that the program is the performance, but Max lowers the barrier for people who think in systems and signal flow rather than syntax, and it scales all the way from DIY one-person rigs to commercial mixed music productions to mass user communities.
My Experience
Computers are very intimidating animals to me. Because of this, I especially appreciated Max/MSP for a couple of reasons. One, because it's so prolific within the live coding/music community (like Arduino), there is an inexhaustible well of resources to draw from to get started. For every problem or idea you have, there is a YouTube tutorial or a Reddit, Cycling '74, or Facebook forum to help you out. The extensive community support and engagement makes Max extremely accessible and approachable, and promotes exchange between beginners and serious artists. Two, Max gives immediate feedback that easily hooks and engages you.
I liked using Jitter, because visuals are just so fun. This is a Max patch I made some time ago, where I would feed old camcorder videos into various filters and automatically get colorful, glitchy videos.
But for the purposes of this demonstration, I wanted to create a patch where visuals and sound react to each other somehow. I searched for cool Max Jitter patches and found a particle system box on GitHub, linked here: https://github.com/FedFod/Max-MSP-Jitter/blob/master/Particles_box.maxpat. Then I looked up how I could make a basic sound affect the size of the particles, so they react to the sound in real time. I created the input object, or sound source (cycle), connected it to the volume (the wave cycles between two parameters, in this case -0.2 and +0.2), and the output object is the speaker, so we can hear it. To link this sound to the Jitter patch, a metro tells the conversion objects to read the signal. I connected the sound signal to abs, snapshot, and scale to remap it into numbers that can be read by "point-size" and visualized in Jitter. Toggling the metro off stops "point-size" from reading the measurements from the audio signal, severing the connection between the sound and the particles, as I demonstrate in my video. The result is that the louder the sound, the bigger the particles, and vice versa. I liked using Max because it was extremely helpful to make the "physical" connections and watch how an effect is created by following the signal flow, which made it much easier to understand how audio can be translated into visual signals in real time.
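In outline, the signal flow I built looks roughly like this (my own summary of the patch described above, not a literal screenshot):

cycle (sound source) -> volume scaling between -0.2 and +0.2 -> speaker output
cycle (same signal) -> abs -> snapshot (read on each metro tick) -> scale -> "point-size" -> particle system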
Mercury is a browser-based live coding environment created by Timo Hoogland. It is designed specifically to make algorithmic music performance human-readable and accessible to beginners. Unlike traditional programming languages that require complex syntax, Mercury uses a simplified, English-like structure (e.g., "new sample beat"), allowing the audience to read and understand the code as it is written.
Mercury operates as a high-level abstraction over the Web Audio API, running entirely in the browser without requiring external software or heavy audio engines. A key feature of the platform is its integrated audiovisual engine. It seamlessly connects audio generation with visual synthesis, often powered by Hydra, allowing performers to generate sound and 3D graphics simultaneously within a single interface. This design transforms the act of coding into a live, improvisational performance art, blurring the line between technical scripting and musical expression.
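To give a sense of how readable that is, a hedged sketch of a Mercury snippet might look like the following (the sample and instrument names are my assumptions rather than examples taken from the Mercury documentation):

set tempo 120
new sample kick_909 time(1/4)
new synth saw note(0 2) time(1/8) shape(1 80)

Each line reads almost like a sentence: make a new sample or synth, then chain functions such as time(), note(), and shape() to say when it plays and how it sounds.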
I chose Sardine because I am more comfortable with Python, and I thought it would be interesting to use it for live coding. Since it is relatively new and especially because it is an open-source project, it caught my attention. I like open-source projects because, as a developer, they allow us to build communities and collaborate on ideas. I also wanted to be part of that community while trying it out.
Sardine was created by Raphaël Maurice Forment, a musician and self-taught programmer based in Lyon and Paris. It was developed in 2022 (or around then; I am not entirely sure) for his PhD dissertation in musicology at the University of Saint-Étienne.
So, what can we create with Sardine? You can create music, beats, and almost anything related to sound and musical performance. By default, Sardine utilizes the SuperDirt audio engine. Sardine sends OSC messages to SuperDirt to trigger samples and synths, allowing for audio manipulation.
Sardine requires Python 3.10 or above.
How does it work?
Sardine follows a Player and Sender structure.
Pa is a player and it acts on a pattern.
d() is the sender and provides the pattern.
* is the operator that assigns the pattern to the player.
Syntax Example:
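A minimal sketch, assuming the standard SuperDirt samples bd and sn are available (the pattern string and the p period argument are illustrative, not taken from the original post):

Pa * d('bd, sn', p=0.5)

Pa is the player, d('bd, sn') is the sender providing an alternating kick and snare pattern, * binds the pattern to the player, and p sets how quickly the pattern advances.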
Isn’t the syntax simple? I found it rather straightforward to work with. Especially after working with SuperDirt, it looks similar and even easier to understand.
There are two ways to generate patterns with Sardine:
Players, a shorthand syntax built on top of @swim functions
@swim functions
Players can also generate complex patterns, but you quickly lose readability as things become more complicated.
The @swim decorator allows multiple senders, whereas Players can only play one pattern at a time.
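For comparison, here is a hedged sketch of a swim function with two senders (the sender names and arguments follow my memory of the Sardine documentation and may differ slightly):

@swim
def drums(p=0.5, i=0):
    D('bd, sn', i=i)
    D('hh', i=i)
    again(drums, p=0.5, i=i+1)

Both senders fire on every pass, and the call to again() reschedules the function, which is how swim functions keep themselves running.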
My Experience with Sardine
What I enjoyed most about working with Sardine is how easy it is to set up and start creating. I did not need a separate text editor because I can directly interact with it from the terminal. There is also Sardine Web, where I can write and evaluate code easily.
Very well-written documentation, even though there are not many tutorials online
Easy to synchronize with visuals by installing Claude (Claude is another open-source tool for synchronizing visuals with audio in a live-coding context. It has a Sardine extension and allows you to control an OpenGL shader directly from Sardine.)
Gibber is a browser-based live coding environment developed by Charlie Roberts and JoAnn Kuchera-Morin, who worked with their UCSB team to create an accessible platform for live music and audiovisual programming. Many existing live coding tools put obstacles in front of learners: they have to be installed, they require specialized programming languages, and they involve intricate setup procedures. Gibber addresses these challenges by running entirely in the web browser and using plain JavaScript, so users can start coding immediately without installing anything. It offers terse abstractions over the Web Audio API for building oscillators, FM synthesis, granular synthesis, audio effects, and musical patterns with minimal code. Its timing system supports both sample-accurate scheduling and musical time values, and its Seq and Score tools let users create everything from freeform musical patterns to organized compositions. Gibber also combines visual rendering with real-time code collaboration: performers can edit code together, with CRDTs keeping everyone's session consistent throughout the performance.
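For a feel of the style, here is a short, hedged sketch in Gibber's JavaScript idiom (the preset name and the sequencing arguments are assumptions, not checked against the current release):

syn = Synth('bleep')                // a simple melodic voice
syn.note.seq( [0, 2, 4, 7], 1/8 )   // step through scale degrees every eighth note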
Gibber's design makes it effective both as a teaching instrument and as a performance tool. Its simplified syntax lets beginners achieve musical results quickly while still giving advanced users room for complex experimentation. Because it runs in any web browser, it is well suited to classrooms, workshops, and online group performances. Its integrated graphics system lets artists build audio-responsive visuals, interactive drawings, and multimedia shows within a single coding environment, and its pattern sequencers, modulation tools, and synthesis options support everything from rhythmic beat-making to experimental sound design. The collaborative features further distinguish Gibber: multiple performers can code together in real time, sharing musical ideas through their common code rather than through synchronized audio streams. Altogether, it works as both a learning platform and a place to practice and create with others in electronic art.
I researched P5LIVE, a collaborative live coding platform for p5.js that runs in the browser. Live coding is an art practice where the process is the performance. You write and change code in real time, and the visuals update immediately, often with the code visible too. P5LIVE was created by Ted Davis, a media artist and educator in Basel, originally for a Processing Community Day event where people wanted to live code visuals during a DJ party.
What I like about P5LIVE is that it treats live coding as a social activity. It lowers friction by running in the browser, and it makes collaboration feel natural through links and shared rooms. It is not just an editor. It is a space where teaching, performance, and experimentation overlap. Instead of coding being private and finished, P5LIVE encourages coding as something collective and ongoing.
P5LIVE’s key feature is COCODING, which works like Google Docs for code. You create a room link, others join, and everyone edits the same sketch together in real time while the visuals run locally for each person. It also includes classroom and performance features like lockdown mode, chat, and SyncData, which lets people share live inputs like MIDI or mouse data with the group. In my demo, I will show the instant feedback loop and a basic COCODING session.
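To make the feedback loop concrete, here is a small p5.js sketch of the kind you could paste into a P5LIVE or COCODING session and edit live (the visuals themselves are just an example):

function setup() {
  createCanvas(windowWidth, windowHeight)
}

function draw() {
  background(0, 20)                               // translucent background leaves motion trails
  fill((frameCount * 2) % 255, 120, 200)
  circle(mouseX, mouseY, 40 + 20 * sin(frameCount * 0.1))
}

Change any number while it runs and the canvas updates immediately, with the sketch running locally for everyone in the room.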