What is Sardine?

For my research project, I chose the live coding platform Sardine. I went with Sardine because it stands out as a relatively new and exciting option built with Python. Sardine is a live coding environment and library for Python 3.10 and above. What sets Sardine apart is its focus on modularity and extensibility: key components like clocks, parsers, and handlers are designed to be easily customized and extended. Think of it as a toolkit for building your own personalized live coding setup. It allows you to customize I/O logic without rewriting or refactoring low-level system behavior.

In its complete form, Sardine is designed to be a flexible toolkit for building custom live coding environments. The core components of Sardine are:

  • A Scheduling System: Based on asynchronous and recursive function calls (see the sketch after this list).
  • A Modular Handler System: Allowing the addition/removal of various inputs/outputs (e.g., OSC, MIDI).
  • A Pattern Language: A general-purpose, number-based algorithmic pattern language.
    • For example, a simple pattern might look like this:
      D('bd sn hh cp', i=i)
  • The FishBowl: A central environment for communication and synchronization between components.
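
To make the scheduling component concrete, here is a minimal swimming function in the style of the Sardine documentation (the function name and tempo value are mine; the pattern string mirrors the example above):

      @swim
      def demo(p=0.5, i=0):
          # send one step of the pattern to the default output; i selects the step
          D('bd sn hh cp', i=i)
          # recursively re-schedule demo every half beat, advancing the iterator
          again(demo, p=0.5, i=i+1)

Because again() re-schedules the function asynchronously, you can edit the body and re-evaluate it while the music keeps playing.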

However, configuring and using the full Sardine environment can be complex. This is where Sardine Web comes in. Sardine Web is a web-based text editor and interface built to provide a user-friendly entry point into the Sardine ecosystem. It simplifies the process of writing, running, and interacting with Sardine code.

The Creator: Raphaël Maurice Forment

Sardine was created by Raphaël Maurice Forment, a musician and live coder from France, based in Lyon and Paris. Raphaël is not a traditional programmer but has developed his skills through self-study, embracing programming as a craft practice. He is currently pursuing his PhD at the Jean Monnet University of Saint-Etienne, focusing on live coding practices. His work involves building musical systems for improvisation, and he actively performs in concerts, gives workshops, and teaches live coding in both academic and informal settings.

Sardine began as a side project to demonstrate techniques for his PhD dissertation, reflecting his interest in exploring new ways to integrate programming with musical performance. Raphaël’s background in music and his passion for live coding have driven the development of Sardine, aiming to create a flexible tool that can be adapted to various artistic needs.

My Live Demo

To demonstrate Sardine in action, I created a simple piece that highlights its scheduling and pattern language capabilities.

In this code snippet I used the two different types of senders: d, the shorthand used with Player instances for quick pattern creation, and D, used inside @swim functions, the fundamental mechanism for creating patterns in Sardine.
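
For reference, the Player shorthand looks roughly like this (a sketch based on the Sardine documentation; Pa is one of the pre-defined Player instances), while the @swim form follows the sketch shown earlier:

      # one-line Player pattern; re-evaluate the line to change it live
      Pa >> d('bd sn hh cp')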

Sardine in the Context of Live Coding

Sardine builds upon the ideas of existing Python-based live coding libraries like FoxDot and TidalVortex. However, it emphasizes flexibility and encourages users to create their own unique coding styles and interfaces. It tries to avoid enforcing specific ‘idiomatic patterns’ of usage, pushing users to experiment with different approaches to live performance and algorithmic music.

The creators of Sardine were also inspired by the Cookie Collective, a group known for complex multimedia performances using custom setups. This inspired the idea of a modular interface that could be customized and used for jam-ready synchronization. By allowing users to define their own workflows and interfaces, Sardine fosters a culture of experimentation and innovation within the live coding community.

For my research project, I chose to experiment with the platform LiveCodeLab 2.5.

LiveCodeLab 2.5 is an interactive, web-based programming environment designed for creative coding and live performances. It allows users to create 3D visuals and generate sounds in real time as they type their code. The platform is particularly suited to live coding visuals, as the range of sound samples it offers is limited.

LiveCodeLab does, however, provide many examples to work with, which makes it an excellent introduction for younger audiences or for those beginning their journey with live coding.

I had been looking forward to experimenting with sound manipulation; however, I found that this platform is geared mainly toward manipulating and editing visuals. I therefore decided to broaden my scope and start polishing my skills in live coding visuals.
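
For example, a couple of lines are already enough for animated 3D graphics (a sketch from memory of LiveCodeLab's tutorial style, where a transform and a primitive can be chained on one line; treat the exact command names as my assumption):

      background black
      rotate box

Because LiveCodeLab re-runs the program continuously, the box starts spinning as soon as the lines are typed.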

https://drive.google.com/file/d/1YrtH6dgI-Y8YJtzzENxbCvzYfVMTkSlP/view?usp=sharing

For the research project, the live coding platform that I picked is Motifn. Motifn enables users to make music using JavaScript. It has two modes: a DAW mode and a fun mode. The DAW mode lets users connect their digital audio workstation, such as Logic, to the platform, so that they can orchestrate synths in their DAW using JavaScript. The fun mode, on the other hand, lets you start producing music in the browser right away. I used the fun mode for the project.

The coolest feature of Motifn is that it visualizes the music for you. Similar to how we see the selected notes in a MIDI region in Logic, Motifn lays out all the different tracks, along with the selected notes, underneath the code. This helps the user better understand the song structure and is an intuitive, user-friendly way to lay out the song.

To get started, I read the examples on the platform. There is a long list of them right next to the coding section on the website. All of the examples are interactive, which makes it easy to experiment; and because they sit right next to the coding section, there was no need to open different tabs to consult the documentation. Having interactive, short, and to-the-point documentation let me try out many of the things Motifn has to offer.

After playing around with it for a while, I discovered that the platform lets you decide the structure of the song before you even finish coding the song itself. So, using let ss = songStructure({}), I defined a song structure.

Motifn has a lot of synth options (some of them built with Tone.js), and I am a huge fan of synths, so I started my song with one. I followed that with the addition of bass in the first bridge; synth + bass + notes in the second chorus; bass + hi-hats in the second bridge; and kicks + snare + hi-hats + bass + chords in the first and second verses. I removed the drums in the third chorus, brought them back in the next one, and then took the instruments out one by one until the song finishes.
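
A rough sketch of that workflow is below. Only songStructure itself is named by the platform; the section names, bar counts, and the schema of the object I pass in are hypothetical, invented purely for illustration:

      // Hypothetical sketch of declaring a song structure up front;
      // Motifn's real schema may differ.
      let ss = songStructure({
        intro:   4, // synth only
        bridge1: 4, // add bass
        chorus2: 8, // synth + bass + notes
        verse1:  8, // kicks + snare + hi-hats + bass + chords
      })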

Here is the demo of what I made on Motifn.

There isn’t a lot of information about Motifn online; I was unable to find the year it was developed or even the founders. I would place this platform somewhere in the middle between live coding and DAW music production. I felt there was less flexibility to experiment and make music on the fly compared to TidalCycles; Motifn seems more structured and intentional. But there are a lot of cool sounds and controls on the platform, like adding groove to instruments so they play either behind (as we read in class) or ahead of the beat by a few milliseconds, or modulating the harmonicity of a synth over time. Its use of JavaScript for music composition makes it accessible to a broad range of users, which reflects the live coding community’s values of openness and innovation. Overall, it is a fun platform to use, and I am happy with the demo song that I made using Motifn.

What is Melrose?

Melrose is a MIDI-producing environment for creating (live) music, built around a Go-based music programming language. It was created by a Dutch software artist named Ernest Micklei in 2020. Melrōse offers a more human-centric approach to the algorithmic composition of music. It defines a new programming language and includes a tool to play music by evaluating expressions in that language. With Melrōse, exploring patterns in music is done by creating musical objects that can be played and changed at the same time.

Melrose offers a variety of features:

– Musical objects: Create a Note, Sequence, Chord, or Progression to start your musical journey.

– Operations: Apply operations such as Repeat, Resequence, Replace, Pitch, Fraction, and Mapping to musical objects to create new musical combinations (see the sketch after this list).

– Playing: Play any musical object directly, or in a Loop or in a Track, alone or together with others.

– Interactive: While playing you can change any musical object by re-evaluating object definitions.

– MIDI: Send MIDI to any sound-producing system, such as a DAW or a hardware synthesizer. React to or record MIDI input to create new musical objects.
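
Taken together, those pieces compose like this (a minimal sketch in Melrose's own syntax, consistent with the Sample Code section below; repeat's exact argument order is my assumption):

      s1 = sequence('c e g')         // a musical object
      s2 = repeat(2, pitch(12, s1))  // operations: transpose up 12 semitones, play twice
      loop(s2)                       // play in a loop; re-evaluate s1 while it runs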

How is Melrose Used in the Real World?

Melrose allows composers to generate and manipulate MIDI sequences using code. It can send MIDI messages to external synths or DAWs (Digital Audio Workstations), so musicians can write scripts that trigger MIDI sequences on the fly. Coders and musicians can use Melrose to explore harmonies, scales, and rhythmic structures programmatically, and developers can use it to generate adaptive soundtracks for games.

Impact on Music Community:

Melrose empowers coders and musicians and bridges the gap between coding and music production. It also enables complex, non-repetitive compositions: unlike traditional DAWs, it allows music to be generated with rules and randomness. Music coders gain access to a lightweight, code-driven alternative to DAWs that is neither expensive nor complicated to master. In education especially, schools and educators can use the tool to teach interactive and generative music composition.

Sample Code:

it = iterator(6,1,4,-1) // iterator over delta semitones
pm = pitchmap('1:0,1:7,1:12,1:15,1:19,1:24', note('c')) // six notes built from c
sm = fraction(8, resequence('1 4 2 4 3 6 5 4', pm)) // sequence of eighth notes
lp_sq = loop(pitch(it,sm), next(it)) // loop the sequence, changing pitch on every iteration


p = progression('d', 'ii V I')

bpm(120)
s5 = sequence('16.e 16.e 16.e 8.c 8.e 8.g 8.g3 8.c 8.g3 16.e3 16.a3 16.b3 16.a#3 16.a3 8.g3 8.e 8.g4 16.a4 16.f4 16.g 16.e 16.c 16.d 16.b 16.c')

bpm(100)  
s7 = sequence('16.f#3+ 16.f#3+ 16.d3 8.b2 8.b2 8.e3 8.e3 16.e3 16.g#3+ 16.g#3+ 16.a3 16.b3 16.a3 16.a3 16.a3 8.e3 8.d3 8.f#3 8.f#3 16.f#3 16.e3 16.e3 16.f#3 16.e3')
s8 = loop(s7)

f1 = sequence('C D E C')
f2 = sequence('E F 2G')
f3 = sequence('8G 8A 8G 8F E C')
f4 = sequence('2C 2G3 2C 2=') 

v1 = join(f1,f1,f2,f2,f3,f3,f4,f4)

kick=note('c2') // 1
clap=note('d#2') // 2
openhi=note('a#2') // 3
closehi=note('g#2') // 4
snare=note('e2') // 5
R4=note('=') // 6
rim=note('f3') // 7

all = join(kick, clap, openhi, closehi, snare,R4, rim)

bpm(120)
drum1 = duration(16, resequence("1 6 3 6 (1 2 5) 6 3 6 1 4 3 6 (1 2 5) 6 3 6", all)) // indices follow the numbering above
lp_drum1 = loop(drum1)

// Define individual drum sounds with specific durations
kick = note('16c2')  // Kick drum
snare = note('16d2') // Snare drum
hihat = note('16e2') // Hi-hat

// Create patterns for each drum part using notemap
kickPattern = notemap('!...!...!...!', kick)   // kick every four steps (1, 5, 9, 13)
snarePattern = notemap('.!!..!!..!!..', snare) // snare pairs on steps 2-3, 6-7, 10-11
hihatPattern = notemap('!!!!!!!!!!!!', hihat)  // hi-hat on every step

// Merge all patterns into a single drum track
drumTrack = merge(kickPattern, snarePattern, hihatPattern)

// Play the drum track in a loop
loop(drumTrack)

Here’s a video of the author demonstrating the language

Recorded Demo:

What is Alda?

Alda is a music composition programming language that offers an easier and more intuitive way to create musical scores using simple syntax. It was designed for musicians with no programming experience and programmers with no music experience. Alda provides an accessible way for composing music through code without the need for much prior experience in either.

The Creator: Dave Yarwood

Alda was created and developed by Dave Yarwood, a musician and programmer with a background in bassoon performance and music composition. Fed up with existing graphical music composition tools like Sibelius, Yarwood took matters into his own hands and created a musical programming language. He described existing graphical composition software as more like graphic design software than a music composition environment. He wanted a lower-level, distraction-free way to compose music, something with the modularity and flexibility of command-line tools. The process took him four years, and in 2012 Alda was born.

Why Alda?

The motivation behind Alda stemmed from Yarwood’s desire to make music composition and development easier. He viewed existing graphical notation software as an IDE (Integrated Development Environment) for music: more often than not, he would get distracted by the interface rather than focus on composition. Alda, by contrast, allows musicians to compose scores using simple text, making it easier to focus on the musical structure rather than navigating complex UI elements, memorizing hotkeys, and hunting for shortcuts.

By removing graphical distractions, it offers a more direct and expressive way to write music, much like writing sheet music by hand but with the benefits of digital tools and the ability to play the composition back in real time.

Influences and Design Philosophy

While developing Alda, Yarwood explored existing text-based notation systems but found none that exactly fulfilled his needs. However, several languages and frameworks influenced Alda’s design:

  • ABC and LilyPond – Both are markup-based tools designed for generating sheet music, inspiring Alda’s approach to notation.
  • Music Macro Language (MML) – Provided an expressive syntax for defining music programmatically.
  • Clojure – Yarwood leveraged Clojure to hook into MIDI synthesizers, allowing Alda to generate sound from its text-based notation.

How Alda Works

Alda is a mix between a markup language and a functional programming language. It allows composers to write musical scores in plain text, which are then parsed and evaluated into Clojure code. This approach ensures flexibility, making it easy to compose, edit, and experiment with musical ideas in a streamlined and efficient manner.
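
For a flavor of the notation, here is a short score in Alda syntax (a toy example of mine; piano is one of Alda's built-in General MIDI instruments):

      piano:
        o4 c8 d e f g2  # four eighth notes, then a half note
        c1/e/g          # a C major chord held for a whole note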

Demo

Mini composed song

Slide Deck

Devine Lu Linvega currently lives on a sailboat and can be found lurking in Discord chat rooms, hinting at possible bugs and easter eggs in his work. He created ORCA, along with its companion programs (such as the synthesizer Pilot, which we will be using), under the design collective Hundred Rabbits, aboard his sailboat.

What is ORCA?

Orca is a visual programming environment for making music. Except there are no words, loops, or long lines of code: the IDE is simply a grid of placeholder ‘.’ characters. The program does not produce sound on its own; instead it emits MIDI events, OSC messages, and UDP packets to communicate what sound should be played. I use Pilot, a synthesizer that Devine’s team created, to produce my sounds. Even then, I could not send MIDI events to Pilot, though I could send them to SuperCollider. That approach did not work either, as SuperCollider began picking up some persistent background noise through the same port. Hence I used UDP packets with the “;” operator to create sounds.

How does ORCA work?

The interaction between code and computer flows through “bangs”, a concept borrowed from Max/MSP and Pure Data and their live programming idea of immediate execution. These bangs differ from traditional programming languages like C, which are built around complex blocks of code.

The bangs are maintained by frames which are discrete time steps. Each frame represents one “tick” of the sequencer’s clock (its rate is determined by the BPM). Every frame, the grid is updated and evaluated. Uppercase operators in ORCA execute automatically on every frame, ensuring a continuous, rhythmic flow of actions.

ORCA’s IDE draws inspiration from early-2000s games like Dwarf Fortress.

Operators
There are only 26 alphabetical operators, running from A to Z, plus eight special operators. Some operators familiar from what we have seen in TidalCycles are:
C clock(rate mod): Outputs modulo of frame.
D delay(rate mod): Bangs on modulo of frame.
U uclid(step max): Bangs on Euclidean rhythm.
: midi(ch oct note velocity*): Send a midi note.
; pitch(oct note): Send pitch byte out. (Sends this as a UDP message to the synthesizer.)
Also interestingly, the operators N, S, E, and W, which represent the four directions, can be used to send bangs in their respective directions. Devine shows that this works like sending marbles to collide with an object.

The above operators produce bangs; combined with other operators such as add, and with variables, we can make these bangs come into contact with events that we have created. An example MIDI event is “:94g..”, read as channel 9, octave 4, note G.
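
Putting operators and events together, a tiny grid like the following is already a sequencer (a sketch of the usual layout; alignment matters, because D drops its bang into the cell directly below it):

      .D8......
      .:94g....

Every eighth frame, D bangs the cell beneath it, triggering the MIDI operator and playing the note.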

The tool’s roots lie in a misunderstanding about Tidal that Devine had. ORCA tries to give an accessible way to do generative programming without a large programming background. The tool in a way gamifies the experience of making music, in contrast to coding music. And since ORCA does not compile, live coders will not run into syntax or compilation errors while editing code.

However, the packaging of ORCA may seem inaccessible to some: the browser-based version requires WebMIDI to be installed and configured, and the locally run version, which runs on Node, requires building from the terminal. At the same time, ORCA is a very “cool” tool to use in front of an audience, as the interface between computer and programmer transcends that of traditional programming languages. From a performance perspective, Orca can also connect to Unity, enabling interaction with 3D graphics. ORCA represents a paradigm shift in live coding by proving that constraints breed creativity. As Devine Lu Linvega puts it: “It’s for children. The documentation fits in a tweet”; yet professionals use it for everything from IDM sets to interactive installations. ORCA is both an entry point and an advanced tool in the ecosystem of computational art.

Below is a demo of ORCA in action.

Many ORCA users are technically proficient, employing the tool for solo experimentation with synths or DAWs, yet the platform thrives on community involvement and input. An engaged live coding community collaborates on Discord and Reddit, sharing techniques and directly interacting with creator Devine Lu Linvega, who actively participates in discussions and guides the ecosystem’s evolution through hints and open dialogue.

LiMuT is a live coding platform that combines audio and visuals in a web browser. According to its creator, Stephen Clibbery, it was inspired by FoxDot; it aims to make creative coding more accessible by running entirely in any modern browser with no installation required. This allows users to experiment with generative music and real-time graphics seamlessly, making it a powerful tool for both live performance and creative exploration.

The platform’s strength lies in its easy setup, as it can be run directly from any modern browser at https://sdclibbery.github.io/limut/. While it supports both visuals and audio, there are limited options for integrating the two. Many of the visual tools are fixed, offering little room for customization. Despite this, LiMuT remains a powerful live coding tool that provides flexibility for performance. I decided to dive into the documentation, experimenting with and tweaking parts of the examples to better understand how each element interacts with the others.

LiMuT uses its own custom live coding language, which is primarily based on text-based patterns to generate music algorithmically. It bears some similarities to other live coding languages like TidalCycles and Sonic Pi, but its syntax and structure are unique to LiMuT.

The language itself is highly pattern-driven, meaning you write sequences that define musical events (like notes, rhythms, effects, etc.) and then manipulate these patterns in real-time. The language isn’t directly derived from a general-purpose programming language but instead is designed specifically for musical composition and performance.

Here’s how it stands out:

  • Pattern-based syntax: You define rhythms, pitches, and effects in a highly modular, shorthand format.
  • Time manipulation: Limut provides commands that adjust tempo, duration, amplitude, and other musical properties in real time.
  • Easy to Navigate: The user interface is designed in a way that makes navigation easy.

Here’s a demo of me live coding using LiMuT