Mosaic is an application designed by Emmanuel Mazza that combines Live Coding with Visual Programming in order to streamline the creative process without sacrificing artistic vision. Based on openFrameworks, an open-source creative coding toolkit founded by Zachary Lieberman, Theo Watson, and Arturo Castro, it supports several programming languages, including C++, GLSL, Python, Lua, and Bash. As seen in my patch above, it works by connecting blocks of sonic and visual data, facilitating an easy-to-follow data flow. Because Mosaic allows for complex possibilities while remaining accessible, it is suitable for both beginners and advanced users who wish to create a wide range of real-time audio-visual compositions.

The diagram above outlines how Mosaic hybridizes the Live Coding and Visual Programming paradigms in order to reinforce feedback and promote human-machine interaction. Its page, linked here, quotes Henri Bergson: “The eyes see only what the mind is prepared to comprehend.” As someone whose comprehension of programming is limited, I am grateful for applications like Mosaic that allow me to create projects I can understand.

Having previously worked with MaxMSP, I found Mosaic’s interface buggy but still intuitive and easy to use. For my project, I wanted to create audio-reactive visuals that convey feelings of nostalgia and loss of memory. I found this old video of my father recording a song he had made for my brother and me. In Mosaic, I navigated to ‘Objects’ and then ‘Texture’ to find all the nodes that could manipulate and export video.

As seen above, I juggled various concepts and imported multiple videos to explore how Mosaic is able to warp and blend textures to serve whatever concept I landed on. I really liked how the video grabber blended my live image from the MacBook camera with the video of my father singing, conveying how memories stay with us even as we change and grow. Because Mosaic can only play sound and video separately, I extracted the audio from the video using VLC media player and started focusing on how I wanted to manipulate the audio to convey a sense of loss.

As seen above, I used the compressor and bit cruncher objects to add distortion to the sound file so that I could lower or amplify the distortion in real time by lowering the thresh parameter and moving the slider. The entire time, I was reflecting on how, if I were using a platform that only allowed for written code, like TidalCycles, I would have to manually write out these effects; with Mosaic, I could drag and drop the objects I wanted and simply connect them to control the audio the way I wanted.

The most difficult part of my project was figuring out how to connect visual components with audio so that I could manipulate the blended video of myself and my father as I increased or decreased distortion. I really liked this audio analyzer object because, as seen by the yellow input and green output, it allowed me to do just that. As an additional bonus, it manipulated the video based on the sounds playing in real time, so I could speak into the Mac microphone and the video would distort even further. This strengthened the concept of my project, because I could speak about memory loss and the video would distort in response.

The audio analyzer object converted the sound data into numerical data that could then be converted back into visual data, and I blended visual distortion with the video of my father by controlling the sliders, as seen above. I really loved how accessible the controls were, allowing me to manipulate the video and sound in real time according to the demands of the performance.

The finalized video and audio components of the project can be viewed here and here respectively. By manipulating the video and audio live with my voice and the Mosaic controls, as seen in the footage, I was able to convey concepts like memory loss and nostalgia for the purposes of this project. I really loved the potential for creativity via Visual Programming that Mosaic offers, and I will definitely continue to use this application for personal projects in the future.

Max is a visual programming language for music and multimedia. It is used by composers, performers, software designers, researchers, and artists to create recordings, performances, and installations.

Max MSP’s origin can be traced back to the work of Max Mathews, who is often referred to as the father of computer music. In the 1960s, Mathews developed the Music-N series of programming languages. These early languages provided the foundation for digital sound synthesis and influenced many future music programming environments, including Max MSP. Mathews’ contributions laid the groundwork for interactive computer music by demonstrating that computers could generate and manipulate sound in real-time, an idea that continues to drive Max MSP’s development today.

Max Mathews

At its core, Max MSP operates through a visual patching system, where users connect objects that perform specific functions, such as generating sound, processing video, or managing data flow. Each object has inputs and outputs that allow users to design complex behaviors by linking them together in an intuitive graphical interface. This modular approach makes it accessible to both beginners and experienced programmers, as it eliminates the need for traditional text-based coding while still offering deep customization and extensibility. Additionally, Max supports JavaScript, Java, and C++ integration, enabling users to create custom objects and extend the software’s capabilities beyond its built-in tools. Another key strength of Max MSP is its seamless integration with hardware and external devices. It supports MIDI and OSC (Open Sound Control), making it compatible with a wide range of musical instruments, synthesizers, and external controllers. Additionally, it can interface with Arduino, Raspberry Pi, and other microcontrollers, allowing users to build interactive installations.
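
Max’s JavaScript integration is one place where this extensibility shows up as plain text code. As a rough sketch (not an example from the Max documentation), the tiny script below could be loaded into a js object to double any number it receives; the file name and the doubling behavior are purely illustrative assumptions.

// double.js: load in a patch with a [js double.js] object
inlets = 1;   // one inlet for incoming numbers
outlets = 1;  // one outlet for the processed result

function msg_int(v) {
    // double any integer arriving at the inlet and send it out the first outlet
    outlet(0, v * 2);
}

function msg_float(v) {
    // handle floats the same way
    outlet(0, v * 2.0);
}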

Max MSP is considered a live coding platform due to its real-time interaction capabilities. Unlike traditional programming languages, where code must be compiled before execution, Max allows users to modify patches on the go, adjusting parameters, adding new elements, and altering behavior without stopping the performance. This flexibility makes it particularly valuable in live music, audiovisual performances, and interactive installations.

My demo:

Vuo is a visual programming environment designed for artists, designers, and creative developers to build interactive media applications without traditional coding. It is especially popular in fields like live visuals, generative art, motion graphics, and real-time interactive installations.

What makes Vuo unique is its node-based interface, which allows users to create complex visual and audio-driven projects through a modular drag-and-drop system. Unlike traditional coding environments, Vuo’s event-driven architecture enables seamless real-time interactivity, making it ideal for projects that require immediate feedback and dynamic responsiveness. It also supports Syphon, OSC, MIDI, and 3D graphics, making it a versatile tool for multimedia creators.

Vuo was developed by Kosada as an alternative to Apple’s now-discontinued Quartz Composer, which was widely used for real-time graphics. Launched in 2014, Vuo was designed to be a modern, GPU-accelerated, cross-platform tool that extends beyond Quartz Composer’s capabilities. Over time, it has grown to support 3D rendering, audio-reactive visuals, MIDI control, and OSC communication, making it a powerful tool for digital artists.

Today, Vuo is widely used for live performances, interactive installations, and experimental visual art, providing an intuitive and powerful platform for creative expression. It is also popular in VJing, projection mapping, and interactive museum exhibits, making it an essential tool in modern digital art. 

Below are some images of what I implemented.

Noise-Based Image Generator

Simple Pattern

What is Sardine?

For my research project, I chose the live coding platform Sardine. I decided to go with Sardine because it stands out as a relatively new and exciting option built with Python. Sardine is a live coding environment and library for Python 3.10 and above. What sets Sardine apart is its focus on modularity and extensibility. Key components like clocks, parsers, and handlers are designed to be easily customized and extended. Think of it as a toolkit that allows you to build your own personalized live coding setup. It allows the customization of I/O logic without the need to rewrite or refactor low-level system behavior.

In its complete form, Sardine is designed to be a flexible toolkit for building custom live coding environments. The core components of Sardine are:

  • A Scheduling System: Based on asynchronous and recursive function calls (sketched just after this list).
  • A Modular Handler System: Allowing the addition/removal of various inputs/outputs (e.g., OSC, MIDI).
  • A Pattern Language: A general-purpose, number-based algorithmic pattern language.
    • For example, a simple pattern might look like this:
      D('bd sn hh cp', i=i)
  • The FishBowl: A central environment for communication and synchronization between components.
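
As a rough illustration of how the scheduling system and the pattern language fit together, a minimal "swimming" function might look like the sketch below. This is only a sketch: the sample names and the half-beat period are assumptions, not an official example.

@swim
def demo(p=0.5, i=0):
    # play one step of a four-sample pattern, indexed by the iterator i
    D('bd sn hh cp', i=i)
    # ask the scheduler to run this function again, advancing the iterator
    again(demo, p=0.5, i=i + 1)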

However, configuring and using the full Sardine environment can be complex. This is where Sardine Web comes in. Sardine Web is a web-based text editor and interface built to provide a user-friendly entry point into the Sardine ecosystem. It simplifies the process of writing, running, and interacting with Sardine code.

The Creator: Raphaël Maurice Forment

Sardine was created by Raphaël Maurice Forment, a musician and live-coder from France, based in Lyon and Paris. Raphaël is not a traditional programmer but has developed his skills through self-study, embracing programming as a craft practice. He is currently pursuing his PhD at the Jean Monnet University of Saint-Etienne, focusing on live coding practices. His work involves building musical systems for improvisation, and he actively participates in concerts, workshops, and teaching live coding both in academic and informal settings.

Sardine began as a side project to demonstrate techniques for his PhD dissertation, reflecting his interest in exploring new ways to integrate programming with musical performance. Raphaël’s background in music and his passion for live coding have driven the development of Sardine, aiming to create a flexible tool that can be adapted to various artistic needs.

My Live Demo

To demonstrate Sardine in action, I created a simple piece that highlights its scheduling and pattern language capabilities.

In this code snippet I used two different types of senders (d for Player and D for swim), where the Player form is a shorthand for quick pattern creation and @swim is the fundamental mechanism for creating patterns.
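
As a rough sketch of what the Player shorthand looks like (with placeholder sample names and period, not my exact code, and based on my understanding of Sardine’s Player syntax):

# assign a pattern to one of the predefined players; p sets the period
Pa >> d('bd sn', p=0.5)

The D sender, by contrast, lives inside a @swim function like the one sketched earlier, where the recursive call itself handles the scheduling.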

Sardine in the Context of Live Coding

Sardine builds upon the ideas of existing Python-based live coding libraries like FoxDot and TidalVortex. However, it emphasizes flexibility and encourages users to create their own unique coding styles and interfaces. It tries to avoid enforcing specific ‘idiomatic patterns’ of usage, pushing users to experiment with different approaches to live performance and algorithmic music.

The creators of Sardine were also inspired by the Cookie Collective, a group known for complex multimedia performances using custom setups. This inspired the idea of a modular interface that could be customized and used for jam-ready synchronization. By allowing users to define their own workflows and interfaces, Sardine fosters a culture of experimentation and innovation within the live coding community.

For my research project, I chose to experiment with the platform LiveCodeLab 2.5.

LiveCodeLab 2.5 is an interactive, web-based programming environment designed for creative coding and live performances. It allows users to create 3D visuals and generate sounds in real time as they type their code. This platform is particularly suited to live coding visuals, as the range of sound samples on offer is fairly limited.

LiveCodeLab does, however, have many samples to work with, which makes it an excellent introduction for younger audiences or those beginning their journey with live coding.

Unfortunately, while I had been looking forward to experimenting with sound manipulation, I found that this platform works mainly with manipulating and editing visuals. I therefore decided to expand and start polishing my skills with live coding visuals.
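
For reference, a LiveCodeLab sketch can produce an animated visual in just a few lines. The one below is a rough illustration of the kind of spinning, pulsing cube I practiced with, not the exact code from my recording; it relies on the built-in time value to drive rotate and scale, and on box to draw the primitive.

background black
rotate time / 2
scale 1 + 0.3 * sin(time)
fill red
box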

https://drive.google.com/file/d/1YrtH6dgI-Y8YJtzzENxbCvzYfVMTkSlP/view?usp=sharing

For the research project, the live coding platform that I picked is Motifn. Motifn enables users to make music using JavaScript. It has two modes: a DAW mode and a fun mode. The DAW mode lets users connect a digital audio workstation, like Logic, to the platform, so that they can orchestrate synths in their DAW using JS. The fun mode, on the other hand, lets you start producing music in the browser right away. I used the fun mode for the project.

The coolest feature of Motifn is that it visualises the music for you. Similar to how we see the selected notes in a MIDI region in Logic, Motifn lays out all the different tracks along with the selected notes underneath the code. This allows the user to better understand the song structure and is an intuitive, user-friendly way to lay out the song.

To get started, I began reading the examples on the platform. There is a long list of examples right next to the coding section on the website. All of the examples are interactive, which makes it easy to experiment with different things, and because they sit right next to the coding section there was no need to open different tabs to refer to the documentation. Having interactive, short, and to-the-point documentation enabled me to experiment with the different things Motifn has to offer.

After playing around with it for a while, I discovered that the platform lets you decide the structure of the song before you even finish coding the song itself. So, using let ss = songStructure({}), I defined a song structure.

Motifn has a lot of synth options (some of them made using Tone.js), and I am a huge fan of synths, so I started my song with one. I followed that with the addition of bass in the first bridge; synth + bass + notes in the second chorus; bass + hi-hats in the second bridge; kicks + snare + hi-hats + bass + chords in the first and second verse; then I remove the drums in the third chorus and bring them back in the next one. After that I just take out the instruments one by one and the song finishes.

Here is the demo of what I made on Motifn.

There isn’t a lot of information about Motifn online. I was unable to find the year it was developed or even the founders. I would place this platform somewhere in the middle between live coding and DAW music production. I felt as if there was less flexibility to experiment and make music on the fly compared to TidalCycles; Motifn seems more structured and intentional. But there are a lot of cool sounds and controls on the platform, like adding groove to instruments so they play either behind (like we read in class) or ahead of the beat by a few milliseconds, or modulating the harmonicity of a synth over time. Its integration of JavaScript for music composition makes it accessible to a broad range of users, which reflects the live coding community’s values of openness and innovation. Overall, it is a fun platform to use and I am happy with the demo song that I made using Motifn.

What is Melrose?

Melrose is a MIDI-producing environment for creating (live) music, using a Go-based music programming language. It was created by a Dutch software artist named Ernest Micklei in 2020. Melrōse offers a more human-centric approach to algorithmic composition of music. It defines a new programming language and includes a tool to play music by evaluating expressions in that language. With Melrōse, exploring patterns in music is done by creating musical objects that can be played and changed at the same time.

Melrose offers a variety of features:

– Musical objects: Create a Note, Sequence, Chord, or Progression to start your musical journey.

– Operations: Apply operations to musical objects, such as Repeat, Resequence, Replace, Pitch, Fraction, and Mapping, to create new musical combinations.

– Playing: Play any musical object directly, or in a Loop or in a Track, alone or together with others.

– Interactive: While playing you can change any musical object by re-evaluating object definitions.

– MIDI: Send MIDI to any sound-producing system such as a DAW or your hardware synthesizer. React to or record MIDI input to create new musical objects.

How is Melrose Used in the Real World?

Melrose allows composers to generate and manipulate MIDI sequences using code. It can send MIDI messages to external synths or DAWs (Digital Audio Workstations), so musicians can write scripts to trigger MIDI sequences on the fly. Coders and musicians can use Melrose to explore harmonies, scales, and rhythmic structures programmatically, and developers can use it to generate adaptive soundtracks for games.

Impact on Music Community:

Melrose empowers coders and musicians and bridges the gap between coding and music production. It also enables complex, non-repetitive compositions: unlike traditional DAWs, it allows music to be generated with rules and randomness. Music coders gain access to a lightweight, code-driven alternative to DAWs that is neither expensive nor complicated to master. For education especially, schools and educators can use the tool to teach interactive and generative music composition.

Sample Code:

it = iterator(6,1,4,-1) // delta semitones
pm = pitchmap('1:0,1:7,1:12,1:15,1:19,1:24',note('c')) // six notes
sm = fraction(8,resequence('1 4 2 4 3 6 5 4',pm)) // note sequence of eights
lp_sq = loop(pitch(it,sm),next(it)) // loop the sequence and change pitch on every iteration


p = progression('d', 'ii V I')

bpm(120)
s5 = sequence('16.e 16.e 16.e 8.c 8.e 8.g 8.g3 8.c 8.g3 16.e3 16.a3 16.b3 16.a#3 16.a3 8.g3 8.e 8.g4 16.a4 16.f4 16.g 16.e 16.c 16.d 16.b 16.c')

bpm(100)  
s7 = sequence('16.f#3+ 16.f#3+ 16.d3 8.b2 8.b2 8.e3 8.e3 16.e3 16.g#3+ 16.g#3+ 16.a3 16.b3 16.a3 16.a3 16.a3 8.e3 8.d3 8.f#3 8.f#3 16.f#3 16.e3 16.e3 16.f#3 16.e3')
s8 = loop(s7)

f1 = sequence('C D E C')
f2 = sequence('E F 2G')
f3 = sequence('8G 8A 8G 8F E C')
f4 = sequence('2C 2G3 2C 2=') 

v1 = join(f1,f1,f2,f2,f3,f3,f4,f4)

kick=note('c2') // 1
clap=note('d#2') // 2
openhi=note('a#2') // 3
closehi=note('g#2') // 4
snare=note('e2') // 5
R4=note('=') // 6
rim=note('f3') // 7

all = join(kick, clap, openhi, closehi, snare,R4, rim)

bpm(120)
drum1=duration(16,join("1 6 3 6 (1 2 5) 6 3 6 1 4 3 6 (1 2 5) 6 3 6",all))
lp_drum1 = loop(drum1)

// Define individual drum sounds with specific durations
kick = note('16c2')  // Kick drum
snare = note('16d2') // Snare drum
hihat = note('16e2') // Hi-hat

// Create patterns for each drum part using notemap
kickPattern = notemap('!...!...!...!', kick)  // Kick on beats 1, 3, 5, and 7
snarePattern = notemap('.!!..!!..!!..', snare) // Snare on beats 2 and 4
hihatPattern = notemap('!!!!!!!!!!!!', hihat) // Hi-hat plays every beat

// Merge all patterns into a single drum track
drumTrack = merge(kickPattern, snarePattern, hihatPattern)

// Play the drum track in a loop
loop(drumTrack)

Here’s a video of the author demonstrating the language

Recorded Demo: