Cascade is a web-based live coding environment introduced by Raphaël Bastide in 2021. It turns the browser’s built-in CSS and HTML into a tool for creating sound and visuals. Since Cascade is entirely browser-based, no additional setup or external modules are required: users simply reference the core scripts in an HTML file and/or a CSS file to get started.

How does it work?

Cascade generates sound and visuals from the shapes and positions of elements on a webpage. Each property of an element (such as width, height, and position) influences a specific musical attribute. For instance, the width-to-height ratio determines the number of beats and steps, following Godfried Toussaint’s Euclidean rhythm distribution. This integration of web design with music encourages users to think about visual aesthetics and how they contribute to sound production at the same time. When I tried it myself, it was quite difficult to come up with a meaningful visual that also produced a good sound. However, because CSS properties are interpreted in real time, animations allow for much more dynamic visuals.
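To make that rhythm mapping concrete, here is a hedged sketch of how a Euclidean distribution can be computed (a simple rounding method for illustration only, not Cascade’s actual source):

```
// A hedged sketch (illustrative, not Cascade's code): spread `beats`
// onsets as evenly as possible across `steps` positions.
function euclid(beats, steps) {
  const pattern = [];
  for (let i = 0; i < steps; i++) {
    // mark a step whenever it crosses the next evenly spaced onset
    pattern.push(Math.floor(i * beats / steps) !== Math.floor((i - 1) * beats / steps));
  }
  return pattern;
}

console.log(euclid(3, 8).map(on => (on ? "x" : ".")).join(""));
// -> "x..x..x." : 3 beats distributed over 8 steps
```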

c.add('width:10vh; height:30px; background:cyan; top:80vh;')

For example, the line above adds a div to the body with a width of 10vh and a height of 30px. The cyan background is matched to the corresponding instrument set by Cascade. The vertical position (top: 80vh) adjusts the note’s pitch, making the note higher and louder.
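Since Cascade interprets CSS in real time, a standard animation can keep reshaping an element, and therefore the rhythm, without any further typing. Here is a hedged sketch (the class name and keyframe values are hypothetical, and Cascade’s actual mapping may differ):

```
/* Hypothetical rule for a div that Cascade is tracking. */
.voice {
  background: cyan;   /* cyan is matched to one of Cascade's instruments */
  height: 30px;
  top: 80vh;          /* vertical position maps to pitch */
  animation: grow 8s infinite alternate;
}

/* Animating the width changes the width-to-height ratio,
   so the Euclidean rhythm should evolve as the div grows. */
@keyframes grow {
  from { width: 10vh; }
  to   { width: 60vh; }
}
```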

Why Cascade?

Cascade offers a gentle learning curve for those familiar with web development, as it introduces no new language or syntax beyond standard CSS and HTML. This makes it an accessible entry point for newcomers to live coding while providing a creative playground for experienced developers. It also fosters collaboration between performers and audiences: performers can modify the code and see and hear the results in real time, while audiences watch and listen to the evolving performance.

Cascade bridges the gap between web development and live coding, making it a powerful tool for exploring sound through visual design. It allows users to combine or learn both disciplines simultaneously. Every design decision affects the resulting sound, prompting a thoughtful approach to composition and layout. This blend of sound and visual design invites users to experience the intersection of aesthetics and music in new and exciting ways.

Demo


Link (in case the preview does not work)

Citation

“Cascade.” Raphaelbastide.com, 2025, raphaelbastide.com/cascade/#cascade-pool-collab-mode. Accessed 13 Feb. 2025.

About Strudel:

Strudel is a JavaScript version of Tidal Cycles, initiated by Alex McLean and Felix Roos and developed in 2022. It is a browser-based platform (so, unlike Tidal Cycles, it requires no internal or external software installation) that focuses primarily on musical composition and sound synthesis and is used to create algorithmic music patterns. It is beginner-friendly: written in JavaScript and built around pattern-based programming, it is easy to parse. This pattern-based syntax lets users define musical elements, such as rhythms, melodies, and effects, using simple text-based patterns. Essentially, music is described with structured symbols and sequences rather than with traditional sheet music or complex programming.
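As a hedged illustration of that pattern style, here is a minimal sketch using Strudel’s mini-notation (the sound names assume the default sample banks of the strudel.cc REPL):

```
// Each quoted string describes one cycle of events in mini-notation.
stack(
  s("bd hh sd hh"),                 // drums: kick, hi-hat, snare, hi-hat
  note("c3 e3 g3 b3").s("piano")    // a four-note arpeggio on the piano samples
).slow(2)                           // stretch the whole stack over two cycles
```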

Live coding with Strudel differs from conventional performance art. It is less associated with generative art and places a central focus on music. The performance is embodied through projected code, and Strudel’s instant feedback enables a seamless, fluid performance that further enriches live coding culture. It is also innovative as a form of cultural expression for two interrelated reasons. First, Strudel democratizes music creation: by lowering the technical barriers around coding, it makes the practice accessible to audiences from diverse backgrounds. Second, Strudel contributes to the ethos of live coding, which emphasizes transparency and shared knowledge. This reflects a broader cultural shift toward openness and collective growth in the digital arts, where code itself becomes a means of creative exchange. Platforms like Strudel therefore help make live coding a more inclusive and participatory artistic practice.

Live Coding Demo and Process:

live coding demo!

Overall, I’m incredibly satisfied with the final result. What I have come to realize is the immense joy and fulfillment that comes from experimenting and exploring while creating an ever-evolving sonic experience! Given my stronger affinity for sound over visual elements, this journey was particularly enjoyable. Initially, I felt overwhelmed by the unfamiliar notation, but after reviewing the workshop documentation (First Sounds, First Notes, First Effects, and Pattern Effects) I began integrating new functions to observe firsthand how the sounds would transform. The process involved considerable trial and error: duplicating and adjusting code, fine-tuning numerical values, and mixing and matching sound effects with varying parameters. Once I established an initial piano melody, I built on it by layering additional musical elements: a drum beat, a main melody, and a supporting guitar melody. The outcome surpassed my expectations, and I am genuinely pleased with how it all came together.

Time taken – 2.5-3 hours!!!

A web-based audio-visual programming system, Tweakable prompts users to “make interactive and generative music, sound and visual art” by letting them choose from a wide range of components to design their own algorithmic systems, while also setting up controls to adjust how the algorithm works in real time, hence the name “Tweakable.” According to the Wayback Machine, Tweakable.org was first active on June 6, 2002, then inactive from 2003 until 2020; the website we currently have is a version launched on December 17, 2020 (“Wayback”).

Tweakable has three main components: Data Input/Flow, Sequencing, and Audio, with an additional option to create custom modules. Data Input/Flow includes control inputs like sliders and MIDI and governs how information flows through the platform. Sequencing generates and transforms musical patterns using grids, scripts, and mathematical functions, while Audio converts sequences into sound through instruments, oscillators, and automated effects (Woodward). To create a new project, users first choose from a pre-built library of sequences, audio, video, effects, and so on, then connect those components to build an algorithmic system. Finally, once the system has been made, they build a user interface so that the algorithm can be tweaked (“Tweakable”).

With its main goal of lowering the entry barrier to programming music and visual art, Tweakable invites users with no background knowledge not only to create their own works easily but also to share their projects “without worrying about missing dependencies,” since it is web-based. As a web platform where users can tweak and experiment with parameters live, Tweakable was one of the earliest pioneering live coding platforms at the time of its creation. Not only does it embody live coding’s key characteristic of writing and modifying code in real time to create music and visuals, it also makes the algorithmic generation of art more accessible and intuitive through its visual, component-based approach, opening the platform to anyone from total novice to expert (Woodward).

Finally, here’s a video of me playing around with Tweakable, and the slides are here 🙂

Works Cited

“Tweakable.” IRCAM Forum, forum.ircam.fr/projects/detail/tweakable/#project-intro-anchor. Accessed 12 Feb. 2025. 

“Wayback Machine.” Internet Archive, web.archive.org/. Accessed 12 Feb. 2025.

Woodward, Julian. Tweakable – NTNU, www.ntnu.edu/documents/1282113268/1290817988/WAC2019-CameraReadySubmission-10.pdf/bf702376-a6e4-a270-6581-f80f55bbbfec?t=1575408891372. Accessed 12 Feb. 2025.

In my research project, I explored FoxDot. FoxDot is a live coding platform built on Python and SuperCollider. Like Tidal Cycles, it is designed for improvising and composing music. What sets FoxDot apart is its Python-driven environment, which is the main reason I chose this platform: I am more familiar with Python’s syntax.

One Fun Fact Discovered While Researching:

The creator of FoxDot, Ryan Kirkbride, attributes the inspiration for building a live coding platform to Alex McLean, the creator of Tidal Cycles and coiner of the term “algorave.” It turns out that McLean mentored Kirkbride in computer music while Kirkbride was pursuing his master’s degree.

Why Python?

As Kirkbride explains: “With FoxDot I wanted to create an application that bridged the gap between software engineering and Live Coding so that users who were entry level to programming, composition, or both would still be able to grasp the concepts and make music.”

What makes Python stand out? First is its object-oriented programming. The object-oriented design makes it easier to trace the current state of different variables, as the image below shows:

The changes applied to p1 can easily be tracked from the output shown in the terminal. Python’s clean syntax helps both live coders and audiences follow the code. A wide range of third-party libraries also greatly expands what live coding can achieve: for example, the tkinter package can build a customized GUI, and extensions add support for Sonic Pi, voice rendering…
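For instance, a player object such as p1 can be reassigned and inspected on the fly. Below is a minimal sketch, assuming the standard FoxDot environment where players (p1) and synths (pluck) are predefined:

```
# Valid Python, run inside the FoxDot environment.
p1 >> pluck([0, 2, 4, 7], dur=1/2, amp=0.8)  # assign a looping pattern to p1

print(p1.degree)  # the player object exposes its current state
p1.stop()         # and can be stopped like any other object
```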

From a beginner’s perspective, the rapid auto-completion feature significantly improves coding fluency. This responsiveness is a notable advantage over platforms like Tidal Cycles, which can be laggy in practice.

Cultural Impact

Broad compatibility expands creative expression. FoxDot works well with many libraries and software tools, allowing artists to mix different digital resources and create unique audiovisual experiences.

User-friendly language lowers the learning curve. The simplicity of Python as the programming language makes it accessible to beginners and non-programmers, lowering the barrier to entry and encouraging more people to engage in live coding.

Approachability creates a large live coding community. By making live coding more accessible, FoxDot has built a sizable and active community. This community supports idea sharing and collaboration, drawing a diverse range of artists and enthusiasts into live coding culture.

Demonstration of Performance

The following video shows me trying out some functions and effects in FoxDot:

Overview of Sonic Pi

Released in 2012, Sonic Pi is a live coding environment based on the programming language Ruby. It was initially designed by Sam Aaron at the University of Cambridge Computer Laboratory to teach computing lessons in schools. It lets users work with up to 10 buffers to create audio and can produce visual effects via other platforms like p5.js and Hydra.

As Sam Aaron writes in the Sonic Pi tutorial, the software encourages users to learn about both computing and music through play and experimentation. It gives students instant feedback, and because it produces music instead of typical text output, it is more attractive to students than traditional coding environments like Java or Python. It also lets users connect computers to instruments and make remixes in the software.
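As a hedged taste of that play-based approach, here is a minimal sketch using built-in Sonic Pi synths and samples (the sample and scale choices are mine, not from the tutorial):

```
# Two live loops running in parallel.
live_loop :beat do
  sample :bd_haus   # built-in kick drum sample
  sleep 0.5
end

live_loop :melody do
  play scale(:e3, :minor_pentatonic).choose, release: 0.3
  sleep 0.25
end
```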

Interface of Sonic Pi

The interface of Sonic Pi can be divided into nine parts:

  1. Play Controls
    • The play control buttons start and stop sounds: clicking Run runs the code in the current buffer, and clicking Stop halts all running code.
    • The Record button lets users save the audio played in Sonic Pi with high fidelity.
    • The Save and Load buttons let users save the current code as a .rb file and load .rb files from their computers.
  2. Code Editor
    • Users write code and compose or perform music here.
  3. Scope Viewer
    • The scope viewer shows the waveforms of the sound played on both channels.
  4. Log Viewer
    • Displays updates from the running program.
  5. Cue Viewer
    • All internal and external events (called cues in Sonic Pi) are automatically logged in the Cue Viewer.
  6. Buffer Switch
    • Lets the user switch between the 10 buffers provided by the software.
  7. Link Metronome, BPM Scrubber, and Time-Warp Setter
    • The Link metronome lets users link Sonic Pi to other metronomes and synchronize BPM with them.
    • The Tap button lets users tap at a steady speed; Sonic Pi measures the tempo and adjusts its BPM automatically.
    • The BPM display shows the current BPM, which users can modify.
    • The time-warp setter lets the user schedule every sound slightly earlier or later.
  8. and 9. Help System
    • Displays the tutorial for Sonic Pi. Users can browse all the documentation and preview the samples via the help system.

Performance Demo

Reflection: Pros and Cons of Sonic Pi

Sonic Pi, as educational software, does a great job of embedding detailed tutorials and documentation directly in the program. Its large collection of samples and synthesizers lets users make all kinds of music. However, the quality of the samples is uneven, and producing high-quality music takes a lot of learning and adjustment.

Mercury is a beginner-friendly, minimalist, and highly readable language designed specifically for live-coding music performances. It was first developed in 2018 by Timo Hoogland, a faculty member at HKU University of the Arts Utrecht.

Mercury’s structure is similar to JavaScript and Ruby, but written at a higher level of abstraction. The audio engine and OpenGL visual engine are built on Cycling ’74’s Max 8: Max/MSP handles real-time audio synthesis, while Jitter and Node4Max handle live visuals. Additionally, a web-based version of Mercury leverages WebAudio and Tone.js, making it even more accessible.

How Mercury Works

The Mercury language is rooted in serialism, a style of musical composition in which parameters such as pitch, rhythm, and dynamics are expressed as a series of values (called a list in Mercury) that adjust an instrument’s state over time.

In Mercury, code executes sequentially from top to bottom, and a variable must be declared before it is used in an instrument instantiation that relies on it. The functionality is divided into three categories: the list command defines a series of values; the new command instantiates an instrument, such as a sampler or synthesizer, to generate sound; and the set command adjusts global or per-instrument parameters.
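Putting those commands together, here is a hedged sketch in Mercury Playground syntax (the synth, sample, and function names follow the documentation as I recall it and may not be exact):

```
// set the global tempo, define a list, then instantiate instruments
set tempo 120
list melody [0 3 7 5]
new synth saw note(melody 1) time(1/8) shape(1 80)
new sample kick_house time(1/4)
```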

Mercury in the Context of Live Coding

What makes Mercury stand out are the following highlights:

  1. A minimal, readable, highly abstracted language: it fosters greater clarity and transparency between performers and audiences. By breaking down barriers in live coding, it enhances the immediacy of artistic expression. True to its name, Mercury embodies fluidity and quick thinking, enabling artists to translate their mental processes into sound and visuals effortlessly.
  2. A code length limit that drives creativity: early versions of Mercury limited performers to 30 lines of code, which encouraged innovation and constant dynamism by pushing them to iterate on existing code rather than writing another long block.

Demo

Below is the performance demo of me playing around in the Mercury Web Editor Playground:

Slides

What Is Live Coding? 

From the reading, I gained an insightful understanding of what live coding is. From my own perspective, I would describe it as a practice of improvisatory live performance through code. Ultimately, we use code to connect ourselves to our artistic desires and visions, and doing so in real time means live coders indulge in a level of improvisation. I therefore agree with Ogborn’s resistance to defining live coding, since a definition gives it a fixed state and does not acknowledge its flexible nature.

Live coding removes the curtain between the audience and the performer: by projecting the code from the screen, the audience can connect with the performer and visualize how the programmer thinks in real time. The act of writing in public thus adds an element of interactivity, honesty, and even creativity, all of which are pillars of the live coding process.