Overview of Sonic Pi

Released in 2012, Sonic Pi is a live coding environment based on the Ruby programming language. It was initially designed by Sam Aaron at the University of Cambridge Computer Laboratory to teach computing lessons in schools. It provides up to ten buffers for creating audio and can drive visual effects via other platforms such as p5.js and Hydra.

As Sam Aaron writes in the Sonic Pi tutorial, the software encourages users to learn about both computing and music through play and experimentation. It gives students instant feedback, and because it produces music rather than typical text output, it is more engaging than traditional coding environments such as Java or Python. It also lets users connect their computers to instruments and make remixes within the software.
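For example, a few lines typed into a buffer are enough to hear sound immediately. The following is a minimal sketch using standard Sonic Pi commands; the particular note numbers and sample name are just illustrative choices:

    # Play middle C (MIDI note 60) with the default synth
    play 60
    sleep 0.5

    # Play another note, then trigger a built-in drum sample
    play 64
    sleep 0.5
    sample :bd_haus

Pressing Run plays this immediately, and pressing it again after editing a number gives instant audible feedback.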

Interface of Sonic Pi

The interface of Sonic Pi can be divided into nine parts:

  1. Play Controls
    • The play control buttons start and stop sounds: clicking Run executes the code in the current buffer, and clicking Stop halts all running code.
    • The Record button lets users save the audio played in Sonic Pi as a high-fidelity recording.
    • The Save and Load buttons let users save the current code as a .rb file and load .rb files from their computers.
  2. Code Editor
    • Users can write code here to compose or perform music (see the sketch after this list).
  3. Scope Viewer
    • The scope viewer allows users to see the sound waves played on both channels.
  4. Log Viewer
    • Displays log messages and other updates as the program runs.
  5. Cue Viewer
    • All internal and external events (called cues in Sonic Pi) are automatically logged in the Cue Viewer (see the sketch after this list).
  6. Buffer Switch
    • Lets the user switch between the ten buffers provided by the software.
  7. Link Metronome, BPM scrubber and time warp setter
    • The Link Metronome lets users link the software to external metronomes and keeps Sonic Pi's BPM synchronized with them.
    • The Tap button lets users tap out a tempo manually; Sonic Pi measures the taps and adjusts its BPM accordingly.
    • The BPM scrubber shows the current BPM and lets users modify it.
    • The time warp setter lets the user manually shift every sound to trigger slightly earlier or later.
  8. and 9. Help system
    • Displays the tutorial for Sonic Pi. Users can browse the full documentation and preview the samples via the help system.
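To show how these parts work together, here is a small sketch that could be typed into one of the buffers: it sets the tempo shown in the BPM scrubber and uses a cue/sync pair, whose events appear in the Cue Viewer. The sample and note choices are illustrative only:

    use_bpm 120             # matches the value shown in the BPM scrubber

    live_loop :kick do
      sample :bd_haus       # a built-in drum sample
      cue :tick              # this event shows up in the Cue Viewer
      sleep 1
    end

    live_loop :melody do
      sync :tick             # wait for the :tick cue from the kick loop
      play :e4, release: 0.5
    end

While this runs, the Log Viewer prints each triggered sound and the Scope Viewer shows the resulting waveform.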

Performance Demo

Reflection: Pros and Cons of Sonic Pi

As educational software, Sonic Pi does a great job of embedding detailed tutorials and documentation directly in the program. Its large collection of samples and synthesizers lets users make all kinds of music. However, the quality of the samples is uneven, and it takes a lot of learning and adjustment to produce high-quality music.

Beat and rhythm can be a non-spoken language for people worldwide. Unlike orchestral music, percussion has very low equipment requirements – you can play it with almost any object, or even just your limbs, and still create very interesting beats. This makes percussion instruments very popular. Each region has its own representative percussion rhythms, and rhythm is also very effective at conveying emotion.

Though percussion music alone might be raw and elemental, it creates a concrete structure in music. It can be perceived both musically and physically. You can listen to it, dance to it and just feel it.

As a new form of performance that creates audio and visual effects through text editors, live coding is both a primitive and an advanced form of human-computer interaction. Performers edit the performance directly, controlling the effects digit by digit; this offers greater precision and more room to experiment as shader and audio technology keeps evolving.
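In Sonic Pi, this kind of digit-by-digit control can look like the sketch below: the loop keeps playing while the performer changes a number or reorders the notes and presses Run again. The specific synth, notes, and cutoff value are illustrative only:

    use_synth :tb303

    live_loop :acid do
      # While this loop runs, changing 110 to another cutoff value or
      # editing the ring of notes and re-running the buffer alters the
      # sound without stopping the music.
      play (ring :e1, :e2, :e1, :a1).tick, cutoff: 110, release: 0.2
      sleep 0.25
    end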

I remember someone at ICLC 2024 describing the experience of live coding this way: “Everything is impossible, but nothing is easy”. In theory, code can produce countless combinations of effects, but an advanced performance demands a great deal of time and energy for experimentation and research. Live coding is realized through programming and requires substantial experience in both programming and composition, so it can feel very complicated to those unfamiliar with computers, and practicing it is not like practicing a traditional instrument by following existing scores. Still, it has its own significance compared with other improvisational performances such as DJ sets or jazz sessions, because the coding itself is part of the performance: the audience can watch the complete process of writing the code.

This primitive nature lets live coders pull all kinds of JavaScript libraries into their text editors, which gives them great freedom for self-expression. They can type their feelings into the editor, add 3D models, or even add their real-time drawings to the canvas, which, from my point of view, is a groundbreaking shift in performance.

As computer science keeps evolving, I am thrilled to witness how forms of art can change, and the emergence of live coding is a good showcase of that.