Here’s the link to my presentation:

https://www.canva.com/design/DAF8yNswVM8/asdv-QO-gv_DvemEwHOCgg/view?utm_content=DAF8yNswVM8&utm_campaign=designshare&utm_medium=link&utm_source=editor

SuperCollider is an environment and programming language designed for real-time audio synthesis and algorithmic composition. It provides an extensive framework for sound exploration, music composition, and interactive performance.
The application consists of three parts: the audio server (referred to as scsynth); the language interpreter (referred to as sclang), which also acts as a client to scsynth; and the IDE (referred to as scide). The IDE is built in, while the server and the client (the language interpreter) are two completely autonomous programs.

What is the Interpreter, and what is the Server?

SuperCollider is made of two distinct applications: the server and the language.
To summarize in simpler words:

Everything you type in SuperCollider is in the SuperCollider language (the client): that’s where you write and execute commands, and see results in the Post window.
Everything that makes sound in SuperCollider comes from the server, the “sound engine”, which you control through the SuperCollider language.
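
To make this concrete, here is a minimal sketch using standard SuperCollider idioms (my own, not from the presentation): the first line has sclang boot the server, the second sends it a sine tone to render, and the last frees everything.

s.boot;  // sclang (the client) boots the scsynth server
{ SinOsc.ar(440, 0, 0.2) ! 2 }.play;  // the server renders a 440 Hz sine, amp 0.2, in stereo
s.freeAll;  // stop all sounds on the server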

SuperCollider for real-time audio synthesis: SC is optimized for the synthesis of real-time audio signals. This makes it ideal for live performance, as well as for sound installation and event contexts.
SuperCollider for algorithmic composition: One of SC’s strengths is that it combines two approaches to audio synthesis that are at once complementary and antagonistic. On one hand, it makes it possible to carry out low-level signal-processing operations. On the other hand, it enables composers to express themselves through higher-level abstractions that are more relevant to the composition of music (e.g., scales, rhythmic patterns, etc.).
SuperCollider as a programming language: SC is also a programming language in its own right, belonging to the broader family of object-oriented languages; sclang is the interpreter for it. The language draws on Smalltalk and C, and has a very strong set of Collection classes such as Array.
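
As an illustration of those higher-level abstractions, here is a minimal pattern of my own (standard SC pattern-library usage, not from the presentation):

(
Pbind(
    \scale, Scale.minor,              // think in scales rather than raw frequencies
    \degree, Pseq([0, 2, 4, 7], inf), // a repeating arpeggio expressed as scale degrees
    \dur, 0.25                        // a steady rhythmic grid
).play;
)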

Some code snippets from the demo:

For drum roll sounds:
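
(The snippet itself is not reproduced here, so below is a minimal drum-roll sketch of my own in the same spirit: a short noise burst retriggered at an accelerating rate.)

(
SynthDef(\hit, { |out = 0, amp = 0.3|
    // a short percussive noise burst; doneAction: 2 frees the synth when the envelope ends
    var env = EnvGen.kr(Env.perc(0.001, 0.08), doneAction: 2);
    Out.ar(out, (WhiteNoise.ar(amp) * env) ! 2);
}).add;
)

(
// the roll: retrigger the hit with a shrinking gap between strokes
Routine {
    20.do { |i|
        Synth(\hit);
        (0.12 - (i * 0.005)).wait;
    };
}.play;
)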

An interesting example, “60Hz Gabber Rave 1995”, that I took from the internet:

Here’s a recording of some small sound clips made with SuperCollider (shown in class):

https://drive.google.com/file/d/1I_HxymG_OLzdw_rirGJ9iYXLxzyK8n_k/view?usp=drive_link

Cascade

Self-described as “a live coding parasite to Cascading Style Sheets”, this live coding platform is built on top of CSS, one of the main elements of any webpage. Because CSS is a styling language and Cascade uses CSS as its input for creating music, there is a very nice side effect: you get the visuals as part of creating the audio.


How it works

How is CSS used to create music? Simply put, a handful of key CSS properties were chosen to be interfaced with sound. These specific properties were chosen because they have strong graphical effects and few dependencies on other CSS properties. For example, background-color impacts the visuals heavily, so it was chosen. font-size is not a good candidate because it can depend on parent properties (em/rem font sizes) and does not affect the visuals much.
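
As a purely illustrative sketch, the input is just ordinary CSS; which properties map to which sound parameters is Cascade’s own choice, so treat the musical reading here as an assumption:

/* ordinary CSS: the browser renders it as visuals while Cascade
   reads properties like background-color as sound parameters */
.kick {
  background-color: #ff0044;  /* strong visual impact: the kind of property Cascade interfaces with sound */
  width: 120px;
  height: 120px;
}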

A summary picture of the documentation is attached below.



Limitations

At first use, I enjoyed how easily the audio and visuals tie in with each other. Unlike what we had to do in Hydra/TidalCycles, there is no need to create each element separately. Just by writing a beat that sounded decent, I got the bonus of the visuals already being created for me, synced with the music. This was both a boon and a limitation. Since the audio is directly dependent on the visuals, you are limited in what you can express and tied to the options available in Cascade. And since CSS is the interface, creating complex visuals means performing complicated DOM manipulations on your own to achieve what other live coding platforms offer in a single function.

Slides (accessible only to NYU emails!)

Why I chose FoxDot:

I honestly started with clive because I’m somewhat interested in the C language and wanted to explore something built with it. However, there was a lack of information on clive: the documentation on installing it and composing music with it was not well done, making it hard to explore. So I started looking at other options and found FoxDot, which is quite similar to TidalCycles. It also had better documentation to help beginners get started. (Although I had to switch from clive to FoxDot, I still think this exploration was meaningful, as it highlights the importance of documentation, especially in less common fields such as live coding.) Also, FoxDot uses SuperCollider to output sound, which made it more familiar to me.

What is FoxDot:

FoxDot is an open-source environment designed specifically for live coding. It is written in Python, making it more intuitive for users. (At the same time, after looking at the other environments our classmates have been exploring, I think there are many options that are even easier and more intuitive.)

Many live coders liked FoxDot largely because it is in Python. However, it has been explicitly announced that there will be no further development or updates: only minor changes when issues are found.

Advantages and Disadvantages of FoxDot:

Personally, I think live coding environments that are more technologically involved and more closely mirror physical music composition are harder for people like me (who have no experience composing music) to understand and play around with. For somebody with a background in computer science, environments like FoxDot, where composition can be done in a more code-like manner, are much more approachable. While I was not able to get to the level of using it, FoxDot can be used along with Sonic Pi, making it more attractive for live coders.

The downside is that, since it is no longer updated, many issues come up when installing and running it. There were many clashes with Python versions, and I had to manually edit multiple files to get it running. Also, FoxDot’s sound library contains a limited number of sounds, which may limit the mood and style of the music live coders can create.

Example Performance:

Here is a video of a live coding performance with FoxDot from the Momentum Tech Conference 2019 in Karachi.

Short Performance:

Below is the code I prepared for our in-class performance. I left out some parts by mistake during class, so hopefully this version sounds better.

p1 >> play("x-o[--]")  # play string: x = kick, o = snare, - = hi-hat; [--] fits two hits into one step

p2 >> jbass(dur=3, sus=1)

progression = [0, 7, 2, 4]  # scale degrees for the bassline
rhythm = P[0.5, 0.5, 0.25, 0.25, 0.25, 0.25]  # note durations as a FoxDot pattern
bassline = P[progression].stutter(len(rhythm))  # repeat each degree once per rhythm value

d1 >> pluck(bassline, dur=rhythm, amp=0.5)

d2 >> pluck(bassline, dur=rhythm, amp=0.8, echo=0.5, room=0.5)

melody = P[0,2,4,5,7]

d3 >> pads(melody, dur=0.5, oct=(4,5))

d4 >> scatter(melody)

d5 >> arpy(bassline, dur=rhythm)

p1.stop()
p2.stop()
d1.stop()
d2.stop()
d3.stop()
d4.stop()
d5.stop()

Alda is a “text-based music composition programming language” developed by Dave Yarwood in 2012. As a software engineer with a background in classical music training, Yarwood sought a way to link his interests and create an accessible, general-purpose audio programming language. The result was Alda, which he designed to provide both musicians without programming knowledge and programmers without musical knowledge with a unique and simple method of composition.

Alda allows users to write and play music using just the command line and a text editor. The interactive Alda REPL lets the user get immediate feedback from typing and entering lines of code in the command line. Longer scores can be written in a text file, which can then be fed to and played by Alda (also via the command line). Staying true to this purpose of accessibility, musical notation is rendered simple: notes are represented by their respective letters (a~g), octaves can be set and switched using o1~o7, rests are simply r, + and - denote sharps and flats, and so forth.
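
For instance, a score as small as this (my own illustration of the notation just described) plays an eighth-note C major scale on piano, with > stepping up an octave for the final held note:

piano:
  o4 c8 d e f g a b > c2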

There is a noticeable lack of a “live” component in Alda compared to the other coding platforms on this list. Alda can only output exactly what it is given, and it does so in finalized form (the output cannot be edited in real time). The REPL is perhaps a little more interactive in this respect, as the focus is on maintaining a quick exchange of input and output. Alda does support writing music programmatically, in that Alda code can be generated by programs in other languages. Even so, Alda itself has no innate functions for algorithmic composition or live coding.

I personally had a lot of fun experimenting with Alda. Following the tutorial on the Alda website, I was able to mess around with the REPL with ease; the simple code and instant output make it easy to test things out and understand how they work. With a grasp of the commands and methods gained from that session, I was able to put together longer and more complex scores using multiple instruments in text files.

I should note that I do have some classical training from my years of playing cello and am familiar with sheet music and musical notation. However, I still struggle with letter notation (A-G), as I learned music through the solmization syllable system (do re mi). This complicated my efforts to make things sound the way I wanted them to and made me feel that one would have to have at least some knowledge of music theory and notation to really use Alda to its full potential. Regardless, there is no denying that Alda makes composition much less daunting in both appearance and experience to people who might not be used to reading/writing sheet music.

For my mini project with Alda, I attempted to recreate the opening portion of the beautiful instrumental for Where the Flower Blooms, a track by rapper and producer Tyler, the Creator. The text file that I wrote is as follows:

(key-sig! '(a major))

violin "violin-1":
(pp) o5 f1 | f | f | f |
f | f | f | f |

violin "violin-2":
o4 r8 a+ > f < a+ > g < a+ > f < a+ | r a+ > f < a+ > g < a+ > f < a+ |
r g > e < g > f < g > e < g | r f > c < f > d+ < f > c < f |
o5 b8 f e d+ b f e d+ | b f d < c > b f d < c |
a g f c a g f c | a g f c a g f c |

viola:
o4 r8 f b f > c < f b f | r f b f > c < f b f | e a e b e a e | d+ a d+ b d+ a d+ |

cello:
(pp) o3 c1 | c- | b6 d b d b d | b d b d b d |
c1 | c- | >b | b |

piano "piano-1":

(p) o3 r1 | f/>c-/f | <f/a/>f | f/>c-/f |
(mf) d+4./f/b/>d+  d+8/f/b/>d+ d+ <b4. | e/g-/b e/g-/b | c/f/a/>c <c/f/a/>c | c/f/a/>c <c/f/a/>c |
o4 e/g


piano "piano-2":

r1 | r1 | r1 | r1 |
(mf) o3 c2 c4. >c8 | <c-2 c- | <b/>f r8 f4 <b8 | b2/>f <b/>f |
o3 c1

And here is a short video of what it sounds like:

Sonic Pi is a live coding platform created in 2012 by Sam Aaron at the University of Cambridge Computer Laboratory for the Raspberry Pi, a low-cost programmable computer. Since it was originally created to make programming on the Raspberry Pi more accessible, it is beginner-friendly and has a friendly UI and documentation.

Sonic Pi is also popular among live coding DJs and is used by multiple artists. DJ_Dave is one of the artists who actively use Sonic Pi to create electronic music. Below is a live-coded track she created using Sonic Pi.

I also experimented with Sonic Pi and created an edit of a track, Free Your Mind by Eden Burns. I also used some custom samples of bass drums and claps for the beat. The following is the code:

s = [path_to_main_track] # this is the path to the music file
ukgclap = [path_to_sample_dir]
ukghh = [path_to_sample_dir]

use_bpm 124

##| start
in_thread do
  sample s, start: 0.6542207792, finish: 0.7045454545, amp: 0.5, decay: 3
  sleep 8*4
  set :beat, 1
end

live_loop :sbd do
  sample s, onset: 0, amp: 1.6
  sleep 1
end

##| play beat
sync :beat
live_loop :clap do
  sleep 1
  sample ukgclap, "ukgclap_8", amp: 0.7
  sleep 1
end

live_loop :shh do
  sleep 0.5
  sample ukghh, "ukghh_1", amp: 0.7
  sleep 0.5
end

##| intro
in_thread do
  sample s, start: 0.163961039, finish: 0.2142857143, amp: 0.85
  sleep 8*4
  set :mainBeat, 1
end

##| play main
live_loop :main do
  sync :mainBeat
  sample s, start: 0.3022727273, finish: 0.4029220779
  sleep 8*16
end

Here, I chopped up different parts of the track and reordered them over a coded house beat built from custom samples. The hardest part was making set and sync work properly with the threads. I also had to calculate the bars manually to figure out the sleep times. I wonder if there is a better way to do this.
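
One small improvement, sketched here with a hypothetical helper name, is to write the bar math once with define so the waits read as bars instead of raw beat counts:

# hypothetical helper: sleep for n bars of beats_per_bar beats
define :sleep_bars do |n, beats_per_bar = 4|
  sleep n * beats_per_bar
end

# so a wait like 'sleep 8*4' above could become:
# sleep_bars 8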

While the last block was about halfway through playing, I also executed the following code in another buffer of Sonic Pi:

s = [path_to_main_track] # this is the path to the music file

2.times do
  # phase: and decay: are opts for :echo; :reverb responds mainly to room: and mix:
  with_fx :reverb, phase: 0.2, decay: 5, mix: 0.75 do
    sample s, start: 0.6542207792, finish: 0.7045454545, amp: 0.5, decay: 3, rpitch: 1.3
    sleep 8*4
  end
end

It plays the vocal in a somewhat distorted way, as it overlaps with the vocals that were already being played by the main block.

For my research project, I looked into several languages before I stumbled upon livecodelab. This platform, created by Davide Della Casa and Guy John, is a breath of fresh air for anyone wishing to experiment with digital creation without the inconvenience, with “anyone” and “without inconvenience” being the key terms. Both creators have backgrounds in computer science and, as per my research, have worked at tech giants like Meta, Amazon, and Google. Let’s look deeper into what they’ve developed.

The Concept and The Language

What sets LiveCodeLab apart is its simplicity. It is written in JavaScript and runs natively in the browser’s JS runtime. The team crafted a language, LiveCodeLang, that is so straightforward it feels like you’re having a conversation rather than writing lines of code. The magic happens as you type: 3D visuals start taking shape and sound clips start playing, all in real time. It’s like watching your ideas come to life right before your eyes. Rendering happens at 60 fps, and the sound plays at an adjustable bpm.
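
For a flavor of LiveCodeLang, here is a three-line sketch reconstructed from memory of the site’s tutorials (treat the exact command names as assumptions): it draws a red cube and keeps it spinning, re-rendered live as you type.

fill red
rotate
box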

The Website: livecodelab.net

I was pleasantly surprised to find that LiveCodeLab works right in your browser. No need to download any fancy software or set up complicated environments. Just fire up your browser, head to livecodelab.net, and you’re ready to dive into the world of creative coding. What’s even cooler is that the visuals and sounds keep playing as long as you stay on the tab. They won’t distract you if you change tabs, but will still be there when you come back later.

Cool Features and Areas for Improvement

One thing I love about LiveCodeLab is its accessibility. You don’t need to be a coding whiz to appreciate what it can do. As Guy John puts it,

"You don't need to understand the code to appreciate it, much like you don't need to play guitar to enjoy a guitar performance."

It’s a welcoming space where anyone, regardless of their background, can unleash their creativity and see where it takes them.

On top of that, it’s open-source. That means anyone can contribute to its development and help make it even better. Whether it’s adding more shapes and visuals or finding ways to sync animations with sound beats, the possibilities are endless. Everything about how to build on top of LiveCodeLang is also a part of its GitHub documentation.

As great as the platform is, there are areas that could use some polish. More shapes and visuals would be great, and syncing animations with sound could take things to the next level.

However, even as it is, LiveCodeLab is a shining example of how simplicity and creativity can go hand in hand.

I opted for the live coding platform Improviz. The setup was quite straightforward: I downloaded the necessary files from GitHub, launched Improviz via the command terminal, and then coded in Improviz’s web-based editor. As I dived into the documentation, it became clear that the platform is primarily designed with beginner users in mind. The syntax it uses is similar to that of Processing or p5.js, focusing primarily on the creation of simple 3D shapes. A feature that particularly stood out to me is the ability to use personal images and GIF animations as textures for the shapes, which allows for unique customization and visual appeal. Improviz also comes with a selection of pre-installed textures and materials that are visually appealing and add to the creative possibilities. The syntax of Improviz is straightforward and intuitive, making it accessible to beginners, yet it offers enough functions to create amazing live art.

Here’s a simple live artwork I made using Improviz. My idea was to have some geometric shapes with textures and materials changing colors dynamically, and to make them move a little. I did this using move, rotate, and the sine function, which changes with time. The full code is on the left.
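
As a rough sketch of that idea, based on my reading of the Improviz docs (function names and argument counts are assumptions, not the code shown in the screenshot):

background(10, 10, 40)
// the fill color oscillates with the sine of time
fill(200, 100 + sin(time) * 100, 80)
// spin around two axes and drift side to side
rotate(time, time / 2, 0)
move(sin(time), 0, 0)
cube(2)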