This chapter explores the idea of “liveness” in live coding from a methodological perspective, asking how we should “live” code. I think the balance between starting with pre-written code and starting from absolute zero is a practical question that has also arisen throughout our live coding practices in class. It was interesting to think about liveness not only in terms of human performers but also of computers, since, as the writing suggests, the computer’s immediacy is closely tied to the nature of liveness in live coding. The author’s emphasis on the timely nature of liveness reminds me of mixing, or DJing, different tracks. The author states that “principles of knowing how and knowing when are as privileged as knowing what” in live coding performances. In DJing, the human operator also typically has a pre-organized set and timestamps in mind, though these can be improvised depending on the audience and atmosphere during the performance. I think there is a challenging yet interesting balance in live performance between saying what you want to say and communicating and vibing with your audience.

In this essay, the authors discuss the mutual influences between art and music and the interdisciplinary artist-musicians (or musician-artists) who bridge them. The writing covers a brief history of modern art, from early 20th-century painting to the latest computer art, alongside the relevant musical movements. It was interesting to see how music and musical movements have influenced fine art by providing new languages and opening up new possibilities. The authors also consider the social aspect by discussing musicians who studied at art school and artists who have produced work in both art and music; the art school could thus become a hub for musicians and artists working in diverse mediums. I agree that media art and music are deeply connected by nature. Especially given music’s significant cultural impact, I think musical influences lie at the foundation of various domains, including visual art.

In this text, the author discusses the impact and role of notation in live coding. It was interesting to read how literature from music and computer science is intertwined with the history of live coding. One of the parts most interesting to me was that each live coding platform has its own perspective on live performance. The author compares using a different live coding language to switching between natural languages, suggesting that it can change how we shape our thoughts as coders. When I used Sonic Pi for the Research Project, I felt a similar shift in how I wrote code compared to TidalCycles. In Sonic Pi, you need to re-run the whole program when you change the code, while in TidalCycles you can choose to run a single block. Because of this difference, I got the impression that I needed to think from a broader perspective when using Sonic Pi, keeping the big picture in mind as I started a performance. I wonder whether yet another live coding platform would feel different again. Another part I found interesting is how the author connects pattern-based music representations to computer algorithms and to textile weaving and braiding. Here, the author refers to another article, by composer Laurie Spiegel (whose writing we read previously in class), about categories of pattern manipulation in computer music. I find it really interesting that some of these pattern manipulation techniques map directly onto basic computer operations, such as bit shifting (‘<<’ and ‘>>’) implementing rotation.
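As a minimal sketch of that mapping (plain Haskell rather than any live coding platform, with a hypothetical rotl helper): an eight-step binary rhythm stored in a byte can be rotated by recombining the two halves that the shift operators split off.

import Data.Bits (shiftL, shiftR, (.|.))
import Data.Word (Word8)

-- Rotate an 8-step binary rhythm (one bit per step) left by n steps:
-- bits pushed out on the left by shiftL re-enter on the right via shiftR.
rotl :: Int -> Word8 -> Word8
rotl n p = (p `shiftL` n) .|. (p `shiftR` (8 - n))

-- e.g. rotl 1 0x92 == 0x25  (10010010 -> 00100101)

Data.Bits in fact provides rotateL and rotateR built in, and TidalCycles expresses the same idea at the pattern level with <~ and ~>.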

Performance recording

Audio code 🎧

setcps (130/60/4)

-- chords
d1 $ qtrigger $ filterWhen (>=0) $ stack [
    struct ("[1 0] [0 1] [0 0] [1 0]")
    $ s "longer:1" >| begin 0 >| end "0.003"
    # note ("<f'min'4 df'maj'4 c'min'4 ef'maj'4>") # hpf 1000,
    ccv "[<90 40 30 90> 0] [0 <90 40 30 90>] [0 0] [<90 40 30 90> 0]" # ccn "0" # s "midi"
  ]

-- bass drum
d2 $ qtrigger $ filterWhen (>=0) $ stack [
    struct ("1 1 [1 0 0 1] [0 0 1 0]") $ s "house",
    ccv "60 60 [60 0 0 60] [0 0 60 0]" # ccn "1" # s "midi"
  ]

-- melody
d3 $ qtrigger $ filterWhen (>=0) $ stack [
  fast 2 $ struct ("1 1 1 <0 1>") $ rolledBy 0.7 $ note (arp "diverge" "<f'min'4 ef'maj'4>")
  # s "bubble:2" # gain 1.3 # hpf 1000,
  ccv "30 30 30 <0 30>" # ccn "2" # s "midi"
]

-- kick & hh
do
  d4 $ qtrigger $ filterWhen (>=0) $ struct ("[1 0 1 1 <0 1> [0 1] [0 1] 0]") $ s "bubble:5" -- bubble kick
  d5 $ qtrigger $ filterWhen (>=0) $ struct ("0 1 0 1") $ s "jchh:1" # gain 0.8 # room 0.1

-- shaker?
do
  d7 $ qtrigger $ filterWhen (>=0)
    $ seqP [
      (2,3, s "ukgfx:3@4")
    ]
  d8 $ qtrigger $ filterWhen (>=0) $ struct ("[1 1] [1 0] [0 1] 0") $ s "house:2" # lpf 2000

-- faster
do
  d1 $ qtrigger $ filterWhen (>=0) $ struct ("[1 0] [0 1] [0 0] [1 0]") $ s "longer:1" >| begin 0 >| end "0.003"
    # note ("<f'min'4 df'maj'4 c'min'4 ef'maj'4>")
  d5 $ qtrigger $ filterWhen (>=0)
    $ seqP [
      (0,4, struct ("[0 1] [0 1] [0 1] [0 1]") $ s "jchh:1" # gain "0.8" # room 0.1),
      (4,8, struct ("[0 1] [0 1] [0 1] [0 1]") $ s "jchh:1" # gain "0.8" # room 0.1)
    ]

-- little break
do
  d1 $ qtrigger $ filterWhen (>=0) $ struct ("1@4") $ s "longer:1" >| begin 0 >| end "0.003"
    # note ("<f'min'4 df'maj'4 c'min'4 ef'maj'4>")
  d2 $ silence
  d3 $ qtrigger $ filterWhen (>=0)
    $ seqP [
      (4,8, fast 2 $ struct ("1 1 1 <0 1>") $ rolledBy 0.7 $ note (arp "diverge" "<f'min'4 ef'maj'4>"))
    ] # s "bubble:2" # gain 1.4 # hpf 1000 -- louder bubble
  d5 $ silence
  d6 $ silence
  d8 $ silence

-- highlight
do
  d1 $ qtrigger $ filterWhen (>=0) $ stack [
      struct ("1@4") $ s "longer:1" >| begin 0 >| end "0.003"
      # note ("<f'min'4 df'maj'4 c'min'4 ef'maj'4>") # gain 1.07,
      ccv "[<90 40 30 90> 0] [0 <90 40 30 90>] [0 0] [<90 40 30 90> 0]" # ccn "0" # s "midi"
    ]
  d2 $ qtrigger $ filterWhen (>=0) $ stack [
      struct ("1 1 [1 0 0 1] [0 0 1 0]") $ s "house",
      ccv "60 60 [60 0 0 60] [0 0 60 0]" # ccn "1" # s "midi"
    ]
  d3 $ qtrigger $ filterWhen (>=0) $ fast 2 $ struct ("1 1 1 <0 1>") $ rolledBy 0.7 $ note (arp "diverge" "<f'min'4 ef'maj'4>")
    # s "bubble:2" # gain 1.3 # hpf 1000
  d4 $ qtrigger $ filterWhen (>=0) $ struct ("[1 0 1 1 <0 1> [0 1] [0 1] 0]") $ s "bubble:5" # hpf 1000
  d5 $ qtrigger $ filterWhen (>=0) $ struct ("[0 1] [0 1] [0 1] [0 1]") $ s "jchh:1" # gain 0.8 # room 0.1
  d6 $ qtrigger $ filterWhen (>=0) $ struct ("0 1 0 1") $ s "ukgclap:5" # gain 0.7 # room 0.1
  d7 $ silence
  d8 $ qtrigger $ filterWhen (>=0) $ struct ("[1 1] [1 0] [0 1] 0") $ s "house:2"

-- riser
d7 $ qtrigger $ filterWhen (>=0) $ stack [
    seqP [
      (0,1, s "jcfx:12@8" # gain 0.7 # room 0.2),
      (0,1, s "ukgriser:3@8" # lpf 2000 # gain 0.8 # room 0.2),
      (2,6, s "h:1@4" # n "<f'min'4 df'maj'4 c'min'4 ef'maj'4>" # crush 3 # cut 1 # hpf 1000 # gain 0.6 # room 0.4)
    ],
    ccv "<90 40 30 90>" # ccn "2" # s "midi"
  ]

-- only chords + vox
do
  d1 $ struct ("[1 0] [0 1] [0 0] [1 0]") $ s "longer:1" >| begin 0 >| end "0.003"
    # note ("<f'min'4 df'maj'4 c'min'4 ef'maj'4>")
  d2 $ silence
  d3 $ silence
  d4 $ silence
  d5 $ silence
  d6 $ silence
  d7 $ qtrigger $ filterWhen (>=0) $ s "h:1@4" # n "<f'min'4 df'maj'4 c'min'4 ef'maj'4>"
    # crush 3 # cut 1 # hpf 1000 # gain 0.6 # room 0.4
  d8 $ silence

-- ending..
do
  d1 $ silence
  d2 $ silence
  d3 $ silence
  d4 $ silence
  d5 $ silence
  d6 $ silence
  d7 $ qtrigger $ filterWhen (>=0) $ struct ("[1 0] [0 1] 0 0") $ s "longer:1" >| begin 0 >| end "0.003"
    # note ("<f'min'4 df'maj'4 c'min'4 ef'maj'4>" + "-3")
  d8 $ silence
  d9 $ silence

d7 $ struct ("[1 0] [0 1] 0 0") $ s "longer:1" >| begin 0 >| end "0.003"
    # note ("<f'min'4 df'maj'4 c'min'4 ef'maj'4>" + "-3") # lpf 2000 -- 2000 -> 1000

Visual code 🫧

hush()

bpm = 65
// -- load everything here --
// load video 1 (hand)
s1.initVideo('https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExOWRlamczNDZkNWwwNjl3cXMzN3FjYnR5bWlmbTNlbWZmaHVoOTJ2MiZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/3oKIPw0gVImPZISxlS/giphy.mp4')
src(s1).out(o1) // out at o1
// load video 2 (background)
s0.initVideo('https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExcGtoMWl5ZzN1czdxMW40N3kyZGhjOTJmczB4bjQ1YWg3anFsYnUzeCZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/xUOwGedYaNseUaJRa8/giphy.mp4')
// load noises
voronoi(10,1,5)
  .brightness(()=>Math.random()*0.15)
  .modulatePixelate(noise(() => (cc[2]*25),0.5), 100)
  .out(o2)
// --------------------------

// start
src(s0)
  .modulate(noise(() => (cc[0]*0.3)))
  // .blend(noise(() => (cc[2]*0.5)), 0.1).brightness(0.2) // melody
  // .blend(o2, 0.1) // shaker
  .out(o0)

// little break
speed = 0.01
src(s0).brightness(0.1).out(o0)

// highlight
speed = 1
src(s0)
  .modulate(noise(() => (cc[0]*0.3)))
  .blend(noise(() => (cc[2]*0.5)), 0.1).brightness(0.2)
  .blend(o2, 0.1)
  .add(src(s1), 0.2)
  .out(o0)

// chords + vox
speed = 1
src(s0)
  .add(src(s1), 0.4)
  // .brightness(0.2) // ending
  .out(o0)

For the composition project, I wanted to create pop-like, melodic music with nostalgic visuals. I first experimented with different rhythms for the beat and created a main beat using the ‘bubble’ sound samples from the DirtSamples library. I instantly liked how it sounded, so I decided to go with a lightweight melody that would pair well with the bubble kick drum. The hardest part for me was coming up with the chord progression. With the professor’s help, I was able to devise the main chords that I use in the intro. I couldn’t find a synth that fit the rest of the beat, so I ended up chopping the chord sound out of another song (OMG by NewJeans) and then changing the notes to apply the custom chords. Once I had all the elements, I mostly focused on the progression of the sounds. I like music with a nice intro, so I spent a lot of time refining the initial build-up and making it progress consistently so that listeners can really feel it.
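For reference, the chopping technique itself is just Tidal’s begin and end parameters, which select a fraction (from 0 to 1) of a sample file, combined with note to repitch the slice. A stripped-down sketch of what the chord lines above do:

-- play only the first 0.3% of the chopped chord sample,
-- repitched to one of the custom chords
d1 $ s "longer:1" # begin 0 # end 0.003 # note "f'min'4"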

In this text, the author discusses the cognitive process of live coding and how it encompasses different ways of thinking. For example, the author states that live coding “pressures the if-then thinking of computational logic toward the what-if of speculative experimentation.” It is interesting to view live coding as an experimental way of computing, where open experimentation is actively encouraged, compared to standard programming. In some ways I have felt the same: in live coding, you first try something, modify it, and expand on it until you like the outcome, while in standard programming you want to have the logic in mind before you start typing. However, I think the two become similar once you start improving the code. In both scenarios, you want to understand what you wrote and refine it – to make the code do what you imagined and to make it cleaner. For example, in live coding, you might group multiple sound layers into a structure or assign them to a variable after you find something you like. Similarly, when writing software, you remove unnecessary lines or restructure them more consistently after you implement the functionality. So I think there are areas of the process where live coding and other ways of computing overlap, as well as areas where they differ.
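As a small TidalCycles illustration of that grouping step (a hypothetical sketch with made-up layers, not taken from a project): once a stack of sounds works, binding it to a name keeps later improvisation readable.

-- name a working stack of layers, then keep tweaking the named whole
let beat = stack [ s "house*4", s "jchh:1*8" # gain 0.8 ]
d1 $ beat # room 0.1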

Sonic Pi is a live coding platform created in 2012 by Sam Aaron at the University of Cambridge Computer Laboratory for the Raspberry Pi, a low-cost programmable computing platform. Since it was originally created to make programming on the Raspberry Pi more accessible, it is beginner-friendly, with an approachable UI and documentation.

Sonic Pi is also popular among live coding DJs and is used by multiple artists. DJ_Dave is one artist who actively uses Sonic Pi to create electronic music; below is a live-coded track she made with it.

I also experimented with Sonic Pi and created an edit of a track – Free Your Mind by Eden Burns – using some custom bass drum and clap samples for the beat. The following is the code:

s = [path_to_main_track] # this is the path to the music file
ukgclap = [path_to_sample_dir]
ukghh = [path_to_sample_dir]

use_bpm 124

##| start
in_thread do
  sample s, start: 0.6542207792, finish: 0.7045454545, amp: 0.5, decay: 3
  sleep 8*4
  set :beat, 1
end

live_loop :sbd do
  sample s, onset: 0, amp: 1.6
  sleep 1
end

##| play beat
sync :beat
live_loop :clap do
  sleep 1
  sample ukgclap, "ukgclap_8", amp: 0.7
  sleep 1
end

live_loop :shh do
  sleep 0.5
  sample ukghh, "ukghh_1", amp: 0.7
  sleep 0.5
end

##| intro
in_thread do
  sample s, start: 0.163961039, finish: 0.2142857143, amp: 0.85
  sleep 8*4
  set :mainBeat, 1
end

##| play main
live_loop :main do
  sync :mainBeat
  sample s, start: 0.3022727273, finish: 0.4029220779
  sleep 8*16
end

Here, I chopped up different parts of the track and reordered them around a house beat coded with the custom samples. The hardest part was making set and sync work properly across the threads. I also had to calculate the bars manually to figure out the sleep times: at 124 BPM a beat lasts 60/124 ≈ 0.48 seconds, so sleep 8*4 waits 32 beats, i.e. 8 bars of 4/4. I wonder if there is a better way to do this.
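One small improvement I can imagine (a hypothetical helper, not part of the project code) is wrapping the bar arithmetic in a define so that sleep times read as bars instead of raw beat math:

# convert a bar count into the beat count that sleep expects
define :bars do |n, beats_per_bar = 4|
  n * beats_per_bar
end

# e.g. sleep bars(8)  # instead of sleep 8*4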

While the last block was about halfway through playing, I executed the following code in another Sonic Pi buffer:

s = [path_to_main_track] # this is the path to the music file

2.times do
  with_fx :reverb, phase: 0.2, decay: 5, mix: 0.75 do
    sample s, start: 0.6542207792, finish: 0.7045454545, amp: 0.5, decay: 3, rpitch: 1.3
    sleep 8*4
  end
end

It plays the vocal in a somewhat distorted way as it overlaps with the vocal already being played by the main block.

In this paper, Spiegel proposes that applying information theory to music composition can be a powerful approach to creating music. Spiegel begins her argument by explaining the nature of music and the factors, like repetition and noise, that make it “musical.” This part was interesting to me since I knew little about music theory. While explaining the difference between random corruption, “noise,” and random generation, the author states that “music is self-referential and sensory rather than symbolic.” It made me think about the difference between what we consider noise and what we consider music, and how music, compared to visual media, is more sensory and transparent due to its momentary nature. Visual art allows viewers to stay in the moment, contemplating and looking for as long as they want. Music, on the contrary, is instant: the moment we hear a sound, it has already moved on, allowing us only to feel what we feel at that exact moment. It was also interesting that the author describes noise as “something that makes sense in some other context” rather than as absolute noise. I think this also applies when mixing music, where overlapping different parts of two tracks makes something that sounds new.