The exploration of “liveness” in live coding provokes a reconsideration of how we define and interact with both technology and performance. The distinction between live coding and generative processes in audiovisual performances, where the latter is pre-coded and the former improvisational, raises questions about authenticity and originality in digital art forms. The text’s analysis of how live coding practices serve as a model of liveness — integrating human-machine interactions into a hybrid and complex system — challenges our traditional views on performance and audience engagement.

By reframing liveness not as a mere authentic experience but as a dynamic and interactive one, the text invites us to think about the implications of these live coding practices. For instance, how do these practices alter our understanding of control and creativity in performance arts? The idea of “machine liveness” — where technology responds instantly and semantically to the coder’s inputs — raises intriguing questions too: How does this immediacy transform the creative process? What does this seamless integration of action and response reveal about our potential to harmonize with increasingly intelligent systems? By emphasizing the continuous and collaborative nature of live coding, where technology is not merely a tool but a partner, the text invites us to reconsider our roles as creators and interactors within the digital landscape. This perspective not only challenges our traditional notions of artistic and technological domains but also suggests a future where the boundaries between creator, creation, and audience are fluid and dynamic.

Concept:

The project came together after we decided to create a faster track than our previous one. Once Noah laid down some breakbeats and Aakarsh added some textural pads, we settled on an atmospheric breakcore sound. Raya and Juanma took this as the starting point for the visuals and created an eclectic mix of old cartoon footage with Hydra glitch effects.

Sound Composition:

The first sound is SOPHIE’s “It’s Okay To Cry” put through striateBy to create an ambient pad; a voice line from Silent Hill 2 then goes through a similar process. A superzow arp with long sustain kicks in, creating a bleeding, noisy, driving sound, and a tremolo adds rhythm to this sustained, blended arp. A jungle bass and a cut-up Amen break enter next, followed by a superpwm arp with a shorter sustain.

The whole song then goes double time. The break is replaced with a faster, glitchier cut-up, and a rhythmically chopped voice memo replaces the superpwm arp; both are transitioned in with a gradual manual opening of a low-pass filter. The superpwm then returns, crushed to add more granularity to the sound texture.

The song eventually returns to the original tempo, cutting out the vocal fragments, while the rest of the instrumentals gradually fade out.

Visual Composition:

The visual composition initializes video playback from a URL of “The Mechanical Cow” and starts in black and white. It then applies a modulation effect to create a slightly scaled black-and-white version of the video, intensifies the grayscale effect, and blends it with the original output. The colorful variation blocks then introduce color dynamics, kaleidoscopic effects, and rotation modulations, controlled by time-based functions that enhance the visual complexity. Next, we increase the intensity of the visual effects with higher-frequency oscillations, more complex colorama applications, and stronger modulations that respond dynamically to the sine of time. For the final layer we applied color changes and scaling based on the audio’s frequency spectrum, combined with masked oscillations, to produce rhythmic visuals that suit the drum beats.

Code:

-- ambient pads: SOPHIE's "It's Okay To Cry" and a Silent Hill 2 voice line, granulated with striateBy
d1 $ jux rev $ slow 16 $ striateBy 64 (1/8) $ sound "sophie:0" # room "0.9" # size "0.9" # up "-2" # gain 0.8
d2 $ jux rev $ slow 16 $ striateBy 64 (1/8) $ sound "scream:0" # room "0.9" # size "0.9" # up "1"

-- jungle bass
d5 $ slow 8 $ s "jungbass" <| n (run 20) # krush 8


-- superzow arp: long sustain, bleeding lead with tremolo
xfadeIn 2 64 $ fast 1 $ chunk 4 (|- note 5) $ jux rev $
  arp "<converge diverge>" $ note "<cs3'min9'11'o>"
  # sound "superzow" # gain 0.7  # tremr "8" # tremdp 1
  # sustain 1 # room 0.3 # sz 0.5  
  # lpf 1000 # lpq 0.2 # crush 2

-- superpwm arp: shorter sustain
xfadeIn 3 64 $ fast 1 $ jux rev $
  arp "<pinkydown>" $ note "<cs3'min9'11'o>"
  # sound "superpwm" # gain 0.8 
  # sustain 0.2 # room 0.3 # sz 0.5  
  # hpf 500 # hpq 0.2 # crush 2

setcps 0.7

hush

-- Amen break cut-up
d6 $ slow 4 $ s "amencutup" <| n (shuffle 8 $ run 32) # speed "{1,2,3}%8" # gain 1 # room 0.4 # sz 0.6 # krush 4 # lpf 6000
-- double-time section: faster, glitchier break replaces the Amen cut-up
d6 $ slice 8 "[<0 1> 2] [<3*8 4 0> <4 3 1>]" $ s "breaks152" # gain 1.35 # legato 1 # room 0.3 # sz 0.6 # lpf 6000 # krush 2
-- rhythmically chopped voice memo
d3 $ slow 2 $ slice 8 "1 [1*2] 2 3 2 [4*2] [~ 3] [5 [5*2]]" $ s "vmemo:2" # gain 1.7 # legato 1 # room 0.3 # sz 0.9 # krush 4 # lpf 3000

d6 $ silence
d3 $ silence


-- MIDI cc patterns driving the Hydra visuals
d10 $ ccv "120 30 110 40" # ccn "1" # s "midi"
d11 $ slow 4 $ ccv "[<0 50> 127] [<0 50> <177 30>]" # ccn "0" # s "midi" 
d12 $ ccv "127 60 127 90" # ccn "2" # s "midi"
d13 $ ccv "20 40 60 80 100 120 127 0" # ccn "3" # s "midi"
s2.initVideo('https://upload.wikimedia.org/wikipedia/commons/c/ce/The_Mechanical_Cow_%281927%29_silent_version.webm')

src(s2).out(o0)

//b&w 1
src(s2).modulate(src(s2), [0,1]).scale(0.8).out(o0)

//b&w 2
src(s2).color(-1.5, -1.5, -1.5).blend(o0).rotate(-0.5, -0.5).modulate(shape(4).rotate(0.5, 0.5).scale(2).repeatX(2, 2).modulate(o0).repeatY(2, 2)).out(o0)

//colorful variation 
src(s2).blend(osc(5, 0.5, ()=>cc[2]*0.02)
    .kaleid([3,4,5,7,8,9,10].fast(0.1))
    .color(0.5, 0.3)
    .colorama(0.4)
    .rotate(0.009, ()=>Math.sin(time) * -0.00001)
    .modulateRotate(o0, ()=>Math.sin(time) * 0.003)
    .modulate(o0, 0.9)
    .scale(0.9))
  .out(o0)

//colorful variation 2
src(s2).modulate(src(s2), ()=>cc[2])
  .blend(osc(5, 0.5, 0.1)
    .kaleid([3,4,5,7,8,9,10].fast(0.1))
    .color(0.5, 0.3)
    .colorama(0.4)
    .rotate(0.009, ()=>Math.sin(time) * -0.0001)
    .modulateRotate(o0, ()=>Math.sin(time) * 0.003)
    .modulate(o0, 0.9)
    .scale(0.9))
  .out(o0)

//more distortion (add colorama details, osc 10)
src(s2).modulate(src(s2), ()=>cc[2])
  .blend(osc(10, 0.5, ()=> 0.1 + 0.9*Math.sin(time*0.05))
    .kaleid([3,4,5,7,8,9,10].fast(0.1))
    .color(0.5, 0.3)
    .colorama(()=> 0.5 + 0.5*Math.sin(time))
    .rotate(0.009, ()=>Math.sin(time) * -0.0001)
    .modulateRotate(o0, ()=>Math.sin(time) * 0.003)
    .modulate(o0, 0.6)
    .scale(0.9))
  .out(o0)

//super distortion
src(s2).rotate(0).modulate(src(s2), ()=>cc[0])
  .blend(osc(10, 0.5, ()=> 0.1 + 0.9*Math.sin(time*0.05))
    .kaleid([3,4,5,7,8,9,10].fast(0.1))
    .color(0.5, 0.3)
    .colorama(()=> 0.5 + 0.5*Math.sin(time))
    .rotate(0.009, ()=>Math.sin(time) * -0.0001)
    .modulateRotate(o0, ()=>Math.sin(time) * 0.003)
    .modulate(o0, 0.6)
    .scale(0.9))
  .out(o0)

src(s2)
.mult(osc(20,-0.1,1).modulate(noise(3,1)).rotate(0.7))
.posterize([3,10,2].fast(0.5).smooth(1))
.modulateRotate(o0)
.out()

//vibrant circle layer
src(s2).add(noise(2, 1)).color(0, 0, 3).colorama(0.4).out()

//vibrant circle layer with MIDI
src(s2).add(noise(()=>cc[1], 1)).color(0, 0, 3).colorama(0.4).out()

//Transition
src(s2).add(noise(()=>cc[1]*0.3, 1)).scale(()=> a.fft[2]*5).color(0, 0, 3).colorama(0.4).out(o0)

//drum vibes
src(s2)
.color(() => a.fft[2]*2,0, 1)
.modulate(noise(() => a.fft[0]*10))
.scale(()=> a.fft[2]*5)
.layer(
  src(o0)
  .mask(osc(10).modulateRotate(osc(),90,0))
  .scale(() => a.fft[0]*2)
  .luma(0.2,0.3)
)
.blend(o0)
.out(o0)

hush()

Work Distribution:

The entire project came together more or less simultaneously: the visuals were shaped by the audio and vice versa, so everyone contributed wherever possible, from technical work to design choices. More specifically, Aakarsh worked on the synths and pads, Noah came up with the drums and rhythm, Raya worked on the Hydra layers on top of the video, and Juanma chose the video and worked on the MIDI syncing.

Hoffmann and Naumann trace the roots of artist-musicians back to figures like Leonardo da Vinci, establishing a long-standing tradition of interdisciplinary genius that challenges the modern compartmentalization of artistic professions. This historical lens invites a contemplation on the essence of creativity itself—is it not the spirit of inquiry and boundless exploration that defines true artistry, irrespective of medium? The concept of the “all-round artist” resonates with my understanding of art as a fluid expression of human experience, unbounded by rigid categorizations. It prompts one to consider how contemporary artists might draw upon this tradition to navigate and transcend the increasingly blurred lines between disciplines.

The move towards abstraction in both art and music reflects a shift from representational to conceptual modes of expression. The authors highlight the role of abstraction in fostering a form of universal communication:

“The main focus of modernist art was therefore on the basic elements (color, forms, tones, etc.) and the basic conditions (manner and place of presentation) of artistic production.” The question thus arises: in what ways does abstraction in music influence abstraction in the visual arts, and vice versa?

The exploration of synesthesia and the case studies of Kandinsky and Schoenberg exemplify the profound interplay between seeing and hearing, revealing how artists and musicians have sought to create immersive and multisensory experiences. This intersection fascinates me, as it encapsulates the quest for a holistic artistic expression that engages all senses, thereby amplifying the impact and reach of the artwork.

The role of art schools in fostering interdisciplinary and multidisciplinary work underscores the importance of educational environments in shaping the artists of the future. As someone who values the transformative power of education, I see art schools as crucial incubators for challenging traditional boundaries and nurturing the next generation of artist-musicians. This prompts further reflection on how curricula and institutional structures might evolve to better support this cross-pollination of ideas and techniques.

In conclusion, I believe Hoffmann and Naumann’s work encourages us to reconsider the fluid boundaries between artistic disciplines, urging a deeper appreciation for the complex dialogues that have shaped the evolution of art and music.

The text invites us to consider how live coding, as a blend of computational algorithms and artistic expression, challenges our traditional understandings of music and performance. It implies a dynamic interplay between structure and spontaneity, where the act of coding live doesn’t just create music but also questions the nature of creativity itself. This nuanced dance between the coder’s intent and the system’s capabilities reflects broader themes of control, collaboration, and the unexpected outcomes inherent in merging technology with art. How does this interplay affect our perception of authorship and authenticity in digital art forms? And what does this say about the future of artistic expression in an increasingly digitized world?

The text delves into the distinctions between stylism, traditionalism, and restructuralism in music, highlighting how notation influences music’s evolution into tradition or style. It’s fascinating to think about how notation serves not just as a record of music but as a dynamic language for real-time creation and modification. Notation in live coding isn’t fixed; it’s fluid, adaptable, and, importantly, executable. This adaptability raises questions about the permanence and reproducibility of musical works, blurring the lines between composition, performance, and improvisation. This conversation around notation could be enriched by looking at John Cage’s work with indeterminate compositions, where the notation serves more as a set of possibilities than as definitive instructions, inviting a reevaluation of the role of notation in defining the boundaries of a musical work.

In the world of live coding, notation is like the script for an impromptu play where you’re both the director and the lead actor, communicating with your computer to shape music in real time. This back-and-forth turns traditional music creation on its head, making the process vibrant and ephemeral, akin to sketching on water where each performance is unique, never to be repeated in the same way. This form of notation isn’t just about documenting; it’s about exploring, experimenting, and experiencing the joy of creation as it happens.

https://drive.google.com/file/d/1adPCNibbfjbH3312wOzAxCuNja6UMrZy/view?usp=sharing (in case video is not playing)

Sound code:

setcps 0.8

d1 $ slow 4 $ arp "up down diverge converge" $ note "c'maj'4 d'min'4 a4'min'4 g4'maj'4" # s "supervibe"
d2 $ ccv "<127 0 127 0>" # ccn "0" # s "midi"

-- drums
d3 $ s "bd sd" # room 0.2
d4 $ ccv (segment 128 (range 127 0 saw)) # ccn "1" # s "midi"

--d2 $ ccv (stitch "" 127 0) # ccn 0 # s "midi"

-- bass line
d5 $ slow 4 $ note "c34 d44 a38 g34" # s "superhammond:5"
# legato 0.5

d6 $ ccv "<0 20 64 127>" # ccn "2" # s "midi"

xfade 4 $ slow 4 $ note "c'maj d'min a'min g'maj" # s "superpiano" -- "superchip"
# legato 1
# djf 0.3
# gain 0.85

d7 $ ccv "<127 0 127 0>" # ccn "3" # s "midi"

--swoooooop
once $ s "auto:3"

-- variation of arp chords
d1 $ jux rev $ slow 4 $ arp "up down diverge converge" $ note "c'maj'42 d'min'42 a4'min'42 g4'maj'42" # s "supervibe"
d2 $ ccv (segment 128 (slow 4 (range 127 0 saw))) # ccn "2" # s "midi"

do
  solo 5
  solo 4

d8 $ slow 4 $ arp "converge" $ n "c'maj'42 d'min'42 a4'min'42 g4'maj'42" # s "superpiano" -- "superchip"
# voice 0.7 # gain 0.6
# amp 0.4
# room 0.7 # sz 0.9
# legato 1.1
# djf 0.8
# gain 0.7

d9 $ ccv (segment 128 (range 127 0 saw)) # ccn "4" # s "midi"

do
  solo 10
  d10 $ s "bd*8"

do
  d10 $ s "bd*2"
  unsolo 5
  unsolo 4
  unsolo 10

-- vocals
d12 $ slow 4 $ note "[~ ~ c ~] [~ ~ d ~] e g4" # n "1 2 3 44" # s "numbers"
# amp "1 0.8 2.2 1.3*4"
# legato 1.5
# gain 1
# room 0.3
# djf 0.8

d14 $ ccv "<80 100 120 100>" # ccn "10" # s "midi"

d11 $ degradeBy 0.4 $ jux rev $ note "a b [c f] [e f] a [g d] e t" # s "superpiano"
# room 0.3
# gain 0.95
# djf 0.3

d8 $ ccv "<127 60 127 60>" # ccn "11" # s "midi"

d13 $ degradeBy 0.8 $ jux rev $ every 2 (rev) $ n "0 1 2 3 4 5 6 7 8" # s "glitch"
# gain 0.75
# legato 1
# speed (slow 4 $ range 0.5 1 saw)

d15 $ ccv "<127 0 64 127 0 64>" # ccn "12" # s "midi"

do
  d13 silence
  d11 silence
  all $ (# djf (slow 4 $ range 0.2 0.8 sine))

all $ (# legato 0.05)

d6 $ ccv (segment 128 (range 127 0 sine)) # ccn "5" # s "midi"

do
  d12 silence
  all $ (# speed (slow 4 $ range 0.5 2 sine))

all $ (# cps (slow 4 $ range 0.5 (fast 3 $ range 0.3 2 sine) saw))

all $ fast 1

hush

Visual Code:

voronoi(5,-0.1,5)
.add(osc(1,0,1)).kaleid(21)
.scale(1,1,2).colorama(()=>cc[5]).out(o1)
src(o1).mult(src(s0).modulateRotate(o1,100), -0.5)
  .out(o0)

What I had in mind was to create a “floral” pattern with vibrant colors in the visuals to go along with my sound composition. The sound uses various synthesizers such as supervibe, superhammond, and superpiano. The arp function generates arpeggios, while note specifies pitches; slow and jux rev modify the playback speed and direction, adding variation and texture. once, solo, and unsolo are used for structural changes, introducing, highlighting, or removing elements temporarily. Drums and bass lines are created using both synthesized sounds (bd, sd) and MIDI-controlled instruments. Throughout the composition, various effects shape the sound: legato controls note length, gain and amp control volume, room adds reverb, and djf sets the filter cutoff. The all function applies effects (djf, legato, speed, cps) globally, affecting all patterns to create cohesive changes in the texture, tempo, and timbre of the composition.

The use of voronoi, osc, kaleid, and scale functions in combination is pivotal in generating visuals that resemble changing flower patterns. voronoi creates cell-like structures that can mimic the segments of a flower, osc adds movement and texture, kaleid with a high value (21) generates symmetrical, kaleidoscopic patterns resembling the petals of flowers, and scale adjusts the size, allowing the visuals to expand or contract in response to the music. colorama is used to cycle through colors dynamically, which is linked to the tonal shifts in the music.

I approached this by first composing the music in TidalCycles, then creating a visual pattern in Hydra that I liked, and binding the two together. The challenge was to keep the composition interesting while using one kind of visual. I tried a lot of variants, but jumping between them over a consistent musical composition didn’t quite fit together, so I stuck to one. A possible development would be for the visual composition to also unfold gradually, starting from something simple and then blooming into the big floral patterns. I also could have been more consistent with the color palette.

“Creative Know-How and No-How” presents live coding as a vibrant tapestry of thought, where the act of coding transcends its technical underpinnings to become a medium of artistic expression. It challenges us to embrace the uncertainties of the creative process, to find value in the act of exploration itself, and to reconsider the ways in which we understand and engage with technology.

One of the most compelling aspects discussed is the concept of “play” within live coding. This notion, borrowed from Roger Caillois, characterizes live coding as an inherently uncertain activity, a form of artistic experimentation that defies conventional notions of purpose and function. It is a self-regulating activity that exists for its own sake, devoid of the pursuit of material gain, which is a radical departure from traditional views on productivity and creativity. This perspective invites us to reconsider the value we assign to creative acts, urging us to see beyond the tangible outcomes and appreciate the beauty of creation itself.

The parallels drawn between live coding and preindustrial loom weaving are particularly evocative. Both practices require a heightened awareness and a dynamic response to the evolving conditions of the creative process. This analogy not only highlights the deep-rooted connection between coding and weaving as forms of thinking-in-motion but also challenges the historical narrative that prioritizes the Jacquard loom’s role in conceptualizing computational logic. By doing so, it invites a reevaluation of ancient crafts as precursors to modern computational practices, suggesting a continuity in thought that transcends technological advancements.

Moreover, the reading delves into the philosophical underpinnings of live coding, drawing upon concepts like kairotic and mêtic intelligence, which emphasize the importance of timing in the creative process. These ideas underscore the adaptability and situational awareness crucial to live coding, where the coder navigates through a landscape of possibilities, guided by a sense of what could be rather than what is.

Here’s the link to my presentation:

https://www.canva.com/design/DAF8yNswVM8/asdv-QO-gv_DvemEwHOCgg/view?utm_content=DAF8yNswVM8&utm_campaign=designshare&utm_medium=link&utm_source=editor

SuperCollider is an environment and programming language designed for real-time audio synthesis and algorithmic composition. It provides an extensive framework for sound exploration, music composition, and interactive performance.
The application consists of three parts: the audio server (referred to as scsynth); the language interpreter (referred to as sclang), which also acts as a client to scsynth; and the IDE (referred to as scide). The IDE is built in, while the server and the client (language interpreter) are two completely autonomous programs.

What is the Interpreter, and what is the Server?

SuperCollider is made of two distinct applications: the server and the language.
To summarize in simpler terms:

Everything you type in SuperCollider is in the SuperCollider language (the client): that’s where you write and execute commands, and see results in the Post window.
Everything that makes sound in SuperCollider comes from the server, the “sound engine”, which you control through the SuperCollider language.
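
A minimal sketch of this client/server relationship, assuming a default SuperCollider setup: the first line, evaluated in the language, boots the audio server; once it is running, the second line sends a synth to it.

s.boot;  // from sclang (the client), boot the scsynth audio server

{ SinOsc.ar(440, 0, 0.1) ! 2 }.play;  // a quiet stereo sine tone, synthesized on the server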

SuperCollider for real-time audio synthesis: SC is optimized for the synthesis of real-time audio signals. This makes it ideal for live performance as well as for sound installations and events.
SuperCollider for algorithmic composition: One of the strengths of SC is that it combines two complementary, and at times antagonistic, approaches to audio synthesis. On one hand, it makes it possible to carry out low-level signal processing operations. On the other hand, it enables composers to express themselves through higher-level abstractions that are more relevant to the composition of music (e.g. scales, rhythmical patterns, etc.), as sketched below.
SuperCollider as a programming language: SC is also a programming language, belonging to the broader family of object-oriented languages, and sclang doubles as the interpreter for it. The language draws on Smalltalk and C, and it has a very strong set of Collection classes such as Arrays.
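
To make the two levels concrete, here is a small sketch assuming only the stock class library (the name \ping is arbitrary): the SynthDef works at the low level with unit generators, while the Pbind composes at the higher level with a scale and rhythmic durations.

(
// low level: a percussive sine "ping" built from unit generators
SynthDef(\ping, { |freq = 440, amp = 0.2|
    var env = EnvGen.kr(Env.perc(0.01, 0.3), doneAction: 2);
    Out.ar(0, (SinOsc.ar(freq) * env * amp) ! 2);
}).add;
)

(
// high level: pattern-based composition with a scale and durations
Pbind(
    \instrument, \ping,
    \scale, Scale.minor,
    \degree, Pseq([0, 2, 4, 7], inf),
    \dur, Pseq([0.25, 0.25, 0.5], inf)
).play;
)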

Some code snippets from the demo:

For drum roll sounds:
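
The exact snippet from the demo isn’t reproduced here; as a representative sketch, one common recipe is an accelerating series of filtered noise bursts (all parameter values below are illustrative):

(
// drum roll: noise bursts retriggered faster and faster over two seconds
{
    var trig = Impulse.kr(XLine.kr(4, 24, 2));  // trigger rate sweeps from 4 to 24 Hz
    var env = Decay2.kr(trig, 0.002, 0.08);     // short percussive envelope per hit
    LPF.ar(WhiteNoise.ar(env), 2000) ! 2        // filtered noise, duplicated to stereo
}.play;
)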

An interesting example of 60Hz Gabber Rave 1995 that I took from the internet:
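
The one-liner itself isn’t included here either; as a hypothetical stand-in, a gabber kick in that style is typically a pitch-swept sine pushed into hard distortion at a hardcore tempo, roughly like this:

(
// distorted, pitch-swept kick retriggered at ~190 BPM
{
    var trig = Impulse.ar(190/60);
    var env = EnvGen.ar(Env.perc(0.001, 0.25), trig);
    (SinOsc.ar(60 * (1 + (8 * env))) * env * 8).tanh ! 2  // 60 Hz base, tanh clipping
}.play;
)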

Here’s a recording of some small sound clips made with SuperCollider (shown in class):

https://drive.google.com/file/d/1I_HxymG_OLzdw_rirGJ9iYXLxzyK8n_k/view?usp=drive_link