Hydra (Marta & Fatema)

For the visual side, we decided to begin with vibrant visuals characterized by dynamic, distorted light trails. Our initial code loaded an image, modulated it with a simple oscillator, and blended the result with the original image, producing a blur effect. As we progressed, we layered in more complex functions built around various modulations.
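
A minimal Hydra sketch of that starting point (the image URL and parameter values here are illustrative, not our actual code):

// load an image, wobble it with a simple oscillator,
// then blend with the untouched source for a soft blur
s0.initImage("https://example.com/light-trails.jpg") // hypothetical image
src(s0)
  .modulate(osc(10, 0.1), 0.3)
  .blend(src(s0), 0.5)
  .out()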

As the project evolved, our goal was to synchronize the visuals more seamlessly with the music, increasing in intensity as the musical layers deepened. We incorporated a series of mult(shape()) calls to calm the visuals during slower beats.
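
A hedged sketch of that idea (parameters are illustrative): multiplying by a shape masks everything outside it to black, which reads as the visual calming down.

// mult(shape(...)) masks the frame, pulling the visual inward
osc(30, 0.05, 1.2)
  .mult(shape(4, 0.5, 0.2))
  .out()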

Finally, we placed all the visuals in an array and used CC values sent from Tidal to advance through them each time a new layer of music was added. This let us synchronize the transitions between music and visuals. We also routed CCs into the main visual functions to make the piece more audio-reactive.
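
A sketch of the mechanism, assuming the ccActual array exposed by our MIDI bridge (the same one used in the code later in this post); the scenes themselves are placeholders:

// placeholder scenes; a CC value sent from Tidal (ccn 0) picks the active one
const scenes = [
  () => osc(20, 0.05).out(),
  () => noise(3, 0.1).out(),
  () => voronoi(5, 0.3).out(),
]
update = () => scenes[ccActual[0] % scenes.length]()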

Check out the final code for visuals.

Tidalcycles (Bato & Jeongin)

For our final composition, our group created a smooth blend of UK Garage and house music at a tempo of 128 BPM. The track begins with a mellow melody that moves up and down the E-flat minor scale. On top of this melody, we layered a groovy UK Garage loop, establishing the mood and setting the tone of the composition.
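
A hedged TidalCycles sketch of this setup (not the actual track; the superpiano sound and the note pattern are placeholders), assuming four beats per cycle so that setcps (128/60/4) gives 128 BPM:

-- 128 BPM, with a melody walking up and down E-flat minor
setcps (128/60/4)
d1 $ note (scale "minor" "0 2 4 7 4 2" + "ef5") # s "superpiano"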

To gradually introduce rhythm, Jeong-In layered various drum patterns, adding hi-hats, claps, and bass drums one by one. On top of Jeong-In's drums, we introduced a classic UK Garage drum loop, completing the rhythmic structure of the composition.
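
A sketch of that one-by-one layering (hedged; the actual patterns and samples differed):

-- drum elements introduced one at a time, each on its own channel
d2 $ s "bd*4" # gain 1.1    -- kick first
d3 $ s "~ cp ~ cp"          -- then claps
d4 $ s "hh*8" # gain 0.8    -- then hi-hats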

Furthermore, we incorporated a crisp bass sound, which gave the overall composition a euphoric vibe. After introducing this element, we abruptly cut off the drums to create a dramatic transition. At this point, we added a new melodic layer, changing the atmosphere and breaking up the repetitiveness of the track. Over this new layer, we reintroduced the previously used elements but in a different order and context, giving the composition a fresh perspective.
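
In Tidal, an abrupt cut like this can be done by silencing the drum channels in one evaluation before launching the new melodic layer; a hedged sketch, reusing the placeholder channels above:

-- drums out all at once, then a new melodic layer changes the atmosphere
d2 silence
d3 silence
d4 silence
d5 $ note (scale "minor" "<0 2 4>(3,8)" + "ef4") # s "superpiano" # room 0.4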

Additionally, we used a riser to transition smoothly into our drum loop and incorporated a sea-wave sound effect to make the sound more dynamic. We ended the composition with a different variation of our base melody, using the jux rev function.
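
jux applies a transformation to the right channel only, so jux rev plays a reversed copy of the pattern in one ear; a hedged sketch of the closing variation (placeholder melody as above):

-- right channel plays the melody reversed
d1 $ jux rev $ note (scale "minor" "0 2 4 7 4 2" + "ef5") # s "superpiano"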

Check out the final code for music.

In the chapter on Live Coding’s Liveness(es), the author discusses the concept that “for some live coders, nothing is saved, recorded, or archived in support of future replaying: the performance both begins and ends with the blank screen/slate.” This idea prompted me to consider the role of archiving in live coding, especially given its spontaneous real-time and potentially ephemeral nature.

Recently, I came across an essay in the book “Electronic Superhighway: From Experiments in Art and Technology to Art after the Internet” that discussed performative archiving. The author critiques the notion of ephemerality as an excuse for relinquishing control and suggests that “materiality” is a more fitting term, one that evolves over time and through performance. The essay also reflects on how materiality is represented in museums, particularly during the 1990s and 2000s, noting the practice of listing the materials used in net artworks on wall labels, treating digital materials as physical substances.

I found it fascinating that even something as live and real-time as live coding can evoke a sense of materiality. This prompted me to think about the complexities of defining and preserving works in digital realms or those of a performative nature. When we view live coding as an art form, can it truly be considered as such without being archived? Is it the very act of documenting it that solidifies its significance within the discourse?

As I read about the concept of interdisciplinary artists, termed Artist-Musicians and Musician-Artists by Hoffmann and Naumann, my thoughts gravitate towards Richard Wagner’s notion of “Gesamtkunstwerk” – the total work of art dating back to the mid-19th century. Wagner envisioned a synthesis of music, drama, poetry, visual arts, and stagecraft into one unified piece, believing it would create a more immersive and emotionally powerful experience for the audience: “The true drama is only conceivable as proceeding from a common urgence of every art towards the most direct appeal to a common public” (The Art-Work of the Future, 1849).

It’s fascinating to see how contemporary artists are blurring the lines between music and visual arts. With the ongoing digitalization of media, the alignment between visual and musical techniques intensifies. Particularly in live coding, artists seamlessly integrate diverse elements, embodying Wagner’s vision of synthesizing art forms. In doing so, they offer a sensory-rich experience for audiences, carrying forward the legacy of Gesamtkunstwerk.

The author’s insights into musical notation highlight how, when translated onto computers, our expression gets distilled into numerical data, as in grid-based music built on the MIDI standard. What’s really interesting is the comparison of live coding languages to spoken languages, suggesting that these languages aren’t neutral media for expression: language design significantly shapes users’ creative decisions and the output they ultimately produce.

This got me thinking about how different tools and constraints influence my own expression as an artist dabbling in various mediums. I wonder if other multidisciplinary artists embrace or resist these influences and whether it benefits their creative process.

The influence of language designers on creative outcomes in live coding and visual programming showcases the intricate decisions artists face within these systems. Instead of a one-size-fits-all approach, we’ve seen a rise in diverse, personalized systems, each reflecting the unique vision of its creator and offering unique pathways for artistic exploration.

What’s particularly captivating about this decentralized setup is how creative tech software ecosystems keep evolving. With every new software release, we not only get the core platform but also a bunch of additional packages and plugins created by enthusiasts. These additions often stretch the boundaries of what the original creators had in mind, opening up new possibilities for artists.

Sure, it might seem overwhelming at first for newcomers to navigate this sea of options. But in the end, it all adds to the richness and diversity of artistic practice. Thanks to the collective efforts of enthusiasts, algorithmic artists aren’t confined to the limitations of a single software package. Instead, they have a wide array of tools and resources they can tailor to their specific artistic visions.

Tidal Composition

In terms of my musical composition, I developed six distinct sound layers that blend nicely together. To organize the composition, I introduced these sounds sequentially, building toward a climactic moment. Halfway through the performance, I muted the first three tracks, creating a jazzy shift in the musicality. Later, I reintroduced the three muted sounds one by one, gradually building up their presence before delicately fading them out toward the end. This gradual decrescendo provided a satisfying sense of closure, and I think the structure effectively showcased the diverse range of notes and layers in the composition.

d1 $ qtrigger $ filterWhen (>=0) $ stack [
  fast "<0.5 2>" $ s "lt:3 lt lt ~" # gain 0.8,  -- low toms
  s "909!4" # gain 1,                            -- steady 909 pulse
  ccv "2 4 1 1" # ccn 0 # s "midi"               -- ccn 0 drives pixelation in Hydra
]

d2 $ qtrigger $ filterWhen (>=0) $ stack [
  s "hh*2 hh*2 hh*2 <hh*6 [hh*2]!3>" # gain 0.8, -- hi-hats
  ccv "45 90 270 360" # ccn 1 # s "midi"         -- ccn 1 drives rotation
]

d3 $ qtrigger $ filterWhen (>=0) $ stack [
  every 3 (hurry 2) $ sound "bd sd [~ bd] [cp bd*2]",
  sound "kurt:4(3,8)" # shape "0 0.98" # gain "0.5" # speed 1.04,
  struct "t" $ ccv (irand 15) # ccn 2 # s "midi" -- ccn 2 drives scrollX
]

d4 $ qtrigger $ filterWhen (>=0) $ stack [
  fast "<0.5 2>" $ s "lt:3 lt lt ~" # gain 0.5,
  -- assuming speed was the intended control here; the original "# slow 2 ..." doesn't type-check
  s "909!4" # speed (slow 2 "<1.5 1>") # gain 1.2,
  s "hh*2 hh*2 hh*2 <hh*6 [hh*2]!3>" # gain 0.7,
  sound "sax(3,8) sax(5,8)" # legato 1 # n 3 # note "<[9 7] 5 [9 12]>" # sz 0.8 # room 0.4 # gain 0.8,
  struct "t" $ ccv (irand 127) # ccn 3 # s "midi" -- ccn 3 drives the noise modulation
]

d5 $ qtrigger $ filterWhen (>=0) $ stack [
  s "bd*4" # gain 1.4 # krush 10,                      -- crushed four-on-the-floor kick
  struct "t t t t" $ ccv (irand 4) # ccn 4 # s "midi", -- ccn 4 drives scrollY
  ccv "2 4 1 1" # ccn 6 # s "midi"                     -- ccn 6 drives the base rectangle
]

d6 $ qtrigger $ filterWhen (>=0) $ stack [
  sound "sax(3,8) sax(5,8)" # legato 1 # n 3 # note "<[9 7] 5 [9 12]>" # sz 0.8 # room 0.4 # gain 0.8,
  struct "t" $ ccv (irand 6) # ccn 5 # s "midi" -- ccn 5 drives the shape mask
]
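
The halfway mute described above isn’t in the buffer itself; it was typed live. A hedged sketch of the pattern: silence the first three channels in one go, then re-evaluate their definitions to bring them back.

-- mid-performance: drop the first three layers...
d1 silence
d2 silence
d3 silence
-- ...then re-evaluate d1, d2, d3 above one by one to rebuild before the final fade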

Hydra visuals

Each layer of sound is assigned a distinct CC number, which sends MIDI data to Hydra. This allowed me to introduce variations to the original rectangle shape depending on the sound being played. It was particularly satisfying because silencing specific sounds would also disable particular visual distortions since the corresponding CC number wouldn’t be transmitted.

To add variety throughout the performance, I would comment out some lines to create the effect of visuals gradually building towards a climax. During the jazzy drop, I disabled the pixelate line to allow for subtle changes in the oscillating visuals. As with the music, I wanted a gradual buildup and fade-out, so it was essential to begin and end my visual composition with just one line.

For the next performance, I’m eager to expand on the scope of visuals. I’m intrigued by how many visuals I was able to generate with just 14 lines of code.

update = () => {
  let n = ccActual[2] + 1 // drum layer (ccn 2); +1 avoids dividing by zero below
  let k = ccActual[4]     // kick layer (ccn 4)
  shape(2, () => ccActual[6] * 0.1)       // base rectangle; width follows ccn 6
    .modulate(osc(1))
    .pixelate(() => ccActual[0])          // pixelation driven by ccn 0
    .rotate(() => ccActual[1])            // rotation driven by ccn 1
    .scrollX(1 / n)
    .scrollY(1 / k)
    .mult(shape(() => ccActual[5], () => ccActual[5] * 0.2)) // sax layer (ccn 5) masks the frame
    .modulate(noise(() => cc[3] * 5, 0, 1))                  // noise wobble from ccn 3
    .scale(1, () => window.innerHeight / window.innerWidth, 1) // correct the aspect ratio
    .out()
}

The author explores the intersection of live coding with artistic research, delving into its fusion of technical expertise and intuitive, craft-based approaches. This blend fosters a dynamic interplay between problem-solving and the generation of obstacles. Viewing live coding as a discipline of trial and error, akin to “feeling one’s way,” highlights its reliance on technical computational knowledge while embracing a more intuitive process. In the realm of creative coding, the author often employs a technique of constant questioning, applying logic to resolve challenges in a perpetual cycle of doing and undoing, ultimately culminating in something beautiful. Yet, as in art, determining the endpoint of this process proves elusive, as completion remains subjective.

Live coding, performed within a defined timeframe, prompts reflection on the nature of performance art—whether it is an ongoing process or a singular event. The author draws inspiration from Paolo Virno’s assertion that performing artists’ actions lack extrinsic goals, existing solely for their own occurrence. This concept reminds me of Chafa Ghaddar’s fresco currently exhibited at the NYUAD Art Gallery. The fresco’s creation was partly performative, completed within the span of just a week, and the temporal constraints she faced prompt contemplation about whether it was ever truly completed. Ghaddar’s acceptance of presenting an unfinished piece reflects the essence of live coding, embracing real-time creation over finality.

Contemplating the eventual loss of Ghaddar’s fresco underscores the notion of impermanence. The act of producing and subsequently losing artwork prompts deeper reflection on the essence of art itself, its ability to provoke thought, and its intrinsic value beyond material existence.

My demo can be viewed here.

Topos.live is a web-based WebAudio/MIDI sequencer integrating a variety of synthesis techniques, from additive to wavetable, offering a versatile toolkit for sonic experimentation. With Hydra integration, it supports oscilloscopes, frequency visualizers, and image sequencing. Built with TypeScript (which adds static typing to JavaScript) and Vite (a fast development server with an optimized build step), Topos is loosely inspired by the Monome Teletype.

Developed by BuboBubo (Raphaël Forment) and Amiika (Miika Alonen), both deeply involved in the TOPLAP and Algorave communities, Topos.live represents a fusion of expertise and passion. The duo previously collaborated on the Sardine Live Coding Library for Python (see performance). Their journey with Topos.live started in August 2023.

Raphaël’s scholarly contributions include papers such as “How Live is Live Coding? The case of Tidal’s Longest Night” and “Sardine: a Modular Python Live Coding Environment,” adding academic depth to his practical endeavors.