I liked the comparison of different live coding platforms to languages, and I agreed with the author that the platform/language you work with heavily dictates how you approach things. When I was working with Cascade for my research project, I found myself leaning toward making simple HTML shapes, since that was what Cascade afforded me on the surface. With Hydra, I found myself making heavy use of the oscillators, because that was what was available. Could I make anything I visualized in my imagination on both platforms? Probably! But it would take so much work, and in a way, being limited by the constraints/affordances of Cascade/Hydra let me think about making the most of what I was given, instead of being forced to answer a question as abstract as ‘make anything you want’.

I found it funny how the author emphasized the ephemeral nature of live coding, especially the “live coders [that] celebrate not saving their code”. In our class, I found myself and many of my classmates “pre-gramming” (as the author calls it) as much as we could to make the performance as smooth as possible. Perhaps this is just a side effect of being new to the platform, or of having little knowledge, but I’m still fearful of doing the ‘live’ part of live coding, and would rather trust the pre-gramming approach.

As the author compares computers to instruments, I wonder whether a syntax error in a live coding performance could be treated like playing a wrong note. On reflection, I don’t think the comparison is fair to either side. Live coding platforms like TidalCycles sit as an abstraction layer between the creator and the music itself. With a guitar or piano there is no such layer, and you can easily map your physical sense of self (of your fingers and hands) to the music being produced. There is a big physical disconnect with live coding platforms, which depend heavily on visuals to make sure you’re playing the right notes: you have to look at the code, process that information with your eyes, and then further process what that block of code will sound like. Live coding loses access to one of the best features of being human, your proprioception; if I play an instrument well enough, I can make music by feel even with my eyes closed. I suppose you could argue that you can type with your eyes closed, but that feels like a bit of a stretch for making music…

The author’s insights into musical notation highlight how, when translated onto computers, our expression gets distilled into numerical data, as is evident in grid-based music built on the MIDI standard. What’s really interesting is the comparison of live coding languages to spoken languages, suggesting that these languages aren’t neutral vehicles for expression: language design significantly shapes users’ creative decisions and the output they ultimately produce.

This got me thinking about how different tools and constraints influence my own expression as an artist dabbling in various mediums. I wonder if other multidisciplinary artists embrace or resist these influences and whether it benefits their creative process.

The influence of language designers on creative outcomes in live coding and visual programming showcases the intricate decisions artists face within these systems. Instead of a one-size-fits-all approach, we’ve seen a rise in diverse, personalized systems, each reflecting the unique vision of its creator and offering unique pathways for artistic exploration.

What’s particularly captivating about this decentralized setup is how creative tech software ecosystems keep evolving. With every new software release, we not only get the core platform but also a bunch of additional packages and plugins created by enthusiasts. These additions often stretch the boundaries of what the original creators had in mind, opening up new possibilities for artists.

Sure, it might seem overwhelming at first for newcomers to navigate this sea of options. But in the end, it all adds to the richness and diversity of artistic practice. Thanks to the collective efforts of enthusiasts, algorithmic artists aren’t confined to the limitations of a single software package. Instead, they have a wide array of tools and resources they can tailor to their specific artistic visions.

<How it started>

There was no end goal at first. I was trying combinations of different sounds, thinking about how they would sound if they worked as my intro, and so on. Playing around, I came up with two combinations that I liked. They gave a vibe of somebody maybe being chased, or maybe being high. Then a game I had watched a video of years ago popped into my head, so I decided to try making a visual similar to it. (It’s an educational game, and this project, too, is far from promoting drug abuse.)

<Performance>

For the performance, I realized that it would be chaotic to go back and forth between the music and the visuals. At the same time, I wanted some aspect of live coding to be there. To get around this, I made the music as a complete piece, allowing myself to focus on the visuals and evaluate the code blocks in time with the preset music.

I could not do any kind of screen recording, because whenever I tried, the Hydra code lagged so much that it completely stopped the video in the background and froze the screen. Because of that, some of the sounds in the music sound a little different in the video above.

<Tidal Cycles Code>


intro = stack [slow 2 $ s "[ ~ cp]*4", slow 2 $ s "mash*4"]
baseOne = stack [s "bd bd ht lt cp cp sd:2 sd:2", struct "<t(4,8) t(3,8,1)>" $ s "ifdrums" # note "c'maj f'min" # room 0.4]
baseTwo = stack[slow 2 $ s "cosmicg:3 silence silence silence" # gain 0.7, slow 2 $ s "gab" <| n (run 10)]
baseThree = stack[s "haw" <| n (run 4),  s "blip" <| n (run 13) # gain (range 0.9 1.2 rand)]
buildHigh = palindrome $ s "arpy*8" # note ((scale "augmented" "7 3 1 6 5") + 6) # room 0.4
buildMidOne = sound "bd bd ht lt cp cp sd:2 sd:2"
buildMidTwo = stack[s "<feel(5,8,1)>" # room 0.95 # gain 1.3 # up "1", s "feel" # room 0.95 # gain 1.3 # up "-2"]
buildLow = s "f" # stretch 2 # gain 0.75
explosion = slow 2 $ s "sundance:1" # gain 1.2
mainHigh = s "hh*2!4"
mainMid = stack[fast 1.5 $ s "bd hh [cp ~] [~ mt]", fast 1.5 $ s "<mash(3,8) mash(5,8,1)>" # speed "2 1" # squiz 1.1, s "circus:1 ~ ~ ~ "]
endBeep = s "cosmicg:3 silence silence silence" # gain 0.7

-- midi
midiIntro = ccv "2 4 -2 1" # ccn 0 # s "midi"
midiEnd = ccv "2" # ccn 0 # s "midi"
midiBase = ccv "127 0 0 64 127 0" # ccn 1 # s "midi"
midiBuild = ccv "40 10" # ccn 2 # s "midi"
midiMain = ccv "2" # ccn 2 # s "midi"
midiSlow = ccv "20 16 12 8 4" # ccn 3 # s "midi"

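-- seqP plays each pattern between its start and end cycle: (start, end, pattern)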
playMusic = do {
  d2 $ qtrigger $ filterWhen (>=0) $ seqP [
    (0, 6, intro),
    (0, 42, fast 4 midiIntro),
    (6, 12, intro # gain 0.8)
  ];
  d3 $ qtrigger $ filterWhen (>=0) $ seqP [
    (6, 50, midiBase),
    (6, 42, baseOne),
    (12, 42, baseTwo),
    (18, 22, baseThree),
    (22, 26, baseThree # up 4),
    (26, 30, baseThree),
    (30, 34, baseThree # up 4),
    (34, 42, degradeBy(0.5) baseThree # gain 0.8),
    (42, 46, degradeBy(0.5) baseThree # gain 0.65),
    (46, 50, degradeBy(0.5) baseThree # gain 0.45)
  ];
  d4 $ qtrigger $ filterWhen (>=0) $ seqP [
    (42, 58, buildHigh),
    (46, 58, buildMidOne # gain 1.1),
    (50, 58, buildMidTwo),
    (50, 60, fast 6 midiBuild),
    (50, 58, buildLow),
    (58, 59, explosion)
  ];
  d5 $ qtrigger $ filterWhen (>=0) $ seqP [
    (60, 62, mainHigh),
    (60, 86, midiEnd),
    (60, 86, midiMain),
    (60, 86, midiSlow),
    (62, 84, mainMid)
  ];
  d6 $ qtrigger $ filterWhen (>=0) $ seqP [
    (68, 76, baseOne # gain 0.5),
    (68, 80, baseTwo # gain 0.5),
    (68, 68, baseThree # gain 0.5),
    (76, 86, midiEnd),
    (76, 78, slow 2 endBeep),
    (78, 82, degradeBy(0.7) $ slow 2 endBeep # gain 0.6),
    (82, 86, degradeBy(0.5) $ slow 3 endBeep # gain 0.5)
  ]
}

playMusic

hush

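A quick note on the midi patterns above: the ccv/ccn lines publish MIDI control-change messages on the "midi" target, and the Hydra code below reads them back through the cc array. The bridge itself isn’t shown here; a minimal WebMIDI sketch of the kind of setup assumed (normalizing values to 0..1, which matches how cc is used below) could look like this:

// assumed bridge, not part of the original performance code:
// store incoming MIDI CC values, normalized to 0..1, in the global cc array
cc = []
navigator.requestMIDIAccess().then((midi) => {
  midi.inputs.forEach((input) => {
    input.onmidimessage = (msg) => {
      const [status, index, value] = msg.data
      if ((status & 0xf0) === 0xb0) cc[index] = value / 127 // 0xb0 = control change
    }
  })
})
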
<Hydra Code>


s0.initVideo("/Users/mayalee/Desktop/Spring\ 2024/Live\ Coding/classPerformance/composition/runningVideo.mp4")

src(s0)
  .out(o0)
render(o0)

src(s0)
  .add(
    osc(200,1,0.9)
      .saturate(()=>cc[0])
      .modulate(noise(()=>Math.sin(cc[1] / 100 * time) * 2 + 1))
      .add(shape(30).scale(()=>cc[2]*20).rotate(2))
  )
  .out(o0)
render(o0)

osc(200,1,0.9)
  .rotate(1, 0.1)
  .saturate(()=>cc[0]*5)
  .modulate(noise(()=>Math.sin(0.01 * time) * 2 + 1))
  .add(shape(30).scale(2.5).rotate(2))
  .out(o0)
osc(0.001, 900, 0.8)
  .diff(o0)
  .scale(()=>cc[1] /3000 - 100)
  .modulate(o1, 0.1)
  .out(o2)
render(o2)

osc()
  .shift(0.1,0.9,0.3)
  .out(o2)
render(o2)

osc(200,1,0.9)
  .rotate(1, 0.1)
  .saturate(1)
  .modulate(noise(()=>Math.sin(0.001 * time) * 2 + 1))
  .add(shape(30).scale(2.5).rotate(2))
  .out(o2)
osc(0.001, 900, 0.8)
  .diff(o2)
  .rotate(()=>Math.sin(0.1 * time) * 2 + 1, ()=>cc[3] / 10)
  .scale(0.3)
  .modulate(o1, 0.1)
  .out(o1)
render(o1)

hush()

<Future Improvements>

When I started working on the visuals, I thought it would be a good idea to have the beat of the visuals slightly off from the music, to create that psycho mood. Looking at the end result, I’m not sure it came through the way I intended. I think the idea was okay, but making something off-beat in a harmonized way was not an easy job. I also think there could have been more filler visuals. There are a part or two where the visuals stay repetitive for a while; making more visuals for those parts would, I think, have made it more intense to watch.

Idea:

The initial thought that came to mind when I started the composition project was to make something soothing, something that would ease me and everyone who listens to it into the upcoming spring break. The piece was supposed to be soothing and full of joy from beginning to end.

Building the harmonies:

The composition relies heavily on harmonies. The beat is a very simple one, using kicks and snares; the actual essence is in the melody of the piece. First, I played four basic chords on the superpiano: G major, E minor, C major and D major. Then I would sing different notes over these chords, go to an online pitch detector website to detect the pitches of my own voice, and use other instruments from the Dirt samples to play those notes. Everything I built on top of the four base chords came about this way.
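
As a rough sketch of that starting point (written with the same chord syntax that appears in the code further down), the four base chords on superpiano look something like this:

-- a minimal sketch, not the final pattern: the four base chords, one per cycle
d1 $ slow 4 $ s "superpiano*4" # up "g'maj e'min c'maj d'maj" # room 0.9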

However, doing this wasn’t as easy as it sounds, because different instruments have different timbres: even when the notes sounded good while singing, they could sound very strange on another instrument. So everything I used in the piece was selected intentionally and after a lot of searching. I also wanted to add some sort of flute to the piece, so I asked a friend to play the flute, recorded it, and included it in the DirtSamples to get the flute in there (although it sounded super distorted for some reason within Tidal Cycles).

Playing with the Visuals:

The visuals didn’t need a lot of time to create. The story I had in mind was a star that gets gradually more excited as time goes by. So, I made a shape and made it go crazy over time, with changes in numbers, motion and colors; at the end, it multiplied to cover the whole screen. I feel the sense of joy was conveyed.

Video (with an annoyingly choppy recording) and Code:

Tidal Code:



chorus = do {
  d1 $ qtrigger $ seqP [
    (0, 8, slow 4 $ s "superpiano*8" # up "d4 e4 fs4 g4" # gain (1.2) # room 0.9 #krush 4 #lpf (range 2000 3000 saw) #sustain 2),
    (0, 16, slow 4 $ s "superpiano*16" # up "g'maj e'min c'maj d'maj" # room 5 # krush 9 # gain 0.8),
    -- (4, 36, slow 2 $ s "superpiano*8" # up "d6  [c6 b5] <[~ ~]  [a5 b5]>" # sustain 3 # gain 1.6),
    (0, 16, slow 1 $ s "[bd bd sd ~] [bd bd sd bd]" # gain 1.6),
    -- (12, 36, slow 2 $ s "yeah*16" # up "<[a6 b6] [a6 b6 c7 [b6 a6 g5 fs5]]>" #gain 5),
    (8, 20, slow 2 $ s "flute*4" # up "<[g5 [a5 ~] c5 ~] [g5 [a5 ~] c5 ~]>" # sustain 3 # gain 1.2 #krush (range 0.3 0.8 rand)),
    (0, 16, ccv "<[10 30] [10 30 60 [10 30 90 127]]>" # ccn "0" # s "midi"),
    (0, 16, ccv "20  [40 60] <[40 60]  [90  120]>" # ccn "3" # s "midi"),
    (0, 16, ccv "<[120 [60 ~] 40 ~] [20 [40 ~] 120 ~]>"# ccn "4" # s "midi"),
    (4, 20, slow 2 $ s "yeah*16" # up "<[a6 b6] [a6 b6 c7 [b6 a6 g5 fs5]]>" #gain 5)
  ];
  -- d2 silence;
  d3 silence;
}

chorus2 = do {
  d2 $ qtrigger $ seqP [
    -- (0, 4, slow 4 $ s "hh*16" # gain (range 0.8 1.2 saw) # speed (range 0.4 3 saw)),
    -- (0, 16, slow 4 $ s "superpiano*8" # up "d4 e4 fs4 g4" # gain (1.2) # room 0.9 #krush 4 #lpf 2000 #sustain 2),
    (0, 4, slow 4 $ s "superpiano*8" # up "d4 e4 fs4 g4" # gain (range 0.4 1.2 saw) # room 0.9 #krush 4 #lpf (range 2000 3000 saw) #sustain 2),
    (4, 16, slow 4 $ s "superpiano*16" # up "g'maj e'min c'maj d'maj" # room 5 # krush 9 # gain 0.8),
    (4, 16, slow 2 $ s "superpiano*8" # up "d6  [c6 b5] <[~ ~]  [a5 b5]>" # sustain 3 # gain 1.6),
    (4, 16, slow 1 $ s "[bd bd sd ~] [bd bd sd bd]" # gain 1.6),
    (4, 16, slow 2 $ s "yeah*16" # up "<[a6 b6] [a6 b6 c7 [b6 a6 g5 fs5]]>" #gain 5),
    -- ( slow 2 $ s "flute*4" # up "<[g5 [a5 ~] c5 ~] [g5 [a5 ~] c5 ~]>" # sustain 3 # gain (range 1.2 0.4 saw) #krush 0.3),
    (4, 20, slow 2 $ s "flute*4" # up "<[g5 [a5 ~] c5 ~] [g5 [a5 ~] c5 ~]>" # sustain 3 # gain 1.2 #krush 0.3),
    (4, 16, ccv "<[10 30] [10 30 60 [10 30 90 127]]>" # ccn "0" # s "midi"),
    (4, 16, ccv "20  [40 60] <[40 60]  [90  120]>" # ccn "3" # s "midi"),
    (4, 16, ccv "<[120 [60 ~] 40 ~] [20 [40 ~] 120 ~]>"# ccn "4" # s "midi")
  ];
  d3 silence;
}

-- d3 $ slow 2 $ s "flute*4" # up "<[g5 [a5 ~] c5 ~] [g5 [a5 ~] e6 ~]>" # sustain 3 # gain 1.6

verse = do {
  d1 $ qtrigger $ seqP [
    -- (0, 20, s "[bd bd sd ~] [bd bd sd bd]" #gain 1.4),
    (4, 20, slow 4 $ s "gtr*16" # up "g'maj e'min c'maj d'maj" # gain 1.1),
    -- (4, 20, slow 4 $ s "gtr:2*16" # up "g'maj e'min c'maj d'maj"),
    -- (8, 20, s "hh*2" # gain 1.2),
    (12, 20, slow 4 $ s "superpiano*8" # up "d4 e4 fs4 g4" # gain 1 # room 0.9 #krush 4 #lpf 2000 #sustain 2),
    (0, 4, slow 2 $ s "flute*4" # up "<[g5 [a5 ~] c5 ~] [g5 [a5 ~] c5 ~] >" # sustain 3 # gain 1.3)
  ];
  d3 $ qtrigger $ seqP [
    (4, 20, slow 4 $ ccv "[30 40]  <50 20 60 80> 90 120" # ccn "0" # s "midi")
  ];
}

d1 $ slow 2 $ s "flute*4" # up "<[g5 [a5 ~] c5 ~] [g5 [a5 ~] c5 ~]>" # sustain 3 # gain 1.6

verse

chorus

chorus2

hush

Hydra Code:

shape(()=>cc[0] * 5, 0.001, 0.4)
  .color(0, 0.5, 0.2)                                  
  .repeat(3.0, 3.0)                                     //2
  .modulate(voronoi(0.3, 0.6, ()=>cc[3] * 5))
  .rotate(() => time/10)
  .scrollY(() => time/10)
  .scrollX(() => time/10)
  .modulateKaleid(osc(()=>cc[4] * 2))                //3
  .modulate(noise(()=>cc[4] * 2))                    //1
  .out()

hush()

Video and Inspiration

My Inspiration, my process, my parts, all came in waves…

First, I saw one of the previous live coding projects in the IM showcase recordings playing on the screens on the side in class. It was bubbles moving around; I thought they looked cool, and I started trying to recreate them. I am not sure what happened along the way, but the ideas changed. My starting point takes a little from the bubbles, I think; not sure how much, but there’s something there.

I added a shape(4), repeated it, and edited the scale and number of repeats until I got squares all over my screen:

shape(4, 0.9)
  .color(150)
  .colorama(()=>(cc[0]))
  .colorama(0.7)
  .repeatY(()=>(cc[1]*8))
  .repeatX(()=>(cc[1]*16))
  .modulate(osc(10,0.05,0.000001)
    .modulate(noise(10)),()=>Math.sin(time*0.01+0.1))
  .out(o0)

For the audio, I used a “superpiano” with a hi-hat; the piano triggered the colorama transitions and the hi-hat triggered the change in size, as seen here:

base = do
  d1 $ slow 1 $ n (scale "egyptian" "7 4 1 6 3 0 7 4") # sound "superpiano" # sustain 5  # room 0.4
  d2 $ slow 1 $ ccv "89 50 13 76 38 1 89 50" # ccn "0" # s "midi"
base2 = do
  d3 $ slow 2 $ s "hh*4" # speed 2 # gain 2
  d4 $ slow 2 $ ccv "15 30 60 90" # ccn "1" # s "midi"

Then I added this layer for an extra effect:

tabla = do
  d5 $ s "tabla:14 ~ ~ ~" # room "<0.4 0.2>" # gain 2
  d6 $ slow 1 $ ccv "<1 6> ~ ~ ~" # ccn "2" # s "midi"

osc(2,0.001,1)
  .modulate(voronoi(()=>(cc[2]*127),0.3,0.3), 10)
  .brightness(0.3)
  .out(o1)
src(o1).mult(o0).out(o2)

The first one is the voronoi shown without mult, and the second uses mult to combine the two together. The voronoi was also affected by the tabla sound.

Then I was talking to classmates, trying to explore sounds and make my sounds better (I struggle here, it’s a little scary). But the second I heard one of the sounds they shared, my brain went GLITCH, and so came my closing part.

When I did the glitch, I could not help but think of retro vibes, an old TV glitching to be exact. So I decided to get images from different Egyptian retro media and add them with a sound of their own. The images would pop and disappear with the sound, as shown below, with a TV pixelate effect when the images popped.

The code I used for the sound with the images.

tabla_trigger = do {
  d7 $ qtrigger $ filterWhen (>=0) $ slow 2 $ seqP [
    (0, 1, s "tabla:14*2"), -- 1 -- tv
    (1, 2, s "tabla:14*4"), -- 1 -- tv
    (2, 3, s "tabla:14*8"), -- 2 3
    (3, 4, s "tabla:14*16"), --3 4
    (4, 5, s "tabla:14*8"), -- 5 6 7
    (5, 6, s "tabla:14*4"), -- 8 9 10
    (6, 7, s "tabla:14*2"), --11 12 1
    (7, 8, s "tabla:14"),
    (8, 9, s "tabla:8")
 ] # room 0.95 # gain 2 #  up "-2"
}
tabla_imgs = do{
  d8 $ qtrigger $ filterWhen (>=0) $ slow 2 $ seqP [
    (0, 1, ccv "1 0"),
    (1, 2, ccv "1 0 1 0"),
    (2, 3, ccv "2 0 2 0 3 0 3 0"),
    (3, 4, ccv "4 0 4 0 5 0 5 0 6 0 6 0 7 0 7 0"),
    (4, 5, ccv "8 0 8 0 9 0 9 0"),
    (5, 6, ccv "10 0 10 0"), --8 9 10
    (6, 7, ccv "11 0"), --11 12 1
    (7, 8,  ccv "12"),
    (8, 9, ccv "13")
  ] # ccn "4" # s "midi"
}

do
  tabla_trigger
  tabla_imgs
  d5 silence
  d6 silence

An example of one of the images popping:

shape(4,1.1,0.001)
    .color(0.8, 0.8, 1)
    .layer(osc(100,0.01,0).mask(shape(4,0.3,0.001)))
    .modulate(noise(10))
    .pixelate(128,128)
    .layer(src(s4))
    .layer(src(s5))
    .out(o3)
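
For the s4 and s5 layers above, the image sources have to be prepared beforehand, which the code here doesn’t show. Note that a default Hydra setup only exposes s0–s3, so s4/s5 imply a synth initialized with extra sources. A hedged sketch of the loading step, with hypothetical file paths:

// assumed setup, not from the original: load the retro TV images
// into external sources (requires a Hydra instance with more than 4 sources)
s4.initImage('images/retro-tv-1.jpg') // hypothetical path
s5.initImage('images/retro-tv-2.jpg') // hypothetical path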

Finally, as the popping images finish, the glitch part comes in. As I end the images, I change the colorama input to 1, as shown on the right; on the left is an example of the glitches used in this part.

I increased the glitching for this part as the sound progressed. Here is an example of the glitch code:

src(o0)
    .blend(
      src(o0).scale(.999)
        .modulatePixelate(noise(1,0.01).pixelate(16,16), 1024),
      1)
    .out(o2)

and the sound I used for the glitch:

glitch = do {
  d7 $ qtrigger $ filterWhen (>=0) $ slow 2 $ seqP [
    (0, 1, s "feel:6"),
    (1, 2, s "feel:6*2"),
    (2, 4, s "feel:6*4"),
    (4, 6, s "feel:6*8"),
    (6, 7, s "feel:4")
  ] # room 0.95 # speed "4" # gain 1.3 # squiz 1.1 # up "-2"
}

For the glitch, I wanted the sound and everything to disappear along with its end, like a glitch ending. Then I faded out the image above on the right, using a slower pace of the sounds I had at first, to end the old TV and just close everything.
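
The fade itself isn’t in the snippets above; a common Hydra idiom for it, and my assumption of how it could be done here, is to blend the output toward black through the feedback loop:

// assumed fade-out sketch, not from the original post:
// every frame, nudge the current output a little further toward black
src(o0).blend(solid(0,0,0), 0.02).out(o0)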

Performance recording

Audio code 🎧

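-- 130 bpm, with four beats (one bar) per cycle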
setcps (130/60/4)

-- chords
d1 $ qtrigger $ filterWhen (>=0) $ stack [
    struct ("[1 0] [0 1] [0 0] [1 0]")
    $ s "longer:1" >| begin 0 >| end "0.003"
    # note ("<f'min'4 df'maj'4 c'min'4 ef'maj'4>") # hpf 1000,
    ccv "[<90 40 30 90> 0] [0 <90 40 30 90>] [0 0] [<90 40 30 90> 0]" # ccn "0" # s "midi"
  ]

-- bass drum
d2 $ qtrigger $ filterWhen (>=0) $ stack [
    struct ("1 1 [1 0 0 1] [0 0 1 0]") $ s "house",
    ccv "60 60 [60 0 0 60] [0 0 60 0]" # ccn "1" # s "midi"
  ]

-- melody
d3 $ qtrigger $ filterWhen (>=0) $ stack [
  fast 2 $ struct ("1 1 1 <0 1>") $ rolledBy 0.7 $ note (arp "diverge" "<f'min'4 ef'maj'4>")
  # s "bubble:2" # gain 1.3 # hpf 1000,
  ccv "30 30 30 <0 30>" # ccn "2" # s "midi"
]

-- kick & hh
do
  d4 $ qtrigger $ filterWhen (>=0) $ struct ("[1 0 1 1 <0 1> [0 1] [0 1] 0]") $ s "bubble:5" -- bubble kick
  d5 $ qtrigger $ filterWhen (>=0) $ struct ("0 1 0 1") $ s "jchh:1" # gain 0.8 # room 0.1

-- shaker?
do
  d7 $ qtrigger $ filterWhen (>=0)
    $ seqP [
      (2,3, s "ukgfx:3@4")
    ]
  d8 $ qtrigger $ filterWhen (>=0) $ struct ("[1 1] [1 0] [0 1] 0") $ s "house:2" # lpf 2000

-- faster
do
  d1 $ qtrigger $ filterWhen (>=0) $ struct ("[1 0] [0 1] [0 0] [1 0]") $ s "longer:1" >| begin 0 >| end "0.003"
    # note ("<f'min'4 df'maj'4 c'min'4 ef'maj'4>")
  d5 $ qtrigger $ filterWhen (>=0)
    $ seqP [
      (0,4, struct ("[0 1] [0 1] [0 1] [0 1]") $ s "jchh:1" # gain "0.8" # room 0.1),
      (4,8, struct ("[0 1] [0 1] [0 1] [0 1]") $ s "jchh:1" # gain "0.8" # room 0.1)
    ]

-- little break
do
  d1 $ qtrigger $ filterWhen (>=0) $ struct ("1@4") $ s "longer:1" >| begin 0 >| end "0.003"
    # note ("<f'min'4 df'maj'4 c'min'4 ef'maj'4>")
  d2 $ silence
  d3 $ qtrigger $ filterWhen (>=0)
    $ seqP [
      (4,8, fast 2 $ struct ("1 1 1 <0 1>") $ rolledBy 0.7 $ note (arp "diverge" "<f'min'4 ef'maj'4>"))
    ] # s "bubble:2" # gain 1.4 # hpf 1000 -- louder bubble
  d5 $ silence
  d6 $ silence
  d8 $ silence

-- highlight
do
  d1 $ qtrigger $ filterWhen (>=0) $ stack [
      struct ("1@4") $ s "longer:1" >| begin 0 >| end "0.003"
      # note ("<f'min'4 df'maj'4 c'min'4 ef'maj'4>") # gain 1.07,
      ccv "[<90 40 30 90> 0] [0 <90 40 30 90>] [0 0] [<90 40 30 90> 0]" # ccn "0" # s "midi"
    ]
  d2 $ qtrigger $ filterWhen (>=0) $ stack [
      struct ("1 1 [1 0 0 1] [0 0 1 0]") $ s "house",
      ccv "60 60 [60 0 0 60] [0 0 60 0]" # ccn "1" # s "midi"
    ]
  d3 $ qtrigger $ filterWhen (>=0) $ fast 2 $ struct ("1 1 1 <0 1>") $ rolledBy 0.7 $ note (arp "diverge" "<f'min'4 ef'maj'4>")
    # s "bubble:2" # gain 1.3 # hpf 1000
  d4 $ qtrigger $ filterWhen (>=0) $ struct ("[1 0 1 1 <0 1> [0 1] [0 1] 0]") $ s "bubble:5" # hpf 1000
  d5 $ qtrigger $ filterWhen (>=0) $ struct ("[0 1] [0 1] [0 1] [0 1]") $ s "jchh:1" # gain 0.8 # room 0.1
  d6 $ qtrigger $ filterWhen (>=0) $ struct ("0 1 0 1") $ s "ukgclap:5" # gain 0.7 # room 0.1
  d7 $ silence
  d8 $ qtrigger $ filterWhen (>=0) $ struct ("[1 1] [1 0] [0 1] 0") $ s "house:2"

-- riser
d7 $ qtrigger $ filterWhen (>=0) $ stack [
    seqP [
      (0,1, s "jcfx:12@8" # gain 0.7 # room 0.2),
      (0,1, s "ukgriser:3@8" # lpf 2000 # gain 0.8 # room 0.2),
      (2,6, s "h:1@4" # n "<f'min'4 df'maj'4 c'min'4 ef'maj'4>" # crush 3 # cut 1 # hpf 1000 # gain 0.6 # room 0.4)
    ],
    ccv "<90 40 30 90>" # ccn "2" # s "midi"
  ]

-- only chords + vox
do
  d1 $ struct ("[1 0] [0 1] [0 0] [1 0]") $ s "longer:1" >| begin 0 >| end "0.003"
    # note ("<f'min'4 df'maj'4 c'min'4 ef'maj'4>")
  d2 $ silence
  d3 $ silence
  d4 $ silence
  d5 $ silence
  d6 $ silence
  d7 $ qtrigger $ filterWhen (>=0) $ s "h:1@4" # n "<f'min'4 df'maj'4 c'min'4 ef'maj'4>"
    # crush 3 # cut 1 # hpf 1000 # gain 0.6 # room 0.4
  d8 $ silence

-- ending..
do
  d1 $ silence
  d2 $ silence
  d3 $ silence
  d4 $ silence
  d5 $ silence
  d6 $ silence
  d7 $ qtrigger $ filterWhen (>=0) $ struct ("[1 0] [0 1] 0 0") $ s "longer:1" >| begin 0 >| end "0.003"
    # note ("<f'min'4 df'maj'4 c'min'4 ef'maj'4>" + "-3")
  d8 $ silence
  d9 $ silence

d7 $ struct ("[1 0] [0 1] 0 0") $ s "longer:1" >| begin 0 >| end "0.003"
    # note ("<f'min'4 df'maj'4 c'min'4 ef'maj'4>" + "-3") # lpf 2000 -- 2000 -> 1000

Visual code 🫧

hush()

bpm = 65
// -- load everything here --
// load video 1 (hand)
s1.initVideo('https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExOWRlamczNDZkNWwwNjl3cXMzN3FjYnR5bWlmbTNlbWZmaHVoOTJ2MiZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/3oKIPw0gVImPZISxlS/giphy.mp4')
src(s1).out(o1) // out at o1
// load video 2 (background)
s0.initVideo('https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExcGtoMWl5ZzN1czdxMW40N3kyZGhjOTJmczB4bjQ1YWg3anFsYnUzeCZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/xUOwGedYaNseUaJRa8/giphy.mp4')
// load noises
voronoi(10,1,5)
  .brightness(()=>Math.random()*0.15)
  .modulatePixelate(noise(() => (cc[2]*25),0.5), 100)
  .out(o2)
// --------------------------

// start
src(s0)
  .modulate(noise(() => (cc[0]*0.3)))
  // .blend(noise(() => (cc[2]*0.5)), 0.1).brightness(0.2) // melody
  // .blend(o2, 0.1) // shaker
  .out(o0)

// little break
speed = 0.01
src(s0).brightness(0.1).out(o0)

// highlight
speed = 1
src(s0)
  .modulate(noise(() => (cc[0]*0.3)))
  .blend(noise(() => (cc[2]*0.5)), 0.1).brightness(0.2)
  .blend(o2, 0.1)
  .add(src(s1), 0.2)
  .out(o0)

// chords + vox
speed = 1
src(s0)
  .add(src(s1), 0.4)
  // .brightness(0.2) // ending
  .out(o0)

For the composition project, I wanted to create pop-like, melodic music with nostalgic visuals. I first started experimenting with different rhythms for the beat and created a main beat using the ‘bubble’ sound sample from the DirtSamples library. I instantly liked how it sounded, so I decided to go with a lightweight melody that would pair well with the bubble kick drum. The hardest part for me was coming up with the chord progression. With the professor’s help, I was able to devise the main chords that I used as an intro. I couldn’t find a synth that fit the rest of the beat, so I ended up chopping the chord sound out of another song (OMG by NewJeans) and then changing the notes to apply my custom chords. Once I had all the elements, I mostly focused on the progression of the sounds. I like music with a nice intro, so I spent a lot of time refining the initial build-up and making it progress consistently, so listeners can really feel the build-up.
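
The chop mentioned above is visible in the chords patterns earlier: begin/end keep only the first few milliseconds of the sampled chord, and note repitches that slice. In isolation, the idiom looks like this (a restatement of the lines above, not new material):

-- keep a tiny slice of the sampled chord (begin/end), then repitch it
d1 $ s "longer:1" >| begin 0 >| end "0.003" # note "f'min'4"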

https://drive.google.com/file/d/1adPCNibbfjbH3312wOzAxCuNja6UMrZy/view?usp=sharing (in case video is not playing)

Sound code:

setcps 0.8

d1 $ slow 4 $ arp "up down diverge converge" $ note "c'maj'4 d'min'4 a4'min'4 g4'maj'4" # s "supervibe"
d2 $ ccv "<127 0 127 0>" # ccn "0" # s "midi"

-- drums
d3 $ "bd sd " # room 0.2
d4 $ ccv (segment 128 (range 127 0 saw)) # ccn "1" # s "midi"

--d2 $ ccv (stitch "" 127 0) # ccn 0 # s "midi"

-- bass line
d5 $ slow 4 $ note "c34 d44 a38 g34" # s "superhammond:5"
# legato 0.5

d6 $ ccv "<0 20 64 127>" # ccn "2" # s "midi"

xfade 4 $ slow 4 $ note "c'maj d'min a'min g'maj" # s "superpiano" -- "superchip"
# legato 1
# djf 0.3
# gain 0.85

d7 $ ccv "<127 0 127 0>" # ccn "3" # s "midi"

--swoooooop
once $ s "auto:3"

-- variation of arp chords
d1 $ jux rev $ slow 4 $ arp "up down diverge converge" $ note "c'maj'42 d'min'42 a4'min'42 g4'maj'42" # s "supervibe"
d2 $ ccv (segment 128 (slow 4 (range 127 0 saw))) # ccn "2" # s "midi"

do
  solo 5
  solo 4

d8 $ slow 4 $ arp "converge" $ n "c'maj'42 d'min'42 a4'min'42 g4'maj'42" # s "superpiano" -- "superchip"
# voice 0.7 # gain 0.6
# amp 0.4
# room 0.7 # sz 0.9
# legato 1.1
# djf 0.8
# gain 0.7

d9 $ ccv (segment 128 (range 127 0 saw)) # ccn "4" # s "midi"

do
  solo 10
  d10 $ s "bd*8"

do
  d10 $ s "bd*2"
  unsolo 5
  unsolo 4
  unsolo 10

-- vocals
d12 $ slow 4 $ note "[~ ~ c ~] [~ ~ d ~] e g4" # n "1 2 3 44" # s "numbers"
# amp "1 0.8 2.2 1.3*4"
# legato 1.5
# gain 1
# room 0.3
# djf 0.8

d14 $ ccv "<80 100 120 100>" # ccn "10" # s "midi"

d11 $ degradeBy 0.4 $ jux rev $ note "a b [c f] [e f] a [g d] e t" # s "superpiano"
# room 0.3
# gain 0.95
# djf 0.3

d8 $ ccv "<127 60 127 60>" # ccn "11" # s "midi"

d13 $ degradeBy 0.8 $ jux rev $ every 2 (rev) $ n "0 1 2 3 4 5 6 7 8" # s "glitch"
# gain 0.75
# legato 1
# speed (slow 4 $ range 0.5 1 saw)

d15 $ ccv "<127 0 64 127 0 64>" # ccn "12" # s "midi"

do
  d13 silence
  d11 silence
  all $ (# djf (slow 4 $ range 0.2 0.8 sine))

all $ (# legato 0.05)

d6 $ ccv (segment 128 (range 127 0 sine)) # ccn "5" # s "midi"

do
  d12 silence
  all $ (# speed (slow 4 $ range 0.5 2 sine))

all $ (# cps (slow 4 $ range 0.5 (fast 3 $ range 0.3 2 sine) saw))

all $ fast 1

hush

Visual Code:

voronoi(5,-0.1,5)
  .add(osc(1,0,1))
  .kaleid(21)
  .scale(1,1,2)
  .colorama(()=>cc[5])
  .out(o1)

src(o1)
  .mult(src(s0).modulateRotate(o1,100), -0.5)
  .out(o0)

What I had in mind was to create a “floral” pattern with vibrant colors in the visuals to go along with my sound composition. The sound uses various synthesizers like supervibe, superhammond, and superpiano. The arp function generates arpeggios, while note specifies pitches; slow and jux rev modify the playback speed and direction, adding variation and texture. once, solo, and unsolo are used for structural changes, introducing, highlighting, or removing elements temporarily. The drums are built from samples (bd, sd), the bass line from the superhammond synth, and the ccv/ccn patterns send MIDI to drive the visuals. Throughout the composition, effects such as legato, gain, room, djf (filter), and amp shape the sound’s envelope, volume, reverb, and filter cutoff. The all function applies effects (djf, legato, speed, cps) globally, affecting all running patterns to create cohesive changes in the texture, tempo, and timbre of the composition.

The use of voronoi, osc, kaleid, and scale functions in combination is pivotal in generating visuals that resemble changing flower patterns. voronoi creates cell-like structures that can mimic the segments of a flower, osc adds movement and texture, kaleid with a high value (21) generates symmetrical, kaleidoscopic patterns resembling the petals of flowers, and scale adjusts the size, allowing the visuals to expand or contract in response to the music. colorama is used to cycle through colors dynamically, which is linked to the tonal shifts in the music.

I approached this by first composing the music in TidalCycles, then creating a visual pattern in Hydra that I liked, and binding them together. The challenge was keeping the composition interesting while using one kind of visual. I tried a lot of variants, but jumping around between them didn’t quite fit while playing a consistent musical composition, so I stuck to one. One improvement would be to have the visual composition unfold gradually as well, starting from something simple and then blooming into the big floral patterns, as sketched below; I could also have been more consistent with the color palette.
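
One hedged way to sketch that gradual bloom (my own variation on the pattern above, not code from the performance) is to ramp the kaleid symmetry up over time:

// assumed sketch: grow the kaleid symmetry from 1 toward 21 over ~40 seconds,
// so the floral pattern "blooms" out of the bare voronoi
voronoi(5,-0.1,5)
  .add(osc(1,0,1))
  .kaleid(()=>Math.min(21, 1 + time*0.5))
  .scale(1,1,2)
  .colorama(()=>cc[5])
  .out(o0)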