Hydra code:

let p5 = new P5();
let maxCount = 5000; // max count of the circles
let currentCount = 1;
let x = [];
let y = [];
let r = [];
s0.init({ src: p5.canvas });
src(s0).out();
p5.hide();
p5.strokeWeight(0.5);
x[0] = p5.width / 2;
y[0] = p5.height / 2;
r[0] = 10;
p5.draw = () => {
  p5.clear();
  let newR = p5.random(1, 7);
  let newX = p5.random(newR, p5.width - newR);
  let newY = p5.random(newR, p5.height - newR);
  let closestDist = Number.MAX_VALUE;
  let closestIndex = 0;
  for (let i = 0; i < currentCount; i++) {
    let newDist = p5.dist(newX, newY, x[i], y[i]);
    if (newDist < closestDist) {
      closestDist = newDist;
      closestIndex = i;
    }
  }
  let angle = p5.atan2(newY - y[closestIndex], newX - x[closestIndex]);
  x[currentCount] = x[closestIndex] + p5.cos(angle) * (r[closestIndex] + newR);
  y[currentCount] = y[closestIndex] + p5.sin(angle) * (r[closestIndex] + newR);
  r[currentCount] = newR;
  currentCount++;
  for (let i = 0; i < currentCount; i++) {
    p5.fill(255, 255, 0);
    p5.ellipse(x[i], y[i], r[i] * 2, r[i] * 2);
  }
  if (currentCount >= maxCount) p5.noLoop();
};
render(o0);
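The growth rule in the draw loop above can be isolated as a small, p5-free sketch (the function name `placeTangent` is mine, for illustration): pick a random candidate point, find the nearest existing circle, and attach the new circle tangent to it.

```javascript
// Standalone version of the placement rule: a new circle is attached
// tangent to the existing circle whose center is nearest the candidate point.
function placeTangent(circles, candX, candY, newR) {
  // find the existing circle closest to the candidate point
  let closest = circles[0];
  let closestDist = Infinity;
  for (const c of circles) {
    const d = Math.hypot(candX - c.x, candY - c.y);
    if (d < closestDist) {
      closestDist = d;
      closest = c;
    }
  }
  // slide the new circle along the line from the closest center so the
  // two circles touch exactly: center distance = r_closest + r_new
  const angle = Math.atan2(candY - closest.y, candX - closest.x);
  return {
    x: closest.x + Math.cos(angle) * (closest.r + newR),
    y: closest.y + Math.sin(angle) * (closest.r + newR),
    r: newR,
  };
}

// seed circle "in the center", as in the p5 sketch
const circles = [{ x: 200, y: 200, r: 10 }];
const added = placeTangent(circles, 260, 200, 5);
circles.push(added);
// the new circle sits at (215, 200): 15 = 10 + 5 away from the seed
```

Because every circle is placed tangent to its nearest neighbor, the cluster grows outward from the seed without overlaps, which is what produces the coral-like spread on the canvas.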

src(s0).mult(osc(10,1,1)).out()

src(s0).modulate(voronoi(()=>(cc[1])*15,.9)).mult(osc(9,1.1,2)).out()

src(s0).modulate(voronoi(()=>(cc[1])*15,1)).mult(osc(9,1.1,2)).scale(1.5,1,1).out()

src(s0).modulate(voronoi(()=>(cc[1])*15,1)).mult(osc(9,10,6)).scale(1.5,1,1).saturate(({time}) => Math.sin(time) * 4).out()

TidalCycles code:

hush

d1 $ someCycles(degradeBy 0.1) $ jux rev $ struct "t(4,8,3)" $ sometimes (# octave 7) $ n (scale "major" (sometimes rev $ "[0 2 3 4 7]")) # s "superpiano" # speed "[1,2]" # room 0.2
d6 $ ccv "0 10 20 50 80 100 127" # ccn "1" # s "midi"


d2 $ s "bassdm:14*4" # gain 2.5 # room 0.09
d5 $ ccv "0 40 80" # ccn "1" # s "midi"


d4 $ s "[reverbkick]" # room 0.5
d5 $ ccv "0 20 40 80" # ccn "1" # s "midi"


d3 $ sometimes (# velocity 0.6) $ iter 4 $ struct "<t(4,8) t(4,8,1)>" $ s "cp"
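The `t(4,8)` and `t(4,8,3)` patterns in the `struct` calls are Euclidean rhythms: k onsets spread as evenly as possible across n steps, optionally rotated. As a sketch (not TidalCycles' actual implementation), the distribution can be generated in JavaScript with a modular-arithmetic identity; the `rotation` handling here is an approximation of Tidal's third argument.

```javascript
// Euclidean rhythm sketch: step j is an onset when (j * k) mod n < k,
// which spreads k onsets as evenly as possible over n steps.
function euclid(k, n, rotation = 0) {
  const steps = [];
  for (let i = 0; i < n; i++) {
    const j = (i + rotation) % n; // rotate the pattern (approximate)
    steps.push((j * k) % n < k);
  }
  return steps;
}

euclid(3, 8).map(s => (s ? 'x' : '.')).join(''); // "x..x..x."
euclid(4, 8).map(s => (s ? 'x' : '.')).join(''); // "x.x.x.x."
```

So `t(4,8)` gives four evenly spaced hits over eight steps, and the third argument shifts where in the cycle those hits fall.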

This is a version I did after class, with the right progression of the visuals:

This version is the spontaneous one; I played the lines one by one without a specific order, so the overall music sounds a little unstructured compared to the first video I posted.

For this version, I put everything into cycles so that everything would be timed rather than played pattern by pattern, which is why it is a bit faster.

Inspiration:

My inspiration came a little randomly. A few days before the composition progress update, I stumbled upon the song “How To Love” by the electronic band Cash Cash, and the song, especially the synth riff, was stuck in my head for a while. I thought to myself that the riff sounded really cool and nostalgic, so I decided to try to build a composition in TidalCycles around it. Figuring out the code for the rhythm of the riff in TidalCycles was challenging (it is easy to hum but hard to construct in code), so I had to ask Professor Aaron on Tuesday.

The time signature is 4/4, so putting all the elements together was not too challenging, and I enjoyed the process of writing the parts. The mixing, arranging, and adding of filters and effects took me a lot of time; I had to make sure that everything fit together and nothing got left behind.

Hydra code — I experimented mainly with the concept of painting the canvas, from line to color:

solid().out()

s0.initImage("https://upload.wikimedia.org/wikipedia/commons/8/88/Hieronymus_Bosch_-_The_Garden_of_Earthly_Delights_-_Garden_of_Earthly_Delights_%28Ecclesia%27s_Paradise%29.jpg")
src(s0).out()

gradient(1)
  .diff(src(s0), 1)
  .modulate(noise(() => (cc[1]) * 4, .1))
  .kaleid(2).kaleid(3)
  .colorama(0.4)
  .diff(noise(10, .10).thresh(.12, .12).rotate(() => (cc[2]) * 14, .1))
  .thresh(.2)
  .out()

TidalCycles code — structured version:

hush
--40 80
--0 40 80 120
--0 20 40 60 80 100 127  

  d2 $ ccv "0 20 40 60 80 100 120 127" # ccn "1" # s "midi"
  d1 $ qtrigger $ filterWhen (>=0) $ seqP [
  --verse
  (0,16, s "arpy" >| note ((scale "major" "<[2@3 2@3 2@3 2@2 2@1 4@4] [1@3 1@3 1@3 1@2 1@2 4@4] [0@3 0@3 0@3 0@2 0@2 4@4] [-2@3 -2@3 -2@3 -2@2 -2@2 4@4]>")) # room 0.8 #lpf 1500),
  (4,23, s "~ hand:15 ~ hand:15" # gain 2 #room 0.1 #lpf 5000),
  (8,24, slow 4 $ s "superpiano" <| note "c'maj g'maj a'min f'sus2" #room 0.5),
  (16,24, s "superpiano" <| note "<[f'maj*4] [a'min*4] [g'maj*4] [g'maj*4]>" #room 0.1 #gain 1 #lpf 1500),
  (16,24, slow 4 $ s"superpiano" <|note "c6 c6 d6 g5" #room 0.1 # gain 1.5),
  (12,24, s "bassdm:14*4" #gain 1.5),
  (12,24, slow 2 $ sound "bassdm*4" #gain 1.5),

  --build-up
  (24,32, s "superpiano" <| note "<[f'maj*4] [a'min*4] [g'maj*4] [c'maj*4]>" #lpf 150 #gain 3),

  --riser
  (24,29, s "bassdm:14*4" # room 0.5 #gain 1.5),
  (29,30, s "bassdm:14*8" # room 0.5 #gain 1.5),
  (30,31, s "bassdm:14*16" # room 0.5 #gain 1.5),
  (31,32, s "bassdm:14*32" # room 0.5 #gain 1.5),

  --drop
  (32,48, s "arpy" >| note ((scale "major" "<[2@3 2@3 2@3 2@2 2@2 4@4] [1@3 1@3 1@2 1@2 1@2 4@4] [0@3 0@3 0@3 0@2 0@2 4@4] [-1@3 -1@3 -1@3 -1@2 -1@2 4@4] [-2@3 -2@3 -2@3 -2@2 -2@2 4@4] [-1@3 -1@3 -1@3 -1@2 -1@2 4@4] [0 1 2 4 5] [4 5 7 8 7 8]>")) # room 0.8 #lpf 2000),
  (32,48, sound "clubkick:5*4" # room 0.5),
  (40,48, struct "<t(4,8) t(4,8)>" $ s "[arpy]" # note "<[c'maj*4] [g'maj*4] [a'min*4] [f'sus2*4]>" # gain 1.2 # orbit 5),
  (40,48, struct "<t(4,8) t(4,8,3)>" $ s "[reverbkick]" # note "f'maj a'min g'maj g'maj" #krush 0.5 # orbit 6 #speed 0.5)
]

TidalCycles code — unstructured version:

hush

  d10 $ ccv "40 80" # ccn "1" # s "midi"

do
  d1 $ s "arpy" >| note ((scale "major" "<[2@3 2@3 2@3 2@2 2@1 4@4] [1@3 1@3 1@3 1@2 1@2 4@4] [0@3 0@3 0@3 0@2 0@2 4@4] [-2@3 -2@3 -2@3 -2@2 -2@2 4@4]>")) # room 0.8 #lpf 1500

  d2 $ s "~ hand:15 ~ hand:15" # gain 2 #room 0.1 #lpf 5000

  d3 $ slow 4 $ s "superpiano" <| note "c'maj g'maj a'min f'sus2" #room 0.5

  d4 $ s "bassdm:14*4" #gain 1.5

  d5 $ slow 2 $ sound "bassdm*4" #gain 1.5

  d6 $ s "superpiano" <| note "<[f'maj*4] [a'min*4] [g'maj*4] [g'maj*4]>" #room 0.1 #gain 1 #lpf 1500

  d7 $ slow 4 $ s"superpiano" <|note "c6 c6 d6 g5" #room 0.1 # gain 1.5

d1 silence
d2 silence
d3 silence
d4 silence
d5 silence
d6 silence
d7 silence

    d11 $ ccv "0 40 80 120" # ccn "1" # s "midi"

do
  d1 $ s "superpiano" <| note "<[f'maj*4] [a'min*4] [g'maj*4] [c'maj*4]>" #lpf 150 #gain 3

  d2 $ s "bassdm:14*4" # room 0.5 #gain 1.5

  --riser--
  d3 $ qtrigger $ filterWhen (>=0) $ seqP [
  (0,1, s "bassdm:14*4" # room 0.5 #gain 1.5),
  (1,2, s "bassdm:14*8" # room 0.5 #gain 1.5),
  (2,3, s "bassdm:14*16" # room 0.5 #gain 1.5),
  (3,4, s "bassdm:14*32" # room 0.5 #gain 1.5),

  --drop--
  (4,240, s "arpy" >| note ((scale "major" "<[2@3 2@3 2@3 2@2 2@2 4@4] [1@3 1@3 1@2 1@2 1@2 4@4] [0@3 0@3 0@3 0@2 0@2 4@4] [-1@3 -1@3 -1@3 -1@2 -1@2 4@4] [-2@3 -2@3 -2@3 -2@2 -2@2 4@4] [-1@3 -1@3 -1@3 -1@2 -1@2 4@4] [0 1 2 4 5] [4 5 7 8 7 8]>")) # room 0.8 #lpf 2000),
  (4,240, sound "clubkick:5*4" # room 0.5)]


    d12 $ ccv "0 20 40 60 80 100 127" # ccn "1" # s "midi"

  d4 $ struct "<t(4,8) t(4,8)>" $ s "[arpy]" # note "<[c'maj*4] [g'maj*4] [a'min*4] [f'sus2*4]>" # gain 1.2 # orbit 5

  d5 $ struct "<t(4,8) t(4,8,3)>" $ s "[reverbkick]" # note "f'maj a'min g'maj g'maj" #krush 0.5 # orbit 6 #speed 0.5

d1 silence
d2 silence
d3 silence
d4 silence
d5 silence

Kurokawa’s method of composition is innovative in how it combines the science of synesthesia with the deconstruction of nature. It is interesting that he approaches these two subjects from a mechanical standpoint in addition to an aesthetic one. Synesthesia is about how the body and brain perceive the senses, and nature is about its impact on the body. He also brings in a third element, time, to give more prominence to synesthesia and to nature’s overall impact. The effect is visible in his digital and computational landscaping work, which shapes graphics, universal images, lines, and more. Kurokawa seems very methodical in his approach to creating his compositions: he helps nature evolve, sculpts time, and finds pleasure in acknowledging transience and imperfection.

What is Melrose?

Melrose is a MIDI-producing environment for creating (live) music, built around a Go-based music programming language. It was created by the Dutch software artist Ernest Micklei in 2020. Melrōse offers a more human-centric approach to the algorithmic composition of music: it defines a new programming language and includes a tool that plays music by evaluating expressions in that language. With Melrōse, exploring patterns in music is done by creating musical objects that can be played and changed at the same time.

Melrose offers a variety of features:

– Musical objects: Create a Note, Sequence, Chord, or Progression to start your musical journey.

– Operations: Apply operations such as Repeat, Resequence, Replace, Pitch, Fraction, and Mapping to musical objects to create new musical combinations.

– Playing: Play any musical object directly, or in a Loop or in a Track, alone or together with others.

– Interactive: While playing you can change any musical object by re-evaluating object definitions.

– MIDI: Send MIDI to any sound-producing system, such as a DAW or your hardware synthesizer. React to or record MIDI input to create new musical objects.

How is Melrose Used in the Real World?

Melrose allows composers to generate and manipulate MIDI sequences using code. It can send MIDI messages to external synths or DAWs (Digital Audio Workstations), so musicians can write scripts that trigger MIDI sequences on the fly. Coders and musicians can use Melrose to explore harmonies, scales, and rhythmic structures programmatically, and developers can use it to generate adaptive soundtracks for games.

Impact on Music Community:

Melrose empowers coders and musicians and bridges the gap between coding and music production. It also enables complex, non-repetitive compositions: unlike traditional DAWs, it allows music to be generated with rules and randomness. Music coders gain a lightweight, code-driven alternative to DAWs that is neither expensive nor complicated to master. In education especially, schools and teachers can use the tool to teach interactive and generative music composition.

Sample Code:

it = iterator(6,1,4,-1) // delta semitones
pm = pitchmap('1:0,1:7,1:12,1:15,1:19,1:24',note('c')) // six notes
sm = fraction(8,resequence('1 4 2 4 3 6 5 4',pm)) // note sequence of eights
lp_sq = loop(pitch(it,sm),next(it)) // loop the sequence and change pitch on every iteration


p = progression('d', 'ii V I')

bpm(120)
s5 = sequence('16.e 16.e 16.e 8.c 8.e 8.g 8.g3 8.c 8.g3 16.e3 16.a3 16.b3 16.a#3 16.a3 8.g3 8.e 8.g4 16.a4 16.f4 16.g 16.e 16.c 16.d 16.b 16.c')

bpm(100)  
s7 = sequence('16.f#3+ 16.f#3+ 16.d3 8.b2 8.b2 8.e3 8.e3 16.e3 16.g#3+ 16.g#3+ 16.a3 16.b3 16.a3 16.a3 16.a3 8.e3 8.d3 8.f#3 8.f#3 16.f#3 16.e3 16.e3 16.f#3 16.e3')
s8 = loop(s7)

f1 = sequence('C D E C')
f2 = sequence('E F 2G')
f3 = sequence('8G 8A 8G 8F E C')
f4 = sequence('2C 2G3 2C 2=') 

v1 = join(f1,f1,f2,f2,f3,f3,f4,f4)

kick=note('c2') // 1
clap=note('d#2') // 2
openhi=note('a#2') // 3
closehi=note('g#2') // 4
snare=note('e2') // 5
R4=note('=') // 6
rim=note('f3') // 7

all = join(kick, clap, openhi, closehi, snare,R4, rim)

bpm(120)
drum1 = duration(16, join("1 6 3 6 (1 2 5) 6 3 6 1 4 3 6 (1 2 5) 6 3 6", all))
lp_drum1 = loop(drum1)

// Define individual drum sounds with specific durations
kick = note('16c2')  // Kick drum
snare = note('16d2') // Snare drum
hihat = note('16e2') // Hi-hat

// Create patterns for each drum part using notemap
kickPattern = notemap('!...!...!...!', kick)  // kick on steps 1, 5, 9, 13
snarePattern = notemap('.!!..!!..!!..', snare) // snare pairs on steps 2-3, 6-7, 10-11
hihatPattern = notemap('!!!!!!!!!!!!', hihat) // hi-hat on every step

// Merge all patterns into a single drum track
drumTrack = merge(kickPattern, snarePattern, hihatPattern)

// Play the drum track in a loop
loop(drumTrack)

Here’s a video of the author demonstrating the language:

Recorded Demo:

Grooving signifies a “microscopic sensitivity to musical timing.” In this sense, performing a groove of any kind could equate to having a developed sense of musical timing. Grooving can no doubt be easily elicited by good music, which shows that good sounds switch on the audience’s subconscious feel for musical timing. The backbeat is “indigenous to the modern drum kit”; the backbeat itself is also regarded as “a popular remnant of […] ancient human musical behavior.” This suggests that from very early ages, humans have possessed the ability to sense and feel musical timing, which is itself a form of the human body making music, completing the circle of the musical experience. This sense of backbeat and musical timing is meant to be experienced in a collective setting; good music is meant to be shared within a community.

Oftentimes, people are curious to know what the code looks like, what goes into the process of coding, and everything in between. The transparency that live coding provides gives us a new framework for thinking about coding itself. The act becomes less of a gatekept activity that only professionals can understand and more of a public display of the logic programmers use to build code-based work. Live coding clearly conveys the sense of logic being carried out and of communication between live programmers. The art helps bridge the gap between art and programming, between viewers and computer engineers.