Apologies for the late submission; it slipped my mind to post one, though I had recorded my video prior to class.
All in all, I had no idea what to expect with the composition. I did not know what my personal stylistic choices were, so I struggled at the start to settle on a concept, and I simply began by crafting a funky, upbeat, solid rhythm. I took my time becoming familiar with the visuals and spent quite a bit of time experimenting with them, but not many of my results felt aligned with the direction the piece was heading.

Then I thought of bringing in a personal sound sample to spice things up, and I went with the first thing that came to mind: Pingu. I included the Noot Noot sample because I find Pingu to be the perfect embodiment of chaos while still being a playful character (and also one of my favourite characters to exist). I wanted the visuals to be in sync with the sound, and at the start I struggled, especially with finding the right sort of ccv values; through an iterative trial-and-error session I eventually found a neat balance. I had started with a more subtle approach, but it was quite hard to recognise, and I was worried that, given the time limit during the demos, I would not be able to execute it properly. So I went for bolder visuals with simpler beats. You noted in class that the sync between the visuals and the audio was not as evident, so I hope this video shows a more distinguishable link between them.
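A minimal sketch of this kind of cc-driven sync on the Hydra side (not the actual patch from the video; it assumes the usual Tidal-to-Hydra MIDI bridge where the incoming ccv values land in a cc array scaled to 0 to 1) might look like:

// sketch only: a bold shape whose rotation and size follow ccn "0" from Tidal
// the cc[] array is assumed to be filled by a MIDI listener (course setup, not shown here)
shape(4, 0.4, 0.01)
  .rotate(() => cc[0] * Math.PI * 2)   // spin with the value sent by the ccv pattern
  .scale(() => 0.6 + cc[0] * 0.8)      // grow on higher cc values
  .color(1.0, 0.6, 0.2)
  .out(o0)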
At 0:27 I introduce a new melody, which I represent with squiggly lines to indicate its playful nature. This is followed by even funkier, more playful beats such as casio and blip. Once I had found an interesting synchrony between casio and blip, I understood how I wanted to proceed, as it became easy to create something that reflects lightheartedness with a tinge of a spirited, lively approach. However, since I had Pingu in my vision, towards the end of the video (4:00) I began to truly mess with the visuals and create something quite disorderly in nature, despite it staying in sync with the sound.
I hope that you enjoyed!
Here is my code! (It differs a bit from the video, since it comes from the class demo.)
Tidal
--- FINAL CODE
hush
d1 $ s "{808bd:5(3,4) 808sd:2(2,6)} " # gain 2 # room 0.3
d1 silence
d2 $ struct "{t(3,4) t(2,6) t(2,4)}" $ ccv ((segment 128 (range 127 0 saw))) # ccn "0" # s "midi"
hush
d3 $ fast 2 $ s "pluck" <| n (run 4) # gain 1 # krush 2
d2 $ ccv "0 20 64 127" # ccn "0" # s "midi"
d4 $ s "glasstap" <| n (run 4) # gain 1.5
d5 $ slow 2 $ s "arpy" <| up "c d e f g a b c6" # gain 1.5
d2 $ ccv " 9 19 36 99 80 87 45 100" # ccn "0" # s "midi"
d6 $ fast 2 $ s "casio" <| n (run 4) # gain 2
d3 $ qtrigger $ filterWhen (>=0) $ seqP [
(0, 1, s "blip:1*4"),
(1,2, s "blip:1*8"),
(2,3, s "blip:1*12"),
(3,4, s "blip:1*16")
] # room 0.3
d4 silence
hush
nooty = once $ sound "nootnoot:Noot" # squiz 1 # up "-2" # room 1.2 # krush 2
nooty
-- PART 2
d5 $ s "blip" <| n (run 4)
# krush 3
# gain 1
d2 $ ccv "30 80 120 60" # ccn "0" # s "midi"
d6 silence
hush
d6 $ fast 2 $ s "control" <| n (run 2)
d7 $ fast 2 $ s "casio" <| n (run 4) # gain 0.9
d8 $ s "{arpy:5(3,4) 808sd(2,4)} " # gain 1
d2 $ struct "{t(3,4) t(2,4) t(2,4)}" $ ccv ((segment 128 (range 127 0 saw))) # ccn "0" # s "midi"
nootynooty = once $ sound "nootnoot:Noot" # legato 0.2 # squiz 1 # up "-2" # room 1.2 # krush 2
d6 silence
d10 $ qtrigger $ filterWhen (>=0) $ seqP [
(0, 1, s "control:1*4"),
(1,2, s "control:1*8"),
(2,3, s "control:1*12"),
(3,4, s "control:1*16")
] # room 0.3
nooty
hush
For this project, I wanted to place a strong focus on the drum patterns in my music. In the first section, I used a central rectangle and changed its form through twisting and scaling. These transformations were chosen to represent the shifts and spikes in the drum beats. By distorting the rectangle, I tried to show how the percussion moves and grows during the track. This was a big challenge, since it is hard to find values that exactly match the strengths and frequencies of the drums.
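Since the Hydra code is not included here, a rough sketch of the kind of rectangle distortion described above (assuming the ccv values from the drum patterns arrive in Hydra as a cc array scaled to 0 to 1; the actual values in the piece would differ) could look like:

// rough sketch: a central rectangle twisted and scaled by the drum-driven cc channel
shape(4, 0.3, 0.001)                       // sharp-edged rectangle in the centre
  .rotate(() => cc[0] * Math.PI)           // the "twist" follows spikes in the drum beats
  .scale(() => 0.8 + cc[0] * 0.5, 1.6, 1)  // stretch and grow as the percussion builds
  .out(o0)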
When the piece reaches its middle section, I remove all the drum parts after a brief lead-in and leave only an ambient synthesizer. I also take away the moving rectangles in the center to keep the visuals in step with the music. Afterwards, the bass brings some energy back into the piece, which led me to add glitchy patterns in Hydra together with crushed and distorted sounds.
In the final section, I used only a Moog synth. I wanted the sound to be stretched out and slow to fit the outro of the entire composition and to make it stand apart from the earlier chaotic bass lines. I also adjusted the pattern to be more organized, and it shifted in step with the Moog as it gradually slowed down.
I was inspired by the theory of the backrooms for this composition. Since I already wanted to go for something creepy/eerie, the video I found of the backrooms with the random visuals fit perfectly.
I wanted to start with calm and distant music and gradually introduce more distorted and abnormal sounds; as this happened, I simultaneously changed the visuals to become more and more intense, often modulating them with the sounds. The climax was meant to arrive towards the end of the composition, when the visuals became increasingly intense, red, and bright, and the music grew creepier, finishing with the numbers track.
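The Hydra patch itself is not reproduced here, but the idea of the visuals turning more intense and red alongside the music can be sketched roughly like this (again assuming Tidal's ccv messages reach Hydra as a cc array scaled to 0 to 1):

// rough sketch: noise that gets redder and harsher as the cc value rises
noise(3, 0.1)
  .color(() => 0.4 + cc[0], 0.2, 0.2)         // push the red channel up with the music
  .contrast(() => 1 + cc[0] * 2)              // harsher contrast towards the climax
  .modulate(osc(10, 0.1), () => cc[0] * 0.3)  // wobble the texture on louder moments
  .out(o0)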
Tidal Code:
d1 $ stack [
slow 4 $ s "pad:1.5" # gain 0.5,
s "bass*2" # room 0.3 # gain 1.2
]
d2 $ stack [
s "haw(4,8) blue(4,8)" # speed 0.5 ,
ccv "<64 0 127 0>" # ccn "0" # s "midi"
]
d2 $ stack [
slow 1 $ s "moog" >| note (arp "up" (scale "major" ("[0,2,4,6]") + "a5"))# room 0.4 # gain 0.7,
ccv "<0 64 127 64>" # ccn "0" # s "midi"
]
-- # room 0.4 # gain 0.5,
d2 $ stack [
slow 1 $ s "moog" >| note (arp "up" (scale "major" ("[0,2,4,6]") + "a5")) # room 0.4 # gain 0.7,
ccv 0 # ccn 1 # s "midi"
]
-- # krush 10
d1 $ fast 2 $ s "moog" >| note (arp "up" (scale "major" ("[0,2,4,6]") + "a5")) # room 0.4 # gain 1 # squiz 0.3
-- # squiz 0.1
-- note "a5"
-- add this d4 $ fast 2 $ s "stab*2 stab*2 stab*2 <stab*6 [stab*2]!3>" # room 0.7 # gain (range 1.2 1.4 rand)
d4 $ fast 2 $ s "stab*2 stab*2 stab*2 <stab*6 [stab*2]!3>" # room 0.7 # gain 1.3
d1 $ stack [
slow 2 $ s "bassfoo" >| note (arp "updown" (scale "major" ("[0,2,4,6]"+"<0 0 2 3>") + "c2")) # room 0.4 # delay 0.9,
ccv "<127 0 64 >" # ccn "0" # s "midi"
-- add squiz and have it distort image
-- hydra 2
]
d1 $ slow 2 $ ccv " 127 64" # ccn "0" # s "midi"
d3 $ stack [
slow 4 $ s "pad:1.5" # gain 0.1,
s "bass*2" # room 0.5 # gain 0.1
]
d2 $ stack [
s "haw(4,8) blue(4,8)" # speed 0.5 # gain 0.5,
ccv "<64 0 127 0>" # ccn "0" # s "midi"
]
d3 silence
d4 silence
d1 silence
-- d4 $ fast 2 $ s "stab*2 stab*2 stab*2 <stab*6 [stab*2]!3>" # room 0.7 # gain (range 1.2 1.4 rand)
d2 $ slow 2 $ s "[~ numbers:1] [numbers:2] [~ numbers:3] [numbers:4] [numbers:5] [numbers:6] [~ numbers:7] [numbers:8*2]" # squiz 1.5 # gain 1.2
d3 $ slow 2 $ s "[numbers:1*2] [numbers:2] [numbers:3] [numbers:4] [numbers:5] [numbers:6] [numbers:7] [numbers:8]" # note "c'maj e'min d'maj" # room 0.4 # squiz 2
hush
I drew inspiration from Hans Zimmer's work on Dune, with its industrial yet desert-like sounds and deep bass. The sounds for the evil side of the track were also inspired by the Sardaukar chant from the epilogue of each of the movies. The hero side draws on the aesthetics of soundtracks from games I played growing up.
The story being told is of a battle between evil and good, where good wins after the beat drops (hopefully on time..). I have attached a video to the composition.
Looking at the video one more time, I realise that the cursor is visible in the middle of the screen. This is evidence that the p5.js visualization was shown as a video, with no p5 instance created during the performance.
The background visual is made with p5.js, using Mind Monitor EEG readings from a Muse headband taken while I played chess, as time-series data for each type of brain wave (alpha, delta, theta), which correlate with different types of emotions. By the time the wave values were in the .csv they had lost a lot of information and context; the color of the particles is an aggregation of the wave types, and the size is dictated purely by the delta waves, which are associated with high stress. The ring in the middle pulses with the heart-rate data from the headband.
The following code was used to generate the p5.js video:
let brainData = [];
let headers = [];
let particles = [];
let currentIndex = 0;
let validData = [];
let heartBeatTimer = 0;
let lastHeartRate = 70; //default heart rate
class Particle {
  constructor(x, y, delta, theta, alpha) {
    this.pos = createVector(x, y);
    this.vel = p5.Vector.random2D().mult(map(delta, -2, 2, 0.5, 3));
    this.acc = createVector(0, 0);
    this.r = map(delta, -2, 2, 50, 255);
    this.g = map(theta, -2, 2, 50, 255);
    this.b = map(alpha, -2, 2, 50, 255);
    this.size = map(abs(delta), 0, 2, 2, 10);
    this.lifespan = 255;
  }
  update() {
    this.vel.add(this.acc);
    this.pos.add(this.vel);
    this.acc.mult(0);
    this.vel.add(p5.Vector.random2D().mult(0.1));
    this.lifespan -= 2;
  }
  applyForce(force) {
    this.acc.add(force);
  }
  display() {
    noStroke();
    fill(this.r, this.g, this.b, this.lifespan);
    ellipse(this.pos.x, this.pos.y, this.size, this.size);
  }
  isDead() {
    return this.lifespan < 0;
  }
}
function preload() {
  brainData = loadStrings("mindMonitor_2025-02-26--20-59-08.csv");
}
function setup() {
  createCanvas(windowWidth, windowHeight);
  colorMode(RGB, 255, 255, 255, 255);
  headers = brainData[0].split(",");
  // filtering the dataset for valid rows
  validData = brainData.slice(1).filter((row) => {
    let cells = row.split(",");
    return cells.some((cell) => !isNaN(parseFloat(cell)));
  });
  console.log("Total valid data rows:", validData.length);
}
function draw() {
  // trailing effect for particles
  background(0, 20);
  if (validData.length === 0) {
    console.error("No valid data found!");
    noLoop();
    return;
  }
  let currentLine = validData[currentIndex].split(",");
  let hrIndex = headers.indexOf("Heart_Rate");
  let heartRate = parseFloat(currentLine[hrIndex]);
  if (!isNaN(heartRate) && heartRate > 0) {
    lastHeartRate = heartRate;
  }
  let beatInterval = 60000 / lastHeartRate;
  heartBeatTimer += deltaTime;
  if (heartBeatTimer >= beatInterval) {
    heartBeatTimer = 0;
    let deltaIndex = headers.indexOf("Delta");
    let thetaIndex = headers.indexOf("Theta");
    let alphaIndex = headers.indexOf("Alpha");
    let delta = parseFloat(currentLine[deltaIndex]) || 0;
    let theta = parseFloat(currentLine[thetaIndex]) || 0;
    let alpha = parseFloat(currentLine[alphaIndex]) || 0;
    let particleCount = map(abs(delta), 0, 2, 5, 30);
    for (let i = 0; i < particleCount; i++) {
      let p = new Particle(
        width / 2 + random(-100, 100),
        height / 2 + random(-100, 100),
        delta,
        theta,
        alpha
      );
      particles.push(p);
    }
    currentIndex = (currentIndex + 1) % validData.length;
  }
  push();
  noFill();
  stroke(255, 0, 0, 150);
  strokeWeight(3);
  let pulseSize = map(heartBeatTimer, 0, beatInterval, 100, 50);
  ellipse(width / 2, height / 2, pulseSize, pulseSize);
  pop();
  for (let i = particles.length - 1; i >= 0; i--) {
    particles[i].update();
    particles[i].display();
    if (particles[i].isDead()) {
      particles.splice(i, 1);
    }
  }
}
function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
}
I encapsulated the Tidal code so that I would only have to execute a few lines during the performance, which didn't go the way I wanted :( . In a way, I feel that for live coding performance, having a few refactored functions alongside working with code blocks live gives a better setup for performing and debugging on the fly.
Below is the tidal code:
-----------------hehe----
setcps (135/60/4)
-- harkonnen type beat
evilPad = slow 8
$ n (cat [
"0 3 7 10",
"3 6 10 13",
"5 8 12 15",
"7 14 17 21"
])
# s "sax"
# room 0.8
# size 0.9
# gain (range 0.7 0.9 $ slow 4 $ sine)
--shardukar type beat
evilBass = slow 8
$ sometimesBy 0.4 (rev)
$ n "0 5 2 7"
# s "sax:2"
# orbit 1
# room 0.7
# gain 0.75
# lpf 800
# shape 0.3
--geidi prime type beat
evilAtmosphere = slow 16
$ sometimes (|+ n 12)
$ n "14 ~ 19 ~"
# s "sax"
# gain 0.8
# room 0.9
# size 0.95
# orbit 2
--shardukar chant type pattern chopped
evilPercussion = slow 4
$ striate "<8 4>"
$ n (segment 16 $ range 6 8 $ slow 8 $ perlin)
# s "speechless"
# legato 2
# gain 1.2
# krush 4
--shardukar chant type pattern 2 chopped
evilVoice = slow 4
$ density "<1 1 2 4>/8"
$ striate "<2 4 8 16>/4"
$ n (segment 32 $ range 0 6 $ slow 8 $ sine)
# s "speech"
# legato (segment 8 $ range 2 3 $ slow 16 $ sine)
# gain (segment 32 $ range 0.8 1.2 $ slow 4 $ sine)
# pan (range 0.3 0.7 $ rand)
# crush 8
hush
evilRhythm = stack [
s "~ arp ~ ~" # room 0.5 # krush 9 # gain 1.2,
fast 2 $ s "moog*2 moog*2 moog*2 moog*3 moog:1 moog*2" # room 0.7 # krush 5,
fast 4 $ s "wobble*8" # gain 0.8 # lpf 1200
]
d1 $ evilRhythm
hush
-- Build-up to drop
evilBuildUp = do {
d1 $ qtrigger $ filterWhen (>=0) $ seqP [
(0, 1, s "moog:1*4"), (1, 2, s "moog:1*8"),
(2, 3, s "moog:1*16"), (3, 4, s "moog:1*32")
] # room 0.3 # krush 9 # lpf (slow 4 (3000*saw + 200));
d2 $ qtrigger $ filterWhen (>=0) $ seqP [
(0, 1, s "bass1:7*4"), (1, 2, s "bass1:8*8"),
(2, 3, s "bass1:9*16"), (3, 4, s "bass1:9*32")
] # room 0.3 # lpf (slow 4 (1000*saw + 100)) # speed (slow 4 (range 1 4 saw)) # gain 1.3;
d3 $ qtrigger $ filterWhen (>=0) $ seqP [
(0, 4, evilVoice # gain (slow 4 (range 0.8 1.5 saw)))
];
d4 $ qtrigger $ filterWhen (>=0) $ seqP [
(3, 4, s "crash:2*16" # gain (slow 1 (range 0.7 1.3 saw)) # room 0.9)
]
}
d1 $ silence
d2 $ silence
d3 silence
d4 silence
theDrop = do {
d1 silence;
d2 silence;
d3 silence;
d4 silence
}
heroBassDrum = s "[808bd:3(3,8), 808bd:4(5,16)]" # gain 1.3 # room 0.4 # shape 0.4 #krush 9
heroSnare = s "~ sd:2 ~ sd:2" # room 0.6 # gain 1.1 # squiz 0.8 #krush 6
heroHiHats = fast 2 $ s "hh*4" # gain (range 0.8 1.0 $ rand) # pan (range 0.2 0.8 $ slow 3 $ sine) #krush 5
heroTom = s "~ ~ ~ [~ lt:1 lt:2 lt:3*2]" # gain 1.0 # room 0.5 # speed 0.9 #krush 9
heroCymbal = s "[~ ~ ~ crash:4]" # gain 0.95 # room 0.7 # size 0.8
heroicFill = s "[feel:1*4, [~ ~ ~ feel:6*4], [~ ~ ~ ~ crash]]"
# gain 1.2
# room 0.7
# speed (range 1.5 0.9 $ saw)
# crush 8
dramaticEntrance = do {
d1 $ s "808bd:3 ~ ~ ~ ~ ~ ~ ~" # gain 1.4 # room 0.9 # size 0.9;
d2 $ s "~ ~ ~ crash:4" # gain 1.3 # room 0.9;
d3 silence;
d4 silence
}
heroPattern = do {
d1 $ stack [
heroBassDrum,
heroSnare,
heroHiHats
];
d2 $ stack [
heroTom,
heroCymbal
] # shape 0.3;
d3 $ s "sine:3" >| note (scale "mixolydian" ("0,4,7") + "c4")
# gain 0.65 # room 0.7;
d4 silence
}
heroExpanded = do {
d1 $ stack [
heroBassDrum # gain 1.4,
heroSnare # gain 1.2,
heroHiHats # gain 1.1
];
d2 $ stack [
heroTom,
heroCymbal # gain 1.1
] # shape 0.3;
d3 $ s "sine:3" >| note (scale "mixolydian" ("0,4,7") + "")
# gain 0.75 # room 0.7 # lpf 3000;
d4 $ s "[~ ~ ~ [feel:6 feel:7]]" # gain 0.9 # room 0.6
}
heroEpic = do {
d1 $ stack [
s "[808bd:3(3,8), 808bd:4(5,16)]" # gain 1.4 # room 0.4,
s "~ sd:2 ~ [sd:2 sd:4]" # room 0.6 # gain 1.2,
fast 2 $ s "hh*4" # gain (range 0.9 1.1 $ rand)
];
d2 $ stack [
s "~ ~ mt:1 [~ lt:1 lt:2 lt:3*2]" # gain 1.1 # room 0.5,
s "[~ ~ ~ crash:4]" # gain 1.0 # room 0.7
] # shape 0.4;
d3 $ s "sine:3" >| note (scale "mixolydian" ("0,4,7,9") + "")
# gain 0.8 # room 0.8 # lpf 4000;
d4 $ s "feel:6(3,8)" # gain 0.9 # room 0.6 # speed 1.2
}
evilIntro = do {
d1 $ evilPad;
d2 silence;
d3 silence;
d4 silence
}
evilBuilds = do {
d1 $ evilPad;
d2 $ evilBass;
d3 $ evilAtmosphere;
d4 silence
}
evilIntensifies = do {
d1 $ evilPad;
d2 $ evilBass;
d3 $ evilAtmosphere;
d4 $ evilPercussion
}
d9 $ ccv "0 40 60 120" # ccn "0" # s "midi"
evilFullPower = do {
d1 $ evilPad # gain 0.6;
d3 $ evilVoice;
d4 $ evilPercussion # gain 1.3
}
--for evil full power
d6 $ s "~ arp ~ ~" # room 0.5 # krush 9 # gain 1.2
d7 $ fast 2 $ s "moog*2 moog*2 moog*2 moog*3 moog:1 moog*2" # room 0.7 # krush 5
d8 $ fast 4 $ s "wobble*8" # gain 0.8 # lpf 1200
evilIntro
evilBuilds
evilIntensifies
evilFullPower
evilBuildUp
theDrop
dramaticEntrance
heroPattern
heroExpanded
heroEpic
hush
I'm really happy something came out of the EEG pipeline, but I honestly feel that that time could have been better spent relying on noise to generate the visuals.
But nevertheless, I’m happy I could (pun intended..) leave my heart and soul on the screen.
Following the words of the famed producer Kanye West, I wanted to incorporate a lot of the human voice into my project. My overall theme is a transition from happy, serene sounds, with the phrase “I love live coding” spelled out as a backdrop, through a countdown, into a descent into madness with the phrase “pain” spelled out on repeat, ending with the word “help” spelled out rather lifelessly.
The visuals then had to match this sequence, where I wanted the initial scenery to be colorful with pretty shapes. For the transition, I wanted the shapes to get a bit more distorted, and for the number of shapes to match the countdown, so you can both see and hear the countdown for the beat drop. Right after comes a brief moment of serenity before a descent into chaos, utilizing a lot of dark libraries as a backdrop to the prominent “pain” repeated throughout.
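As a rough illustration of the see-and-hear-the-countdown idea (not the actual patch; it assumes the countdown is mirrored on a cc channel of its own, hypothetically channel 2 here, scaled so that 1.0 corresponds to the start of the countdown, and that Hydra reads it from a cc array):

// sketch only: the number of repeated shapes mirrors the countdown value from Tidal
shape(3, 0.25, 0.01)
  .repeat(() => Math.max(1, Math.round(cc[2] * 3)), 1)  // 3, 2, 1 columns as the countdown runs
  .rotate(() => time * 0.2)
  .color(0.9, 0.4, 0.8)
  .out(o0)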
Making this project was a lot of fun and I am excited to see everyone’s project.
Tidalcycles:
setcps (135/60/4) -- 1. Set global tempo
happyComplex = do {
hush;
d1 $ slow 4 $ s "alphabet" <| n "~ ~ 8 ~ ~ 11 14 21 4 ~ ~ 11 8 21 4 ~ ~ 2 14 3 8 13 6 ~ ~"
# speed "1.2"
# gain (range 1.5 1 (slow 64 sine)) -- starts loud then fades over time
# room "0.1"
# pan (slow 8 "0 1");
d2 $ stack [
fast 2 $ s "arpy" >| note (arp "updown" (scale "major" ("[0,2,4,6]" + "c5")))
# gain (range 1.2 0.8 (slow 64 sine)),
s "arpy" >| note (scale "major" ("[<-7 -5 -3 -1>,0,2](3,8)" + "c5"))
# gain (range 1.2 0.8 (slow 64 sine))
];
-- Light percussion: gentle claps and hi-hats.
d3 $ stack [ s "~ cp" # room 0.5,
fast 2 $ s "hh*2 hh*2 hh*2 <hh*6 [hh*2]!3>"
# gain (range 1 0.5 (slow 64 sine))
# room 0.7
];
-- Ambient textures: acoustic drum loop and soft piano.
d4 $ loopAt 8 $ chop 100 $ s "bev:1" # room 0.8 # legato 12; -- # gain (range 1 0.3 (slow 64 sine));
d5 $ slow 8 $ s "superpiano" <| n "c d f g a c6 d6 f6"
# gain (range 1 0.3 (slow 64 sine))
};
happyComplex
d8 $ ccv (segment 128 (range 127 0 saw)) # ccn "0" # s "midi" -- 5.
d9 $ ccv (segment 128 (range 127 0 saw)) # ccn "1" # s "midi" -- 6.
d1 $ qtrigger $ slow 2 $ s "numbers" # n "<3 2 1>" # gain 1.5
--ctrl 4
d1 silence
d10 $ ccv "0 127 0 127" # ccn "2" # s "midi" -- 8.
d11 $ ccv "<127 0 127 0>" # ccn "0" # s "midi" -- 8.
scaryComplex2
d1 $ slow 2 $ s "alphabet" <| n "~ 15 0 8 13 ~"
# speed "0.6"
# legato 2
# gain (range 1.3 2 (slow 64 saw))
# room "0.1"
# pan (slow 8 "0 1")
d1 $ s "alphabet" <| n "7 4 11 15" --19
# gain 1.5 -- 21, 23, 25 change to 1, 0.5 0
hush
-- transitionComplex = do {
-- -- # hush;
-- d5 silence;
-- d1 $ qtrigger $ seqP [
-- (0.01, 1.01, s "numbers" <| n "3" # gain 1.5),
-- (1, 2, s "off"),
-- (2, 3, s "off"),
-- (3, 4, s "numbers" <| n "2" # gain 1.5),
-- (4, 5, s "off"),
-- (5, 6, s "off"),
-- (6, 7, s "numbers" <| n "1" # gain 1.5),
-- (7, 8, s "off"),
-- (8, 9, s "off")
-- ];
-- d2 $ fast 2 $ s "hh" <| n (run 6)
-- # gain 0.8
-- # speed (slow 4 (range 1 2 saw));
-- d3 $ loopAt 16 $ s "sheffield" # gain (range 0.2 0.4 (slow 32 sine)) # room 0.9;
-- xfadeIn 4 2 silence
-- };
hush
scaryComplex2 = do {
clutch 2 $ s "ades3" <| n (run 7) # gain (range 1.2 0.8 (slow 64 sine)) # room 0.2;
clutch 3 $ loopAt 1 $ s "dist:1" # gain 1.0;
clutch 4 $ slow 8 $ s "bass1" <| n (run 30)
# gain (range 1.0 0.7 (slow 64 sine))
# up "-2" # room 0.3;
clutch 5 $ stack [
fast 2 $ s "arpy" >| note (arp "updown" (scale "minor" ("[0,2,3,5]" + "c4")))
# gain 0.8,
s "arpy" >| note (scale "minor" ("[0,1,3,5]" + "c4"))
# gain 0.8
] # room 0.5;
clutch 6 $ slow 4 $ s "industrial" <| n (run 32) # gain 1.0 # hpf 800
};
d1 $ s "alphabet" <| n "7 4 11 15" --19
# gain 1.5 -- 21, 23, 25 change to 1, 0.5 0
hush --27
I started this project with one goal in mind: incorporating one of my favorite memes—the “Is that hyperpigmentation?” meme that’s been going viral recently. I didn’t have a specific or fixed vision from the start; instead, I approached it by experimenting over and over again until I found something that felt right.
One thing I really love is experimental music production. Lately, I’ve been listening to a lot of NewJeans and NMIXX, and I wanted to try a “switch-up” style beat—where the entire vibe of the music shifts suddenly while still transitioning smoothly between phases. I’ve composed music like this before, but never through coding.
For this project, I imported the “Is that hyperpigmentation?” and “It is fantastic” lines from the original meme as samples. Initially, I wasn’t planning on incorporating a beat drop, but after last week’s class, I was inspired to experiment with the beat drop example we studied. I wouldn’t necessarily call what I created a traditional beat drop—it’s more of a buildup leading to an underwhelming yet oddly satisfying drop (at least in my opinion). I do think I could have executed the buildup better, as I struggled to align the audio with my initial vision despite multiple attempts. However, I really like how the buildup transitions into the final section.
For the visuals, I wanted them to match the vibe of the music while also conveying emotion. The piece starts with a simple line that moves to the beat, set against a dark background to complement the bass-heavy intro. During this section, I subtly tease the “Is that hyperpigmentation?” and “It is fantastic” samples—just enough to keep the audience intrigued and a little confused. As the composition progresses, I introduce hi-hats, a Casio sample, and glitch effects layered with drums. When the Casio sample speeds up, I color in the lines and shift them to light blue, reinforcing the energy shift.
Next comes the buildup, featuring the phrase “It is fantastic” repeating and intensifying, ultimately cutting out to silence just before the full sample plays. At this point, the sound transitions into an ethereal yet mysterious atmosphere, and the visuals suddenly become vibrant and overwhelming—which I love. I also introduce a simple, looping “It is fantastic,” which enhances the vibe. This is followed by a complex beat sequence before the piece gradually winds down, ending with the full sample playing in silence alongside the “hyperpigmentation” drawing from the original meme.
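For reference, a bare-bones sketch of the opening visual described above, a thin line on a dark background moving with the beat and later tinted light blue, might look something like this in Hydra (assuming the usual setup where Tidal's ccv values arrive as a cc array scaled to 0 to 1; the real patch is more involved):

// bare-bones sketch, not the actual patch used in the piece
shape(2, 0.005, 0.001)                  // shape(2) renders as a thin horizontal band
  .scrollY(() => (cc[0] - 0.5) * 0.4)   // the line jumps to the beat sent on ccn "0"
  .color(0.6, 0.85, 1.0)                // light-blue tint for the higher-energy casio section
  .out(o0)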
Tidal Code:
-- Part 1
sTart = do
  d16 $ s "~ ~ ~ bass1" # room 5 # legato 1 # gain 1
  d2 $ ccv "127 ~ ~ 0 " # ccn "0" # s "midi"
sTart
once $ s "tastic" # gain 0.5
d15 $ s "bleep" # room 0.6 # gain 1
once $ s "pigment" # gain 0.5
d9 $ s "~ cp" # gain 1.2 # room 0.3
boop_boop = do
  d1 $ qtrigger $ filterWhen (>= 0) $ fast 4 $ s "casio" <| n (run 2) # room 0.8
  d2 $ ccv "0 70 90 127" # ccn "0" # s "midi"
boop_boop
-- Part 2
d3 $ degradeBy 0.01 $ sound "hh*8" + "hh!" # gain "1.5 0.75 0.75 1.5 0.6 0.9 0.9" # speed 2 # gain 1.5
glitchity_boop = do
  d8 $ qtrigger $ filterWhen (>= 0) $ sound "<glitch:5 glitch:3> bd bd [~ bd]" # gain 2 # room 0.01
  d4 $ ccv "<2 127> 30 40 [~ 10]" # ccn "0" # s "midi"
glitchity_boop
-- part 3
tASTIC = do
  d3 $ qtrigger $ filterWhen (>=0) $ seqP
    [ (0, 1, s "tastic:2*4")
    , (1, 2, s "tastic:2*8")
    , (2, 3, s "tastic:2*16")
    , (3, 4, s "tastic:2*32")
    ]
    # room 0.3
    # hpf (slow 4 (1000 * saw + 100))
    # speed (slow 4 (range 1 4 saw))
    # gain 2
    # legato 0.5
  d4 $ qtrigger $ filterWhen (>=0) $ seqP
    [ (4, 5, s "tastic") ]
    # gain 2
    # room 0.3
  d8 $ silence
tASTIC
hush
-- part 4
d1va_Bo0ts = do
  d10 $ qtrigger $ filterWhen (>= 0) $ slow 2 $ s "superzow" >| note (scale "<minor hexSus major>" ("[<-5 -3 -1 1> 0,2,4,8] * <1 8 16 32>") + "[f5,f6,f7]") # legato 1 # lpfbus 1 (segment 1000 (slow 4 (range 100 3000 saw)))
  d5 $ struct "[t(1,2) t(2,4) t(4,10) t(10,16)]" $ ccv (segment 16 (slow 1 (range 120 0 saw))) # ccn "1" # s "midi"
d1va_Bo0ts
once $ s "pigment" # room 1.5 # gain 5
d13 $ qtrigger $ filterWhen (>= 0) $ fast 2 $ s "tastic" # gain 3 # room 1.5 #legato 0.5 # gain (range 1 1.2 rand)
d1 $ qtrigger $ filterWhen (>= 0) $ s "[bleep(5,16), cp(1,4), feel(7,8), bass1:(9,16)]" # legato 0.2 # gain 2
d1 silence
d8 silence
d10 silence
eNding
eNding = do
  once $ qtrigger $ filterWhen (>= 0) $ s "tastic" # gain 2
  hush
I really enjoy the sound of drums and low-pitched tones because they feel closer to a heartbeat. Throughout this project, I spent a lot of time searching for different low-pitched sounds to combine with each other. Initially, it was quite difficult to find the right combination because many of the sounds were too similar. To add more variation, I applied heavy distortion effects (using krush and squiz) to most of them. This helped create distinct textures and added more character to the overall composition.
I started the project using a Tidal file and then tried to connect the sound with Hydra. Since many of the music blocks were built from various rhythms, it was quite difficult to represent them visually in Hydra. One solution I came up with was to make each sound trigger a different visual change in Hydra. I especially enjoyed experimenting with basic shapes and movements, and I tried out different ways to make those shapes move in response to the sound.
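One way to wire up the each-sound-triggers-its-own-visual-change idea, roughly in the spirit described above (a sketch only; it assumes each rhythm in Tidal sends ccv on its own ccn channel and that Hydra reads them from a cc array scaled to 0 to 1):

// sketch: kick on ccn 0 drives scale, hats on ccn 1 drive rotation, bass on ccn 2 drives colour
shape(6, 0.3, 0.01)
  .scale(() => 0.7 + cc[0] * 0.6)      // pulses with the kick
  .rotate(() => cc[1] * Math.PI)       // twitches with the hats
  .color(() => 0.3 + cc[2], 0.3, 0.8)  // the bass pushes the red channel
  .out(o0)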
It was quite challenging to bring everything together into a cohesive composition because I wasn’t sure how to create a strong drop. I ended up starting with a verse, which repeats a few times throughout the piece, then gradually layered in different drums and bass sounds to build the chorus. To create a bridge, I used a variation of the verse, which helped lead into the buildup and eventually the drop. I finished the piece by working backwards, transitioning from the drop back into the chorus, and ending with a softer, more minimal sound to bring the composition to a close.