What I had in mind was to create a “floral” pattern with vibrant colors in the visuals to go along with my sound composition. The sound uses various synthesizers such as supervibe, superhammond, and superpiano. The arp function generates arpeggios, while note specifies pitches. slow and jux rev modify the playback speed and direction, adding variation and texture. once, solo, and unsolo are used for structural changes, introducing, highlighting, or removing elements temporarily. Drums and bass lines are created using both synthesized sounds (bd, sd) and MIDI-controlled instruments. Throughout the composition, effects such as legato, gain, room, djf (filter), and amp shape the sound’s envelope, volume, reverb, and filter cutoff. The all function applies effects (djf, legato, speed, cps) globally, affecting every running pattern to create cohesive changes in the texture, tempo, and timbre of the composition.
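A minimal Tidal sketch of the kind of pattern described above — the notes, samples, and effect amounts here are illustrative placeholders, not the actual composition:

```
-- arpeggiated chords on supervibe, varied with slow and jux rev
d1 $ jux rev $ slow 2 $ arp "up" $ note "<c'maj e'min>"
  # s "supervibe" # legato 1.2 # room 0.4

-- simple synthesized drums
d2 $ s "bd*2 sd" # gain 0.9

-- a one-shot accent, played a single time
once $ note "c5" # s "superhammond"

-- a global effect change applied to all running patterns
all $ (# djf 0.3)
```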
The use of voronoi, osc, kaleid, and scale functions in combination is pivotal in generating visuals that resemble changing flower patterns. voronoi creates cell-like structures that can mimic the segments of a flower, osc adds movement and texture, kaleid with a high value (21) generates symmetrical, kaleidoscopic patterns resembling the petals of flowers, and scale adjusts the size, allowing the visuals to expand or contract in response to the music. colorama is used to cycle through colors dynamically, which is linked to the tonal shifts in the music.
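A Hydra sketch along these lines — the parameter values are placeholders, not the exact ones used:

```
// voronoi cells textured by an oscillator, folded into 21-way symmetry
voronoi(10, 0.5)
  .add(osc(20, 0.1, 1.5))                  // movement and texture
  .kaleid(21)                              // petal-like symmetry
  .scale(() => 1 + 0.3 * Math.sin(time))   // expand/contract over time (or drive from MIDI)
  .colorama(0.02)                          // slow dynamic color cycling
  .out(o0)
```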
I approached this by first composing the music in TidalCycles, then creating a visual pattern in Hydra that I liked, and binding them together. The challenge was keeping the composition interesting while using only one kind of visual. I tried a lot of variants, but jumping around between them while playing a consistent musical composition didn’t quite fit together, so I stuck to one. One possible development would be for the visual composition to also unfold gradually, starting from something simple and then blooming into these big floral patterns. I also could have been more consistent with the color palette.
For my composition project, I delved into exploring Tidalcycles’ default synthesizers, systematically examining each one as detailed in the documentation. Ultimately, I settled on “superpwm” to craft a simple chord progression and establish an ambiance. To introduce a wobbly quality, I applied the pitch1 function. Throughout the composition, I incorporated various synthesizers like gabor, superfm, and supervibe for the keys, selecting them strategically for specific parts. Additionally, I introduced “superhammond” to infuse rhythm and a groovy bass into the composition. I opted for a relatively straightforward drum arrangement, avoiding an overwhelming sound. In the bridge section, I aimed to make a distinctive transformation using “superchip” and “supercomparator” sounds, incorporating an unpredictable bassline with the use of “?”.
Conceptually, my objective was to explore the life cycle of a flower, tracing its journey from formation to wilting. The initial oscillator and the blob symbolize energy floating in the air, inspired by the movement of energy across organisms and objects as the essence of life. As the composition progresses, the synchronized visuals grow further to depict the growth of a flower, reaching heights that symbolize vitality and the peak of its life cycle. In the bridge, it goes into a reflection phase before returning to the same liveliness later in its life cycle. The ending segment portrays the flower rotating in its original form with various colors, symbolizing the lasting legacy it leaves in the universe.
Personal reflections on recent realisations regarding memory. Sought to create an ambient evolving set.
Concept/Inspiration
Following 2019’s hibernation, Queeste reappears with the mnestic soundscapes of DJ Lostboi and Torus’ split album The Flash. Across eight gossamer evocations—four from each artist—the duo reflect on individual journeys from airport to sea during the blissful embers of fading summer. Their gaze expands and contracts naturally with passing locations but lingers on the titular flash, what the two artists describe as “the rare sight of the sun giving off a bright green lightburst into the horizon.”
Been listening to a lot of this lately. My focus with my visual work over the last year has shifted to memory and its decay. Have also been composing ambient tracks over the last year to go with it. Wished to implement techniques like granulation, trance gates and filter sweeps.
Sound
Layer 1: “I’m God” by Clams Casino and Imogen Heap, as a sample. Used miClouds and miVerb to space it out. An LFO on amp starts on a triplet grid when Layer 2 kicks in. Pitched up 1 semitone to D#.
Layer 2: Fades in by accelerating a SuperHoover synth to pitch it up, then settles in place by setting the acceleration to 0. The arpeggio is pitched up to D#. Increased sustain makes the notes bleed into each other, and alternating arpeggio modes across cycles speeds it up during certain parts, creating tension and release.
Layer 3: The default guitar sample, arpeggiated. Initially krushed and crushed; when Layers 1 and 2 fade out, the guitar slows down and the crush is removed.
Layer 4: Voice memos. A series of iPhone voice memos taken on flights over the last year; both of the ones used here are from JFK to DEL. Striated and formant-shifted to create artefacts and simulate data/memory corruption.
Visual
Worked on a gyroid tunnel raymarching shader last semester for a Unity class. Decided to pursue it further, as the imagery of an infinite tunnel scape works with this exploration of memory. Parameters controlled by MIDI are tunnel density, tunnel shaping, the green value of the ambient lighting, a time multiplier, fog cutoff, and, at the end, the blend of my memory. Accentuating and distorting on risers and heavier parts. Empty, sparse – light at the end of the tunnel – during downbeats.
For the composition project, I’m focusing on the capybara, which is cute and stupid. Their charm has also been captured in a capybara song that I adore.
Audio
Start:
To start, I tried out a couple of individual sounds to be the hook. The first is the sound of a capybara moving quickly, then a lighter note, then a trumpet. Then the first part begins.
I used a lot of drums in the first part to make the music more energetic.
The main rhythm for part 1 is:
d1 $ slow 2 $ s "808ht:12 808ht:23 808ht:32 <808ht:43 808ht:5*2>" # room 0.7 # gain
I chose 808ht because its sound makes me think it can represent a capybara.
I use stacking, starting with one rhythm and stacking more rhythms.
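In Tidal, that layering can be written with stack — a hedged sketch, not the actual patterns:

```
-- start from one rhythm, then stack more layers into the same pattern
d1 $ stack [
  slow 2 $ s "808ht:12 808ht:23 808ht:32 <808ht:43 808ht:5*2>" # room 0.7,
  s "bd*2 [~ sd]" # gain 0.9,
  s "hh*8" # gain 0.7
  ]
```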
For the transition I used qtrigger and seqP.
After the transition I used drums with a stronger beat, and I added runs so that each beat is followed by a lighter beat.
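A qtrigger/seqP transition of the sort mentioned above might look like this — the cycle counts and samples are placeholders (in older Tidal versions qtrigger also takes the pattern number as an argument):

```
-- schedule a one-off sequence, quantized to the start of the next cycle
d3 $ qtrigger $ seqP [
  (0, 4, s "drum*4"),            -- cycles 0-4: fill
  (4, 8, s "drum*8" # gain 1.1)  -- cycles 4-8: denser fill before the stronger beat
  ]
```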
If I had more time, I would have added more MIDI, and more code to make the different parts switch visuals automatically.
Visual
I used initVideo to initiate the capybara video, and gridded it using scale. The MIDI sent “2 3 4 5” for the number of rows and columns.
I used initImage to initiate the capybara image, and reversed it. The problem I encountered is that MIDI can only send values of 0 or larger, but to mirror the image I need scale(1, -1). So for the reversing, the MIDI notes sent are “2 0”, and the Hydra code is scale(1, () => 1 - ccActual[1]).
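The workaround boils down to remapping the non-negative MIDI value into a signed scale factor; the mapping itself is ordinary arithmetic (the helper name here is hypothetical):

```javascript
// MIDI CCs only carry values >= 0, so send 2 to mean "mirrored" and 0 to mean "normal",
// then remap with 1 - cc: 2 -> -1 (mirrored), 0 -> 1 (unchanged)
function flipFactor(cc) {
  return 1 - cc;
}

console.log(flipFactor(2)); // -1, used as the y-factor in scale(1, -1) to mirror the image
console.log(flipFactor(0)); // 1, leaves the image as-is
```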
I wanted to blend old and new Japan via some connecting thread through the composition project. I decided to settle on the koto, a Japanese string instrument, for my principal sound. I got a YouTube video and trimmed it to a singular note in Audacity, then imported it into TidalCycles.
When making the melody for the first act, I tried to incorporate a lot of trills and filler notes, as is common in many koto songs, by breaking each note up into 16th notes and offsetting high- and low-pitched sounds. For the visuals, I decided to develop a geometric pattern that felt Japanese as the music progressed. I did this by creating thin rectangles, rotating them, and adding a mask using a rectangle that expanded outwards, revealing pieces of the pattern at a time.
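One way to sketch that kind of rotating-bars-plus-expanding-mask pattern in Hydra — all values here are illustrative:

```
// thin bars, tiled and slowly rotating, revealed by a growing square mask
shape(4, 0.02, 0.001)        // a thin rectangle
  .repeat(6, 6)
  .rotate(() => time * 0.1)
  .mask(shape(4, () => (time * 0.05) % 1, 0.1))  // mask expands outward over time
  .out(o0)
```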
For the transition, I added the typical Japanese train announcement that signified the shifting of time period, but I didn’t get the transition down as smoothly as I hoped. My initial idea was to add train doors closing and for the whole scene to shift, but I couldn’t get the visuals to work as I wanted to. Looking back, I would add the sound of a train moving and the visuals shifting to create the next scene.
For the second scene, I wanted to show the new era of Japan, aka walking around Shibuya woozy and boozy as hell. I got a video of someone walking through Tokyo and slapped on color filters as well as distortion. I tried to add hands to better replicate the first-person perspective, but the static PNG didn’t end up looking very good.
I controlled the modulation amounts via MIDI and added the JoJo time-stop sound effect to alleviate and reintroduce tension. I also made the sound slow down and reverse when stopping the music.
For my composition project, I wanted to create an introduction/algo-rave-y promotion to the podcast my friend Nour and I have been recording. This podcast, “Thursday Night ‘Live’ Starring Nour & Juanma” is an effort to preserve some memories of our last semester in NYUAD. We’ve been interviewing our friends and sharing some memories in the hopes that when we’re 50, we can listen to this and show it to our families. Maybe I will show them this project as well!
Sound
Our first episode began with the song ‘Me and Michael,’ which also happens to be our song. My first step in the composition was to figure out its chords. Once I did, I spent a LOT of time in TidalCycles playing around with them; I did not want to recreate the song, but to create something new based on it. After exploring various attempts and styles, I found that writing the chords myself in the superpiano synth sounded really good! I used these chords as a base for the composition, and added percussion, custom samples, and other elements. All custom samples were taken from previous episodes of our podcast.
I initially built one loop with all of my elements. It had kick drums, hi-hats, snares, claps, melodic percussion, a sequencer-like melody, phrases of ‘Me and Michael’s’ melody, and the piano chords. I was happy with the result, and began assembling and adding parts for an actual structure.
I LOVE the sound of tuning forks, so I used the superfork synth as an alternative to superpiano in the beginning. This, I believe, gave depth to my piece. I also used superpwm as an alternative when building up. For the latter, I added an effect to alter the pitch; I wanted this part to be quirky. I also tried to build an effective build-up and a drop using techniques we learned in class.
Whenever we record, Nour and I always have one (or many) false starts. I thought this would be a good addition to the piece’s structure. If you look at my composition, you will see the piece build up once, go silent, build up again, and then drop. In the first build-up, I used many more components from ‘Me and Michael’. Then, using seqP, slowing percussion, and a custom sample from one of our false starts, I constructed the transition. For the second build-up, I added a lot more percussion and kept only the base chords from the song. After a transition using slow “bd”, there is silence. Nour says “umm, so that was the vibe” and the beat drops. The drop contains almost the same elements as the second build-up, with altered speeds, tempos and octaves. At the end of the song, instruments are removed, leaving only the melody, which fades out. I would imagine my piece to look something like this:
My composition structure is intended to follow the storyline of a Thursday evening: Nour finishes capstone, we meet to record, we have a couple of funny false starts, then we begin again, this time full force.
I used the code from the class example to toggle the visuals in Tidal Cycles.
loadScript('/Users/juanmanuelrodriguezzuluaga/Documents/LiveCoding_HW/launch.js')
s0.initVideo('/Users/juanmanuelrodriguezzuluaga/Documents/LiveCoding_HW/Composition_Vids/composite_S.mp4')
s1.initImage('/Users/juanmanuelrodriguezzuluaga/Documents/LiveCoding_HW/Composition_Vids/tittle-min.png')
s2.initVideo('/Users/juanmanuelrodriguezzuluaga/Documents/LiveCoding_HW/Composition_Vids/IMG_2673.mov')
visuals[0]()
// can use update and switch case with midi:
var whichVisual = 0
update = () => {
  // very important! only change source once, when necessary
  if (whichVisual != ccActual[0]) {
    whichVisual = ccActual[0];
    visuals[whichVisual]();
  }
}
// clear update
hush()
// OR (without stopping the visuals altogether)
update = () => {}
Visuals
Whereas I was quite traditional with the sound structure, I wanted my visuals to be a bit more chaotic. I knew I wanted 3 main elements: Nour and I doing our podcast, Nour’s capstone cell imaging, and colors. I drew a title and added the aforementioned elements. Then, I played around with parameters and components to generate a visual for each part of the composition. My aim was to have a lot going on, but to have the piece be responsive to the beat & storyline. I tried my best to incorporate midi channels in the designs, and to transmit the same story as with the audio. In order to do so, I made sure that the visuals were triggered from Tidal Cycles. I had a lot of fun manipulating Nour’s capstone images. They had a natural pulse, which was difficult to adjust to the beat, but they also looked quite nice when in kaleidoscope.
One change I made based on class feedback was the removal of one of the visuals during the main build-up. It was originally supposed to come before our video fades into red, orange and yellow, but I made a mistake when writing the code to trigger the visuals. Even though I would have liked to see the fading out or breaking down into a gradient (as in the original plan), I believe this visual was significantly more effective in this part of the structure, and not later. Thus, I decided to remove the gradient altogether. You can see it in my code as visual number 8 (I believe).
LIVE Coding
For the in-person component, I practiced a LOT. I needed to make sure I knew exactly when to trigger all of the functions. Furthermore, I included a small in-person introduction and ending for my project. By telling you all about my podcast while “nour_are_you_ready” displayed the title, and saying that the podcast is only available in our Google Drives as the beat faded out, I hoped to make the experience more immersive and engaging. This is not pictured in the video, so you’ll have to see me perform it again.
Final Product
I’m not sure if the video is working. In case it is not, see it here.
The two biggest things I want in the pieces I make are that they contain samples and have some sort of cultural tie to me (or just something I am interested in). For this piece I really wanted to find an old Korean song to sample. This ended up proving difficult since I cannot speak Korean whatsoever. After much digging and asking some family members for old songs they knew, I finally found a song that I liked: 님은 먼 곳에 (Ni-meun Meon Go-se; “You Are Far Away” is the translation, I think) by Kim Chu-Ja.
After listening to the song a bunch, I eventually took parts of it that I thought sounded nice and tried to put them together. This proved to be such a pain in Tidal, because I had to find specific timestamps across the entire song and make sure each clip was about one measure long. To make this easier, I found the BPM of the song and multiplied it by the speed I set the song to (1.2). After that, it was mostly experimenting to find what sounded good together. Eventually, I found vocals I liked and arranged them in a pattern I thought sounded good. Then I put in some percussion to make the song sound a bit more like a hip-hop/lofi track.
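The timing arithmetic here is straightforward: at the sped-up playback rate the effective tempo is the original BPM times the speed factor, and one 4/4 measure lasts 4 × 60 / BPM seconds. A quick sketch (the original tempo of 100 BPM below is a placeholder, not the song’s actual tempo):

```javascript
// effective tempo after speeding a sample up by a factor
function effectiveBpm(originalBpm, speed) {
  return originalBpm * speed;
}

// duration of one 4/4 measure in seconds at a given tempo
function measureSeconds(bpm) {
  return (4 * 60) / bpm;
}

const bpm = effectiveBpm(100, 1.2); // hypothetical original tempo of 100 BPM
console.log(bpm);                   // 120
console.log(measureSeconds(bpm));   // 2 seconds per measure
```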
Most of the inspiration for my percussion came from listening to J Dilla’s Donuts and MF DOOM and Madlib’s Madvillainy, trying to emulate percussion patterns they used in a way that would also fit my song. I was deadass listening to Donuts on repeat for like a week straight, just trying to break down in my head how Dilla samples. I also attempted to put my percussion in “Dilla time,” nudging the snare and the cymbals a bit so they don’t all play at the same time. I especially like the rushed snare that Dilla likes to do, where it comes in just a fraction before the other instruments on the same beat. Since a lot of hip-hop songs are pretty repetitive, I wanted to do the same by having one long beat that keeps playing with little to no change between measures.
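In Tidal, that micro-timing shift can be written with the nudge control — the amounts below are guesses at a “Dilla” feel, not the actual values:

```
-- snare rushed slightly ahead of the grid, hats dragged slightly behind
d2 $ stack [
  s "~ sd ~ sd" # nudge (-0.01),  -- snare lands a fraction early
  s "hh*8" # nudge 0.01           -- hats trail a touch late
  ]
```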
To make my song an actual composition, I wanted a beat switch transitioned by some sort of speech. I had a hard time finding inspiration for one, so I kinda just ripped a speech off some TTS meme that ironically talked about certain negative things that plague today’s society, like microplastics or the conspiracy about 5G radio waves. As for the beat switch, I wanted a slowed-down tempo that would use the amen break for a kind of weird breakcore-esque beat. I swapped around the order of some of the samples I had and added some new ones to make the second part of the beat. I wanted to add extra sounds, but I had trouble finding ones that fit the overall sound. The other difficult part with the breakbeat is that it didn’t totally line up when it repeated, and I had to make the MIDI beat for it manually, since it was one long sample. I think I got it close enough, but I know it isn’t perfect. My ending is kinda bad because it just abruptly cuts out; I wanted to use an xfade, but I couldn’t get it to work for whatever reason.
Hydra Visuals
As for my visuals, it took me a long time to find inspiration for something that would fit the sound I had created. All I knew was that it would be cool to have something really hectic visuals-wise. I ended up taking gameplay of two games I like: a speedrun from the FPS Ultrakill, and a pro match from the card game Magic: The Gathering. While I wanted hectic visuals, I also wanted some clarity so you could actually tell which games were being played. I chose games because I felt that this, combined with the hectic visuals and the Korean song sample, sort of represented who I am. After tinkering with the visuals for some time, I got the calm and chaotic visuals I wanted and managed to sync certain visuals to the MIDI, and with that I was essentially finished with my full composition.