Group Members: Aalya, Alia, Xinyue, Yeji

Inspiration

Each of us brought a few visuals and sound samples to the table. We then adjusted the parameters and MIDI values so that the themes of the visuals and the sounds matched each other.
Who did what:

  • Tidal: Aalya and Alia
  • Hydra: Xinyue and Yeji

Project Progress

Tidal Process

  • First, we played around with the different audio samples in the Dirt-Samples library and tested out code from the default samples table in the documentation on the official Tidal website.
  • We then mixed and matched several combinations and eliminated everything that didn’t sound as appealing as we wanted.
  • Since Yeji and Xinyue were working on the visuals in the same flok session, we listened to the various soundtracks with the visuals in mind and were able to narrow all of our ideas down to a single theme.
  • We had a lot of samples that we thought worked well together, but the track was lacking structure, so that became the next step of the project.
  • We broke down the composition into four sections: Intro, Phase 1, Phase 2, and Ending.
  • After that, it was a matter of layering the audio so that it sounded appealing and transitioned smoothly. We worked out which sounds would carry each transition into a new section and which parts would be silenced; it came down to finding the balance and putting together tracks that worked well with the visuals and complemented them. A rough sketch of this kind of sectioned layering is shown below.
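
Something along these lines, with generic dirt-samples names and simple placeholder rhythms rather than our actual patterns (each line is evaluated on its own):

-- Intro: sparse kick and hats
d1 $ sound "bd*2" # gain 1.1
d2 $ sound "hh*4" # gain 0.8

-- Phase 1: bring in a melodic layer with some reverb
d3 $ sound "arpy(3,8)" # room 0.3

-- Phase 2: fill out the low end and push the energy
d4 $ sound "bd*4 [~ sn]" # gain 1.2

-- Ending: silence the parts one by one
d4 silence
d3 silence
d1 silence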


Hydra Process

  • We started with a spinning black-and-white image bounded by a circle shape. Our inspiration came from old-fashioned black-and-white movies.
  • We modified the small circle by adding colorama(), which introduced a noisy, shifting color effect.
  • .luma() gave the circle a cartoon-like effect and reduced the airy outlines.
  • The three other outputs used o0 as their source.
  • We layered an oscillator on top of the original source and added cc values to create a slowly moving oscillation that goes with the music.
  • Then, the scale was reduced for a zoom-out effect as the intensity of the music built up.
  • For the final output, the color was intensified, along with another scale-down, before a loop-back effect transitioned into o0 for the ending scene. A rough sketch of this kind of chain is shown below.
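
Approximate values rather than the exact parameters from our performance; the cc[] lookups assume the Tidal-to-Hydra MIDI bridge we use in class:

// spinning black-and-white texture bounded by a circle, written to o0
noise(3, 0.1)
  .thresh(0.5, 0.1)           // reduce to black and white
  .mask(shape(64, 0.4, 0.05)) // circular boundary
  .rotate(0, 0.2)             // slow spin
  .colorama(0.01)             // noisy, shifting color
  .luma(0.2, 0.1)             // cartoon-like cut of the darker areas
  .out(o0)

// a second output built from o0 and driven by the MIDI cc values
src(o0)
  .modulate(osc(5, 0.05), () => cc[0] * 0.3) // slow oscillation that follows the music
  .scale(() => 1.5 - cc[1])                  // zoom out as the intensity builds
  .out(o1)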

Evaluation and Challenges

  • Some of the challenges we encountered while developing the performance were coordinating a time that worked for everyone and communicating our ideas clearly so that everyone stayed on the same page.
  • Another issue was that the cc values were not reflected properly, or in the same way for everyone, which resulted in inconsistent visual outputs that weren’t in sync with the audio.
  • Something that worked very well was the distribution of work: every part of the project was covered, so we ended up with balanced visual and sound elements.

I found the underlying concept in this reading very engaging, as it walks us through Oliveros’s internal feelings and personal retreats and how they led up to her externalizing them and sharing them with the people around her, leaving her mark. The work she has done is impressive in distinctive ways, as she was able to incorporate music and vocals into her meditation and healing. Just reading about it felt calming and reassuring. Her initiative in making the Sonic Meditations group a female-only meeting allowed for a safe, women-empowering space. It’s also interesting that, although it was geared toward musical practice, the group also incorporated “journaling, discussion and Kinetic Awareness exercises”, which makes it feel more like a family, especially at the time of women’s liberation, when the house would be a sanctuary for them.

One of the meditation exercises that I found compelling was “Teach Yourself to Fly”, where they would focus on their breath and at some point allow their voices to sound, resembling the noise an airplane makes when up in the air. This was a kinetic awareness exercise with the main goal of healing. The idea of focusing on our breath when meditating is not new; I have attended multiple yoga and Pilates classes that emphasized breathing and its importance, allowing for inner peace and overall calmness. Moreover, when I used to have trouble falling asleep, I read online about a method built around breathing: inhale while counting to 4, hold the breath while counting to 7, exhale while making a “whoosh” sound while counting to 8, and then repeat the full cycle several times. This, in my experience, allows the mind to focus completely on the sound and the count and avoid the overthinking that keeps the mind wandering and awake.

This reading contextualizes the way we see “musical art” today and how it developed. It provides the theories and history behind its evolution as a concept, along with examples of works and artists in the field. There are several main ideas that I found relatable on a personal level, not in the sense that they apply to me directly but rather through my own observations. Since we started this class, whenever we have a performance or look at documented works or experiences, I always think of house music or of shows and concerts by artists like Martin Garrix. This, I feel, relates to section 5, which looks at the computer as a universal machine that combines clubs with galleries under a “one-person enterprise” (5). The author discusses how the artist-musician/musician-artist label has been influenced by the spread of electronic music such as techno and house. This led to clubs becoming places that combine forms of expression like music and visuals, among other elements. I would say that this applies to concerts as well as clubs. Not only does the performers’ music let people vibe, move with the beat, and dance together, but performing it live makes it a thousand times more interesting and captivating, engaging multiple senses at the same time. It creates a kind of immersive experience where you can see the visuals reacting to the audio, and in some cases fire or smoke that also goes with the beat.

I have been to many concerts in my life, but I had the most fun at a Martin Garrix concert where I was close to the stage and could see all of the colors and visuals changing up close with the music. He also incorporated aspects that we have learned to use in class, such as images or videos within the visuals, which personalizes the performance even further and perhaps incorporates the “filmic” form discussed in the reading. Another topic I found very interesting in the reading was how some of the terminology used for visuals was borrowed from music to apply to paintings, for example, and is still used to this day. Referring to a visual work, or the making of it, as a “composition, symphony, improvisation, or rhythm” (2) is a relatively recent change that emphasizes abstraction in art. However, I believe there is more room for interpretation in what each of these words means when applied to visuals than when applied to music.

Process

As someone who is very indecisive, I often find it difficult to pick a starting point, especially when there are no limitations or specific guidelines, but I had to start somewhere. I looked over my previous assignments, pasted in some Hydra code, and started experimenting. I began with the visuals because I wanted to get a sense of how fast or slow I would like the audio to be.

Visuals

For the visuals in this project, I decided to take a sleeker, simpler approach relative to my previous assignments. I noticed that in all of my previous assignments I would create visuals that take up the whole screen and have colors all over the place, so I decided to make something different and focus on the use of shapes. I chose to incorporate my personality into the theme through a color palette consisting of my favorite colors, pink and blue, with some transitional colors and shades. I made at least 10 different mini-performances with the visuals, set them aside, and started working on the audio.
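
A small sketch in that direction, not the exact code from any of my mini-performances, just the general idea of layered shapes in a pink and blue palette:

// layered shapes in a pink/blue palette, kept simple and centered
shape(4, 0.3, 0.01)
  .color(1.0, 0.6, 0.8)                                  // pink square
  .rotate(0, 0.1)                                        // slow rotation
  .add(shape(64, 0.15, 0.1).color(0.4, 0.6, 1.0), 0.8)   // soft blue circle on top
  .scale(() => 0.9 + 0.2 * Math.sin(time * 0.5))         // gentle breathing motion
  .out(o0)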

Audio

For the audio, I also wanted to step out of my comfort zone and find and work with my own samples rather than the ones offered by Tidal. The longer samples were a bit more difficult to incorporate, as I had to figure out how many cycles to loop them over and add effects to keep them sounding like the original file. I got Aaron’s help for one of them and applied the same logic to the rest. As usual, the audio was more difficult for me to put together; I feel that I don’t have a musical ear, so it took me some time to find sounds that go together, and I asked a few of my friends whether they sounded good. Similar to the visuals, I put together maybe 7 mini-performances until I was satisfied with the results. I also experimented with different scales, but since I ended up choosing performances built around the longer audio files, I didn’t feel that the scales went along with them.
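
For the longer samples, the general approach looked something like the sketch below, where "mysample" is a placeholder for one of my own files (loaded from a custom sample folder) and the numbers are illustrative rather than my final values:

-- stretch a long sample over 4 cycles so it loops in time
d1 $ loopAt 4 $ chop 16 $ s "mysample" # gain 1.1

-- or cut it into slices and re-order parts of it rhythmically
d2 $ slice 8 "0 2 4 6" $ s "mysample" # room 0.2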

Composition

Next, I had to put the visuals and audio together, so I went through them all and deleted the performances that I either didn’t like or felt didn’t go with the rest. By mixing and matching different visuals with different audio, I reached two of each that I really liked and decided to combine them. I separated the performance into two phases: fading into the first one, merging into the second, and lastly fading out with the second. I connected the visuals and audio through MIDI, tested out several output patterns, and organized everything into a thought-out order of events to put the composition together.
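
The MIDI connection itself was the usual Tidal-to-Hydra bridge we use in class: a cc pattern on the Tidal side and a cc[] lookup on the Hydra side. Roughly, with illustrative channel numbers and values rather than my exact ones:

-- Tidal: send a slow ramp on cc 0
d8 $ ccv (segment 128 (slow 4 saw)) # ccn "0" # s "midi"

// Hydra: read the same cc value to drive the visuals
osc(10, 0.1)
  .color(1.0, 0.5, 0.9)
  .scale(() => 1 + cc[0]) // grows as the ramp rises
  .out(o0)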

Reflection and Outcome

Out of all of the assignments we have done since the beginning of the semester, I would say I enjoyed the composition project the most. It gave me the opportunity to explore and experiment, but with the goal of having a well-put-together composition that takes the sounds as well as the visuals into account. Ironically, screen recording was a challenge, because something went wrong every time I recorded: the laptop lagged, the audio didn’t play, I messed up, or the audio didn’t record. I had to record over 20 times until I finally had a proper recording, which can be seen below:

I find the role of improvisation in our digital culture, as Jurgenson discusses it, to be very inspiring. Improvising is a key part of how we create and interact with digital media for several reasons: digital media are often designed to be flexible and adaptable, we often use them in unexpected ways, and improvisation can help us create new meanings and experiences.

This piece resonated with me on a personal level. As someone who does a lot of work in web design, I am constantly faced with the need to improvise. Whether it’s making small changes to an existing design or completely redesigning a website from scratch, the work requires me to be creative and flexible. And while it can be challenging at times, I find that improvising helps me come up with better solutions than if I were following a set plan.

It also made me think about how important improvisation is in our everyday lives. We might not always think about it, but many of our interactions with others are improvised – whether we’re having a conversation or simply exchanging glances across a room. It allows us to respond spontaneously to what others are saying or doing, which can lead to more meaningful interactions, and sometimes even funny ones.

Another concept that got me thinking is “the digitization of information”. It made me consider how my personal relationship with technology has changed over time as more things become digitized; for example, instead of buying physical CDs or DVDs of my favorite movies or TV shows, I now just stream them on Netflix or YouTube. There’s definitely something lost in terms of tangibility when things go digital like this – you no longer have an “original” copy per se – but there are also advantages gained, at least in terms of ease and convenience.

Hello, here is the code from my live demo today!

This is the code for Hydra:
voronoi(2,()=> cc[0]*5,0.3).color(2,0,50).out(o0) // voronoi pattern, speed driven by cc[0]
src(o0).modulate(noise(()=>cc[0]),0.005).blend(shape(),0.01).out(o0) // feedback on o0, lightly modulated by cc-driven noise
hush(o0)
shape(5, .5,.01).repeat(()=> cc[2]*5,()=> cc[2]*4, 2, 4).layer(src(o0).mask(o0).luma(.1, .1).invert(.2)).color(()=> cc[1]*20,()=> cc[1],5).modulate(o1,.02).out(o0) // repeating pentagons layered over o0, repeats driven by cc[2] and color by cc[1]
// .scrollY(10,-0.01)
// .rotate(0.5)
hush()

This is the code for Tidal:
d7 $ (fast 2) $ ccv "" # ccn "0" # s "midi" -- MIDI cc channel 0 for the visuals
d1 $ sound "electro1:8*2" # gain 1.4
d2 $ sound "electro1:11/2" # gain 1.8 -- plays once every two cycles
d3 $ sound "hardcore:8" # room 0.3
d4 $ sound "hardcore:0*8" -- driving eight-hit pattern
d5 $ sound "reverbkick odx:13/2" # gain "1 0.9" # room 0.3 -- odx *2
d6 $ sound "arp" # gain "1.2"
d8 $ ccv (segment 128 (fast 2 (range 127 60 saw))) # ccn "1" # s "midi" -- cc 1 sweeping from 127 down to 60
d9 $ ccv "" # ccn "2" # s "midi" -- MIDI cc channel 2
hush -- stop everything
d4 silence -- stop only d4

The order I followed is based on the line numbers; a screenshot is attached below!

 

Description

The coding platform I chose for this research project is Mercury. Mercury is a “minimal and human-readable language for the live coding of algorithmic electronic music.” It’s relatively new, created in 2019, and is inspired by many platforms that we have used or discussed before. Mercury has its own text editor that supports both sound and visuals; however, since it’s made for music, I decided to dive deeper into that side of it.

Process

Since the documentation on Mercury’s music features is not very thorough, it was easier for me to learn and understand through the example files and code that can be loaded at random from the text editor. I then went through the different sound samples and files available and tested them out in the online editor, since it allowed me to comment out code and to click and drag whenever I needed to. After reaching a final result that I liked, I started implementing similar code in the Mercury text editor; since it was the first time I had tested these files there, it did not sound exactly the same, so I made changes to make it sound better.

One of the randomly chosen files I came across used the following instruments, and I took it as inspiration while changing it and adding more to it:

new sample kick_909 time(1/4)
new sample snare_909 time(1/2 1/4)
new sample hat_909 time(1/4 1/8)
new sample clap_909 time(1/2)
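
The changes I made were along these lines: a rough sketch rather than my exact final code, with illustrative timings and an added tempo line that assumes Mercury’s set tempo command, using the same 909 samples:

set tempo 128
new sample kick_909 time(1/4)
new sample hat_909 time(1/8)
new sample snare_909 time(1/2 1/4)
new sample clap_909 time(1)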

Below is a video of the “live coding performance” that I tested out:

Evaluation

Although Mercury has audio samples and effects that sound really nice and allow for experimentation and mixing and matching, there were a few aspects I am not a huge fan of. These include the limited documentation on the sounds and how they can be used, the fact that you can’t execute line by line but only the whole file at a time, and the fact that you cannot use the cursor to move to another line or select text. However, it was enjoyable software to explore, and the way the text resizes as you type and reach the end of a line keeps it engaging.