Our project includes multiple visual and audio experiences generated with Tidal and Hydra, with MIDI communication handled through a Flok server. As outlined in class, the heart of our project is improvisation and active listening to what is happening in the process.
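For context on the plumbing: Tidal sends MIDI control-change (CC) messages, and Hydra reads them on the visual side. Below is a minimal sketch of that link (not our exact setup; it assumes Tidal's standard "midi" target and the usual browser snippet that exposes incoming CC values as a normalized global cc array, which is what the cc[n] references in our Hydra code rely on):

-- Tidal: sweep controller 0 from 0 to 127 once per cycle
d1 $ ccv (segment 128 (range 0 127 saw)) # ccn "0" # s "midi"

// Hydra: use the normalized value (0..1) to rotate a pattern
osc(10, 0.1, 0).rotate(() => cc[0] * Math.PI).out()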

 

This is how we worked together: we met multiple times and IMPROVISED. For our first session, we started from scratch and just improvised, seeing where it took us. We found a jamming style that worked for our group: splitting into pairs at the beginning and then mixing it up towards the middle and end. This way, we could listen carefully, observe what was happening, and adjust accordingly. Shreya and Dania were on Hydra, while Amina and Thaís were on Tidal. Each of us created simple shapes and beats, which we then built up into a more complex composition. Once MIDI was added and the audio-visual experience was past the beginning stage, each of us had the freedom to choose which part to modify, i.e. switch to Hydra or Tidal, or continue building on the part we were already working on. We also helped each other with confusing code and actively listened to one another to decide what to add and when to trigger it, so the composition stayed cohesive. Each subsequent time we met, we used our past code to experiment with and improvise on (treating it as another team's code that we would be working with). Every time we practiced, we created a unique composition.

 

Below are a few pictures from our practice sessions:

     

The complete set of our works can be found in this repository: https://github.com/ak7588/liveCoding

Our favorite jamming session was the following one, which we will also be presenting in class:

 

P.S. – This was one of the highlights of our semester (<3 <3)!!!!!!! Live coding with others is such an amazing bonding experience: we went in as individuals and came out as a performance group.

And we also came to the conclusion that Tidal-Hydra >>>> Hydra-Tidal. The world came back to its senses. 😉

 

I found this reading very fascinating and relatable. It reminded me of a class I took in New York on Dance, Theatre, and Performance, in which we practiced a lot of Kinetic Awareness. Oliveros's list reminded me of the lists we had in that class:

  1. Feel your breath. Dance to it, with eyes open and closed.
  2. Hum your name. Make it a ritual. React to it.
  3. Hear your thoughts. That is the music you will perform to.
  4. Stand in a circle facing outward. Each of you has to sit, but no two at once. Listen to each other. No talking.

The idea of these practices was to “tune our mind and body,” as Oliveros says. We learned in class to listen to ourselves and our surroundings, and to become more aware of and sensitive to the little things that go unnoticed. My teacher said, “You have to listen, listen carefully before you perform; become aware.” It was like practicing mindfulness, and it was therapeutic! So yes, I agree with Oliveros when she says, “Listening is healing.” She also says, “Listening is directing attention to what is heard, gathering meaning, interpreting and deciding on action.” I jumped when I read this. It is so true; I have tried this. This is what we practiced in our class in New York with those small exercises. We did not just listen; we heard, gathered meaning, and interpreted our surroundings and ourselves before we performed.

What I did not realize was how opening the inner body and mind to the outer world could lead to activism. I was really intrigued by the line “personal is political.” Once you know yourself and your surroundings, you can take a stand and speak up for what is right. Reading this article was inspiring!

This reading was very interesting for me. I did not know that trends observed in the 20th century led to a sort of movement, with artists taking on the role of musicians and musicians taking on the role of artists, producing what the author calls ‘artist-musicians/musician-artists’. I was unaware of how much music inspired artists in their work and vice versa. It was very eye-opening. Music led artists to incorporate its patterns into their artwork, and art in turn led to the development of new musical instruments beyond the chromatic keyboard. The relationship between audio and visuals is much more than just combining the two; it is a relationship in which music and art complement each other and come under one umbrella. I really enjoyed reading how this practice gave rise to a broad variety of cultural activities like sound art, pop music, film and video, etc. The whole idea of breaking boundaries and producing work that defies general categorization was inspiring. I myself try to make work that is multimedia-based and interdisciplinary, and after reading this article, I am just more charged up to do so!

I do question how, as artists, we can best incorporate these multidisciplinary aspects into our work. After going through this reading, I realized these art forms are so closely linked that it might be hard to clearly categorize which genre one is working in. So, is it possible that we are already dealing with different art forms without realizing it? One might already be incorporating one's knowledge of music, dance, or painting into the music, dance, or painting one composes, but be unaware of how one form inspires and contributes to the other. How can we better enhance this relation and intersectionality in our work? From this also stems the question of how important it is to know whether we are working with multiple art forms at a time in order to produce art that is interdisciplinary.

The composition project was really challenging but also extremely fun and enjoyable at the same time. Initially, I struggled a lot with the audio and visuals, but slowly it all finally came together! :)) I started with the sound first: I played around with multiple audio samples, layered them, added variations, and made a list of all the rhythms and patterns I liked. Then I worked on the visuals, trying multiple functions together to create unique and interesting patterns. After collecting various sounds and visuals I liked, it was time to turn all these individual components into a whole composition.

Having no music background, I started by reading about music, composition, and different song structures. The first step was to come up with a beat, a base rhythm on which my composition would be built, and then to layer sounds in and out to shape the piece. I listened to a lot of techno music, video game soundtracks, and other artists to understand how they build up and transition within their songs. The dramatic silence in my composition is inspired by the song Dance Macabre – one of my favorites. The indie-techno vibe and drums are inspired by Coke Studio music.
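To make the layering idea concrete, here is a minimal sketch of it in Tidal (illustrative only, with generic samples – not the actual patterns from my piece):

-- start with a base beat to build on
d1 $ s "bd*4"

-- layer a hi-hat pattern in on a second channel
d2 $ fast 2 $ s "hh*4" # gain 0.9

-- layer it back out when the section ends
d2 silence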

One very important thing I learned through this composition project is something my professor, Aaron Sherwood, says: “Don’t force it.” I had to be very selective in my choices of audio and visuals and only keep the ones that fit my theme. Some sounds and visuals were good individually, but I did not use them if they did not add to the composition as a whole.

As I worked on this project, I took extensive notes – what sounded nice, what needed improvement, which track I was playing and which to bring in next, which sounds I liked, and the overall structure and my thought process for getting it all together. Writing it all down was very helpful, and I also recorded myself so that I could go back to these later if needed. (A few of my scribbled notes can be found above.)

My final live composition project video can be found here:

The Tidal Code for my composition:

-- reset midi values

reset_midi = do {
  d15 $ stack [
    ccv "0*128" # ccn "0" # s "midi",
    ccv "0*128" # ccn "1" # s "midi",
    ccv "0*128" # ccn "2" # s "midi",
    ccv "0*128" # ccn "3" # s "midi",
    ccv "0*128" # ccn "4" # s "midi",
    ccv "0*128" # ccn "5" # s "midi",
    ccv "0*128" # ccn "6" # s "midi",
    ccv "0*128" # ccn "7" # s "midi",
    ccv "0*128" # ccn "8" # s "midi",
    ccv "0*128" # ccn "9" # s "midi",
    ccv "0*128" # ccn "10" # s "midi",
    ccv "0*128" # ccn "11" # s "midi"
  ]
}
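-- aside: an equivalent, more compact form of the reset (a sketch, assuming
-- ccn accepts numeric patterns as in the Tidal MIDI docs):
-- reset_midi = d15 $ stack [ ccv "0*128" # ccn (fromIntegral n) # s "midi" | n <- [0..11] ]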

reset_midi
d15 silence

hush
d15 $ ccv "0*128" # ccn "11" # s "midi"
d15 silence

-- named MIDI CC patterns (these drive the visuals in Hydra)

segment128 = ccv (segment 128 (range 127 0 saw)) # ccn "0"  # s "midi"
alt0127 = ccv "0 127" # ccn "1"  # s "midi"
alt64127 = ccv "64 127" # ccn "2"  # s "midi"
segment4 = ccv (segment 4 (range 127 0 saw)) # ccn "3"  # s "midi"
triplehalf = ccv "127 127 127 64" # ccn "4" # s "midi"
trigger = ccv "127" # ccn "5" # s "midi"
trigger2 = ccv "127" # ccn "6" # s "midi"
trigger3 = ccv "127" # ccn "7" # s "midi"
ripOff = ccv "[0 127 0] [127 0 127] 0 127" # ccn "9" # s "midi"


-- composition parts

base_gtr = s "gtr"
base_drum = s "~  808bd:1" # room 1.5 # sustain 8
tabla1 = s "<tabla:24(3,8)? tabla:6(3,8)>" # gain "1.5 1" -- put gain for first one
tabla2 = fast 2 $ s "<808ht:1(3,8) 808ht:3(5,8)?>" -- why does it have a weird beat in the beginning?
acoustic = fast 2 $ s "<tabla2:8(3,8) sf(5,8)>" # gain "1 0.5"
playful = sometimesBy 0.9 (fast 2) $ "pluck*4" <| n (run 4) # gain 1.3
honeyComb = sometimesBy 0.2 (  scramble 8 ) $ fast 2 $ s "pluck:2" >| note (arp "updown" (scale "pelog" ("[1,2]"+"<0 2 3>") + "c5")) # gain 1

andd_Beginnn = do {
  xfade 1 $ fast 2 $ s "moog:6" >| note (scale "major" "<[1@2 2 3] [1@2 2 1] [1@.5 1@1.5 2 1] [1@.5 1@1.5 2 3]>") # cut 6 # gain (range 0.6 0.75 rand);
  d8 $ qtrigger 8 $ seqP [
    (0, 4, ccv (segment 128 (range 0 127 saw)) # ccn "11"  # s "midi"),
    (4, 6, ccv "127" # ccn "11"  # s "midi")
  ]
}
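-- note on these sections: seqP takes (start, stop, pattern) triples and plays
-- each pattern between those cycle numbers (counted from cycle 0), so wrapping
-- it in qtrigger, which restarts the count at the next cycle boundary, makes
-- each section begin cleanly whenever the block is evaluated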

riseEE = do {
  d9 $ qtrigger 9 $ seqP [
    (0, 2, sometimesBy 1 (fast 2) $ "pluck*2" <| n (run 2) # gain "<1 1.2>"),
    (2, 4, sometimesBy 1 (fast 2) $ "pluck*4" <| n (run 4) # gain "<1.4 1.5>" # room 0.2),
    (4, 8, sometimesBy 1 (fast 2) $ "pluck*8" <| n (run 8) # gain "<1.7 1.8 1.9 2>" # room 0.3)
  ];
  d3 $ qtrigger 3 $ seqP [
    (0, 2, s "bd*4" # gain "<1 1.2>"),
    (2, 4, s "bd:1*8" # gain "<1.4 1.5>" # room 0.2),
    (4, 8, s "bd:2*16" # gain "<1.7 1.8 1.9 2>" # room 0.3)
    --(6, 8, s "bd:4*32")
  ] # room 0.3
}

dRAMAAATIC_SILENCE = do
  hush
  d12 $ trigger2
  d2 $ segment128
  d3 $ fast 1 $ segment4


attack = do
  d2 $ segment128
  d3 $ fast 2 $ segment4
  d8 $ trigger3

are_you_readyyyy = do {
    d6 silence ;
    d3 silence ;
    d1 silence;
    d2 silence ;
    d8 silence ;
    --d12 silence ;
    d4 silence ;
    d5 silence ;
    d11 $ qtrigger 11 $ seqP [
        (0, 1, fast 1 $ "tabla2:24*4" # gain 1 # room 0.2),
        --(0, 1, fast 4 $ ccv "0 127" # ccn "8" # s "midi"),
        (1, 2, rotL 1 $ fast 1 $ palindrome $ degradeBy 0.1 $ s "tabla2:5" <| n "[20 23 22] [15 12 15] [20 12 15] [22 20 23]" # gain ((range 1.1 0.75 rand)) # room 0.3 # gain 1),
        (1, 3, rotR 1 $ fast 2 $ s "tabla2" <| n "[<17 ~@7>]" # room 0.1 # gain 1),
        --(1, 3, fast 32 $ ccv "0 127" # ccn "8" # s "midi"),
        --(2, 24, fast 32 $ ccv "127" # ccn "8" # s "midi"),
        (2, 16, fast 2 $ s "tabla2" <| n "[13 22 13 22,[~ 19]*2,[<17 ~@7>]]" # gain 1 # room 0.1)
        --(2, 24, attack)
    ];
    d12 $ qtrigger 12 $ seqP [
        (0, 1, fast 4 $ ccv "0 127" # ccn "8" # s "midi"),
        (1, 3, fast 32 $ ccv "0 127" # ccn "8" # s "midi"),
        (2, 24, fast 32 $ ccv "127" # ccn "8" # s "midi")
        --(2, 24, attack)
    ];
    attack
}

finalleYYYY = do {
  d9 $ qtrigger 9 $ seqP [
    (0, 8, ccv "0" # ccn "10" # s "midi"),
    (0, 2, sometimesBy 0.9 (fast 2) $ "pluck*2" <| n (run 2) # gain "<1 1.1>"),
    (2, 4, sometimesBy 0.9 (fast 2) $ "pluck*4" <| n (run 4) # gain "<1.2 1.3>"),
    (4, 6, sometimesBy 0.9 (fast 2) $ "pluck*6" <| n (run 6) # gain "<1.5 1.7>"),
    (0, 2, segment4),
    (2, 8, fast 2 $ segment4),
    (4, 6, ripOff),
    (6, 8, fast 2 $ ripOff),
    (4, 6, s "[bd gretsch:9] [bd gretsch:0] gretsch:1 gretsch:1" # gain 2),
    (4, 6, s "~ ~ ~ hh:1" # gain 1.5),
    (6, 8, fast 2 $ palindrome $ degradeBy 0.1 $ s "gretsch" <| n "[20 23 22] [15 12 15] [20 12 15] [22 20 23]" # gain ((range 1.1 0.75 rand)) # room 0.2 # gain 1.5),
    (7, 9, rotR 1 $ fast 2 $ s "gretsch" <| n "[<17 ~@7>]" # room 0.3 # gain 2),
    (8, 9, ccv (segment 128 (range 0 127 saw)) # ccn "10" # s "midi")
  ];
}




-----------------------------------------------------------------------------
-- START PERFORMANCE

do
  d1 $ base_gtr
  d2 $ segment128
  d7 $ trigger3

do
  d3 $ base_drum
  d4 $ alt0127

do
  d5 $ acoustic
  d6 $ alt64127

do
  xfade 7 $ playful
  d8 $ segment4
  d2 silence

do
  clutch 5 $ tabla1
  d9 $ tabla2

d10 $ triplehalf

do
  clutch 7 $ honeyComb
  clutch 5 $ fast 2 $ "hh hh:5 hh:11 hh"
  d11 $ trigger

clutch 9 $ degradeBy 0.2 $ someCyclesBy 0.8 (fast 1) $ fast 2 $ s "pluck:2" >| note (arp "updown" (scale "pelog" ("[1,2,5]"+"<0 6 1>") + "c5")) # gain 1
d3 $ "bd*8 bd*2"

riseEE

dRAMAAATIC_SILENCE

andd_Beginnn

xfade 4 $ fast 2 $ struct "t(5,8)" $ s "dr55" <| n (run (range 10 15 rand))
xfade 5 $ every 4 (fast 2) $ sound "drum*2" # room 0.1
xfade 6 $ fast 2 $ "hh hh:5? hh:11? hh" # room 0.2
xfade 7 $ fast 2 $ "pluck*8" <| n (run 8) # gain 1.2

are_you_readyyyy

finalleYYYY

d7 $ silence

hush

The Hydra Code for my composition:

///////////////////////////////////////////
// Section 1

render(o0)
solid().blend(
  osc(10,0.1,0).color(0.1,0.5,0.3)
    .modulateRotate( osc(10,0.1), ()=>cc[0] )
    .hue(  ()=>cc[2] ).kaleid(10)
    .modulate(o0, ()=>(cc[1]*0.5) )
    .modulate( osc(1,()=>cc[2]*15) )
    .repeat(()=>cc[3]*4+1,()=>cc[3]*4+1)
    .kaleid(5)
    .blend(shape(100)
      .rotate(() => Math.PI /180)
      .repeatX(3)
      .repeatY( ()=>Math.sin(time)*4 )
      .scale(() => Math.PI/4)
      .repeat(()=>cc[3]*4,()=>cc[3]*4)
      .blend(src(o0).color(0.2,1,0))
      .modulate(osc([4,15,25,50].fast(8), 0,.4))
      .kaleid(10), ()=>cc[4])
    .blend( shape( ()=> cc[3]+4 ,0.000001,[0.2,0.7].smooth(1))
      .color(0.2,0.4,0.3)
      .scrollX(()=>Math.sin(time*0.27))
      .add(
        shape( ()=> cc[3]+4, 0.000001,[0.2,0.7,0.5,0.3].smooth(1))
        .color(0.6,0.2,0.5)
        .scrollY(0.35)
        .scrollX(()=>Math.sin(time*0.33)))
      .add(
        shape( ()=> cc[3]+4 ,0.000001,[0.2,0.7,0.3].smooth(1))
        .color(0.2,0.4,0.6)
        .scrollY(-0.35)
        .scrollX(()=>Math.sin(time*0.41)*-1))
      .modulate(voronoi(10,2,2)).rotate(()=>cc[3],2), ()=>cc[5] )
    .modulateRotate(src(o0)).modulateRotate(src(o0))
    .modulateRotate(src(o0)).modulateRotate(src(o0))
    .blend(solid(), ()=>cc[6] )
    ,()=>cc[7])
  .out()

///////////////////////////////////////////
// Section 2

solid().blend(
  osc( ()=>cc[3]*100 ,0.2,1)
  .mult(osc(20,-0.1,1).modulate(noise(3,1)).rotate(0.7))
  .luma(()=>cc[0])
  .mult(osc(5,2,1))
  .modulateRotate( noise(1)  )
  .modulateRotate( o1,1  )
  .blend(o0).blend(o0)
  .rotate(()=>cc[0],()=>cc[3])
  .blend (osc(10,0.1,()=>cc[0])
    .modulateRotate(osc(10,0.1).rotate(),()=>cc[3])
    .hue(0.2).kaleid(4).modulateRotate(noise( ()=>cc[3]*10 ))
    .color(()=>cc[0],0,0.5)
  , ()=>cc[8])
, ()=>cc[11])
.out()

///////////////////////////////////////////
// Finale

osc(10,0.1,1)
    .modulateRotate(osc(10,0.1).rotate(),1)
    .hue(0.2).kaleid(4).modulateRotate(noise( 1*10 ))
    .color(1,0,0.5)
    .out()
shape(30,0.3,1).invert(()=>cc[9]).out(o2)
noise(()=>cc[3]*4,0.2,3)
  .thresh(0.3,0.03)
  .diff(o2).out(o1)
gradient([0.3,3].fast(4)).mult(o0).blend(o1)
  .modulateRotate(o3).modulateRotate(o3).modulateRotate(o3)
  .blend(solid(), ()=>cc[10])
  .out(o3)
render(o3)

hush()
render(o0)

///////////////////////////////////////////

 

Heyy guys! Some of you asked me how I did the visuals for my previous weekly assignment. Here is the code:

src(o0).modulateHue(src(o0).scale(1.01),1).layer(osc(10,.1,2).mask(osc(200).modulate(noise(3)).thresh(1,0.1))).out()

I used the above code to transition from my previous visual. The one below is connected to MIDI:

osc(10,.1,2).mask(osc(200).modulate(noise(()=>cc[0]*3+1)).thresh(()=>(cc[0]),0.1)).rotate().out()

Altering the thresh values and the noise level made for interesting patterns!
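For example, here are two variations to try (the values are arbitrary starting points, not the exact ones I used):

// wider threshold tolerance -> softer, blobbier edges
osc(10,.1,2).mask(osc(200).modulate(noise(2)).thresh(0.5,0.4)).rotate().out()

// higher-frequency noise with a tight threshold -> fine, flickery detail
osc(10,.1,2).mask(osc(200).modulate(noise(8)).thresh(0.8,0.02)).rotate().out()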

Also, I wondered what the difference was between layer and mask. While coming up with the above, I discovered that layer and mask can be used interchangeably in the following way:

// the following two lines produce the same output
osc(10,.1,2).layer(osc(200).luma(0.5,0.8).color(0,0,0,1)).out()
osc(10,.1,2).mask(osc(200)).out()

The above two lines of code produce the same output. Hope this was helpful! 🙂

I personally did not like this reading much. I felt it was very abstract, and the speaker, Paul Miller, goes back and forth about improvising and the digital community, giving us so many different examples and ways to think about it that the discussion loses its main focus. Miller talks about magic, film, memory, recordings, economics, and commerce but does not clearly connect them to the context of improvisation. There is too much to keep track of in the article, which I believe digresses from the main point, confuses the reader, and makes the dialogue hard to follow. The moderator and Iyer keep trying to bring Paul back to tying this to the idea of the digital community.

 

Having said that, there are a few lines I found particularly interesting. After reading the article twice, the definition of improvisation I took away was: “Improvisation is navigating an informational landscape.” This is why learning how to just be in this world is also classified as a primal level of improvisation – quite an interesting idea, actually! Putting it into the digital context, Miller says, “For me as an artist, digital media is about never saying that there’s something that’s finished. Once something’s digital, anything can be edited, transformed, and completely made into a new thing.” I never thought about digital media that way. Seen like this, there is so much scope for improvising in digital culture. Everything online is a record, and everyone listens to records, which are essentially just documents or files. Improvising with this cascade of information is what converts raw data into something useful – making it a form of improvisation in the digital community.

For my research project, I chose the software Max to explore and experiment with. It is developed by a San Francisco-based software company called Cycling ’74 and is written in C and C++ on the JUCE platform. It has been used by composers, performers, software designers, researchers, and artists to create recordings, performances, and installations.

So, what is Max?

Max is a node-based visual programming language that allows its users to code live with ease. Max can be used to play around with (mix, layer, distort, corrupt) audio and video. Programming in Max produces a tool – an interactive piece of software – which you can then use to tweak the parameters of your patch and perform live. Max also has features for connecting hardware such as synthesizers and MIDI controllers to your program, which can assist you in a live performance. Max has two main parts: MSP and Jitter. MSP is used for anything related to audio, and Jitter is used for video.
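To give a flavor of what a patch looks like, here is the classic minimal MSP example, sketched in object-box shorthand (three boxes connected top to bottom – a sketch, not a screenshot of my actual patch):

[cycle~ 440]   <- a 440 Hz sine-wave oscillator (the ~ marks a signal object)
      |
[*~ 0.1]       <- scale the signal down to a safe listening level
      |
[ezdac~]       <- the audio output object (doubles as an on/off toggle)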

The reason I found Max interesting and different from what we have been doing in class is the interface. Being node-based, its GUI is very user-friendly, and since a patch follows a visual flow, it is easy for the audience to follow and understand as well. Max's presentation mode makes it unique: after programming everything, you can use buttons, toggles, etc. to set up controls and organize your panel so that it is performance-ready. Instead of writing each line of code from scratch, Max lets you set your program up as an instrument whose usage you can demonstrate live. In the context of live coding, Max lets the artist/creator build a toolbox for themselves (like a DJ mixer).

I thoroughly enjoyed working on this project and learned a lot. There is so much to live coding, and I am just beginning to explore it. Max is also well documented (just like Processing), which was very helpful for delving deeper into how the objects work.

Here is my final video of Live Coding performance with Max:

Pictures of my code can be found below: