Hi, flock-ers! Or should we say, hackers? Happy finals season — enjoy our documentation:

How We Started, aka the Chaotic Evil:

During the first two weeks of practice, we approached the final project with a drum circle mindset. For every meeting, we would split into Hydra (2 ppl) and Tidal (2 ppl) teams and improvise, creating something new and enjoying it. When it came time to show the build-ups and drops, we struggled, because we had a lot of sounds and visuals going on separately, but not in one sequence. One evening, we created a mashup that later turned into the music for our first build-up and drop, yet without cohesive visuals or any other connecting tissue.

How We Proceeded, aka Narrowing the Focus:

A week later, Amina and Shreya were still improvising with Tidal, perfecting the first build-up and drop along with composing the second one, while Dania and Thaís were working on visualizing the first build-up and drop. One moment, Shreya was modifying Amina’s code, and a happy accident happened. That turned into our second drop, with a little magic of chance at 11 PM in the IM Lab.

At the same time, we also decided to narrow our focus to only certain sounds and visuals, critically choosing the ones that would fit our theme and not sound disjointed or chaotic.

The Connecting Tissue, aka Blobby and Xavier:

While working on the visuals, we decided to use simple shapes to make them as engaging and as aesthetically pleasing as possible. We narrowed down the visuals we made during our meetings into two families, circles and lines, which we later decided to name Blobby and Xavier respectively. The choice of growing circles was inspired by our dominant sounds, ‘em’ and ‘tink’, in the first build-up. When we thought of these sounds, Blobby was the visual we imagined. Similarly, Xavier was given its form. Dania and Thaís came up with these names.

Initially, we wanted to tell the story of the interaction between Blobby and Xavier, but the sounds we had did not quite match the stories we had in mind. From there, we started to experiment with different ways to convey a story that featured both Blobby and Xavier. Since we already had the sound, we started thinking about which visuals looked best with it, and then it all started coming together almost too perfectly.

We had the sounds and visuals for our two build-ups and drops, but we needed some way to connect them. Because Blobby and Xavier had no connection with each other, we tried different ways of linking them so the composition would look cohesive. This is when we decided to stick with one color scheme throughout the entire composition. We chose a very bright and colorful palette because that’s the vibe we got from the sounds we created. To transition from Blobby to Xavier, Dania came up with the idea of filling the entire screen with circles after the first drop. The circles would then overlap and create what looks like an osc() pattern, which we could slowly shrink until it became a line that could then be modulated to get to Xavier. Although this sounded like a wonderful idea, it was a painful one to execute. But in the end, we managed to do it and the result was absolutely lovely <3
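
In Hydra terms, the transition idea looks roughly like this (a simplified sketch with made-up names, not our actual performance code):

// step 1: flood the screen with circles; the overlaps start to read like osc() stripes
circleFlood = () => shape(64, 0.4, 0.1).repeat(8, 8).add(osc(10, 0.1, 1), 0.5).out()

// step 2: a band that thins into a line over time, which can then be modulated into Xavier
lineBirth = () => shape(2, () => Math.max(0.01, 0.8 - time * 0.05), 0.1)
  .modulate(osc(6, 0.1), () => Math.min(0.5, time * 0.02))
  .out()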

Guys, let’s…, aka Aaron’s “SO”:

As we were playing around with the story behind Blobby and Xavier, Shreya and Dania came up with an idea: “Guys, what if we add Aaron’s voice into the performance?” Of course we could not resist, especially when Thaís happened to have a few of his famous lines recorded. This idea quickly became the highlight of our performance and gave us a way to transition between the different build-ups and drops we created, while also adding some spice and flavour to the performance.

The Midi Magic aka How It All Came Together:

We had the sounds and the visuals, but we still needed some way to connect the two. This is where the MIDI magic happened. Because we had a slow start to the music, we decided to map each sound to a visual element using MIDI, so that each sound change could be accompanied by a visual change and it wouldn’t get monotonous on either the sonic or the visual side. But after the piece builds up, we thought it would be too much to have each sound correspond to its own visual element, so we grouped the sounds into sections: for example, all the drum-like sounds would correspond to one specific visual element, so clubkick and soskick would both modulate Blobby instead of one of them modulating it and the other having some other effect. We also thought it would be better to have the dominant sounds trigger the biggest visual changes, something that was inspired by the various artists’ work we saw in class. We applied the same concept to Xavier.

While linking the sounds with the visuals, we also put a lot of thought into what the visual effect of each sound should be. We used MIDI to map the beat of the sounds to the visual changes and also to automate the first half of the visuals, and somehow we ended up using around 25 different MIDI channels (some real MIDI magic happened there).
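
Concretely, one mapping looked something like this, with Tidal sending the control values and Hydra reading them back (a minimal sketch; the channel number and names are illustrative):

-- Tidal side: send control values alongside the kick pattern
-- d10 $ ccv "0 64 127 64" # ccn "0" # s "midi"

// Hydra side: clubkick and soskick both land on cc[0], which drives Blobby's size
blobby = () => shape(64, () => 0.2 + 0.5 * cc[0], 0.1).out()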

one_min_pls(), aka the story of our composition:

Once we had our composition ready, it was time for the performance; after all, it is live coding! So we had to decide how we wanted our composition to look on screen and how to call the functions or evaluate the lines, while also telling some kind of story. One thing all of us were super keen on was for it to have a story, not just random evaluation of lines. After much discussion, we decided to make the composition a story of the composition itself: how it came to life and how we coded it. To do this, we made the performance a sort of conversation between us, where a function name would sometimes correspond to something we would usually say while triggering that specific block of code (e.g. i_hope_this_works() for p5, because it would usually crash) and other times would be named after what we were saying at the time (e.g. i_can_see_so()). Because the function names were based on our conversations, it was really easy (and fun) to follow and remember; all we had to do was respond to each other as we usually would. It was a grape 🍇 bonding experience.
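
For instance, a named block might have looked like this (illustrative only, including the name and channel number, not a line from our actual script):

guys_listen_to_this = do
  d1 $ s "clubkick*4"                  -- bring the kick in
  d13 $ ccv "1" # ccn "3" # s "midi"   -- and flip the visuals at the same time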

Reflection, aka CAUTION: the amount of cheese here is life-threatening:

Our group was very chaotic most of the time, but that somehow seemed to work perfectly for us, and we’re glad we were able to showcase some of this chaos and cohesiveness through our composition. Our own personalities are very prominent in it. Every time we see a qtrigger, we think of Shreya. A clubkick reminds us of Thaís, and the drop reminds us of how we accidentally made our first drop very late at night and couldn’t stop listening to it. The party effect after the drop reminds us of Amina (we’re not sure why?), and every time we hear our mashup we start dying from laughter. At times we could even see the essence of ourselves through this composition. What we liked most about it is that we would usually get excited to work on it; it didn’t feel like a chore but rather like hanging out with friends and jamming.

P.S: 

The debate has finally been settled by vote from the class, hydra-tidal supremacy hehe

– someone who is obviously wrong

No it hasn’t, I yielded because of Shreya’s mouse, nothing else. 

– the voice of reason

Documentation video of our (live) performance:

Documentation video of our (not SOOO live) performance:

Final Hydra script of our performance:

https://github.com/daniaezz/liveCoding/blob/main/aaaaaaaa.js

Final Tidal start-up code of our performance:

https://github.com/daniaezz/liveCoding/blob/main/finalCompositionFunctions.tidal

The evolution of our code can be seen here: https://github.com/ak7588/liveCoding

I had read some of Oliveros’s Meditations before for a class; however, I really appreciated that this reading gave some context as to how the exercises came to be. I would never have thought of creating music as a way to practice healing and meditation through active listening. For me, music is usually the result of some sort of meditative process: the composer goes through some realization or feeling that in turn becomes inspiration for music. But Oliveros saw making music as an inspiration for meditation, which flipped that around in my head.

It was also interesting to see how this practice of Sonic Meditation created communities. We have talked in class about drum circles and about live coding communities, so music as a means to create community is a recurrent theme in our readings. However, this specific method is not just a response to political situations that people wanted to discuss, but rather an exercise in learning how to listen to those situations and topics. This act of listening then leads toward a communal healing process, which I found fascinating.

When talking about accepting others, especially minorities, Oliveros mentions that “Healing can occur… when one’s inner experience is made manifest and accepted by others”. Throughout the reading, we come back to this idea of learning to actively listen to a performance, something very important for the audience to do. Oliveros makes a point of how healing and meditation require an audience. It can be just one individual in the initial stages, but it grows into an audience that must learn how to listen. Her techniques empower individuals to speak up when they are ready and explain to the audience how to engage with what they are listening to. Her philosophy promotes the type of audience needed for a communal healing process: an audience that actively listens.

Hello everyone!

For personal reasons, I had to be off campus and I just returned, so I couldn’t do the presentation live. I am attaching a video of my short performance here. I was trying to see how P5.js could be combined with things we already know from Hydra to create animations.

This is the performance

It was pretty fun to do. I think the music is pretty basic, but it’s supposed to be just a base; the visuals are the actual show.

One problem I was having with P5.js was that sometimes the previous sketch would stay in the channel. It was quite frustrating at the start, but afterwards I was able to use it as a way to create.
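
For reference, this is roughly the setup I was using (a minimal sketch, assuming the Hydra editor’s bundled P5 wrapper; the names are mine):

p1 = new P5()               // a p5 instance drawing to its own canvas
s0.init({src: p1.canvas})   // pipe that canvas into Hydra as source s0
p1.draw = () => {
  p1.background(0, 20)      // low-alpha clear keeps traces of previous frames
  p1.circle(p1.width / 2 + 100 * Math.sin(time), p1.height / 2, 80)
}
src(s0).modulate(osc(4, 0.1), 0.2).out()  // then treat the sketch like any other source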

I had never consciously realized that the terms used for musical and visual “composition” are actually the same. It is only now that I have read a bit of the history of how these terms came to be that I made the connection. But to be honest, it makes sense. Music is made of waves and so is color, one of the components of visuals; both mediums use waves as a sub-medium. The focus of artists and musicians on dealing primarily with the material then makes sense in this context. They do not have to worry about linking different types of art with one another, because the arts are similar in nature and link naturally through the media they share (sound waves for music, light waves for color).

On the other hand, I really resonated with the observation that artists change their medium for “purely pragmatic reasons”. I remember that in high school I wanted to be in both the school band and the art club; however, the professors who managed the two groups forced students to commit to only one of them, regardless of whether a student had enough time for both. I always felt quite discouraged by this because it forced me to choose one “box” rather than letting me explore both mediums. I believe many schools take that narrow approach, and I wonder whether we would see much more significant artworks and performances from students at regular schools (not art schools) if we did not push them to encapsulate themselves in one medium. Perhaps the art or performances wouldn’t be the best, but I believe it would certainly improve children’s overall artistic practice and creative process. This ties in with the concept that anyone can make noise and thus create a sound composition; at the same time, anyone can create a visual and thus make a visual composition.

It seems that one of the main themes the artists in this reading share is that they go back to the very basics of each medium and, in a way, question the established artistic and musical norms to make these artistic practices accessible to people. It would be interesting to see this applied not just in colleges or formal art institutions but in public schools.

Concept:

For this assignment, I tried so many things that were not coming together. I had some very interesting visuals and I spent a lot of time crafting melodies (I even used the piano in class to try out new patterns), but they were not coming together as a whole. Then, last week, I had a very deep reflection on water and how we usually associate the sound of water with something relaxing, even though water can be very dangerous and wild depending on the context. To me, water can be very relaxing but also very anxiety-inducing under certain conditions. It was a deeply personal moment of introspection that inspired me to create water-like visuals and then come up with a soundscape to match them.

I was really interested in exploring my relationship with water, and I came up with a story for that. I decided to use lines of the story as the names of my functions in TidalCycles. I personally really enjoyed this idea and thought it was one of the most appealing aspects of my presentation.

Music:

For the music, I wanted to emphasize the calmness of the ocean and then build it up to a very chaotic place, reminiscent of a storm.

The music was the trickiest part of the assignment. I was very locked into the idea of traditional composition, but even though I was able to come up with very nice melodies, I could not come up with beats that matched them well. It was very hard to get the two to match, and I spent a lot of time working on sound elements that I ended up not incorporating into my presentation.

After narrowing down a theme for my performance and coming up with the story I wanted to tell, I actually threw away my more traditional composition approach and just started playing around with sounds that I found appealing and with ways to merge them into the rest of my music.

I think music was where I spent the bulk of my time. It was very hard, but I tried to keep my code organized, and that really helped me tailor the kind of sound experience I wanted to create. I also loved the mask function in TidalCycles. I watched this performance by Dan Gorelick, who uses mask in a very playful way, and I thought it would be a nice way to live code while still producing an outcome that stays synced to the rest of the music. It was very fun to try mask out and come up with different beats.
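
The basic shape of mask is simple: it takes a boolean pattern and only lets the underlying pattern sound where the booleans are true. A toy example (not a line from my piece):

d1 $ mask "t f t t" $ s "clubkick*8"  -- kicks are silenced during the second quarter of the cycle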

I am very happy with the organization of my code. Maybe it won’t make much sense to someone else reading it, but I understand very well how I put things together and how I used variables and functions in the code.

by_The_Ocean

lived_A_Man

who_Manned

d9 $ a_lighthouse 1 "t t t t"

but_he_didnt_like

the_Ocean

because_it_was_beautiful

but_dangerous

so_when_the

storm_hits

he_has_to

go_up

_the_lighthouse

and_sound

the_alarm

to_warn_people

when_the_storm_stops

the_man_repeats_this

until_the_ocean_is_silent

hush


-- Evaluate before performance

-- to send cc values
do
  d11 $ ccv "20 40 64 127" # ccn "0" # s "midi"
  d12 $ ccv "0 20 64 127" # ccn "2" # s "midi"

but_he_didnt_like = do
  d1 $ slow 2 $ s "wind" <| note ("bf a g d") # room 0.5
  d2 $ slow 2 $ s "newnotes" <| note ("bf a g d") # room 0.5
  d3 $ slow 2 $ s "gtr" <| note ("bf a g d" + 12) # room 0.5
  d13 $ ccv "2" # ccn "3" # s "midi"

the_Ocean = do
  d1 $ every 4 (+ "c6 as a as") $ slow 2 $ s "wind" <| note "bf a g d" # room 0.5
  d2 $ every 4 (+ "c6 as a as") $ slow 2 $ s "newnotes" <| note "bf a g d" # room 0.5
  d3 $ every 4 (+ "c6 as a as") $ slow 2 $ s "pluck" <| note ("bf a g d" + 12) # room 0.5
  d4 $ every 4 (+ "c6 as a as") $ slow 2 $ s "pluck" <| note ("bf a g d") # room 0.5
  d11 $ ccv "20 40 64 127 [0 10] [70 50] [90] [100]" # ccn "0" # s "midi"
  d13 $ ccv "3" # ccn "3" # s "midi"

because_it_was_beautiful = do
  d5 $ slow 2 $ s "em2" <| note ("[e5 d5 ~] [c5 b4 ~] [a4 g4 ~] [f4 e4 ~]") # gain 1.2
  d6 $ slow 2 $ s "wind" <| note ("[e5 d5 ~] [c5 b4 ~] [a4 g4 ~] [f4 e4 ~]")
  d7 $ slow 2 $ s "newnotes" <| note ("[e5 d5 ~] [c5 b4 ~] [a4 g4 ~] [f4 e4 ~]" + 12) # gain 1.2
  d13 $ ccv "4" # ccn "3" # s "midi"

but_dangerous = do
  d1 $ n (arpg "'major7 [0,4,7,11]") # sound "wind"
  d2 $ n (arpg "'major7 [0,4,7,11]") # sound "newnotes"
  d3 $ n (arpg "'major7 [0,4,7,11]" - 12) # sound "pluck"
  d4 $ n (arpg "'major7 [0,4,7,11]" - 38) # sound "pluck"
  xfade 5 silence
  xfade 6 silence
  xfade 7 silence
  d13 $ ccv "5" # ccn "3" # s "midi"

-- strings
so_when_the = do
  xfade 3 $ slow 2 $
    n (off 0.25 (+12) $ off 0.125 (+7) $ "c(3,8) a(3,8,2) f(3,8) e(3,8,4)")
    # sound "pluck"
  d7 $ s "hh*8"

-- storm is coming
storm_hits = do d1 $ jux rev $ s "sine*8" # note (scale "iwato" "0 .. 8" + "f3") # room 0.9 # gain 0.6

he_has_to = do
  d1 $ n (arpg "'major7 [0,4,7,11]") # sound "trump"
  d13 $ ccv "6" # ccn "3" # s "midi"

--storm
go_up = do
  d1 $ slow 2 $ s "superpiano" <| note ("[e5 d5 ~] [c5 b4 ~] [a4 g4 ~] [f4 e4 ~]")
  d2 $ slow 2 $ s "pluck:2" <| note ("[e5 d5 ~] [c5 b4 ~] [a4 g4 ~] [f4 e4 ~]" + 24) # gain 1.2
  d4 $ slow 2 $ s "gtr" <| note ("[e5 d5 ~] [c5 b4 ~] [a4 g4 ~] [f4 e4 ~]" + 24)
  d5 $ slow 2 $ s "em2" <| note ("[e5 d5 ~] [c5 b4 ~] [a4 g4 ~] [f4 e4 ~]") # gain 1.2
  d6 $ slow 2 $ s "wind" <| note ("[e5 d5 ~] [c5 b4 ~] [a4 g4 ~] [f4 e4 ~]")
  d7 $ slow 2 $ s "newnotes" <| note ("[e5 d5 ~] [c5 b4 ~] [a4 g4 ~] [f4 e4 ~]" + 12) # gain 1.2
  d8 $ slow 2 $ s "notes" <| note ("[e5 d5 ~] [c5 b4 ~] [a4 g4 ~] [f4 e4 ~]"+ 12) # gain 1.2
  d13 $ ccv "7" # ccn "3" # s "midi"

_the_lighthouse = do d1 $ fast 2 $ s "hh*2 hh*2 hh*2 <hh*6 [hh*2]!3>" # room 0.7 # gain (range 1 1.2 rand)

and_sound = do
  d8 $ a_lighthouse 1 "t t t t"
  d13 $ ccv "8" # ccn "3" # s "midi"

-- hi-hat beat
the_alarm = do
  xfade 5 $ alarm
  d13 $ ccv "9" # ccn "3" # s "midi"

-- maybe the alarm
to_warn_people = do xfade 9 $ slow 2 $ n (off 0.25 (+12) $ off 0.125 (+7) $ "c(3,8) a(3,8,2) f(3,8) e(3,8,4)") # sound "trump"

-- After buildup
when_the_storm_stops = do
  d1 silence
  d2 silence
  d3 $ s "sax" # note (arp "updown" (scale "major" ("[0,2,4,6]" + "<0 0 9 8>") + "f5"))
  d4 silence
  d5 silence
  d6 silence
  d7 silence
  d8 silence
  d9 silence
  d13 $ ccv "10" # ccn "3" # s "midi"

alarm = stack [
  s "~ trump" # room 0.5,
  fast 2 $ s "gtr*2 gtr*2 gtr*2" # room 0.7 # gain (range 1 1.2 rand)] # speed (slow 4 (range 1 2 saw))

-- Ending

the_man_repeats_this = do
  xfade 1 $ s "em2:2" <| note ("c [d e] c")
  d3 silence
  d13 $ ccv "11" # ccn "3" # s "midi"

until_the_ocean_is_silent = do
  xfade 1 silence
  d13 $ ccv "12" # ccn "3" # s "midi"

-- Ambience
by_The_Ocean = do
  xfade 15 $ slow 4 $ s "sheffield" # gain 0.7 -- ambience
  d14 $ slow 8 $ s "pebbles" # gain 0.7 -- very long, maybe pebbles on a beach
  d3 $ slow 8 $ s "birds" <| n (run 4) # legato 1.8
  d13 $ ccv "0" # ccn "3" # s "midi"

lived_A_Man = do
  d3 silence
  d3 $ qtrigger 3 $ slow 8 $ s "em2:2" <| note ("[b ~] [a ~] [g ~] [e ~]")

who_Manned = do
  d1 $ s "bd bd cp ~"
  d13 $ ccv "1" # ccn "3" # s "midi"

a_lighthouse speed t = slow speed $ mask t $ sound "clubkick clubkick clubkick [clubkick clubkick clubkick]"

 

Visuals:

The visuals were very fun to make. Again, a lot of the things I made were not used, but I did notice a pattern in the types of animations I was creating, which were very similar to the way water behaves. I decided to narrow down my visuals to those that reflected water behaviour (both the calm and the storm).

While I was practising, I was struggling a lot with triggering things in Tidal and then jumping over to Hydra, and vice versa. It was very overwhelming because of how much code I had. So, after trying to optimize my triggering skills, I decided to instead automate the visuals by using the update() function and controlling which visual is shown through one of the cc value arrays (channel 3). This took away the pressure of modifying things in both files, so I could just focus on the music, and that would change the visuals without me having to touch the js file.

To be honest, I really thought that I was being selective with my visuals, because I narrowed them down to a theme and, even then, did not use all of my water-related animations. However, I see now that it was not enough and I should have trimmed the visuals down further. It would have been better to dedicate more time to how the cc[] values could morph a visual into something completely different.

update = () =>
{
  if(ccActual[3] == 0) ocean() //by the ocean
  if(ccActual[3] == 1) sparklingWater() // who Manned
  if(ccActual[3] == 2) drop()// he didn't like
  if(ccActual[3] == 3) honda() //ocean
  if(ccActual[3] == 4) waterwheel() //beautiful
  if(ccActual[3] == 5) psychoWave() //dangerous
  if(ccActual[3] == 6) wave()// he has to
  if(ccActual[3] == 7) splash()//go up
  if(ccActual[3] == 8) storm()//sound
  if(ccActual[3] == 9) hurricane() // the alarm
  if(ccActual[3] == 10) dripple() //sax
  if(ccActual[3] == 11) ocean() // repeats
  if(ccActual[3] == 12) hush() //silent
}

//THINGS THAT WORKED

//wavey water
// b = 0
// update = () => b += 0.001 * Math.sin(time)

wave = ()=> voronoi(()=>ccActual[0],0,0).color(.2,0.3,0.9)
  .modulateScrollY(osc(10),0.5,0)
  .out()


//storm
storm = ()=> shape(2).mask(noise(10,0.1)).color(1,0,0).invert(1).rotate(()=>6+Math.sin(time)*cc[0]).out()

drop = ()=> shape(2).mask(noise(cc[0],0.1)).color(1,0,0).invert(1).rotate(()=>6+Math.sin(time)*cc[0]).out()

hurricane = ()=> voronoi()
  .add(gradient().invert(1))
  .rotate(({time})=>(time%360)/2)
  .modulate(osc(25,0.1,0.5)
              .kaleid(50)
              .scale(({time})=>Math.sin(time*cc[0])*0.5+1)
              .modulate(noise(0.6,0.5)),
              0.5)
  .out()


splash = ()=>gradient().color(1,0,1).invert(1).saturate(5)
  .add(voronoi(50,0.3,0.3)).repeat(10,10).kaleid([3,5,7,9].fast(cc[0])).modulateScale(osc(4,-0.5,0),15,0).out()

dripple = ()=> gradient().invert(1).kaleid([3,5,7,9].fast(0.5))
.modulateScale(osc(cc[2],-0.5,1).kaleid(50).scale(0.5),10,0)
.out()

drippleActual = ()=> gradient().invert(1).kaleid([3,5,7,9].fast(0.5))
.modulateScale(osc(()=>ccActual[2],-0.5,1).kaleid(50).scale(ccActual[0]),10,0)
.out()

//wavy pattern
psychoWave = ()=>shape(4,0.95)
    .mult(osc(0.75,1,0.65))
    .hue(0.9)
    .modulateRepeatX(
      osc(10, 5.0, ({time}) => Math.sin(time*(cc[0])))
    )
    .scale(10,0.5,0.05) //first value could be a cc
  .out()

// honda
honda = ()=> osc(10,-0.25,1)
    .color(0,0.1,0.9)
    .saturate(8).hue(0.9)
    .kaleid(50)
    .mask(
        noise(25,2)
        .modulateScale(noise(0.25,0.05))
      )
    .modulateScale(
      osc(6,-0.5,()=>cc[2])
      .kaleid(50)
    )
    .scale(()=>cc[2],0.5,0.75)
    .out()

// background of house
waterwheel = ()=> noise(10, 0.1).color(1,0.5,0.5).invert([1]).add(shape(2,0.8).kaleid(()=>6+Math.sin(time)*4)).out()


sparklingWater = ()=> noise(25,2).invert(0).color(0.7, 0.9, 1).saturate(4).modulateScale(noise(()=>cc[2],0.05)).out()

ocean = ()=> osc(1,1,0).color(0,0,1).hue(0.9).modulatePixelate(noise(5,1),()=>cc[0]).out()

 

Performance:

Link to Performance
My computer slows down a lot when I try to record the audio from SuperCollider, so I decided to just screen-record.

Reflection:

I felt like this assignment made me really grasp how things work in TidalCycles. I spent a lot of time going through the documentation and trying things out. Hydra was more about experimenting, but I liked the idea of using update() as a way to automate certain things so that I didn’t have to worry about triggering things in both Tidal and Hydra. My understanding of both was really reinforced and expanded.

However, I do think that I could have been more selective with the visuals and maybe should have focused more on transitions. Because I tried out so many things that I ended up not using, I felt that as long as the visuals were coherent with each other in theme and palette, my performance would be good. I was a bit blinded by how much I had done versus how much I should actually be showing in my presentation.

I really liked some of the visuals I created, and I would have liked to do more with how the visuals interact with the cc values: not just triggering and scaling things, but transforming the visuals themselves. I also thought that automating certain things was a smart way to help myself, but I think it ended up downgrading the overall experience of the performance.

I practised a lot for this assignment, and I think that gave me a skewed sense of how well the performance was working compared to my previous run-throughs. Now I see that just because a performance is better than my earlier practice runs doesn’t mean it’s good overall. There were a lot of things I was missing because I was worried about the wrong things. Looking back on the performance more clearly, I can definitely see that my concept was good, but the execution was not as well planned as I had thought, and the result fell short of what I imagined.

I think that maybe I should have put more of myself out there in the performance; though the topic was very much my own, the execution could have been better. I really like the idea of combining my artwork with Hydra for the visuals of my performances, and I want to explore that in this second half of the semester.

Hello everyone!

I remembered Shreya asked about the off function in my performance. Here it is. You can play with the numbers and it does some very interesting stuff; you are basically offsetting the pattern in time.

d3 $ jux rev $ off 0.25 (|+ n 12) $ off 0.125 (|+ n 7) $ n "" # sound "supermandolin" # legato 4
d4 $ ccv (stitch "" 127 0) # ccn 0 # s "midi"

Kodelife is a real-time GPU editor that allows you to create shaders. It was built for rapid prototyping, without heavy software or build-and-compile toolchains. Kodelife’s main programming language is OpenGL’s GLSL, but it can also be used with platform-specific shading languages such as the Metal Shading Language and DirectX HLSL.

Kodelife was designed to be a teaching tool for beginners, but it was also built to an industry standard that experienced shader developers could work with. However, Kodelife couldn’t keep up with modern shader engines, so it was reframed as a prototyping tool for developers and a platform for live coding visuals.

The editor runs your code in real time, without any need to evaluate functions. You mainly write fragment shaders that use vector math to create textures, which can then be modified to make visuals. Kodelife also comes with a panel that manages the inputs and outputs the program can receive, which can be configured in the preferences.
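
To give a sense of scale, a complete Kodelife fragment shader can be as small as this (a minimal sketch; the time and resolution uniforms follow Kodelife’s default template, so treat the exact names as assumptions):

#version 150
uniform float time;
uniform vec2 resolution;
out vec4 fragColor;

void main() {
    vec2 uv = gl_FragCoord.xy / resolution;       // normalized pixel coordinates
    vec3 col = vec3(uv, 0.5 + 0.5 * sin(time));   // a color vector animated over time
    fragColor = vec4(col, 1.0);                   // this is the texture you see, live
}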

Cool Things About Kodelife:
– Code evaluates automatically, no need for any commands
– GLSL is C-based, so it’s easy to pick up
– Very easy to control and create variables
– Very easy to get input and output from other sources
– Has a MIDI bus
– Flexibility to how simple or complex projects can get
– Can easily be used on the web with the GlslCanvas library
– Has a mobile editor app on Android and iOS

Downsides
– Free version always asks if you want to purchase a licence
– Since code evaluates automatically, if something crashes it takes time to figure out what caused it
– You can’t have more than one project open
– No actual documentation
– Sometimes the code loads after you type
– Could be too mathematical for some people
– Lots of stuff going on in the backend that you can’t control
– Mobile app is paid

I found Kodelife to be quite user-friendly. The documentation does not really teach you how to use the software, so I mainly learned from YouTube tutorials, but once you get the hang of it, it becomes very easy to experiment.


Before trying Kodelife, I actually spent a lot of time trying to use other live coding platforms. However, I had a lot of issues since many of them are either not updated or work only on specific computer models.

Comments on other Live Coding Platforms:
– Tidal-Unity doesn’t work anymore because it was created with Unity 5.4, which is very old (pre-2016)
– Tidal-Unity is also missing some declarations in the code, so the OSC does not work correctly
– Gideon is pretty bad
– Cyril doesn’t have a working version for M1 Macs
– Arcadia: the install is very fidgety, and though I was able to compile the Unity project, it only works with Miracle for Clojure, which needs clojure-mode (which is not working on my computer even though Clojure itself is fine)
– The Force is good, but the documentation is a bit lacking