The given reading looks into how the ‘live’ part of live coding is defined and executed. It was particularly interesting to see such a spectrum of liveness, as it was something we also talked about in class.
I remember in class we discussed how much of the code should be prepared in advance, in order to balance keeping the audience constantly entertained against not pre-planning too much of the performance. The reading acknowledges both styles of live coding, implying the choice depends on the style, the performer, and the performance, and on what definition of liveness they wish to hold.
When making that decision, I think it’s important to remember that “within live coding, the performer consciously adopts a medial position, actively maintaining the conditions that will keep the action dynamic.”
It was interesting to see the history of how the definitions and boundaries of art and music were pushed further. The part that stood out most to me was the turning point where art schools provided the environment in which students could try new things, becoming “the point of origin for interdisciplinary and multidisciplinary work.”
It’s also interesting to see how punk started, with “the principle that a good punk song only needed three chords applied just as much as the do-it-yourself attitude.” I think this is a mindset that still prevails, even in our course right now. It’s noteworthy that the combination of a good environment and a shift in people’s perspectives opened the field to new definitions of musicians and artists, and of what they could do.
At some point, the author describes a live coder as somebody “concurrently playing and composing.” This description conveys the complexity of live coding much more effectively, as it captures the essence of having to make decisions about the composition and arrangement AND the actual playing of the components at the same time. Building on this, the author goes on to explain the importance of notation in live coding, and how notations serve as a way for live coders to manipulate different aspects of the music. Reading this, I finally understood why the concept of ‘live coding’ itself put so much pressure on me. I think it may be because there are tools readily available, opening up so many possibilities. However, such a large number of options means that decisions become hard to make. And in live coding, decisions have to be made quickly, on the spot, during the performance.
I also like how he wraps up by saying that live coding “treads on uncommon path between oral and written culture, improvisation versus the composed, and the spontaneous versus the arranged.” This again captures the essence of doing only a minimal amount of planning before the performance, and handling both the arranging and the playing at the same time during it.
<Composition>
There was no end goal at first. I was trying out combinations of different sounds, thinking about how they would sound as my intro, and so on. Playing around, I came up with two combinations that I liked. They gave off the vibe of somebody being chased, or maybe being high. Then a game I had watched a video of years ago popped into my head, so I decided to try making visuals similar to it. (It’s an educational game, and this project, too, is far from promoting drug abuse.)
<Performance>
For the performance, I realized that it would be chaotic to go back and forth between the music and the visuals. At the same time, I wanted some aspect of live coding to remain. To get around this, I made the music as a complete piece, allowing myself to focus on the visuals and evaluate the code in time with the preset music.
I could not do any kind of screen recording, because whenever I tried, the Hydra code lagged so much that it completely stopped the video in the background and froze the screen. Because of that, some of the sounds in the music are a little different in the video above.
<Tidal Cycles Code>
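-- sample patterns for each section: an intro plus three layered "base" grooves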
intro = stack [slow 2 $ s "[ ~ cp]*4", slow 2 $ s "mash*4"]
baseOne = stack [s "bd bd ht lt cp cp sd:2 sd:2", struct "<t(4,8) t(3,8,1)>" $ s "ifdrums" # note "c'maj f'min" # room 0.4]
baseTwo = stack [slow 2 $ s "cosmicg:3 silence silence silence" # gain 0.7, slow 2 $ s "gab" <| n (run 10)]
baseThree = stack [s "haw" <| n (run 4), s "blip" <| n (run 13) # gain (range 0.9 1.2 rand)]
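-- build-up layers, capped by a one-shot explosion sample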
buildHigh = palindrome $ s "arpy*8" # note ((scale "augmented" "7 3 1 6 5") + 6) # room 0.4
buildMidOne = sound "bd bd ht lt cp cp sd:2 sd:2"
buildMidTwo = stack [s "<feel(5,8,1)>" # room 0.95 # gain 1.3 # up "1", s "feel" # room 0.95 # gain 1.3 # up "-2"]
buildLow = s "f" # stretch 2 # gain 0.75
explosion = slow 2 $ s "sundance:1" # gain 1.2
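-- main-section layers and the beep used for the ending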
mainHigh = s "hh*2!4"
mainMid = stack [fast 1.5 $ s "bd hh [cp ~] [~ mt]", fast 1.5 $ s "<mash(3,8) mash(5,8,1)>" # speed "2 1" # squiz 1.1, s "circus:1 ~ ~ ~"]
endBeep = s "cosmicg:3 silence silence silence" # gain 0.7
-- midi
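-- ccv patterns the control value and ccn the controller number; sent to the "midi" target, these sync the Hydra visuals to the music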
midiIntro = ccv "2 4 -2 1" # ccn 0 # s "midi"
midiEnd = ccv "2" # ccn 0 # s "midi"
midiBase = ccv "127 0 0 64 127 0" # ccn 1 # s "midi"
midiBuild = ccv "40 10" # ccn 2 # s "midi"
midiMain = ccv "2" # ccn 2 # s "midi"
midiSlow = ccv "20 16 12 8 4" # ccn 3 # s "midi"
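-- arrangement: seqP takes (start, stop, pattern) triples measured in cycles; qtrigger and filterWhen (>=0) make each score start from cycle 0 when evaluated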
playMusic = do {
d2 $ qtrigger $ filterWhen (>=0) $ seqP [
(0, 6, intro),
(0, 42, fast 4 midiIntro),
(6, 12, intro # gain 0.8)
];
d3 $ qtrigger $ filterWhen (>=0) $ seqP [
(6, 50, midiBase),
(6, 42, baseOne),
(12, 42, baseTwo),
(18, 22, baseThree),
(22, 26, baseThree # up 4),
(26, 30, baseThree),
(30, 34, baseThree # up 4),
(34, 42, degradeBy 0.5 baseThree # gain 0.8),
(42, 46, degradeBy 0.5 baseThree # gain 0.65),
(46, 50, degradeBy 0.5 baseThree # gain 0.45)
];
d4 $ qtrigger $ filterWhen (>=0) $ seqP [
(42, 58, buildHigh),
(46, 58, buildMidOne # gain 1.1),
(50, 58, buildMidTwo),
(50, 60, fast 6 midiBuild),
(50, 58, buildLow),
(58, 59, explosion)
];
d5 $ qtrigger $ filterWhen (>=0) $ seqP [
(60, 62, mainHigh),
(60, 86, midiEnd),
(60, 86, midiMain),
(60, 86, midiSlow),
(62, 84, mainMid)
];
d6 $ qtrigger $ filterWhen (>=0) $ seqP [
(68, 76, baseOne # gain 0.5),
(68, 80, baseTwo # gain 0.5),
(68, 68, baseThree # gain 0.5),
(76, 86, midiEnd),
(76, 78, slow 2 endBeep),
(78, 82, degradeBy 0.7 $ slow 2 endBeep # gain 0.6),
(82, 86, degradeBy 0.5 $ slow 3 endBeep # gain 0.5)
]
}
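-- evaluate playMusic to start the piece; evaluate hush (separately) to silence everything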
playMusic
hush
<Visuals>
When I started working on the visuals, I thought it would be a good idea to have their beat slightly off from the music, to create that psycho mood. Looking at the end result, I’m not sure it came out the way I intended. I think the idea was okay, but making something off in a harmonized way was not such an easy job. I also think there could have been more filler visuals: there are a part or two where the visuals stay repetitive for a while, and making more visuals for those parts would have made the performance more intense to watch.
I love how the author explains that in live coding, the programmer and the program each become “mutually part of an open-ended and contingent process of both problem generation and problem solving.” Until now, although I understood that live coding is all about showing the process, there was some pressure that the approach being shown ‘has to make sense’ and ‘build up’ towards something. The author’s words shifted my perspective: the beauty of live coding performances lies in the fact that the process is not just about solving problems and making things better, but also about creating or running into problems and showing how the programmer deals with them through trial and error.
Expanding on my initial thought that a live coding performance still has to ‘build up’ to create something fancy, I think I had limited myself to believing there has to be an ultimate product to aim for. Then I read the philosopher Paolo Virno’s remark that “what characterizes the work of performing artist is that their actions have no extrinsic goal” and that “the purpose of their activity coincides entirely with its own execution.” My understanding now is that yes, the performance should show progress in some way, but the focus is not necessarily the end goal; it is more about making the best (or most interesting) move in that instant and seeing where the performer can go from there.
I honestly started with clive because I’m somewhat interested in the C language and wanted to explore something with it. However, there was a lack of information on clive: the documentation on installing it and composing music with it was sparse, making it hard to explore. So I started looking at other options and found FoxDot, which is quite similar to TidalCycles. It also has better documentation to help beginners get started. (Although I had to change from clive to FoxDot, I still think this exploration was meaningful, as it highlights the importance of documentation, especially in niche fields such as live coding.) Also, FoxDot uses SuperCollider to output sound, which made it more familiar to me.
What is FoxDot:
FoxDot is an open-source environment designed specifically for live coding. It is written in Python, which makes it relatively intuitive for users. (At the same time, after looking at the other environments our classmates have been exploring, I think there are many options that are even easier and more intuitive.)
Many live coders liked FoxDot mostly because it is in Python. However, it has been explicitly announced that there will be no further development or updates: only minor fixes when issues are found.
Advantages and Disadvantages of FoxDot:
Personally, I think live coding environments that rely on more specialized tooling and mirror physical music-making more closely are harder for people like me (who have no experience composing music) to understand and play around with. As somebody with a background in computer science, environments like FoxDot, where composition can be done in a more code-like manner, are much more approachable. While I never got to the level of using it this way, FoxDot can also be used along with Sonic Pi, making it more attractive for live coders.
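To give a sense of that code-like style, here is a minimal sketch of FoxDot’s player syntax (not from my performance; the player names, synth, and note values are just illustrative):

# assumes FoxDot is running and SuperCollider is booted
p1 >> pluck([0, 2, 4, 7], dur=1/2, amp=0.8)  # a melodic player using the 'pluck' synth
d1 >> play("x-o-")                           # a drum player: each character maps to a sample
p1.stop()                                    # players can be stopped individually

Everything is an ordinary Python expression, which is exactly what made it approachable for me.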
The downside is that, since it’s no longer updated, many issues come up when installing and running it. There were many version clashes with Python, and I had to manually edit multiple files to get it running. Also, FoxDot’s sound library contains a limited number of sounds, which may limit the mood and style of the music live coders can create.
Example Performance:
Here is a video of a live coding performance with FoxDot from the Momentum Tech Conference 2019 in Karachi.
Short Performance:
Below is the code I prepared for our in-class performance. During the performance I left out some parts by mistake, so hopefully this version sounds better.
The reading was a bit complicated, but I tried to understand it by thinking of examples I can relate to. To me, a person who likes music but isn’t knowledgeable about it, music seems like a balance between repetition and variation. There has to be some sort of repetition to create the underlying beat, something that works nicely as a plate. On top of this plate there have to be variations, things that can stand out against the somewhat plain base. When listening to music, as the reading mentioned, a lack of variation makes it feel like the music has no build-up. By this I mean that the purpose of the music gets blurred. Music has something it’s trying to deliver: a sad emotion, some hype, or maybe even calmness. That purpose cannot be fulfilled if the listener only hears a repetition of the same beat and melody, as it gives them no room to build up those emotions.
Because this analysis of mine, if we can call it that, is based purely on my feelings, I had never thought about how musicians overcome this; I could only complain that a piece of music was ‘boring.’ It was interesting to learn the basic approaches I can take, as a musician now, to tackle such stillness.
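As a rough illustration of this repetition-plus-variation idea in FoxDot (the patterns here are made up for illustration, not from an actual piece):

# repetition: a steady drum line acting as the underlying "plate"
d1 >> play("x-o-")
# variation on top: a melody whose root degree alternates every 8 beats via var()
p1 >> pluck(var([0, 4], 8), dur=1/4)
# an occasional twist: reverse the melody every 4 beats
p1.every(4, "reverse")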