LiveCodeLab is a web-based livecoding environment for real-time 3D visuals and sample-based sequencing. It was created and released by Davide Della Casa in April 2012, and from November 2012 it has been co-authored by Davide Della Casa and Guy John.

The motive for developing LCL was to gather the good elements of existing livecoding environments and to ground the new environment in a language that is compact, expressive, and immediately accessible to an audience with low computer literacy. Technically, LiveCodeLab has been directly influenced by Processing, Jsaxus, Fluxus, and Flaxus. Its language has evolved from JavaScript to CoffeeScript to its current language, LiveCodeLang.

These distinct characteristics give LCL particular value in live performance and in education. The audience can follow the code as it is written (if they want to), and users ranging from young children to adults, from all backgrounds, can pick it up easily.

In addition, the website provides a clean tutorial flow, so beginners can learn from scratch quite quickly.

However, I did find a gap between the tutorial and the demos: more advanced manipulations are not covered in the tutorial, so the demo code can sometimes be hard to comprehend.

In sum, I have found LiveCodeLab:

  • Straightforward and compact
    • e.g. rotate red box
  • “On-the-fly”: things pop up as you type
    • transient states give the performance a “constructive” nature
    • though simpler, it offers more liveness*
  • 3D, but able to create 2D patterns through tricks (paintOver, zooming in, etc.)
  • Limited in manipulation (both audio and visual)


Apart from the tutorial, some valuable things I found:

  1. The number after an object: the absolute value controls the scale, and negative numbers make the object transparent.
  2. “pulse” is a good expression when accompanied by audio (see the sketch below).
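
Here is a minimal LiveCodeLang sketch of both tricks. This is my own example rather than tutorial code, so treat the details (especially scale pulse) as an approximation:

// note 1: the number after "box" scales it
rotate
box 2
// a negative number renders the object transparent
move 1,0,0
box -1
// note 2: "pulse" oscillates in time with the beat
scale pulse
ball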


Supplementary Materials:

*Liveness hierarchy

TOPLAP Manifesto

HTML color names

Trials:

References

Davide Della Casa, personal website, http://www.davidedc.com/livecodelab-2012

D. Della Casa and G. John, “LiveCodeLab 2.0 and its language LiveCodeLang,” ACM SIGPLAN FARM 2014 Workshop.

LiveCodeLab, https://livecodelab.net/

Kodelife is a real-time GPU editor that allows you to create shaders. It was created for rapid prototyping without having to use heavyweight software or set up builds and compilers. Kodelife’s main programming language is OpenGL’s GLSL, but it can also be used with platform-specific shading languages such as the Metal Shading Language and DirectX HLSL.

Kodelife was designed to be a teaching tool for beginners, but it was also built with enough industry-standard features for experienced shader developers to work with. However, Kodelife couldn’t keep up with modern shader engines, so it was reframed as a prototyping tool for developers and a platform for live coding visuals.

The editor runs your code in real time, with no need to evaluate functions manually. It mainly works with vectors (per-pixel colors and coordinates) to create textures that can be modified into visuals. Kodelife also comes with a panel that manages the inputs and outputs the program can receive, which can be defined in the preferences.
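
As a rough illustration, here is a minimal fragment shader of the kind Kodelife evaluates on every frame. This is a sketch assuming Kodelife’s stock time and resolution uniforms, not its exact default template:

#version 150

uniform float time;       // seconds since start
uniform vec2 resolution;  // viewport size in pixels

out vec4 fragColor;

void main(void) {
    // normalize the pixel coordinate to a 0..1 vector
    vec2 uv = gl_FragCoord.xy / resolution;
    // animate a smooth color gradient over time
    vec3 col = 0.5 + 0.5 * cos(time + uv.xyx + vec3(0.0, 2.0, 4.0));
    fragColor = vec4(col, 1.0);
}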

Cool Things About Kodelife:
– Code evaluates automatically; no need for any commands
– GLSL is C-based, so it’s easy to pick up
– Very easy to create and control variables
– Very easy to get input and output from other sources
– Has a MIDI bus
– Flexible in how simple or complex projects can get
– Can easily be used on the web with the GlslCanvas library
– Has a mobile editor app on Android and iOS

Downsides
– The free version always asks if you want to purchase a license
– Since code evaluates automatically, if something crashes it takes time to figure out what provoked it
– You can’t have more than one project open
– No real documentation
– Sometimes the code lags behind your typing
– Could be too mathematical for some people
– Lots going on in the backend that you can’t control
– The mobile app is paid

I found Kodelife to be quite user-friendly. The documentation does not really teach you how to use the software (I mainly relied on YouTube tutorials to learn), but once you get the hang of it, it becomes very easy to experiment.


Before trying Kodelife, I actually spent a lot of time trying to use other live coding platforms. However, I had a lot of issues since many of them are either not updated or work only on specific computer models.

Comments on other Live Coding Platforms:
– Tidal-Unity doesn’t work anymore because it was built on Unity 5.4, which is very old (from 2016)
– Tidal-Unity is also missing some declarations in its code, so the OSC communication does not work correctly
– Gideon is pretty bad
– Cyril doesn’t have a working version for M1 Macs
– Arcadia: the install is very fidgety, and though I was able to compile the Unity project, it only works with Miracle for Clojure, which needs clojure-mode (not working on my computer, even though Clojure itself is fine)
– The Force is good, but its documentation is a bit lacking

For my research project, I chose to explore and experiment with Max. It is developed by Cycling ’74, a San Francisco-based software company, and is written in C and C++ on the JUCE platform. It has been used by composers, performers, software designers, researchers, and artists to create recordings, performances, and installations.

So, what is Max?

Max is a node-based visual programming language that allows its users to code live with ease. Max can be used to play around with (mix, layer, distort, corrupt) audio and visual files. Programming in Max helps you create a tool, a piece of interactive software, which you can then use to tweak the parameters of your patch and perform live. Max also has features for hooking up hardware, such as synthesizers or a MIDI controller, to your program, which may assist you in your live performance. Max has two main parts: MSP and Jitter. MSP is used for anything related to audio, and Jitter is used for video.

The reason I found Max interesting and different from what we have been doing in class is the interface. Being node-based, its GUI is very user-friendly. Also, since a patch follows a flow, it is easy for the audience to follow and understand as well. The presentation mode in Max makes it unique: after programming the entire thing, you can use the buttons, toggles, etc. to set controls and organize your panel so that it is performance-ready. Instead of writing each line of code from scratch, Max lets you set your program up as an instrument whose usage you can demonstrate live. In the context of live coding, Max lets the artist/creator build a toolbox for themselves (like a DJ mixer).

I thoroughly enjoyed working on this project and learned a lot. There is so much to live coding, and I am just beginning to explore. Max is also well documented (just like Processing), which was very helpful for delving deeper into the functioning of the objects.

Here is my final video of Live Coding performance with Max:

Pictures of my code can be found below:

For my research project, I chose to look into Alda, which is described as “a music programming language for musicians.” It enables musicians to create compositions using a text editor and the command line, which is super straightforward if you’re already familiar with music notation! In terms of where Alda stands within the live coding context, I actually don’t think it’s much of a live coding platform. Although it has a “live-ish” mode, its real power is in simplifying the writing of sheet music without being too focused on the act of notation, which is what its creator Dave Yarwood, a musician and programmer, intended. But who knows? Maybe the ability to simply notate and write notes for instruments in parallel allows for live band performances, or improvisation using more classical instruments and typical notation.
To understand how Alda works, I simply installed it and played around with its live/REPL mode while following the cheat sheet. Afterward, I tried to find online tutorials or performances, and found only one, which was sufficient for me to understand Alda’s potential! I then started breaking down some notation to put together a presentation portraying this potential to my classmates.
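
For reference, these are roughly the commands I used (score.alda stands in for whatever filename you save):

alda repl                     # interactive, “live-ish” mode
alda play --file score.alda   # play a saved score from a text file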

I personally really enjoyed working with Alda and reviving my music theory knowledge. Although I’ve never properly composed a track, I watched a YouTube video and gave it a go. Here’s my (very basic) composition:

and here’s the code:

gMajScale = [g a b > c d e f+ g]
gMajChord = [o2 g1/b/>d] (vol 50)  # first of the scale
cMajChord = [o3 c1/e/g] (vol 50)   # fourth of the scale
dMajChord = [o3 d1/f+/a] (vol 50)  # fifth of the scale

piano:
V1:
gMajChord | cMajChord | gMajChord | dMajChord #LH: 1-4-1-5 , 1-4-1-1 chord progression.
gMajChord | cMajChord | gMajChord | gMajChord

V2:
g4 a b a | c e c d | g2 (quant 30) > g4 (quant 99) b | d1 #RH (melody): inserting random notes from the scale

g4 a b a | c e c d | g2 (quant 30) > g4 (quant 99) b | < g1

midi-acoustic-bass:
o2 g8 r r b8 r r r r | r e4 c8 r r r | g8 r r b8 r r r r | d8 r r f8 r r r (volume 100) #played around with notes from the scale and note lengths
o2 g8 r r b8 r r r r | r e4 c8 r r r | g8 r r b8 r r r r | g8 r r b8 r r r r (volume 100)

percussion:
[o2 c8 r r c c r r ]*8 #experimented until something worked(?)
o3 c+


Praxis Live is a hybrid live visual programming platform that uses both Java and Processing to code visuals and audio. It is node-based, giving users easy access to the important pieces of code, and it supports real-time processing.

To get started with Praxis Live, I downloaded some example projects and experimented with ‘Funky Origami’, ‘Smoky 3D’, ‘Circles’ and ‘Mouse Drawing’. One good thing about this platform is that changes to the project take effect in real time once you save. The node-based system also lets you make changes easily without touching the code, though you can edit the code for more detailed changes.

Praxis Live would be a very good choice for a live coding performance: the code is easily accessible through the nodes, which also makes everything much easier for the audience to understand.
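
For a flavor of what lives inside a node, here is a hypothetical Processing-style draw routine of the kind you can hot-edit in a Praxis Live video component (illustrative only; the actual component API differs in its details):

// hypothetical code body for a Praxis Live video node
public void draw() {
    background(0);                    // clear to black
    translate(width / 2, height / 2); // draw from the center
    rotate(millis() * 0.001f);        // slow continuous rotation
    fill(255, 120, 0);
    rect(-50, -50, 100, 100);         // a spinning square
}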


I found this reading particularly interesting because of its mathematical side (information theory), but at the same time a little difficult to follow because of the musical technicalities. What I really found eye-opening was the difference between random corruption and random generation. For my second assignment, I initially struggled a lot to create something that sounded nice or meaningful, because I do not have any musical background and do not know which notes to hit, what sounds to use together, etc. Because of this lack of musical training, I was just generating whatever random patterns I could think of, which sounded monotonous after a point. As the author writes, even my music was “informative, unpredictable, not conforming to something heard before, but it [fell] short of being a musical composition.”

This reading made me realize how we could create a better, more meaningful musical composition, one that conveys anticipation, prediction, surprise, disappointment, reassurance, or return, through the use of noise (random corruption of carefully selected notes/sounds). I now understand what Prof. Aaron meant when he said, “Put in some question marks here and there, use random range, degradeBy, sometimesBy, etc.” I had earlier wondered how this could lead to music that is more pleasurable to hear, since it would be so off-pattern, out of sync, uncertain, and without a clear rhythm, but I now understand why it is a better move than random generation. This reading has opened me up to a different approach to composing music. From really struggling to create live computational music, I now have a direction I want to explore: replace defined information with random data at random times, degrading otherwise fully intelligible signals.
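
For instance, here is a minimal TidalCycles sketch of that idea, random corruption of an otherwise fixed pattern, using the functions Prof. Aaron mentioned (my own example):

-- a fully intelligible, repeating drum pattern
d1 $ sound "bd sn bd sn"

-- the same idea, randomly corrupted: "?" drops an event half the time,
-- degradeBy drops a chosen fraction of events, and sometimesBy applies
-- an effect (here bit-crushing) only some of the time
d1 $ sometimesBy 0.3 (# crush 4) $ degradeBy 0.2 $ sound "bd*4 sn? hh*8"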

I do not know if this will make me a better composer or the next assignment a bit easier, but the idea of using information theory in computational music is quite fascinating, and I think I will definitely look more into it later 🙂

The reading states that random corruption could help prevent redundancy and repetition, resulting in less bored listeners. This inspires me to incorporate more noise into my live coding sessions, in the form of ? and rand functions, to make the piece more unpredictable. The challenge, however, seems to lie in finding the right amount of randomness to use. During the last class, when we were playing around with the ? function, I noticed that there was always a point where overusing “?” led to the piece sounding off and empty. Hitting the right amount of randomness seems to optimize the sound, but anything below or beyond it seems to do the opposite.

This led me to question: how much randomness is too much? Is there a point where adding more randomness decreases the musicality of the piece? How can I know where that point is? Is it subjective, or is there a formula for that too?