Kilobeat is a collaborative, web-based DSP (digital signal processing) live coding instrument with aleatoric recording (which applies whenever a random function is used) and playback. The user interface is laid out as follows: each row represents a connected device.

Kilobeat Main Interface

There was no one on the main server any time I connected, so I had the entire server to myself during my experimentation. I opened four different tabs in my browser and tested running different functions in each tab. Default functions (Silence, Noise, Sine, Saw, …) are available as tabs, and functions can be combined to produce new sounds: for example, layering (addition), amplitude modulation (multiplication), and function composition (passing one function in as an argument to another). Players can watch the oscilloscope and the spectrum analyzer to visualize their output.
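To make those three combination strategies concrete, here is a minimal sketch in Python. Kilobeat itself evaluates code in the browser, so the function names and sample rate below are illustrative assumptions, not kilobeat's actual API:

import math
import random

SR = 44100  # sample rate in samples per second (an assumed value)

# Per-sample building blocks, loosely analogous to the Sine and Noise tabs
def sine(t, freq=440.0):
    return math.sin(2 * math.pi * freq * t)

def noise(t):
    return random.uniform(-1.0, 1.0)

def layered(t):    # layering = addition
    return 0.5 * sine(t, 440) + 0.5 * noise(t)

def am(t):         # amplitude modulation = multiplication
    return sine(t, 440) * sine(t, 2)

def composed(t):   # composition = one function as an argument to another
    return sine(t, 440 + 100 * sine(t, 0.5))  # a slow sine sweeps the pitch

samples = [composed(n / SR) for n in range(SR)]  # one second of audio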

I found the output created by kilobeat limiting compared to SuperCollider, and it was also quite difficult to make the piece sound enjoyable to the ear. The strength of the platform, it seems, lies in offering users an easy collaborative experience on the web, which made me wonder whether Atom has an option for real-time online collaboration. If so, although I appreciate the conceptual idea behind kilobeat, I personally would not use the platform again.

So, what is Mosaic?

Mosaic is an open source multiplatform live coding and visual programming application based on openFrameworks! (https://mosaic.d3cod3.org/)

The key difference is that it integrates two paradigms: visual programming (diagram) and live coding (scripting).

History

Emanuele Mazza started the Mosaic project in 2018, in close relation with the work of the ART+TECHNOLOGY research group Laboluz at the Fine Arts faculty of the Universidad Politécnica de València in Spain.

Mosaic even has its own paper published here: https://iclc.toplap.org/2019/papers/paper50.pdf

The goal of Mosaic is really to make live coding as accessible as possible by giving it a seamless interface and minimal coding requirements:

It’s principally designed for live needs, as can be teaching in class, live performing in an algorave, or running a generative audio-visual installation in a museum. It aims to empower artists, creative coders, scenographers and other creative technologists in their creative workflow.

Source: https://mosaic.d3cod3.org/

Mosaic Interface + Experience

Mosaic's interface is easy to navigate because it is built from functional blocks that can be connected to each other. For example, I can take a microphone input, amplify the sound, and connect it straight to a visual output, as in my project below:

Technical Details

Mosaic can be scripted with Python, OF (C++), Lua, GLSL, and Bash. In addition, it offers Pure Data live-patching capability, a selection of audio synthesis modules, and support for multiple fullscreen output windows.
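As a rough illustration of what the mic, amplify, and output blocks above do conceptually, here is a plain Python sketch. The names are hypothetical and this is not Mosaic's actual scripting API, just the shape of the idea:

# Each function plays the role of one Mosaic block (hypothetical names);
# "patch cords" are simply the order in which blocks are chained.
def gain_block(buffer, gain=2.0):
    # Amplify one buffer of audio samples, clipping to [-1, 1].
    return [max(-1.0, min(1.0, s * gain)) for s in buffer]

def patch(mic_buffer):
    # Wire the blocks together the way patch cords do on Mosaic's canvas.
    amplified = gain_block(mic_buffer, gain=2.0)
    return amplified  # in Mosaic this would feed the visual output block

print(patch([0.1, -0.4, 0.8]))  # -> [0.2, -0.8, 1.0] (the last sample clips)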

Mosaic is mainly based on two frameworks: openFrameworks and ImGui. openFrameworks is an open source C++ toolkit for creative coding.

Get started with Mosaic

To download Mosaic, head to the installation instructions on the website here.

You can start with a few built-in examples and see tutorials on Vimeo.

My experience and project

As I found out, there are not that many resources available on getting started with Mosaic. There is good documentation on the website and the associated GitHub repository, but not that many independent creators share their Mosaic projects online, compared to its parent tool openFrameworks (OF).

Because I have some background in OF, the coding mechanics in Mosaic were manageable to understand, but it took some time to work out how to produce live coding output that was not just random gibberish made of noise and my microphone's input.

What I ended up testing in the end was creating visual output with a Sonogram based on my microphone input.

Watch my examples:

  1. https://youtu.be/IXW6jBlr85I (audio and visual output)
  2. https://youtu.be/xm02jKemx2c (video input and visual output)
  3. https://youtu.be/5ofT4aOYJoI (audio and visual output)

And the corresponding visuals that were created in the process above:

Finishing thoughts

Mosaic provides an understandable GUI for seeing what's happening with the code, audio, and visual output. My main challenge as a beginner was finding ways to make the output make sense: coming up with code and block connections that would create a cohesive generative visual in the end.

LiveCodeLab is a web-based live coding environment for real-time 3D visuals and sample-based sequencing. It was created and released by Davide Della Casa in April 2012, and from November 2012 it has been co-authored by Davide Della Casa and Guy John.

The motive for the development of LCL was to gather the good elements of existing live coding environments and to ground the new environment in a new language that is compact, expressive, and immediately accessible to an audience with low computer literacy. Technically, LiveCodeLab has been directly influenced by Processing, Jsaxus, Fluxus, and Flaxus. The language has changed from JavaScript to CoffeeScript to its current language, LiveCodeLang.

These distinct characteristics give LCL particular value in live performances and in education. The audience can understand the code better (if they want to), and users ranging from young children to adults of different backgrounds can all access it easily.

In addition, the website provides a clean tutorial flow, so starters can learn from scratch quite quickly.

However, I did find a gap between the tutorial and the demos. More advanced manipulations are not included in the tutorial, so it is sometimes hard to comprehend the code behind the demos.

In sum, I have found Livecodelab:

  • Straightforward & compact
    • e.g. rotate red box
  • “On-the-fly”: things pop up as you type
    • transient states -> the “constructive” nature of the performance
    • though simpler, more liveness*
  • 3D, but 2D patterns can also be created with tricks (paintOver, zooming in, etc.)
  • Limited in its manipulations (of both audio and visuals)

 

Apart from the tutorial, valuable things I found:

  1. The number after an object: its absolute value controls scale, and negative numbers make the object transparent (e.g. box 2 draws a scaled-up box, while box -1 draws a translucent one).

  2. “pulse” is a good expression when accompanied by audio.

 

Supplementary Materials:

*Liveness hierarchy

TOPLAP Manifesto

HTML color names

 

Trials:

References

D. Della Casa, personal website, http://www.davidedc.com/livecodelab-2012

D. Della Casa and G. John, “LiveCodeLab 2.0 and its language LiveCodeLang,” ACM SIGPLAN, FARM 2014 workshop.

LiveCodeLab, https://livecodelab.net/

Kodelife is a real-time GPU editor that allows you to create shaders. It was created for rapid prototyping without having to use heavy software or compiler toolchains. Kodelife's main programming language is OpenGL's GLSL, but it can also be used with platform-specific shading languages such as the Metal Shading Language and DirectX HLSL.

Kodelife was designed to be a teaching tool for beginners, but it was also built to a high enough standard for experienced shader developers to work with. However, Kodelife couldn't keep up with modern shader engines, so it was reframed as a prototyping tool for developers and a platform for live coding visuals.

The editor runs your code in real time without any need to evaluate functions manually. It mainly uses vectors to create textures that can be modified to make visuals. Kodelife also comes with a panel that manages the inputs and outputs the program can receive, and these can be defined in the preferences.
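As a mental model of that per-pixel idea, here is a CPU-side Python sketch under my own assumptions (this is not Kodelife or actual GLSL): a fragment shader is essentially a function from normalized pixel coordinates and time to a color vector.

import numpy as np

def fragment(u, v, t):
    # Toy "fragment shader": map normalized coordinates plus time to RGB,
    # the same vector-in, vector-out idea a GLSL shader expresses on the GPU.
    r = 0.5 + 0.5 * np.sin(t + u * 6.2832)
    g = 0.5 + 0.5 * np.sin(t + v * 6.2832 + 2.0)
    b = 0.5 + 0.5 * np.sin(t + (u + v) * 6.2832 + 4.0)
    return np.stack([r, g, b], axis=-1)

# Evaluate one 256x256 frame at t = 0; a GPU runs this for every pixel in parallel.
v, u = np.mgrid[0:1:256j, 0:1:256j]
frame = fragment(u, v, t=0.0)  # shape (256, 256, 3), values in [0, 1]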

Cool Things About Kodelife:
– Code evaluates automatically, with no need for any commands
– GLSL is based on C, so it's easy to use
– Very easy to create and control variables
– Very easy to get input and output from other sources
– Has a MIDI bus
– Flexible in how simple or complex projects can get
– Can easily be used on the web with the GLSL Canvas library
– Has a mobile editor app on Android and iOS

Downsides
– The free version always asks if you want to purchase a licence
– Since code evaluates automatically, if something crashes it takes time to figure out what provoked it
– You can't have more than one project open
– No actual documentation
– Sometimes the code loads only after you type
– Could be too mathematical for some people
– Lots of stuff going on in the backend that you can't control
– The mobile app is paid

I found Kodelife to be quite user-friendly. The documentation does not really teach you how to use the software, so I mainly learned from YouTube tutorials, but once you get the hang of it, it becomes very easy to experiment.


Before trying Kodelife, I actually spent a lot of time trying to use other live coding platforms. However, I had a lot of issues since many of them are either not updated or work only on specific computer models.

Comments on other Live Coding Platforms:
– Tidal-Unity doesn't work anymore because it was created using Unity version 5.4, which is very old (pre-2016)
– Tidal-Unity is also missing some declarations in the code, so the OSC does not work correctly
– Gideon is pretty bad
– Cyril doesn't have a working version for M1 Macs
– Arcadia: the install is very fidgety, and though I was able to compile the Unity project, it only works with Miracle for Clojure, which needs clojure-mode (which is not working on my computer even though Clojure itself is fine)
– The Force is good, but its documentation is a bit lacking

For my research project, I chose the Max software to explore and experiment with. It is developed by Cycling '74, a San Francisco-based software company, and is written in C and C++ on the JUCE platform. It has been used by composers, performers, software designers, researchers, and artists to create recordings, performances, and installations.

So, what is Max?

Max is a node-based visual programming language that allows its users to code live with ease. Max can be used to play around with (mix, layer, distort, corrupt) audio and visual files. Programming in Max helps you create a tool, a piece of interactive software, which you can then use to tweak the parameters of your code and perform live. Max has special features that allow you to hook up hardware such as synthesizers or a MIDI control keypad to your program, which may assist you in your live performance. Max has two parts: Max MSP and Max Jitter. Max MSP is used for any work related to audio, and Jitter is used for video.
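To convey the patching idea in text, here is a toy Python model. This is my own sketch, not Max's actual objects, though the names echo Max's cycle~ and *~ objects:

import math

def cycle(freq):            # plays the role of cycle~: a sine oscillator
    return lambda t: math.sin(2 * math.pi * freq * t)

def gain(source, amount):   # plays the role of *~: a parameter you might map to a dial
    return lambda t: source(t) * amount

# "Patch cords": wire cycle~ -> *~ -> output by chaining the objects.
out = gain(cycle(440), 0.5)

# In presentation mode you would tweak `amount` with a dial while this runs.
print([round(out(n / 44100), 3) for n in range(4)])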

The reason I found Max interesting and different from what we have been doing in class is the interface. Being node-based, its GUI is very user-friendly. Also, since a patch follows a flow, it is easy for the audience to follow and understand as well. The presentation mode in Max makes it unique: after programming the entire thing, you can use buttons, toggles, etc. to set controls and organize your panel so that it is performance-ready. Instead of writing each line of code from scratch, Max lets you set your program up as an instrument whose usage you can demonstrate live. In the context of live coding, Max lets the artist or creator make a toolbox for themselves (like a DJ mixer).

I thoroughly enjoyed working on this project and learned a lot. There is so much to live coding, and I am just beginning to explore. Max is well documented as well (just like Processing), which was very helpful for delving deeper into the functioning of the objects.

Here is my final video of Live Coding performance with Max:

Pictures of my code can be found below:

For my research project, I chose to look into Alda, which is described as “a music programming language for musicians.” It enables musicians to create compositions using text editors and the command line, which is super straightforward and simple if you're already familiar with music notation! In terms of where Alda stands within the live coding context, I actually don't think it's much of a live coding platform. Although it has a “live-ish” mode, it is most powerful in its ability to simplify writing sheet music without being too focused on the act of notation, and this is what its creator Dave Yarwood intended as a musician and programmer. But who knows? Maybe the ability to simply notate and write notes for instruments in parallel allows for live band performances, or improvisation using more classical instruments and typical notation.
To understand how Alda works, I simply installed it and played around with its live/REPL mode while following the cheat sheet. Afterward, I tried to find online tutorials or performances, and the one I found was sufficient for me to understand the potential of Alda! I then started breaking down some notation to put together a presentation that portrays this potential to my classmates.

I personally really enjoyed working with Alda and reviving my music theory knowledge. Although I've never properly composed a track, I watched a YouTube video and tried to give it a go. Here's my (very basic) composition:

and here’s the code:

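# Notation notes (Alda): o2/o3 set the octave; > and < shift an octave up or down;
# + after a note letter is a sharp (f+ = F#); r is a rest; 1, 2, 4, 8 are whole,
# half, quarter, and eighth notes; / stacks notes into a chord; | is a barline;
# V1:/V2: are separate voices; (vol 50) sets volume and (quant 30) sets how much
# of each note's length is held; [...]*8 repeats the bracketed phrase eight times.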
gMajScale = [g a b > c d e f+ g]
gMajChord = [o2 g1/b/>d] (vol 50)  # first of the scale
cMajChord = [o3 c1/e/g] (vol 50)   # fourth of the scale
dMajChord = [o3 d1/f+/a] (vol 50)  # fifth of the scale

piano:
V1:
gMajChord | cMajChord | gMajChord | dMajChord #LH: 1-4-1-5 , 1-4-1-1 chord progression.
gMajChord | cMajChord | gMajChord | gMajChord

V2:
g4 a b a | c e c d | g2 (quant 30) > g4 (quant 99) b | d1 #RH (melody): inserting random notes from the scale

g4 a b a | c e c d | g2 (quant 30) > g4 (quant 99) b | < g1

midi-acoustic-bass:
o2 g8 r r b8 r r r r | r e4 c8 r r r | g8 r r b8 r r r r | d8 r r f8 r r r (volume 100) #played around with notes from the scale and note lengths
o2 g8 r r b8 r r r r | r e4 c8 r r r | g8 r r b8 r r r r | g8 r r b8 r r r r (volume 100)

percussion:
[o2 c8 r r c c r r ]*8 #experimented until something worked(?)
o3 c+

(Two screen recordings, 2022-09-19.)

Praxis Live is a hybrid visual live programming platform that uses both Java and Processing to code visuals and audio. It is node-based, so users have easy access to the important pieces of code, and it also offers real-time processing.

To get started with Praxis Live, I downloaded example projects. I experimented with ‘Funky Origami’, ‘Smoky 3D’, ‘Circles’, and ‘Mouse Drawing’. One good thing about this platform is that changes you make to a project take effect in real time once you save. The node-based system also lets you easily make changes without touching the code, though you can edit the code for more detailed changes.

Praxis Live would be a very good choice for a live coding performance, because the code is easily accessible through the nodes, which also makes everything much easier for the audience to understand.