“Improviz is a live-coding environment built for creating visual performances of abstract shapes, blurred shades, and broken GIFs.”
When I was choosing a platform from the list, the caption for Improviz mentioned something about how it’s based on using and abusing basic shapes. This caught my attention as I am always interested in seeing how far you can push basic structures to create significantly more complex visuals.

Improviz is built in Haskell and interacts directly with OpenGL. It was developed by David Guy John, who goes by ‘rumble-san’. The platform is considered a fork of another platform, LiveCodeLab, which rumble-san also took part in developing. Still, Improviz stands out to users because it is much faster than LiveCodeLab and, most importantly, it offers the option to load custom textures, materials, models, and more, which leaves a lot of room for personalization and experimentation.

It has a web version (you can find it here), but it lacks the ability to load all the custom objects I mentioned previously. Therefore it is recommended to download the latest release from GitHub, run it from the terminal, and use an editor (like Atom) alongside it.

The language reminds me a lot of p5.js. It has quite a simple syntax in which you call functions that either draw shapes to the screen or change those shapes in some way. “By default, the only thing that actually changes in Improviz is the time variable, which tracks the number of seconds that have passed since the program was started.” So bringing the time variable into your code is a good way to get some interesting variation and movement, as the demo below shows.

 

Here’s one of the demos I experimented with using Improviz:

 

And, here’s the code I used for it:

background(30)
t = time/2
rectSize = 2

// keep drawing over previous frames instead of clearing the screen
paintOver()

// apply a custom-loaded texture to the shapes that follow
texture(:art4)

// oscillating offset used to move shapes along each axis
ax = sin(time) * 3

move(0,0,0.5*ax*0.4)
  rotate(0,0,time*2)
  rectangle(rectSize*sin(t))

move(0,0,-0.5*ax*1.5)
  rotate(0,0,time)
  rectangle(rectSize*sin(t)*0.3)

move(0,-0.5*ax*2,0)
  rotate(0,0,time*0.5)
  rectangle(rectSize*sin(t)*2)

move(0,0.5*ax*4,0)
  rotate(0,0,time*2)
  rectangle(rectSize*sin(t)*1.5)

move(0.5*ax*2,0,0)
  rotate(0,0,time*0.2)
  rectangle(rectSize*sin(t))

move(-0.5*ax*3,0,0)
  rotate(0,0,time*0.4)
  rectangle(rectSize*sin(t)*0.7)

 

I chose Motifn as my live coding platform. It is a platform that focuses on creating “visualized” music by utilizing third-party libraries like Tone.js, and it has received support from live coding artists, namely Alex McLean and Charlie Roberts, the creators of the live coding software TidalCycles and Gibberwocky respectively. Later on, I did find quite a lot of similarities between the language used in Motifn and the patterns in TidalCycles. I initially assumed that was because they were all based on JS, though TidalCycles itself is actually written in Haskell.

It basically provides two modes for coders to play around with. One is “FUN”, which lets you code directly on the web, and the other is “DAW”, which connects over MIDI to play music through a digital audio workstation. I mainly tried the “FUN” mode as it is more convenient (but definitely riskier, lol).

In my exploration, I found that this platform is indeed quite user-friendly, as it contains music examples as well as interactive tutorials for coders to study on their own. The tutorials not only teach users how to create melodies from notes and rhythms from drums, but even cover JavaScript basics, which is definitely very helpful for newbies like me. Still, this is not a highly integrated platform like GarageBand on iOS (which I have played around with quite a lot); it still requires users to have some fundamental coding knowledge. Nevertheless, I believe it would not be “live coding” if it didn’t involve users writing the code themselves.

 

The UI is also user-friendly. Users can manually adjust the width of the three panels based on their preferences or needs. Here, I really want to point out the note tracks this platform provides, which are absolutely one of its best features. Similar to the tracks in GarageBand, the system displays a track for each instrument; users can inspect each note by clicking on it, and in return the corresponding part of the code is highlighted by the cursor.

For the practical usage part, I first learned the basic structure of programming on this platform and found that it contains three parts: 1) define a function (like function song()), 2) use a “let” for each track (let ss = songStructure({ ), and 3) return the song. I then learned the details of how to create melodies and rhythms and how to arrange the structure of a piece.
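To make that three-part structure concrete, here is a rough sketch of how a Motifn program is laid out. It only reflects the three steps above, and the section names and options passed to songStructure are placeholders of my own rather than Motifn’s documented parameters:

function song() {
  // 1) everything is defined inside a single song() function

  // 2) one "let" declares the overall song structure and its tracks
  //    (the keys and values here are illustrative placeholders)
  let ss = songStructure({
    intro: {},
    verse: {}
  })

  // ...melodies (notes) and rhythms (drums) for each track are added here...

  // 3) return the result so Motifn can play it and draw the note tracks
  return ss
}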

In terms of the content we can create on this platform, I feel there are a lot of very interesting effects to be made, for example the “ties” and the “vibrato”, which can trigger very funny sounds. Although it’s true that the platform doesn’t provide many preset synths to choose from, it is still possible to create absolutely great pieces with it if you know quite a lot about composition. Like this one:

And here is a demo I made:

 

 

 

Description

The coding platform I chose for this research project is Mercury. Mercury is a “minimal and human-readable language for the live coding of algorithmic electronic music.” It’s relatively new, as it was created in 2019, and it is inspired by many platforms that we have used or discussed before. Mercury has its own text editor that supports both sound and visuals; however, since it’s made primarily for music, I decided to dive deeper into that side.

Process

Since the documentation of Mercury’s music features is not very thorough, it was easier for me to learn through the example files and code that can be randomly loaded from the text editor. I went through the different sound samples and files available and tested them in the online editor, since it allowed me to comment out code and click and drag whenever I needed to. After reaching a result I liked, and since this was the first time I had tested these files, I started implementing similar code in the Mercury text editor. It did not sound exactly the same, so I made changes to make it sound better.

One of the randomly chosen files I came across used the following instruments, and I took it as inspiration while changing it and adding more to it:

new sample kick_909 time(1/4)
new sample snare_909 time(1/2 1/4)
new sample hat_909 time(1/4 1/8)
new sample clap_909 time(1/2)

Below is a video of the “live coding performance” that I tested out:

Evaluation

Although Mercury has audio samples and effects that sound really nice and allow for experimentation as well as mixing and matching, there were a few aspects I am not a huge fan of. These include the limited documentation on the sounds and how they can be used, the fact that you can’t execute line by line but only the whole file at a time, and the fact that you cannot use the cursor to move to another line or select text. However, it was an enjoyable piece of software to explore, and the way the text resizes as you type and reach the end of a line keeps it engaging.

(Before I decided to use Speccy as my project target, I had chosen mutateful and ossia score. However, mutateful is a tool built on top of Ableton Live, a paid, powerful, but complex platform. As for ossia score, it says it’s free and open source, but it asks you to pay 60 dollars if you want to download it on Windows.)

Speccy doesn’t actually have a fancy main page. It is a live-coding tool made by Chris McCormick, who seems interested in making games and code-generated music. According to the simple documentation and a short demo video, it doesn’t have complex functions, just things like

(sfxr {:wave :square
       :note #(sq % [36 -- -- 41])})

to adjust when, and at what pitch, the digital notes will occur.

However, that doesn’t mean the tool is useless. It is paired with another tool by the same author called jsfxr, which makes 8-bit sounds and generates SFX. You can easily use it to get prefabricated sound effects like coin or jump sounds, or adjust the parameters to get the exact sound effects you want.

Then, you can copy the code and paste it into Speccy like this:

 

(sfxr "34T6PkkdhYScrqtSBs7EnnHxtsiK5rE3a6XYDzVCHzSRDHzeKdFEkb7TSv3P3LrU7RpFpZkU79JLPfBM7yFg1aTfZdTdPR4aNvnVFWY6vqdHTKeyv8nSYpDgF"
{:note #(at % 16 {0 5 1 27 8 24 14 99})})

 

And you can make the sound you just created occur in your live coding at certain times and pitches, and adjust it with some other parameters too.

Finally, here is what I randomly put together with the two tools; I hope you enjoy it. (I’m not using magic, just the clipboard to paste in the code!)

For my research project, I decided to go with Gibber. Gibber is a live coding environment for audiovisual performance. It is written in JavaScript and works by dynamically injecting real-time annotations and visualizations into the programming environment during a live coding performance.

There are a few things that stood out to me about Gibber. First is the fact that it is in JavaScript, which made understanding the syntax of the code much easier. I may not have been particularly familiar with the application itself, but because I understood JavaScript syntax, I was able to piece things together and make sense of the bigger picture. The documentation is also quite extensive, which really helped me understand the core concepts of the program. The second thing that stood out to me was the interface used to execute the code. You can see this through the link attached below, but I really enjoyed how different elements on the screen are highlighted to indicate which part of the code is currently being executed. It is a small addition, but I feel it ties everything together, especially from a user’s perspective. Finally, it is quite versatile in the sense that it allows for the incorporation of external audiovisual libraries such as p5.js, Tidal, Hydra, and more.
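To give a sense of what the code looks like, here is a minimal Gibber-style sketch. The constructor and sequencing calls follow the patterns in Gibber’s own tutorials, so treat the exact names as approximate rather than authoritative:

// Runs inside the Gibber playground, not as standalone JavaScript.
// Constructor names are based on Gibber's tutorial examples and may
// differ slightly between versions.
kick = Kick()                          // percussion instrument
kick.trigger.seq( 0.8, 1/4 )           // hit it every quarter note

syn = Synth()                          // simple melodic synth
syn.note.seq( [ 0, 7, 14, 7 ], 1/8 )   // cycle through scale degrees every eighth note

// While this plays, Gibber highlights these lines and animates the sequenced
// values in the editor, which is the visual feedback described above.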

Overall, I find Gibber to be an extremely useful and well-developed tool. I really enjoyed playing around with the different settings, and it was a fun experience.

Gibber Playground – Try and experiment for yourself!

LiveCodeLab is a web-based livecoding environment for 3D visuals and sound that are created on the spot, without explicitly evaluating lines of code. The project was built on top of the CodeMirror editor by Marijn Haverbeke and the live coding framework Fluxus by the Pawful collective. A lot of the infrastructure, logic, and routines are inspired by web-based computer graphics libraries like three.js and processing.js.

 

I thought this project was really interesting in the way it removes the traditional frustrations of programming from the workflow, making it an optimal entry point for visual artists with no programming experience. How easy it is to pick up, along with its simple same-window, on-the-go live tutorials, makes it a great tool for early prototyping. In LiveCodeLab, code is evaluated as it is typed, so code that doesn’t make sense is simply not run, while the other lines are evaluated with no delay whatsoever.

 

The GitHub repository isn’t very well documented, but I’m assuming that, as opposed to Hydra, it isn’t based on shaders but rather on a live canvas updated frame by frame by the CPU. This makes it noticeably slow when scenes become more complex (I’ve done some testing, and the window crashes at around 100 rotating cubes on an M1). But you have to take it for what it is: a prototyping and educational tool.

The program I chose for this research project is Vuo.

 

Vuo was originally designed after Apple’s Quartz Composer (QC), which was released in 2005. The developers of Vuo felt that QC was not growing or improving and therefore decided to create a program that could carry out the same functionality as QC and more. Vuo was first released in 2014 and has grown to have a large community.

 

Vuo allows its users to create visual compositions using node-based programming, which makes its GUI very user friendly and usable with little to no prior coding experience. If needed, Vuo also lets users manipulate shader code and add shaders from the web, which makes it suitable for both beginners and professionals. Although Vuo does not have a way to compose music within the program, it has some audio processing capabilities, which makes it a very appealing platform for music-visualization projections and performances. It is also used for projection installations and performances, as it has a built-in function to project a composition onto a dome or other surfaces. It’s also worth noting that Vuo can be used to create and manipulate 3D graphics, images, and videos.

 

There is definitely a lot to unpack in Vuo, but I decided to focus on creating a 2D composition. What I liked most about Vuo is the ability to see how everything connects and what effect each node has on the image. One thing I noticed is that there is a small lag each time a node is connected, which causes the program to pause for a bit and makes the transitions between effects feel unnatural for live coding.

 

Final Performance:

https://youtu.be/mJOZnfs2GiI

Nodes used: