Sonic meditations seem to have a healing power not just as artistic evidence, but in psychology research, too. I recently read a paper called “Why Robots Can’t Haka”, which argues that activities like dancing, playing music, or humming together improve collective well-being, and that this will be difficult, if not impossible, to replicate with robots and A.I.

Listening as a fully embodied pursuit, sensory awareness, and Tai Chi all made me think of the mindfulness benefits that come with being in nature or exercising. When I meditate, I find I am able to observe the environment more acutely, appreciating what is happening outside but also within. Taking a pause from everything around me, and thus honing my intuition and sense of self-empowerment, is perhaps what the author describes as activism.

Kinetic awareness is also something I was taught in a theatre class at NYUAD. We regularly performed breathing and body-listening exercises to release pressure and let go of external thoughts. This sounds rather abstract, but in practice it made a big difference.

In terms of arts and interactive media, this made me think of Janet Cardiff and her projects with audio and sound narration. In 2005, she presented an audio walk in Central Park that takes the listener on a journey through the middle of Manhattan:

Janet Cardiff’s Her Long Black Hair is a 35-minute journey that begins at Central Park South and transforms an everyday stroll in the park into an absorbing psychological and physical experience. Cardiff takes each listener on a winding journey through Central Park’s 19th-century pathways, retracing the footsteps of an enigmatic dark-haired woman.

Listening has great power, and somehow it is difficult to describe that in words. One must feel that first.

It is interesting to note how previously non-existent forms of art are being merged together in the 21st century: mixing music, generating design patterns, creating noise, and then finally making sense of it all. I think this is partly how interactive media has emerged as a discipline, or rather as a combination of so many other disciplines.

From the reading, it seems that IM art is far from being considered “classic”:

The main focus of modernist art was therefore on the basic elements (color, forms, tones, etc.) and the basic conditions (manner and place of presentation) of artistic production (p.2)

Although the reading mentions artists like Paul Klee and Wassily Kandinsky, who began experimenting with different representations of music, in the case of multidisciplinary art do we consider the Fluxus movement and Happenings the “foundation” of new media art?

What is fascinating about this new movement is the open-access philosophy behind it:

It was about creating a glorious adventure from non-existent talent and unprofessionalism. Most of my ideas and art products are simply the result of my attitude to life. And are intended to cause unrest. (p.4)

The process of making new media / multidisciplinary art accessible and easy to start with, just like with open source software these days, is probably what draws people from so many backgrounds to it:

Anyone can make noise, for that you don’t need digital recording equipment or a 36-track studio with thousands of sophisticated elements. (p.4)

 

Given all of this experimentation and improvisation, it is interesting to imagine where new media will be just a few years from now, as it already broadens our understanding of art and how we perceive it.

Hi everyone!

This is the documentation for my Composition Performance (linked is a recorded draft — the actual piece to be performed with minor improvements in class tomorrow!)

Intro

The theme of my performance is visualizing and sound-narrating a gameplay experience. My intention was to depict the energy, intensity, and old-fashioned vibe of Mario-like 2D games. This is done through heavy use of pixelated graphics, noise, and vibrant colors in Hydra, together with electronic sounds/chords (mostly from the ‘arpy’ sample bank) and some self-recorded samples in TidalCycles.
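Since the excerpts below focus on the recorded samples, here is a minimal sketch of the kind of ‘arpy’ pattern I mean; the chords, arpeggio mode, and effect values are placeholders rather than the exact ones from the piece:

-- hypothetical sketch: an arpeggiated chord line on the 'arpy' sample bank
d2 $ n (arp "updown" "<c'maj e'min g'maj>") # s "arpy" # room 0.3 # gain 0.9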

Audio

The overall intention behind the audio in the piece is to take the audience from the start of the game, through some progression as score points are earned, culminating in the move to level 2, and finally to “losing” the game as the piece ends.

The audio elements that I recorded are the ones saying “level two”, “danger!”, “game over!”, “score”, and “welcome to the game!”:

-- level 2
d1 $ fast 0.5 $ s "amina:2" # room 1
-- danger
d1 $ fast 0.5 $ s "amina:0" # room 1
-- game over
d1 $ fast 0.5 $ s "amina:1" # room 1
-- score
d1 $ fast 0.5 $ s "amina:3" # room 1
-- welcome to the game
d1 $ fast 0.5 $ s "amina:4" # room 1

 

Learning from class examples of musical composition with chords, rhythms, and beat drops, I experimented with different notes, musical elements, and distortions. My piece starts off with a distorted welcome message that I recorded and edited:

d1 $ every 2 ("<0.25 0.125 0.5>" <~) $ s "amina:4" -- the recorded welcome message, nudged earlier every other cycle
  # squiz "<1 2.5 2>" -- squeeze/pitch-up distortion
  # room (slow 4 $ range 0 0.2 saw) -- reverb amount slowly ramping up
  # gain 1.3
  # sz 0.5 -- reverb size
  # orbit 2 -- orbit takes an integer bus index

Building upon various game-like sounds, I create different compositions throughout the performance. For example:

-- some sort of electronic noisy beginning
-- i want to add "welcome to the game"
d1 $ fast 0.5 $ s "superhoover" >| note (arp "updown" (scale "minor" ("<5,2,4,6>"+"[0 0 2 5]") + "f4")) # room 0.3 # gain 0.8 -- change to 0.8

-- narration of a game without a game
-- builds up slightly
d3 $ stack [
  s "sine" >| note (scale "major" ("[<2,3>,0,2](3,8)") + "g5") # room 0.4,
  fast 2 $ s "hh*2 hh*2 hh*2 <hh*6 [hh*2]!3>" # room 0.4 # gain (range 1 1.2 rand)
]

Visuals

My visuals rely on the MIDI output generated in TidalCycles and build upon each other. What starts as black-and-white, modulated, pixelated Voronoi noise gains color and grows into playful animations throughout.
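For context, here is a minimal sketch of the Tidal side of that bridge, assuming a SuperDirt MIDI target named "midi" is configured and that the Hydra page maps incoming CC 0–2 into its global cc[] array (normalized to 0–1); the patterns below are illustrative, not the exact ones from the piece:

-- hypothetical sketch: sending control values over MIDI so that Hydra
-- can read them as cc[0], cc[1], and cc[2]
d8 $ stack [
  ccv (segment 16 (range 0 127 sine)) # ccn 0 # s "midi",
  ccv (segment 8 (range 0 127 saw)) # ccn 1 # s "midi",
  ccv "0 32 64 127" # ccn 2 # s "midi"
]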

For example:

voronoi(10,1,5).brightness(()=>Math.random()*0.15)
  .modulatePixelate(noise(()=>cc[1]+20,()=>cc[0]),100)
  .color(()=>cc[0],2.5,3.4).contrast(1.4)
  .out(o0)

// colored
voronoi(10,1,5).brightness(()=>Math.random()*0.15)
  .modulatePixelate(noise(25,()=>cc[0]),100)
  .color(()=>cc[0],2.5,3.4).contrast(1.4)
  .out(o0)

voronoi(10,1,5).brightness(()=>Math.random()*0.15)
  .modulatePixelate(noise(()=>cc[1]+20,()=>cc[0]),100)
  .color(()=>cc[0],()=>cc[1]-1.5,3.4).contrast(0.4)
  .out(o0)

// when adding score
voronoi(10,1,5).brightness(()=>Math.random()*0.15)
  .modulatePixelate(noise(()=>cc[2]*20,()=>cc[0]),100)
  .color(()=>cc[0],5,3.4).contrast(1.4)
  .add(shape(7),[cc[0],cc[1]*0.25,0.5,0.75,1])
  .out(o0)

// when dropping the beat
voronoi(10,1,5).brightness(()=>Math.random()*0.15)
  .modulatePixelate(noise(cc[0]+cc[1],0.5),100)
  .color(()=>cc[0],0.5,0.4).contrast(1.4)
  .out(o0)

 

Challenges

When curating the whole experience, I started by creating the visuals. When I found something that I liked (distorted Voronoi noise), my intention was to add pixel-like sound, and the idea of games came up. This is how I decided to record audio and connect it to the visuals. What I did not anticipate was how challenging it would be to link the transitions on both sides and to time them appropriately.

Final reflections and future improvements

I put a lot of effort into making the sounds cohesive and creating the story that I wanted to tell with the composition. For the next assignment in class, I think I will try to make the transitions smoother and give the visuals a more powerful role in the overall storytelling.

Most importantly, I think I am finally starting to get the hang of live coding. This is my first fully “intentional” performance: although many elements will still be improvised when performing live, this project feels less chaotic in terms of execution and the thought process behind the experience.

The full code can be found here.

What does it take to improvise? What makes a good improvisation in the context of a performance? I found these questions answered in this week’s reading by Paul D. Miller (aka DJ Spooky) and Vijay Iyer.

An interesting motif that both speakers outline in their discussion is jazz. Starting from orchestrated French cinema as their example of improvisation in new media, they draw a connection to music.

Another new media professional, Jaron Lanier, often called the father of VR, also talks about jazz and cinema in his work. Namely, he defines VR through jazz:

A twenty-first century art form that will weave together the three great twentieth-century arts: cinema, jazz, and programming (Lanier).

It seems interesting to me how conceptions of programming in both live coding and VR refer to “jazz”: what do they really mean? Is it because of a similar effect of expectancy? Miller and Iyer describe it as moments when “the audience [isn’t] quite sure how to respond” and as “navigating an informational landscape”. Connected to what the reading’s authors say, the following quote starts to make more sense in the context of new realities, and especially of VR:

Improvisation is embedded in reality. It’s embedded in real time, in an actual historical circumstance, and contextualized in a way that is, in some degree, inseparable from reality (Miller and Iyer).

Another observation I drew from the text concerns rooting jazz in its original cultural context. Jazz developed from African-American music. Much like hip hop, it was born in marginalized communities and only later became widely adopted worldwide. Seen in that context, improvisation is much more than creating randomized art; it is about standing for who you are and your identity.

What I didn’t quite understand in the discussion of cultural context was the following excerpt:

Paul Miller: Yes, there’s an infamous Brian Eno statement in Wired magazine a couple of years ago where he said, “the problem with computers is that there’s not enough Africa in them.” I think it’s the opposite; there’s actually a lot of Africa in them. Everyone’s beginning to exchange and trade freeware. [You can] rip, mix and burn. That’s a different kind of improvisation. What I think Vijay is intimating is that we’re in an unbalanced moment in which so much of the stuff is made by commercial [companies]. My laptop is not my culture. The software that I use—whether it’s Protools, Sonar, or Audiologic—[is not my culture per se]. But, it is part of the culture because it’s a production tool.

What does Miller mean when he contrasts “there’s not enough Africa in them” with “it’s the opposite; there’s actually a lot of Africa in them”? Is he referring to an argument about remixing to the point where no originality is left, and then responding that the roots will always be there no matter how many times the original source has been edited? Perhaps someone in the class discussion can elaborate on or explain this part further.

I will end this discussion with the quote that I enjoyed most from the reading:

[What] I was just sort of cautioning against was the idea that “we are our playlists.” I just didn’t want [our discussion] to become like, “because we listen to these things, this is who we are,” especially at a moment when the privileged can listen to anything.

I agree: we are so many things, and we should embrace that through our most important individual improv, life itself!

So, what is Mosaic?

Mosaic is an open source multiplatform live coding and visual programming application based on openFrameworks! (https://mosaic.d3cod3.org/)

The key difference is that it integrates two paradigms: visual programming (diagram) and live coding (scripting).

History

Emanuele Mazza started the Mosaic project in 2018, in close relation to the work of the ART+TECHNOLOGY research group Laboluz at the Fine Arts faculty of the Universidad Politécnica de València in Spain.

Mosaic even has its own paper published here: https://iclc.toplap.org/2019/papers/paper50.pdf

The goal of Mosaic really is to make live coding as accessible as possible by giving it a seamless interface and minimum coding requirements:

It’s principally designed for live needs, as can be teaching in class, live performing in an algorave, or running a generative audio-visual installation in a museum. It aims to empower artists, creative coders, scenographers and other creative technologists in their creative workflow.

Source: https://mosaic.d3cod3.org/

Mosaic Interface + Experience

The Mosaic interface is easy to navigate because it consists of functional blocks that can be connected to each other. For example, if I have a microphone input, I can amplify the sound and connect it straight to a visual output, as in my project below:

Technical Details

Mosaic can be scripted with Python, OF, Lua, GLSL, and Bash. In addition, it offers Pure Data live-patching capability, a selection of audio synthesis modules, and support for multiple fullscreen output windows.

Mosaic is mainly based on two frameworks: openFrameworks, an open source C++ toolkit for creative coding, and ImGui.

Get started with Mosaic

To download Mosaic, head to the website instructions here.

You can start with a few built-in examples and see tutorials on Vimeo.

My experience and project

As I found out, there are not that many resources available on getting started with Mosaic. There is good documentation on the website and the associated GitHub repository, but not many independent creators share their Mosaic projects online, compared to its parent tool openFrameworks (OF).

Because I have some background in OF, it was manageable to understand the coding mechanics in Mosaic, but it took some time to figure out how to produce live coding that is not just the result of random gibberish made from noise and my microphone’s input.

What I ended up testing was creating visual output with Sonogram based on my microphone input.

Watch my examples:

  1. https://youtu.be/IXW6jBlr85I (audio and visual output)
  2. https://youtu.be/xm02jKemx2c (video input and visual output)
  3. https://youtu.be/5ofT4aOYJoI (audio and visual output)

And the corresponding visuals that were created in the process:

Finishing thoughts

Mosaic provides an understandable GUI to see what is happening with the code, audio, and visual output. My main challenge as a beginner was finding ways to make the output make sense: coming up with code and block connections that would create a cohesive generative visual in the end.

While reading this week’s article, I found myself asking — Why do we enjoy live coding?

“But the longer we listen, the more boring it becomes. Our sense of anticipation grows as we wait for something more, for change, uncertainty, the unpredictable, the resumption of information.”

Is it because of the entropy it creates? We may unconsciously seek experiences that are chaotic and unpredictable, and creating them in a predictable coding environment can feel both safe and exciting.

“Random corruption should not be confused with random generation.”

When differentiating between corruption and generation, I wonder whether it is precisely this noise that makes live coding so appealing.

Are there people who don’t or can’t enjoy this random noise “corruption”? It seems that performances can at times become intense in their visuals and audio. Speaking from my personal experience of creating a visually intense experience in another IM class, people with epilepsy were advised not to take part in it during the showcase, because it could trigger a physical response. So is it the same with live coding? Can some of the improvisational elements get so out of control that they become triggering?

“Live coding is a way of improvising music or video animation through live edits of source code, using dynamic language interpreters.”

Is this a universal definition of live coding? Is it a new discipline at all? I found this cool website that talks about the history of live coding and performance.

One idea resonates throughout the reading: to what extent is the human the real artist in computational creativity? What would happen if an A.I. algorithm were to replicate a human in a live coding performance? Has this been done before?

What live coding embraces most, in my opinion, is the permission to make mistakes. It is the spontaneity born of experimenting on the spot and embracing imperfection that gives each performance its unique spike.

There is no such confusion with live coding, there is a human clearly visible, making all the creative decisions and using source code as an artistic medium.

A programmer making generative art goes through creative iterations too, only after each edit they have to restart the process before reflecting on the result. This stuttering of the creative process alone is not enough to alter authorship status.

What live coders themselves have to say about their art is the most interesting part of the reading. One can infer a great deal about their bold character and adventurous working style, given the many rounds of iteration, experimentation, and failure that they go through before giving THE performance.

What has been said about personal style and the design process of their own language reminds me of what Richard Hamming says about style in his “Learning to Learn” lecture:

“There is no one style which is successful. Painters paint many different styles. You have to find a style that fits you. Which means you have to take what fragments you can from other people, use them and adapt them and become yours.”

What I am taking away from this paper: live coding means that there is beauty in the imperfection that is born on stage during the performance. Live coding music ⇒ music that “could be understood in a novel way”. This is not electronic music, nor is it music created by an algorithm; it is a collaboration of human, chance, and code.