What does it take to improvise? What makes an improvisation good in the context of a performance? I found these questions answered in this week’s reading by Paul D. Miller (aka DJ Spooky) and Vijay Iyer.

An interesting motif that both speakers outline in their discussion is jazz. Starting from orchestrated French cinema, their example of improvisation in new media, they draw a connection to music.

Another new media figure, Jaron Lanier, the father of VR, also talks about jazz and cinema in his work; in fact, he defines VR through jazz:

A twenty-first century art form that will weave together the three great twentieth-century arts: cinema, jazz, and programming (Lanier).

It seems interesting to me how programming in both live coding and VR gets described through “jazz”: what do these comparisons really mean? Is it because of the similar effect of expectancy? Miller and Iyer describe it as moments when “the audience [isn’t] quite sure how to respond” and as “navigating an informational landscape”. Connected to what the reading’s authors say, the following quote starts to make more sense in the context of new realities, and especially of VR:

Improvisation is embedded in reality. It’s embedded in real time, in an actual historical circumstance, and contextualized in a way that is, in some degree, inseparable from reality (Miller and Iyer).

Another observation I drew from the text concerns rooting jazz in its original cultural context. Jazz developed from Afro-American music; much like hip hop, it was born in marginalized communities and only later widely adopted worldwide. Given that context, improvisation is much more than creating randomized art; it is about standing for who you are and your identity.

What I didn’t quite understand in the discussion of cultural context was the following excerpt:

Paul Miller: Yes, there’s an infamous Brian Eno statement in Wired magazine a couple of years ago where he said, “the problem with computers is that there’s not enough Africa in them.” I think it’s the opposite; there’s actually a lot of Africa in them. Everyone’s beginning to exchange and trade freeware. [You can] rip, mix and burn. That’s a different kind of improvisation. What I think Vijay is intimating is that we’re in an unbalanced moment in which so much of the stuff is made by commercial [companies]. My laptop is not my culture. The software that I use—whether it’s Protools, Sonar, or Audiologic—[is not my culture per se]. But, it is part of the culture because it’s a production tool.

What does Miller mean by saying “there’s not enough Africa in them…it’s the opposite; there’s actually a lot of Africa in them”? Is he referring to an argument about remixing to the point where no originality is left, and responding that the roots will always be there no matter how many times the original source has been edited? Perhaps someone in the class discussion can elaborate on or explain this part further.

I will end this discussion with the quote that I enjoyed most from the reading:

I was just sort of cautioning against was the idea that “we are our playlists.” I just didn’t want [our discussion] to become like, “because we listen to these things, this is who we are,” especially at a moment when the privileged can listen to anything.

— I agree; we are so many things, and we should embrace that through our most important individual improv: life itself!

Some people asked me how to make the ‘misty’ effect. Here is my code; you can mostly figure it out from there. I also mapped hue() to a cc value, and the effect is really cool.

// o0: a triangle run through a kaleidoscope, used as a faint overlay
shape(3).kaleid(3).scale(0.4).out(o0)
// o1: a colored oscillator, warped by slow noise
osc(10,0.1,0.7).hue(0.6).modulate(noise(2,0.01),0.5).out(o1)
// o2: a feedback loop; o2 re-reads itself, gets displaced by o1
// (shifted down by 0.5 so the displacement goes in both directions),
// and is blended with o0; this feedback smearing creates the 'misty' look
src(o2).modulate(src(o1).add(solid(1,1),-0.5),0.007).blend(o0,0.1).out(o2)
// o3: multiply the mist by the oscillator for the final color
src(o2).mult(o1).out(o3)
render(o3)

Hello, here is the code from my live demo today!

This is the code for hydra:
voronoi(2,()=> cc[0]*5,0.3).color(2,0,50).out(o0)
src(o0).modulate(noise(()=>cc[0]),0.005).blend(shape(),0.01).out(o0)
hush(o0)
shape(5, .5,.01).repeat(()=> cc[2]*5,()=> cc[2]*4, 2, 4).layer(src(o0).mask(o0).luma(.1, .1).invert(.2)).color(()=> cc[1]*20,()=> cc[1],5).modulate(o1,.02).out(o0)
// .scrollY(10,-0.01)
// .rotate(0.5)
hush()

This is the code for tidal:
d7 $ (fast 2) $ ccv "" # ccn "0" # s "midi"
d1 $ sound "electro1:8*2" # gain 1.4
d2 $ sound "electro1:11/2" # gain 1.8
d3 $ sound "hardcore:8" # room 0.3
d4 $ sound "hardcore:0*8"
d5 $ sound "reverbkick odx:13/2" # gain "1 0.9" # room 0.3 -- odx *2
d6 $ sound "arp" # gain "1.2"
d8 $ ccv (segment 128 (fast 2 (range 127 60 saw))) # ccn "1" # s "midi"
d9 $ ccv "" # ccn "2" # s "midi"
hush
d4 silence

The order I followed is based on the line numbers, a screenshot is attached below!

Hello everyone!

I remember Shreya asked about the `off` in my performance. Here it is. You can play with the numbers and it does some very interesting stuff; you are basically offsetting the pattern in time.

d3 $ jux rev $ off 0.25(|+ n 12) $ off 0.125(|+ n 7) $n "" # sound "supermandolin" # legato 4
d4 $ ccv (stitch "" 127 0) # ccn 0 # s "midi"
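Since the note pattern above is left blank, here is a minimal sketch of the same idea with a made-up pattern (the notes and numbers are my own, just for illustration):

```
-- off 0.25 plays a copy of the pattern a quarter-cycle later, transposed up an octave (12 semitones);
-- off 0.125 layers another copy an eighth-cycle later, transposed up a fifth (7 semitones)
d1 $ off 0.25 (|+ n 12) $ off 0.125 (|+ n 7) $ n "0 4 7" # sound "supermandolin" # legato 4
```

Smaller offsets give a tight echo feel; larger ones start to sound like a separate melodic line.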

I personally did not like this reading much. It felt very abstract: Paul Miller goes back and forth about improvising and the digital community, giving so many different examples and ways to think about it that the discussion loses its main focus. He talks about magic, film, memory, recordings, economics, and commerce, but never clearly places them in the context of improvisation. There is too much to keep track of, which digresses from the main point and makes the dialogue hard to follow; the moderator and Iyer keep trying to bring Miller back to the idea of the digital community.

 

Having said that, there are a few lines I found particularly interesting. After reading the article twice, the definition of improvisation that I understood was, “Improvisation is navigating an informational landscape.” This is why learning how to just be in this world is also classified as a primal level of improvisation – quite an interesting idea actually! Putting it into the digital context, Miller says, “For me as an artist, digital media is about never saying that there’s something that’s finished. Once something’s digital, anything can be edited, transformed, and completely made into a new thing.” I never thought about digital media that way. In this case, there is so much scope for improvising in a digital culture. Everything online is a record, and everyone listens to records, which are essentially just documents or files. But working through this cascade of records is what converts raw data into useful information, making it a form of improvisation in the digital community.

This is the code from my performance the other day! I’m not entirely sure if this is the correct way of making functions in tidal, but it seemed to work pretty well for me. I made functions for both the sound and the visuals in tidal so that it would be easier to control both at once. In hydra, I set it up so that the parameters I wanted to change were linked to the functions I made in tidal.

 

Another thing that could be helpful is writing code on multiple lines at once: just press shift and click where you want to type, and voilà! This was especially helpful when I wanted to add effects to the sound and have them reflected in the visuals.

 

Hope this helps!

//hydra 

blobs = ()=> osc(()=> cc[2]*30+15, .01, 1).mult((osc(20, -.1,1)).modulate(noise(()=>(cc[0])*5, cc[0])).rotate(1)).posterize(()=> cc[1]*5).pixelate(200,200)

hush()

blobs().out()


--tidalCycles

--------------------------  functions --------------------------
beep_beep_bop = s "arpy*2 arpy?" # n "c5 <g4 c4> a5" -- ? fast
some_beats = s "bd*4" -- g1.2
more_beats = s "hh*8"
deeeep = s "<bass3(5,8) techno:1(3,8)>"
hehe = note "<g5 e5 c5> [<c4 g4> a4] <g4 g5>? c4" #s "sine"
deep = "techno:1*2 techno:1?" -- krush8
noisy = s "bleep*4" #speed 0.5 -- fast
genocide = note "<g1 e2 c3> [<c1 g1> a2] <g1 g2>? c2" #s "arpy" #squiz 2 #krush 9
-------------- VISUALS -----------------------------------
amount = struct "t*2 t?" $ ccv ((segment 128 (range 127 0 saw))) # ccn "0" # s "midi"
colour = struct  "<t(5,8) t(3,8)>" $ ccv ((segment 128 (range 127 30 saw))) # ccn "1" # s "midi"
wobble = struct "t [t*2] t? t" $ ccv ((segment 128 (range 127 30 saw))) # ccn "2" # s "midi"
--------------------------  functions end ----------------------

hush

d1 $ every 4 (fast 2) $ beep_beep_bop
d2 $ every 4 (fast 2) $ amount
d3 $ deeeep
d4 $ colour
d5 $ more_beats
d6 $ noisy #gain 0.8
d7 $ hehe
d8 $ wobble


hush

 

FoxDot is a Python programming environment created in 2015 by Ryan Kirkbride. It is designed to help people combine coding and audio to create live coding performances made up mainly of musical elements.

On the technical side, installing FoxDot is fairly simple: a few commands in the terminal plus an installation step inside SuperCollider. A single command then takes you to FoxDot’s user interface within Python.

The language FoxDot adopts resembles Python in many ways. It is built around the concept of patterns, and with the functions it provides for manipulating them, we can easily build patterns for percussion and simple melodies.
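To give a flavor of what those patterns look like, here is a minimal sketch (this assumes FoxDot is installed and SuperCollider is running; the specific notes and durations are my own illustration):

```
from FoxDot import *

# p1, p2 are player objects; >> assigns them a synth and a pattern
# a simple melody: scale degrees cycling through alternating durations
p1 >> pluck([0, 2, 4, 7], dur=[0.5, 0.25], amp=0.8)

# percussion from a sample string: x = kick, - = hi-hat, o = snare
p2 >> play("x-o-")
```

The pattern lists loop automatically, and because players run live, re-evaluating a line updates the music immediately, which is what makes it usable for live coding.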

However, it does not have a ready-to-use web interface. With the limited number of functions in the library, there are also limited variations to the rhythms we can make. Moreover, the platform has not had any major updates for a long time, which means we miss out on a lot of the newer developments in live coding.