Overview + Breakdown

Max, also known as Max/MSP/Jitter, is a visual programming language for music and multimedia developed and maintained by the software company Cycling ’74. Over its more than thirty-year history, it has been used by composers, performers, software designers, researchers, and artists to create recordings, performances, and installations. It offers a real-time multimedia programming environment where you build programs by connecting objects into a running graph, so time, signal flow, and interaction are visible in the structure of the patch. Miller Puckette began work on Max in 1985 to provide composers with a graphical interface for creating interactive computer music. Cycling ’74’s first Max release, in 1997, was derived partly from Puckette’s work on Pure Data. Called Max/MSP (for “Max Signal Processing,” or the initials of Miller Smith Puckette), it remains the most notable of Max’s many extensions and incarnations: it made Max capable of manipulating real-time digital audio signals without dedicated DSP hardware. This meant that composers could now create their own complex synthesizers and effects processors using only a general-purpose computer like the Macintosh.

The basic language of Max is that of a data-flow system: Max programs (called patches) are made by arranging and connecting building blocks of objects within a patcher, or visual canvas. These objects act as self-contained programs (in reality, they are dynamically linked libraries), each of which may receive input (through one or more visual inlets), generate output (through visual outlets), or both. Objects pass messages from their outlets to the inlets of connected objects. Max is typically learned by acquiring a vocabulary of objects and how they function within a patcher; for example, the metro object functions as a simple metronome, and the random object generates random integers. Most objects are non-graphical, consisting only of an object’s name and several arguments and attributes (in essence, class properties) typed into an object box. Other objects are graphical, including sliders, number boxes, dials, table editors, pull-down menus, buttons, and other objects for running the program interactively. Max/MSP/Jitter comes with about 600 of these objects as the standard package; extensions to the program can be written by third-party developers as Max patchers (e.g. by encapsulating some of the functionality of a patcher into a sub-program that is itself a Max patch), or as objects written in C, C++, Java, or JavaScript.
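To give a sense of how lightweight an extension can be, here is a minimal sketch of a custom object written in JavaScript for Max’s js object. The file name and the doubling behavior are invented for illustration; inlets, outlets, msg_int, bang, and outlet are the standard entry points of the js API:

// doubler.js: a hypothetical object, loaded in a patcher as [js doubler.js]
inlets = 1;   // one visual inlet on the object box
outlets = 1;  // one visual outlet

// called when an integer arrives at the inlet
function msg_int(n) {
    outlet(0, n * 2); // pass the doubled value to whatever is patched below
}

// called when a bang arrives at the inlet
function bang() {
    outlet(0, "ready"); // any Max message can be sent out an outlet
}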

Max is a live performance environment whose real power comes from combining fast timing, real-time audio processing, and easy connections to the outside world, such as MIDI, OSC, sensors, and hardware. Max is modular, with many routines implemented as shared libraries, and it includes an API that lets third-party developers create new capabilities as external objects and packages. That extensibility is exactly what produced a large community: people can invent tools, share them, and build on and remix each other’s work, so the software keeps expanding beyond what the core program ships with. In practice, Max can function as an instrument, an effects processor, a controller brain, an installation engine, or a full audiovisual system, and it is often described as a common language for interactive music performance software.

Cycling ’74 formalized video as a core part of Max when they released Jitter alongside Max 4 in 2003, adding real-time video, OpenGL-based graphics, and matrix-style processing so artists could treat images like signals and build custom audiovisual effects. Later, Max for Live pushed this ecosystem into a much larger music community by embedding Max/MSP directly inside Ableton Live Suite, so producers could design their own instruments and effects, automate and modulate parameters, and even integrate hardware control, all while working inside a mainstream performance and production workflow.

Max/MSP and Max/Jitter helped normalize the idea that you can build and modify a performance instrument while it is running, not just “play” a finished instrument. This is the ethos of live coding – to “show us your screens” and treat computers and programs as mutable instruments.

Live Coding + Cultural Context

With the increased integration of laptop computers into live music performance (in electronic music and elsewhere), Max/MSP and Max/Jitter received attention as a development environment for those serious about laptop music/video performance. Max is now commonly used for real-time audio and video synthesis and processing, whether that means customizing MIDI controls, creating new synth or sampler devices (especially with Max for Live), or building entire generative performances within Max/MSP itself. Live reports and interviews repeatedly framed these shows around laptops running Max/MSP, which helped cement the idea that algorithmic structure and real-time patch behavior can be the performance. In short, the laptop became a serious techno performance instrument. In order to document the evolution of Max/MSP/Jitter in popular culture, I compiled a series of notable live performances and projects where Max was a central part of the setup, inspiring fans to use Max in their own music-making performances and practices. Max became a shared programming language between artists who performed on the stage and in the basement, uniting an entire community around live computer music.

Video of Someone’s Algorave in Brazil

Monolake live at Ego Düsseldorf, June 5, 1999

“This is a live recording, captured at Ego club in Duesseldorf, June 5 1999. The music has been created with a self written step sequencer, the PX-18, controlling a basic sample player and effects engine, all done in Max MSP, running on a Powerbook G3. The step sequencer had some unique features, e.g. the ability to switch patterns independently in each track, which later became an important part of a certain music software” from RobertHenke.com.

Carl Stone’s Flint’s

“Flint’s was premiered San Francisco Electronic Music Festival 2000. Created and performed on a Mac SE-30 using Max/MSP with samples prepped in Sound Designer II software. A somewhat different version of the piece appeared on the 2007 release Al-Noor on the Intone label” from “Unseen Worlds.”

Radiohead

Autechre

The group built their shows around laptops running Max/MSP, with the performance being the real-time behavior of a processing system rather than fixed playback.

Daito Manabe

He used Max to create live music from Nike shoes, and has talked generally about how he likes using Max/MSP and Jitter to map real-time physical oscillations (like grooves and needles, or the bending of shoes) into live audiovisual outcomes.

Princeton Laptop Orchestra

Max/MSP also allows for collective performance formats, such as laptop orchestras and networked ensembles. The Princeton Laptop Orchestra lists Max/MSP among the core environments used to build the meta-instruments played in ensemble, turning patching into a social musical practice, not just an individual studio practice.

By highlighting Max/MSP/Jitter’s centrality within a wider canon of musicians and subcultures, and linking examples of its development through time, I hope I have demonstrated how Max has contributed to community building around live computer music. Both Max/MSP and Live Coding express the same underlying artistic proposition, that the program is the performance, but Max lowers the barrier for people who think in systems and signal flow rather than syntax, and it scales all the way from DIY one-person rigs to commercial mixed-music productions to mass user communities.

My Experience

Computers are very intimidating animals to me. Because of this, I especially appreciated Max/MSP for a couple of reasons. One, because it is so prolific within the live coding/music community (like Arduino), there is an inexhaustible well of resources to draw from to get started. For every problem or idea you have, there is a YouTube tutorial or a Reddit/Cycling ’74/Facebook forum to help you out. The extensive community support and engagement makes Max extremely accessible and approachable, and promotes outreach between beginners and serious artists alike. Two, Max gives immediate feedback that easily hooks and engages you.

I liked using Jitter, because visuals are just so fun. This is a Max patch I made some time ago, where I would feed old camcorder videos into various filters and automatically get colorful, glitchy videos.

But for the purposes of this demonstration, I wanted to create a patch where visuals and sound react to each other somehow. I searched for cool Max Jitter patches and found a particle system box on Github, linked here: https://github.com/FedFod/Max-MSP-Jitter/blob/master/Particles_box.maxpat. Then I looked up how I could make a basic sound affect the size of the particles, making them react to the noise in real time. I created the input object, or sound source (cycle~), and connected it to the volume (*~, so the wave cycles between those parameters, in this case -0.2 and +0.2); the output object is the speaker (ezdac~), so we can hear it. To link this sound to the Jitter patch, metro provides the clock that tells the conversion objects when to read the signal: I connected the sound signal through abs~, snapshot~, and scale to remap it into numbers that can be read by “point_size” and visualized in Jitter. Toggling the metro off stops “point_size” from reading the measurements from the audio signal, severing the connection between the sound and the particles, as I demonstrate in my video. The result is, the louder the sound, the bigger the particles, and vice versa. I liked using Max because making the “physical” connections helped me see how an effect is created by following the signal flow, and more easily understand how audio can be translated into visual signals in real time.
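Condensed into a rough signal-flow sketch (object names as they appear in the patch JSON below):

cycle~ 220 → *~ 0.2 → ezdac~ (the audible tone)
cycle~ 220 → *~ 0.2 → abs~ → snapshot~ → scale 0. 0.5 1. 12. → point_size $1 → outlet (to the particle patch)
toggle → metro 20 → snapshot~ (a bang every 20 ms makes snapshot~ report the current signal value as a number)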

My Video

https://youtu.be/0VNm5xZR6XY

My Presentation

https://www.canva.com/design/DAHBFWu7qIk/GJ8dR2D2fcaD61wcU1XEpg/edit?utm_content=DAHBFWu7qIk&utm_campaign=designshare&utm_medium=link2&utm_source=sharebutton

It also has my references/resources and links.

My Patch Code

{
  "patcher": {
    "fileversion": 1,
    "appversion": {
      "major": 8,
      "minor": 6,
      "revision": 0,
      "architecture": "x64"
    },
    "classnamespace": "box",
    "rect": [0.0, 0.0, 820.0, 520.0],
    "bglocked": 0,
    "openinpresentation": 0,
    "default_fontsize": 12.0,
    "default_fontface": 0,
    "default_fontname": "Arial",
    "gridonopen": 1,
    "gridsize": [15.0, 15.0],
    "gridsnaponopen": 1,
    "toolbarvisible": 1,
    "boxanimatetime": 200,
    "imprint": 0,
    "enablehscroll": 1,
    "enablevscroll": 1,
    "boxes": [
      {
        "box": {
          "id": "obj-1",
          "maxclass": "newobj",
          "text": "cycle~ 220",
          "patching_rect": [110.0, 70.0, 80.0, 22.0]
        }
      },
      {
        "box": {
          "id": "obj-2",
          "maxclass": "newobj",
          "text": "*~ 0.2",
          "patching_rect": [110.0, 110.0, 55.0, 22.0]
        }
      },
      {
        "box": {
          "id": "obj-3",
          "maxclass": "ezdac~",
          "patching_rect": [90.0, 165.0, 45.0, 45.0]
        }
      },
      {
        "box": {
          "id": "obj-4",
          "maxclass": "newobj",
          "text": "abs~",
          "patching_rect": [310.0, 110.0, 45.0, 22.0]
        }
      },
      {
        "box": {
          "id": "obj-5",
          "maxclass": "newobj",
          "text": "snapshot~",
          "patching_rect": [310.0, 150.0, 70.0, 22.0]
        }
      },
      {
        "box": {
          "id": "obj-6",
          "maxclass": "toggle",
          "patching_rect": [240.0, 250.0, 24.0, 24.0]
        }
      },
      {
        "box": {
          "id": "obj-7",
          "maxclass": "newobj",
          "text": "metro 20",
          "patching_rect": [280.0, 250.0, 62.0, 22.0]
        }
      },
      {
        "box": {
          "id": "obj-8",
          "maxclass": "newobj",
          "text": "scale 0. 0.5 1. 12.",
          "patching_rect": [410.0, 220.0, 150.0, 22.0]
        }
      },
      {
        "box": {
          "id": "obj-9",
          "maxclass": "message",
          "text": "point_size $1",
          "patching_rect": [600.0, 220.0, 95.0, 22.0]
        }
      },
      {
        "box": {
          "id": "obj-10",
          "maxclass": "outlet",
          "patching_rect": [725.0, 222.0, 20.0, 20.0]
        }
      }
    ],
    "lines": [
      {
        "patchline": {
          "source": ["obj-1", 0],
          "destination": ["obj-2", 0]
        }
      },
      {
        "patchline": {
          "source": ["obj-2", 0],
          "destination": ["obj-3", 0]
        }
      },
      {
        "patchline": {
          "source": ["obj-2", 0],
          "destination": ["obj-3", 1]
        }
      },
      {
        "patchline": {
          "source": ["obj-2", 0],
          "destination": ["obj-4", 0]
        }
      },
      {
        "patchline": {
          "source": ["obj-4", 0],
          "destination": ["obj-5", 0]
        }
      },
      {
        "patchline": {
          "source": ["obj-6", 0],
          "destination": ["obj-7", 0]
        }
      },
      {
        "patchline": {
          "source": ["obj-7", 0],
          "destination": ["obj-5", 1]
        }
      },
      {
        "patchline": {
          "source": ["obj-5", 0],
          "destination": ["obj-8", 0]
        }
      },
      {
        "patchline": {
          "source": ["obj-8", 0],
          "destination": ["obj-9", 0]
        }
      },
      {
        "patchline": {
          "source": ["obj-9", 0],
          "destination": ["obj-10", 0]
        }
      }
    ]
  }
}

I overall appreciated how the reading emphasized music’s inextricability from the body. Because we grew up ensconced in Western philosophies (pointing fingers at you Plato & Descartes & Kant), I believe we, albeit subconsciously, mistakenly divide the mind and the body. The lofty Mozart-esque realm of music seems more associated with “the mind” while dance belongs to the realm of the body, but if we look within, I believe we all intuitively understand that the gap was never there. But the historical assumption of that gap is why this reading exists in the first place, which it outright acknowledges: “I am arguing that a significant component of such a process occurs along a musical dimension that is non-notatable in Western terms – namely, what I have been calling microtiming.” That’s why I had to laugh when I read: “Though these arguments are quite speculative, it is plausible that there is an important relationship between the backbeat and the body, informed by the African-American cultural model of the ring shout.” Modern academia – always the cautious skeptic, for better and worse. Also always the exclusionary imperialist. Like, oh you finally caught up! (Not speaking to the reader, just speaking in general.) The idea of the drum set as an extension of the body makes complete sense. The bass drum at the feet, stable and steady. The snare at the hands, which, with their greater dexterity, can more readily linger or attack, flavoring the music, giving it “that feel.” Literally our feel.

There were some phrases I really liked that particularly spoke to this: “It is a miniscule adjustment at the level of the tactus, rather than the substantial fractional shift of rhythmic subdivisions in swing.” I also loved this quote: “It seems plausible that the optimum snare-drum offset that we call the “pocket” is that precise rhythmic position that maximizes the accentual effect of a delay without upsetting the ongoing sense of pulse. This involves the balance of two opposing forces: the force of regularity that resists delay, and the backbeat accentuation that demands delay.” I also love how everything “seems plausible” hahaha. I also really loved this phrase: “bears the micro-rhythmic traces of embodiment…”

I was thinking of a couple things. One, what is the source of the pulse? Our breathing, our heartbeat, walking, running, how rocks feel on a hot day. Two, the main point of the reading, how to reconcile computers with the music of our bodies. The reading goes into several methods people have used to do this, the best of which, to me, was when it went over how DJs sampled songs by scratching records, and how the music is the material manifestation of the movements of the hands themselves. See here: “…bears a direct sonic resemblance to the physical motion involved” and “causing it to refer instead to the physical materiality of the vinyl-record medium, and more importantly to the embodiment, dexterity and skill of its manipulator.” These are just really great observations.

So when it comes to computers? Where to start? I had a conversation with my dad a year ago that I still remember. He said our phones are stupidly made because they’re made for our eyes, to please the Kantian aesthetes in us hahah. If they were really made for our hands, they would be designed like small conch shells. Look at the antiquated wall phone, how slenderly it wrapped itself inside your palm. I’m trying to say that the devices we use today were not built for us. (The divorce we made between the body and the mind is hurting us.) The computer is inherently disembodied, and we all know this. This is why I really like hyper pop, because its very sound contains the shifting disembodiment of a generation, yet, our inviolable presence throughout. It is us dancing through the divorce hahah. None of this is bad, it all just tells the story. So yeah, I’ll employ the tips and tricks the reading offered. Mostly, I will focus on the computer’s liaison with my hands.

This was a great reading, and captured precisely why I am grateful I majored in this hodgepodge field despite kind of sucking. I read a book about mycelium-inspired-anarchy a little over a year ago, and a lot of the aspirations expressed in this reading echoed it. Through live coding, we can discover what we should cherish and encourage within ourselves and among each other. For example, this resistance towards being defined, of having to be boxed in. Running away from the idea that you have to know or control something to love it. It is possible to love even if you can’t do both those things, which is fearlessness I guess. Unashamed love. I am still trying to get around to this idea regarding this major, and actually, probably regarding everything, now. I’ve been getting into rituals and these sorts of things, but I acknowledge that live coding can be another practice for leaning into that way of embodied living. It actually is probably a good thing for me to do, which is why I took this class. Get more comfortable opening up in the world, real time! Literally real time. I’m a reader, and I think this always gave me a sense of safety and distance. I liked processing and analyzing things from afar, and pressure is probably one of the scariest things to me. So truly, this idea of REAL TIME. We’re all here right now. I’m 22, and I never expected to get this far, and the fact that I’m 22 and still required to figure it out as I go is insane to me. Like, I can never quite wrap my head around the absurdity of it. It’s all really real-time. How does live coding open up? How do you open up? We’ll see, I guess.

I also really liked the idea of “thinking in public.” I am a bit ashamed to admit that I have incel-tendencies. But people are not something you should be afraid of. I think I learned this because of the times I am in, but this idea of thinking in public really resonated with me. Like it’s no big deal. In fact, being around people is magic, and leads to magical times. Sweat and breath in a dance room, you know?

Something else that hit was this quote: “This way of computing . . . helps me ‘unthink’ the engineering I do as my day job. It allows for a relationship with computers where they are more like plants, rewarding cultivation and experimentation.” I think, in the modern world, and for a lot of human history, we maintain the relationships that we are required to in order to survive, but we also know there is a better way of being in the world that feels like beauty, feels lighter, and feels true. Live coding is a way to be that way, I understand. Sort of a release, something you GET TO DO rather than something YOU HAVE TO DO.

I also really liked the idea of presenting “familiar things in a strange way.” I think a lot of things look dead to us and we have to shake things up to recognize them as alive. It goes hand in hand with this idea: “This is a problem for her (and us) inasmuch as Big Tech wants computers to be invisible so our experience of using them becomes seemingly natural.” I like minimalism, but it is also a trap of invisibility and complacency. A really well-designed snare. So keeping things strange and LOUD and VISIBLE appeals to me. And the point of all this is, exactly as was written here: “The capacity of live coding for making visible counters the smart paradigm in which coding and everyday life are drawn together in ways that become imperceptible. The invisibility here operates like ideology, where lived experience appears increasingly programmed, and we hardly notice how ideology is working on us, if we follow this logic, then we do not use computers; they use us.” How do you learn to SEE what can’t be seen? How do you learn to acknowledge that it takes two to tango and it’s not just you running things? I think live coding makes you sharper, or rounds out the depth in the back of your eyes. There’s this great Susanne Sundfør quote: “We don’t do life. Life does us.” And yep, just about. I don’t believe it in my bones yet. Another reason why I took this class.

And the last thing I want to write is that the nature of computers is soooo hyper hyper intimate. It’s like your pet dog except that dog is a mirror and a portal to any world you want to go to. A magical object! The art that has come out of computers consequently has this…feel. It feels kind of windy and spicy, and like an igloo. But then bringing that hyper super intimacy into a public space…? It’s kind of like you’re opening up that intimacy to everyone. It’s safe vulnerability. So wow.

I used to believe there are real humans and not-real humans. I am slowly coming around to the assertion that there is no such thing as one right way to be alive, and that everyone, in their own way, converses with the divine. But in every respect, Kurokawa seems to be a real one––and what I mean by that is:

Obviously, he is interested. Just no bullshit, point-blank interested. You can tell by how he talks about Charles and Ray Eames (I liked their chairs but until recently, I didn’t know they made videos) and films and space music. I have really come to despise spaces that reek of “people who like art trying to one-up each other about how much they like art,” but maybe what I got from this article was a sense of belonging. The mediums and spaces that Kurokawa saw growing up, and felt he belonged in, and that explain why he did what he did, and why he was being interviewed that day.


Watching that Eames video, I was thinking isn’t that how we all have felt for forever? Isn’t that what my Christian forefathers meant when they said we are made in God’s image? Quantum explosion meets hydrogen bomb. This connection between macro and micro. You just have to cut things up and put seemingly-dissimilar-but-actually-similar-things together. A blown out shot of Chicago and the quivering white of a proton. And what is art but adjusting things into your perspective, and hence, our perspective––we just hadn’t realized it was our perspective too, yet. There were a lot of perspectives in this article that stuck out to me.

The first being Kurokawa’s insistence on simultaneous sound and video. I am really coming to dislike categorizations. I think because of Plato and Descartes and a whole lot of other old white guys, we automatically and unconsciously think of things in fixed, Platonic essences. We forget to see things as actually changing and blurring, constantly arranging and rearranging co-constitutions. Like the senses, for example. There are five senses, or so we say. But no––we see sound and hear what we see. That’s why it is so important for performance to be simultaneous. There are senses and realities that exist in between our divisions. Maybe Kurokawa likes bringing people into these liminal spaces, more “real” because they are liminal? Or maybe, grounding people in what we have forgotten. Art is supposed to do that, too. But I think Live Coding is really attached to this––reminding us to return to pure experience, without preconceived notions or categories, the way we have to do in our day to day. To just witness your body and all its interactions for what they are.


That’s why Kurokawa’s performances are “in constant flux, with entities exploding into fragments and dynamically reassembling…” The journalist writes “Nothing is solid in Kurokawa’s universe.” That’s because nothing is solid in our universe. But through live performance, Kurokawa helps us remember that. Or, I think that’s one part that goes into why he does what he does. At least, that’s the part that speaks to me.

I liked reading about how Kurokawa started doing what he was doing. He was just an interested, alive human who started playing around. Doing wacky projects with his friends. I think I often forget that life is just one big playground. Most of the photographers and videographers I admire, when you read their backstories, it’s like this. They were trying, but they weren’t really trying. They were, I guess, playing hard. The journalist said the artists around his time “cut their teeth in clubs and graduated to galleries.” F galleries. Keep it in the clubs, I say.

I also really liked Kurokawa’s observations about nature. I often feel really at odds with my world. I really hate the utilitarian nature of modernity. I just think it’s ugly. Sorry Mies van der Rohe, but I’ve always hated the obvious order of blocks and skinny skyscrapers. I, too, preferred the natural sprawl. How everything looks chaotic on the outside but is governed by a pristine, internal order. I think we have to totally bulldoze our modern notions of organization and cleanliness, and Kurokawa is a part of that.

Through all of Kurokawa’s words, I do detect a real human. Or, just maybe, what I recognize as real in myself. His indifference towards technological development. His choice of words: “he doesn’t show himself.” His draw towards nature. He says that nature changes gradually and I breathe a little, being reminded yet again that I am allowed to do the same, even if it doesn’t seem like it. I visited this exhibition by TaeZoo Park that was a bunch of glitching televisions, but beneath the spectacle of it all was a tribute to his lifelong love for Nam June Paik. I’ll never forget the first time I saw Ryoji Ikeda’s “Superposition.” Left me breathless. I’m still trying to figure out how I balance the real human spirit of things with it all. The journalist mentions that Kurokawa had a Ulysses butterfly in Yves Klein Blue. I saw that blue in a gallery. But what I thought about while looking at it was Maggie Nelson. She said it was too garishly blue for her. She preferred the sky.

Someday, I want to forget about showing myself. Kurokawa reminded me that all you need to stay a real human is to remain truly interested. To just play. Once I accept this, or as I accept this, hopefully it helps me with how I approach live coding, too.

Mosaic is an application designed by Emmanuel Mazza that combines Live Coding with Visual Programming in order to streamline the creative process without sacrificing artistic vision. Based on OpenFrameworks, an open-source creative coding toolkit created by Zachary Lieberman, Theo Watson, and Arturo Castro, it utilizes a wide variety of programming languages including C++, GLSL, Python, Lua, and Bash. As seen in my patch above, it works by connecting blocks of sonic and visual data, facilitating an easy-to-follow data flow. Because Mosaic allows for more complex possibilities while remaining accessible, it is suitable for both beginners and advanced users who wish to create a wide range of real-time audio-visual compositions.

The diagram above outlines how Mosaic hybridizes Live Coding and Visual Programming paradigms in order to reinforce feedback and promote human-machine interaction. Its page, linked here, quotes Henri Bergson: “The eyes see only what the mind is prepared to comprehend.” For someone like me, whose comprehension of programming is limited, I am grateful for applications like Mosaic that allow me to create projects I can understand.

Having previously worked with Max/MSP, I found Mosaic’s interface buggy but still intuitive and easy to use. For my project, I wanted to create audio-reactive visuals that convey feelings of nostalgia and loss of memory. I found this old video of my father recording a song he had made for my brother and me. In Mosaic, I navigated to ‘Objects’ and then ‘Texture’ to find all the nodes that could manipulate and export video.

As seen above, I juggled various concepts and imported multiple videos to explore how Mosaic is able to warp and blend textures to serve whatever concept I landed on. I really liked how the video grabber blended my real-time image via the MacBook camera with the singing video of my father to convey how memories stay with us even as we change and grow. Because Mosaic can only play sound and video separately, I extracted the audio file from the video using VLC media player, and started focusing on how I wanted to manipulate the audio to convey a sense of loss.

As seen above, I used the compressor and bit cruncher objects to add distortion to the sound file so that I could lower or amplify the distortion in real time by lowering the thresh and moving the slider. The entire time, I was reflecting on how, if I were using a platform that only allowed for written code, like TidalCycles, I would have to manually write out these effects; with Mosaic, I could drag and drop the objects I wanted and simply connect them to control the audio the way I wanted to.

The most difficult part of my project was figuring out how to connect visual components with audio so that I could manipulate the blended video of myself and my father as I increased or decreased distortion. I really liked this audio analyzer object because, as seen by the yellow input and green output, it allowed me to do just that, and as an additional bonus, it manipulated the video with whatever sound was playing in real time, so I could speak into the Mac microphone and the video would distort even further. This strengthened the concept of my project, because I could speak about memory loss and the video would distort further in response.

The audio analyzer object converted the sound data into numerical data that could then be converted back into visual data, and I blended visual distortion with the video of my father by controlling the sliders as seen above. I really loved how accessible the controls were, allowing me to manipulate the video and sound in real time according to the demands of the performance.

The finalized video and audio components of the project can be viewed here and here respectively. By manipulating the video and audio live with my voice and the Mosaic controls, as seen in the footage, I was able to convey concepts like memory loss and nostalgia for the purposes of this project. I really loved the potential for creativity via Visual Programming that Mosaic offers, and I will definitely continue to use this application for personal projects in the future.

When I was in middle school, during one of his rare visits, my father showed me an Aphex Twin song from Syro. At that point, having grown accustomed to Ariana Grande and Justin Bieber radio hits, I had said, “It’s just noise.” He responded, “You’ll be able to see the patterns–the music–someday.” My brother and I would sit in front of our battered Bluetooth speaker and listen to Aphex Twin songs in order to understand what our father possibly saw in these strange metal-like songs. We would point out a sound when it arrived at a moment that at first seemed too early or too late. We would gape at sounds that surprised us because they arrived and repeated in ways we hadn’t expected them to, and gradually, we began to love this sense of musically organized disintegration. Ariana Grande and Justin Bieber no longer––that bored us. My brother eventually became a jazz drummer and obsessed over the likes of Miles Davis and John Coltrane and is now majoring in music. I trace his interest back to this anecdote.

Aphex Twin is inseparable from experimental electronic music, and yet, despite being so electronic, his work is undeniably intimate and human. I believe one of the reasons for this is his genius knack for variation and timing, something he learned, as the reading pointed out, from African rhythms. Stockhausen wasn’t the biggest fan of Aphex Twin’s work, claiming he should “stop with all these post-African repetitions” and should “look for changing tempi and changing rhythms.” But Aphex Twin’s ability to continuously repeat interlocking rhythms with genius variations is why people have come to love and idolize him so much. Not everyone can do so much with so little. Another artist this reading reminded me of was Tune-Yards, who also draws from African rhythms to create human-like complexities and variations in her work. She has a lyric that goes: “I use my white woman’s voice to tell stories of African men…” As we go full throttle into a tactile-less, screen-bound, technocratic world, I believe prioritizing the HUMAN in music is more important than ever. Or, not even important, but the kind of music we will increasingly seek out because we need it. I was never a fan of dubstep and EDM because they were too saccharine and simple. I want imperfection. I want the human body, with all its limitations and attempted breaking of those limitations. I really enjoyed this reading and the insight I was able to glean into the music I love through it.

I spent a lot of my time in New York City hanging around Washington Square Park between study breaks from Bobst. One of my most memorable encounters was with a jean skirt-wearing Jewish fella who liked to dance and spoke about how they believed their dead grandmother still lived in their hands. I think they danced because of this idea––that they were, in a way, facilitating communication between their grandmother and the audience through their living body. I commented on how this form of person-to-person interaction was more important than ever in our time of image saturation. Today, between the Internet and social media, we are constantly inundated by what I call “dead” material, or, material that has left the alive, breathing body into fixed positions, such as poetry that has been written down, or photographs. While these mediums are beautiful and important, the links between living things and other living things have been increasingly replaced with various methods of digital pseudo-connection, which could help explain the loneliness epidemic. The true poem is the one that Walt Whitman was constantly rewriting and sharing from his heart. I believe in tangible communication. I believe in dance.

That is why I believe in live coding. I signed up for this class in order to use computers towards this end. I really resonated with the idea that “we do not use computers; they use us…” All we have to do is look at how dependent we are on our phones to realize that we might be the ones being used. But computer manipulation is a way of responding back; of admitting awareness in this current technological landscape, and saying it takes two to tango––I am going to shape you just as you shape me. Within the confines of capitalism and production, most technology is used to manipulate and exploit, but within the framework of this class, I am really excited to collaborate with technology and engage in it as a means of physical, living communication. Get out of the DMs and into the techno raves! I say. One last thing I wanted to mention was the notion of “Being a User,” which entails acknowledging that “there is, whether visible or not, a computer, a programmed system that you use.” All of us live within inherited systems and ideologies that perpetuate a lot of destruction and suffering. Whether we realize it or not, these ideologies form our thoughts, dreams, jokes, and very realities, and it is our responsibility as thinking, alive humans to question and challenge these often-invisible frameworks so that the world can actually start to change for the better.

To be a User is to be alive. Becoming a live coder means becoming an active participant in, and hopefully a challenger of, whatever systems you find yourself in. It means observing with intent and responding in real time. This reading inspired me to keep this philosophical groundwork in the back of my head as we all start to exercise this practice.