People often compare music to writing mathematical sentences, as both can convey a story of their own. It therefore seems natural for sound and composition to represent both abstract and physical aspects of the world. Live coding, combined with visual elements, enhances this representation by engaging multiple senses, creating a richer, more immersive context for the audience. This multisensory experience aligns with Kurokawa’s central concept of “synaesthesia and the deconstruction of nature.” I believe that activating multiple senses simultaneously during a performance not only deepens the audience’s engagement but also highlights the intricate relationship between sound and visual representation. Just as sound waves and color frequencies correspond, the theory of synaesthesia leads me to wonder: if sound can be associated with specific colors, can colors, in turn, evoke specific sounds?
Furthermore, it is through the visual representation of sound that deeper meaning emerges. By transforming the auditory into the visual, the performance gains an additional layer of interpretation, embodying what Kurokawa refers to as the “decoding of nature.” This seamless fusion of “graphic analyses and re-syntheses” introduces a poetic quality to the performance, where sound and visuals breathe life into the piece. As a result, the work takes on the fluid, organic qualities of nature, embracing both its structure and its inherent noise and randomness.
Dorothy Feaver’s article on Ryoichi Kurokawa introduced me to an artist who thinks like both a scientist and a poet. Kurokawa’s ability to dissect natural phenomena and rebuild them into immersive audiovisual experiences is captivating. While I found the technical aspects of his work slightly intimidating, the underlying themes of synesthesia and “denaturing” nature resonated with me.
The article paints a vivid picture of Kurokawa’s process, from the isolated Berlin studio to the collaborations with his producer. His concerts, like syn_, aim to fuse sound and image, creating a total sensory experience. The concept of “denaturing” nature, using data to reveal hidden structures, is equally intriguing.
What struck me most was Kurokawa’s almost architectural approach to time and space. He’s not just creating art; he’s designing experiences that challenge our perception. The article left me pondering the relationship between technology and nature, and how artists like Kurokawa can help us see the world in new ways. While I’m not sure I fully grasp the technicalities, I’m definitely curious to experience Kurokawa’s work firsthand and get lost in his synthesized sensory landscapes.
Synesthesia allows artists to experience one sensory input while automatically perceiving it through multiple senses. While Ryoichi Kurokawa is not a clinical synesthete—nor am I—I find myself intrigued by how his artistic process unfolds. In true synesthesia, the brain acts as a bridge between senses, but Kurokawa seems to approach it differently. He envisions the brain not just as a connector but as a target, an audience, or even a collaborator. As he puts it, “I want to stimulate the brain with sound and video at the same time.”
Exploring his portfolio, I had a strong sense that his works were created using TouchDesigner (it turns out they are not). As a new media student constantly exposed to different tools, my instinct was to research his technical choices. But then I came across his statement: “I don’t have nostalgia for old media like records or CDs, and I’m equally indifferent toward new technology.” This struck me. As an artist, he moves fluidly, guided not by tools but by his vision, much like the way he deconstructs nature’s inherent disorder only to reconstruct it in his own way. It’s not the medium that matters—it’s the transformation.
Watching Octfalls, I could already imagine the experience of standing within the installation, anticipating each moment, immersed in the sudden and precise synchronization of sound and visuals. As I explored more of his works, I noticed how they differ from what I have seen in many audiovisual performances, where sound usually takes precedence while visuals are more of a support role. In Kurokawa’s pieces, sound and visuals are equal partners, forming a unified whole. This made me reconsider my own approach—perhaps, instead of prioritizing one element over the other, a truly cooperative relationship between sound and visuals could be even more compelling.
“Nature is disorder. I like to use nature to create order and show another side of it. I like to denature.” Kurokawa bends over the iMac, clicks through examples of his work on a hard drive, and digs out a new concert piece that uses NASA topographic data to generate a video rendering of the Earth’s surface. Peaks and troughs dance over a geometric chunk on the black screen, light years from the cabbie’s satnav. “The surface is abstract, but inside it’s governed by natural laws,” he says.
I find Kurokawa’s perspective on nature as disorder, and his desire to “denature” it, very interesting, particularly how it resonates with the tension between chaos and order that exists in both art and science. His use of natural data, such as NASA’s topographic information, to create a structured, arguably even surreal presentation of the Earth highlights the duality between the organic and the artificial. It suggests that while nature may appear unpredictable, it operates within a framework of fundamental laws that can be harnessed and reshaped through human interpretation.
Kurokawa ultimately challenges our perception of what is ‘natural’ and what is ‘artificial.’ His work demonstrates that the act of imposing order on nature does not necessarily strip it of its essence but rather reveals another dimension of its beauty—one that we might not perceive in its raw, untamed state.
Dissected, “de-natured,” and distilled into abstract sounds and images, natural phenomena become eccentric compositions, revealing the wonders contained within.
Ryoichi Kurokawa’s artistic approach reminds me of deconstruction—the process of breaking down natural forms, sounds, and movements into abstracted components before reassembling them into new configurations. His work does not aim to capture nature as it appears to the naked eye; rather, he dissects it, isolates its underlying structures, and reconstructs it through audiovisual synthesis. This act of deconstruction is not about destruction but rather about rediscovery, revealing hidden layers of complexity that are often overlooked in everyday perception.
In Octfalls, Kurokawa takes the movement of water—something familiar and continuous—and deconstructs it into disjointed yet synchronized audiovisual fragments. The cascading motion of waterfalls is translated into sharp, digital interruptions, reframing the fluidity of nature as something mechanical, algorithmic, and layered. In this process, he distills the essence of motion rather than merely illustrating it, allowing the audience to experience nature in a new, mediated form.
This act of reconstruction aligns with the Japanese aesthetic concept of wabi-sabi, which values imperfection and the beauty of decay. Kurokawa’s work does not seek to present a polished, idealized version of nature; instead, he embraces its instability and unpredictability. His compositions often feature glitches, distortions, and irregularities, mirroring the natural world’s inherent disorder. Through deconstruction, he does not strip nature of its essence but rather uncovers its transient, ever-evolving character, keeping the audience continuously engaged with its natural systems.
“Composers sculpt time. In fact, I like to think of my work as time design.” — Kurokawa
Whether it is simultaneously stimulating multiple sensory systems or mixing rational and irrational realities and emotions, one of the biggest themes in Kurokawa’s compositions seems to be tying together elements and worlds that appear entirely different from one another; to me, it is hard to imagine what kind of unexpected harmony his technique might produce. Kurokawa seems to take things that already exist and are accessible to him (i.e., nature’s disorderly ways, hyper-rational realities) and give them a twist, a “shocker” factor that draws curiosity and confusion from the recipient, which struck me as similar to other artists who do the same when drawing inspiration for their works. However, I also came to realize that many of these artists focus on producing visual works, such as photographs, paintings, and installations, which made me wonder how sound engineers and composers might apply this “mixing reality with hyper-reality” to their own work. I imagine it might be more difficult for them, because the collision of two worlds seems far more prominent, and thus easier to perceive, in visual artworks than in audio.
I found his remark about how the computer acted as a gateway into graphic design and the “art world” fascinating, because it echoes a discussion we had in class a few weeks ago about whether the technology tools we have these days make it easier or harder for us to create art and music. Because I had been inseparable from more “traditional” art techniques such as painting and sculpting since youth, before I transitioned into “tech-y” art, I thought creating works with technology was harder and more limiting for artists like me, since learning about computers always felt like a scary realm. However, I can see how, for many, it can be the exact reverse, especially for those without any background or experience in creating artworks or music. Regardless of which circumstance you relate to more, I think this new relationship between technology and art opens up entirely new fields on both sides, allowing us to expand our creativity and explore all kinds of possibilities in a much broader way.
Mosaic is an application designed by Emmanuel Mazza that combines Live Coding with Visual Programming in order to streamline the creative process without sacrificing artistic vision. Based on OpenFrameworks, an open source creative coding toolkit founded by Zachary Lieberman, Theo Watson and Arturo Castro, it utilizes a wide variety of programming languages including C++, GLSL, Python, Lua, and Bash. As seen in my patch above, it works by connecting blocks of sonic and visual data, facilitating an easy-to-follow data flow. Because Mosaic allows for more complex possibilities while remaining accessible, it is suitable for both beginners and advanced users who wish to create a wide range of real-time audio-visual compositions.
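The patch-as-graph idea behind Mosaic can be illustrated with a minimal sketch. This is not Mosaic's actual API, just a toy dataflow graph in Python, with illustrative node names, showing how connected blocks pull values from their upstream sources:

```python
# Minimal sketch of the node-graph idea behind patch-based tools like Mosaic.
# The Node class and all names here are illustrative, not Mosaic's actual API.

class Node:
    """A processing block whose inputs are wired to other nodes."""
    def __init__(self, fn, *inputs):
        self.fn = fn
        self.inputs = inputs

    def evaluate(self):
        # Pull data through the graph: evaluate upstream nodes first,
        # then apply this block's function to their outputs.
        return self.fn(*(n.evaluate() for n in self.inputs))

# Build a tiny patch: two sources feed a mixer, which feeds a gain stage.
source_a = Node(lambda: 0.5)                       # e.g. an oscillator sample
source_b = Node(lambda: 1.0)                       # e.g. a brightness value
mixer    = Node(lambda a, b: (a + b) / 2, source_a, source_b)
gain     = Node(lambda x: x * 2.0, mixer)

print(gain.evaluate())  # → 1.5
```

Wiring blocks together this way is what makes the data flow easy to follow: each connection in the patch corresponds to one input edge in the graph.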
The diagram above outlines how Mosaic hybridizes Live Coding and Visual Programming paradigms in order to reinforce feedback and promote human-machine interaction. Its page, linked here, quotes Henri Bergson: “The eyes see only what the mind is prepared to comprehend.” For someone like me, whose comprehension of programming is limited, I am grateful for applications like Mosaic that allow me to create projects I can understand.
Having previously worked with MaxMSP, I found Mosaic’s interface buggy but still intuitive and easy to use. For my project, I wanted to create audio-reactive visuals that convey feelings of nostalgia and memory loss. I found an old video of my father recording a song he had made for my brother and me. In Mosaic, I navigated to ‘Objects’ and then ‘Texture’ to find all the nodes that could manipulate and export video.
As seen above, I juggled various concepts and imported multiple videos to explore how Mosaic can warp and blend textures to serve whatever concept I landed on. I really liked how the video grabber blended my real-time image, captured via the MacBook camera, with the video of my father singing, conveying how memories stay with us even as we change and grow. Because Mosaic can only play sound and video separately, I extracted the audio file from the video using VLC media player, and then started focusing on how I wanted to manipulate the audio to convey a sense of loss.
As seen above, I used the compressor and bit cruncher objects to add distortion to the sound file so that I could lower or amplify the distortion in real time by lowering the threshold and moving the slider. The entire time, I was reflecting on how, if I were using a platform that only allowed written code, like TidalCycles, I would have to write out these effects manually; in Mosaic, I could simply drag, drop, and connect the objects I wanted in order to control the audio the way I intended.
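To show what the bit cruncher object is doing under the hood, here is a generic sketch of a bit-crushing effect in Python. This is an illustration of the technique, not Mosaic's implementation: each sample is quantized to a coarser amplitude grid, which produces the characteristic digital distortion.

```python
# Illustrative sketch of a bit-crusher effect (not Mosaic's actual code):
# quantizing each sample to fewer amplitude steps adds digital distortion.

def bit_crush(samples, bits):
    """Quantize samples in [-1.0, 1.0] to the given bit depth."""
    steps = (2 ** bits) / 2          # amplitude levels per polarity
    return [round(s * steps) / steps for s in samples]

audio = [0.0, 0.33, 0.72, -0.5, -0.91]
crushed = bit_crush(audio, 3)        # 3 bits → only 8 amplitude levels
print(crushed)                       # → [0.0, 0.25, 0.75, -0.5, -1.0]
```

Lowering the bit depth makes the quantization grid coarser, which is roughly what moving the slider on the object does in the patch.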
The most difficult part of my project was figuring out how to connect the visual components with the audio so that I could manipulate the blended video of myself and my father as I increased or decreased the distortion. The audio analyzer object, with its yellow input and green output as seen above, allowed me to do just that. As an added bonus, it manipulated the video according to the sounds playing in real time, so I could speak into the Mac microphone and the video would distort even further. This strengthened the concept of my project: I could speak about memory loss, and the video would respond by distorting in turn.
The audio analyzer object converted the sound data into numerical data that could then be mapped back onto visual data. I blended visual distortion with the video of my father by controlling the sliders, as seen above. I really loved how accessible the controls were, allowing me to manipulate the video and sound in real time according to the demands of the performance.
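The core idea of that sound-to-number conversion can be sketched briefly. This is an illustrative example, not Mosaic's implementation: a window of audio samples is reduced to a single loudness value (RMS), which can then drive a visual parameter such as distortion amount.

```python
# Sketch of the idea behind an audio-analyzer node (illustrative only):
# reduce a block of samples to one number that can drive a visual parameter.
import math

def rms(window):
    """Root-mean-square loudness of a block of samples."""
    return math.sqrt(sum(s * s for s in window) / len(window))

def distortion_amount(window, max_distort=1.0):
    # Map loudness (clamped to 0..1) onto a distortion parameter
    # that a downstream video node could consume.
    return min(rms(window), 1.0) * max_distort

quiet = [0.01, -0.02, 0.015, -0.01]   # near-silence → little distortion
loud  = [0.9, -0.85, 0.95, -0.8]      # speaking loudly → strong distortion
print(distortion_amount(quiet) < distortion_amount(loud))  # → True
```

This is why speaking into the microphone distorts the video further: louder input yields a larger analysis value, which pushes the visual distortion parameter higher.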
The finalized video and audio components of the project can be viewed here and here, respectively. By manipulating the video and audio live with my voice and the Mosaic controls, as seen in the footage, I was able to convey concepts like memory loss and nostalgia. I really loved the creative potential that Mosaic’s Visual Programming offers, and I will definitely continue to use this application for personal projects in the future.