I liked how the reading showed that moving between music and visual art can happen naturally. It didn’t feel forced – it made sense that artists would want to use whatever way helps them express their ideas best. I thought it was cool when they mentioned how Paul Klee used ideas like “polyphony” from music in his paintings. It showed how deeply connected the two worlds can be.
The part that stood out most to me was the section on techno and club culture. I liked how clubs became spaces for both music and art, and how computers let artists mix sound, visuals, and performance together. It felt like a real shift in how creative work was happening.
I also agreed with the point that today, art and business are closely tied together. Like the reading said, success and money often decide whether someone is seen more as an artist or a musician. I think that’s just the reality now – everything is connected to capital.
Finally, the idea that energy and passion matter more than technical perfection really resonated with me. I liked how punk made it okay to be intense and imperfect. It made me think that sometimes the strongest art isn’t the most polished, but the most honest.
Overall, the reading made me appreciate how free and open creative work can be when you don’t stick to one label.
I found this reading the most interesting so far because it closely relates to a topic I like reading about in general – the difference between DJs who rely on pre-recorded sets and those who genuinely mix tracks live.
I like Deadmau5 for his honesty about playing pre-recorded sets. He openly admits, “I have no shame in admitting that for my ‘unhooked’ sets I just roll up with a laptop and a midi controller, select tracks, and hit the spacebar.” This transparency contrasts sharply with many DJs today who pretend to mix live when they’re merely pressing play.
An example of this problem is Grimes’ performance at Coachella 2024, where technical issues suggested she likely didn’t prepare her own set and probably didn’t even set up the flash drive used for it. Issues like BPM mismatches could easily be fixed if a DJ understands basic CDJ functionality. Such incidents undermine DJ culture and raise questions about booking practices at major festivals.
I think if you’re mainly a producer and you’re performing your own music, pre-recorded sets can actually make sense – your main skillset is production, not necessarily live DJing. For Deadmau5 or artists like Fisher, that’s okay because it’s their own music they’re showcasing. But if a DJ who’s just playing other people’s tracks does a pre-recorded set, it’s a huge turn-off for me.
Bailey’s views on improvisation highlight the opposite approach. For Bailey, improvisation involves real-time creativity and spontaneous interaction with the instrument. He argues that instruments offer endless sonic possibilities, with improvisation embracing unexpected sounds or “accidents.” This contrasts with the predetermined nature of pre-recorded sets.
It seems like the article is claiming that the electronic music scene lacks spontaneity. However, I don’t think this claim is entirely accurate. Artists like the_last_dj on Instagram, who composes techno live, exemplify a practice similar to live coding, where performance and composition occur simultaneously. DJs like Carl Cox and Richie Hawtin sometimes use synthesizers or modular setups in their sets, which lets them create music live and respond to the crowd in real time. Another example is Meute, the German marching band that does techno and house covers with actual live instruments. That kind of approach keeps performances fresh and genuinely spontaneous, much like live coding. Although it is not common at major music festivals, it does exist.
CDJs and laptops themselves can serve as improvisational tools if DJs use them to actively read the crowd, selecting and mixing tracks spontaneously. Even if individual tracks aren’t improvised, the DJ’s ability to adapt and respond in real time can create an authentic and engaging experience.
I think performance authenticity and audience interaction matter most. Whether through live coding, spontaneous mixing, or live instrumentation, incorporating an element of spontaneity significantly enhances the performance.
I started this project by focusing on the sound first and then creating visuals to match. I had a bunch of ideas—I wanted some acid 303s, amen break drums, and to incorporate elements from my first demo in class. But I also wanted it to have a techno feel. So, I built all these sounds and then started stitching them together into a composition.
The biggest challenge was that some of the sounds didn’t really fit together. To fix that, I made some ambient sounds and drums that suited the composition better, which ended up making the track slower than I originally planned; I was aiming for something faster, more of an ambient techno vibe. I also wanted to use the amen break throughout the whole track, but it didn’t quite fit, so I only included it right before the chorus.
For me, the defining sound of this piece is the 303s—I’m a huge fan of them. They have this raw, chaotic energy, which is what I love about them. That’s also why I wanted the visuals to feel chaotic. The visuals have three sections, all messy and intense, which was exactly what I was going for. I usually focus more on sound, but this time, I actually had more fun working on the visuals.
Overall, I am very happy with the composition. As for the sound, the drop right now feels a little too “stiff” (if that makes sense), but I find it to be a good transition into the superhoover.
The reading prompted me to explore some of Ryoichi Kurokawa’s work, and I found “Re-Assembli” at the ETERNAL Art Space Exhibition really interesting. What’s cool about it is his approach to de-naturing – transforming familiar landscapes of trees and buildings by altering their settings to black and white or inverting their colors, then presenting these transformed scenes through striking, unconventional camera views. As the images move, they often blink rapidly in sync with industrial-like sounds, creating an uncanny, almost synesthetic experience. This synthesis of audio and visuals not only deconstructs traditional notions of nature but also immerses the viewer in a unique sensory journey.

Another aspect of “Re-Assembli” that resonated with me was the juxtaposition he created with the two screens placed side by side. On one screen, visuals of trees and nature were played, while on the other, he played images of buildings and interior spaces. This contrast was particularly fascinating because it accentuated the tension between the organic and the constructed, inviting viewers to reflect on how nature and human-made environments coexist and interact. By deliberately placing these two narratives in parallel, Kurokawa challenges our conventional perceptions and encourages us to consider the impact of urbanization and technological intervention on the natural world.
Another thing I found really cool about Kurokawa’s approach is his choice to work without internet in his studio, even though he uses technology as a tool for his art. This detail made me wonder if he deliberately avoids the internet to minimize distractions or to protect his originality from being influenced by the endless stream of external ideas. I was recently discussing with a friend how ChatGPT can generate creative suggestions that might, paradoxically, lead to a decrease in overall creativity by making us less inclined to think of new ideas on our own. In this light, Kurokawa’s decision to forgo internet access might be a conscious effort to create a focused, unmediated space for artistic exploration, where his creative process remains untouched by the constant influx of digital information.
Also, his indifference toward both old media and the latest innovations highlights his focus on the essence of creativity itself. By working in a fluid, adaptable manner, much like the gradual evolution of nature, he ensures that his artistic process remains open to new ideas and free from the constraints of technological trends. This philosophy not only protects his originality but also allows his work to develop at its own pace, echoing the natural, unpredictable progression of life.
For the research project, the live coding platform that I picked is Motifn. Motifn enables users to make music using JavaScript. It has two modes: a DAW mode and a fun mode. The DAW mode lets users connect their digital audio workstation, like Logic, to the platform, so a user can orchestrate synths in their DAW using JS. The fun mode, on the other hand, lets you start producing music in the browser right away. I used the fun mode for the project.
The coolest feature of Motifn is that it visualises the music for you. Similar to how we see the selected notes in a MIDI region in Logic, Motifn lays out all the different tracks along with the selected notes underneath the code. This lets the user better understand the song structure and is an intuitive way to lay out the song, which makes the platform user-friendly.
To get started, I read through the examples on the platform. There is a long list of examples right next to the coding section on the website. All of the examples are interactive, which makes it easier to experiment with different things. Since the list sits right next to the coding section, it is also convenient to try out a lot of examples without opening separate tabs to refer to the documentation. Having interactive, short, and to-the-point documentation made it easy to experiment with the different things Motifn has to offer.
After playing around with it for a while, I discovered that the platform lets you decide the structure of the song before you even finish coding the song itself. So, using let ss = songStructure({}), I laid out a song structure.
Motifn has a lot of synth options (some of them made using Tone.js) and I am a huge fan of synths, so I started my song with one. I then added bass in the first bridge, synth + bass + notes in the second chorus, bass + hi-hats in the second bridge, and kicks + snare + hi-hats + bass + chords in the first and second verse; I remove the drums in the third chorus and then bring them back in the next one. After that I just take out the instruments one by one and the song finishes.
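Roughly, the skeleton I passed to songStructure looked something like the sketch below. I am reconstructing this from memory, so treat the section names and options as approximate placeholders rather than Motifn’s exact API; only the songStructure call itself is the real entry point.

```javascript
// Rough sketch of my arrangement, reconstructed from memory.
// songStructure() is Motifn's entry point; the section labels and the
// "bars" option below are my own approximations, not guaranteed API names.
let ss = songStructure({
  intro:  { bars: 8 },   // synth only
  bridge: { bars: 8 },   // synth + bass
  chorus: { bars: 8 },   // synth + bass + notes
  verse:  { bars: 16 },  // kicks + snare + hi-hats + bass + chords
  outro:  { bars: 8 },   // take the instruments out one by one
});
```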
There isn’t a lot of information about Motifn online; I was unable to find the year it was developed or even who founded it. I would place the platform somewhere in the middle between live coding and DAW music production. I felt there was less flexibility to experiment and make music on the fly compared to TidalCycles; Motifn seems more structured and intentional. But there are a lot of cool sounds and controls on the platform, like adding groove to an instrument so it plays either slightly behind (like we read in class) or ahead of the beat by a few milliseconds, or modulating the harmonicity of a synth over time. Its use of JavaScript for music composition makes it accessible to a broad range of users, which reflects the live coding community’s values of openness and innovation. Overall, it is a fun platform to use and I am happy with the demo song that I made with Motifn.
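Since some of Motifn’s synths are built on Tone.js, the harmonicity control it exposes maps onto something like the following in plain Tone.js. This is a sketch of the underlying idea rather than Motifn’s own code, and it assumes Tone.js has already been loaded in the page.

```javascript
// Sketch: modulating an FM synth's harmonicity over time with Tone.js.
// Assumes Tone.js is loaded, e.g. <script src="https://unpkg.com/tone"></script>.
async function harmonicitySweep() {
  await Tone.start();                          // audio context must start on a user gesture
  const synth = new Tone.FMSynth().toDestination();
  synth.harmonicity.value = 1;                 // carrier:modulator ratio starts at 1
  synth.triggerAttack("C3");                   // hold a note
  synth.harmonicity.rampTo(4, 8);              // sweep the ratio up to 4 over 8 seconds
  synth.triggerRelease("+8");                  // release the note after 8 seconds
}
document.addEventListener("click", harmonicitySweep, { once: true });
```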
In techno, the drum components—particularly the kick drum and hi-hats—are arguably the most foundational elements. They act as a constant backbone, driving the rhythm forward and maintaining momentum.
Groove is highly subjective, but in my opinion, adding microtiming or varying the velocity of kick drums or hi-hats makes a track sound much better. It introduces subtle variations that prevent the beat from feeling too stiff or mechanical. Robotic rhythms aren’t inherently bad, but over time they can become predictable or monotonous.
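As a toy illustration of what I mean, the snippet below takes a rigid 16-step hi-hat pattern and nudges each hit a few milliseconds off the grid while varying its velocity; the specific numbers are arbitrary examples, not rules.

```javascript
// Toy example: "humanize" a quantized 16-step hi-hat pattern by adding
// small random timing offsets (microtiming) and velocity variation.
const BPM = 130;
const stepMs = 60000 / BPM / 4; // length of a 16th note in milliseconds

function humanize(steps, maxShiftMs = 8, velJitter = 0.15) {
  return steps.map((hit, i) => ({
    step: i,
    // push or pull the hit a few milliseconds off the grid
    timeMs: i * stepMs + (Math.random() * 2 - 1) * maxShiftMs,
    // vary velocity around 0.8, clamped to [0, 1]; silent steps stay at 0
    velocity: hit ? Math.min(1, Math.max(0, 0.8 + (Math.random() * 2 - 1) * velJitter)) : 0,
  }));
}

const hats = humanize(Array(16).fill(1)); // closed hi-hats on every 16th
console.log(hats.slice(0, 4));            // first beat: slightly uneven timing and velocity
```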
However, microtiming isn’t the only thing that gives electronic music its soul. Another major factor is the emotion it evokes in listeners and the culture around it: Berlin-style underground techno will sound very different from Detroit-style underground techno.
For example:
Ambient techno has a different kind of soul—it’s deep, introspective, and atmospheric. Hardstyle has an intense, energetic soul, built around distortion and high-energy kicks. Hardgroove brings a driving, hypnotic pulse that feels more tribal and raw.
Each subgenre carries its own emotional weight, and that emotional impact is just as important as rhythmic complexity. While microtiming can enhance the feel of a track, other elements like sound design, harmonic progression, and energy levels also contribute to the overall experience of the record.
Electronic music doesn’t need microtiming to have soul, but it benefits from it—especially in genres where groove is key.
As a computer science student and a DJ, I find live coding very intriguing because it lets me do something creative with code around my passion for electronic music – code being something that is rarely thought of as a creative tool in fields like music production.
Live coding allows anyone to see and understand the process of creating music through code. Unlike traditional DJing, where music is often mixed from pre-recorded tracks, live coding enables real-time composition, making each performance unique and dynamic. This improvisatory nature mirrors the spontaneity of live music while using the precision and power of programming.
What appeals to me the most is the deeply human aspect of all this. The algorave scene, where people come together to dance to music generated in real time through live coding, is a perfect example of how tech can serve us rather than the other way around. It’s not just about writing code—it’s about using that code to create shared experiences, to bring people together, and to foster a sense of connection. Seeing live coding facilitate something communal through algoraves, subreddits, and GitHub pages reinforces the idea that code isn’t just about logic, structure, and money. It can also be a powerful tool for expression, emotion, and collective joy.