Music’s ability to evoke emotions is deeply influenced by the context and culture in which it is experienced, as explored in Microstructures of Feel, Macrostructures of Experience. In the paper, the author discusses groove-based music and its historical lineage, emphasizing the role of rhythmic structures and expressive timing in shaping expectations and emotional responses. This implies that emotional reactions to music don’t necessarily come from the inherent qualities of the music or specific notes, but are shaped by the cultural and media contexts to which we are exposed as consumers. This idea reminded me of classical pieces that are commonly associated with particular emotions, often without much thought. For instance, Beethoven’s Symphony No. 7, Second Movement and John Williams’ Jaws theme are both iconic and firmly placed in specific cultural contexts.

The paper argues that music functions as a communicative process, harmonizing individuals through shared experiences rather than conveying fixed meanings. Exposure and prior contexts lead us to internalize patterns and associate particular musical elements with emotions. Beethoven’s Symphony No. 7, Second Movement exemplifies this, as it’s often featured in tragic or dramatic settings, reinforcing its melancholic association. If it were placed in a different context, its emotional reception might shift significantly. Another example is the Jaws theme, which illustrates how repeated exposure to a rhythm within a suspenseful context has conditioned audiences to associate it with fear. The paper’s discussion of microtiming and expressive timing in groove-based music helped me understand how musicians manipulate rhythmic delivery to evoke distinct emotions. The accelerating motif in Jaws has conditioned us to connect that rhythm with fear and thrill, and beyond its cinematic origin, the pattern has been widely referenced and parodied, which only reinforces its emotional connotation in popular culture.

The study of expressive timing and groove-based traditions shows how music’s emotional power lies in learned associations. Whether through embodiment, memory, or repetition in media, our musical biases shape how we perceive and react to sound, demonstrating that our emotional engagement with music is as much a product of cultural exposure as it is of musical structure.

Music originates from the human body. Every clap of the hands and stomp of the feet creates a rhythm, and these natural rhythms have inspired the development of instruments such as the snare drum and bass drum. Furthermore, the variations in pitch and frequency of notes mirror the complexity of human emotions, allowing music to carry and convey feelings that the audience can intuitively understand. Because music is born from the body and has the ability to transmit the abstract essence of human experience, I believe it comes alive through both those who create it and those who listen to it.

What makes music feel even more alive is the behavior we attribute to it. For instance, the slight delay between the bass drum and snare drum may stem from the natural coordination differences between our hands and feet, yet we accept this as an inherent quality of musical rhythm. Each performer is unique, and as a result, the same piece of music can be played in ways that evoke entirely different emotions. In this sense, music takes on the personality of the musician, becoming a deeply personal and expressive form of art. Thus, music serves as a powerful medium for self-expression in today’s world.

Grooving signifies a “microscopic sensitivity to musical timing.” In this sense, performing a groove of any kind equates to having a developed sense of musical timing. Grooving can no doubt be easily elicited by good music, which goes to show that good sounds activate the audience’s subconscious feel for musical timing. The backbeat is “indigenous to the modern drum kit”; the backbeat itself is also regarded as “a popular remnant of […] ancient human musical behavior.” This suggests that from very early on, humans have possessed the ability to sense and feel musical timing, which is itself a form of the human body making music, completing the circle of musical experience. This sense of the backbeat and musical timing is meant to be experienced in a collective setting; good music is meant to be shared within a community.

To me, live coding feels like a raw, unfiltered conversation with technology—where code isn’t just something you write and execute but something you shape and negotiate with in the moment. It reminds me of DJing or vinyl scratching, where the act of creation is as important as the final output, and every adjustment is part of the performance.

There’s something rebellious about it, too. Most coding environments push precision, control, and pre-planned logic, but live coding thrives on unpredictability, improvisation, and even failure. The screen isn’t just a workspace—it’s a canvas, a stage, an instrument. The audience isn’t just watching; they’re witnessing thought unfold in real time. It challenges the idea that programming has to be hidden, polished, or even “correct.” Instead, it embraces the process, the trial and error, the glitches that become part of the art.

For me, live coding is exciting because it breaks down the usual walls between artist and machine, between logic and emotion. It’s proof that code isn’t just functional—it can be expressive, performative, even poetic. It makes technology feel more human, more alive.

Electronic music has long been entangled in a debate about its humanity—whether it lacks the “soul” that traditional acoustic music embodies. However, as history has shown, electronic music is not detached from human expression; rather, it continuously interacts with historical and cultural narratives, reshaping the way we perceive sound, memory, and identity. From the early drum machines like the Roland TR-808 to modern synthesizer-based music, electronic sounds have evolved from mere functional tools into carriers of nostalgia, cultural significance, and artistic innovation.

One of the most striking examples of electronic music’s transformation is the Roland TR-808 drum machine. When it was released in the early 1980s, it was considered a cheap, artificial alternative to real drummers. The machine’s rigid quantization and synthetic drum sounds lacked the microtiming and organic fluctuations found in human performance. Because of this, many in the traditional music industry dismissed it. However, the TR-808 did not disappear. Instead, it found a second life in genres like hip-hop, house, and techno, where early pioneers wanted a cheap yet powerful production tool; that economic reality introduced them to the drum machine and its futuristic sound.

Another way electronic music gains its “soul” is through the use of samples, where producers incorporate fragments of existing recordings into their compositions. Renowned artists like Daft Punk, Jamie xx, Fred again.., and The Avalanches have mastered this technique. I remember that in a Pitchfork interview with The Avalanches, the group mentioned their passion for Western movies and adopted the sound of horses as a recurring sample in their album Since I Left You. This approach demonstrates how sampling is not merely a technical tool but a form of musical storytelling—one that connects generations of sound and reimagines the stories behind the artists.

In techno, the drum components—particularly the kick drum and hi-hats—are arguably the most foundational elements. They act as a constant pulse, driving the rhythm forward and maintaining momentum.

Groove is highly subjective, but in my opinion, adding microtiming or varying the velocity of kick drums or hi-hats makes a track sound much better. It introduces subtle variations that prevent the beat from feeling too stiff or mechanical. Robotic rhythms aren’t inherently bad, but over time, they can become predictable or monotonous.
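As a rough illustration of what I mean, here is a minimal Python sketch that takes a rigidly quantized kick/hi-hat pattern and adds small random timing offsets and velocity jitter. It is not tied to any particular DAW or library, and the tempo, offset ranges, and velocity values are arbitrary assumptions on my part, not a recipe any producer actually follows:

    import random

    STEPS_PER_BAR = 16
    STEP_MS = 125.0            # 16th notes at a hypothetical 120 BPM

    # 1 = hit, 0 = rest: a plain four-on-the-floor kick with off-beat hi-hats
    kick  = [1, 0, 0, 0] * 4
    hihat = [0, 0, 1, 0] * 4

    def humanize(pattern, max_offset_ms=8.0, base_velocity=100, velocity_jitter=15):
        """Turn a quantized step pattern into (time_ms, velocity) events
        with small random deviations in timing and loudness."""
        events = []
        for step, hit in enumerate(pattern):
            if not hit:
                continue
            time_ms = step * STEP_MS + random.uniform(-max_offset_ms, max_offset_ms)
            velocity = base_velocity + random.randint(-velocity_jitter, velocity_jitter)
            events.append((round(time_ms, 2), max(1, min(127, velocity))))
        return events

    print("kick :", humanize(kick))
    print("hihat:", humanize(hihat, max_offset_ms=12.0, base_velocity=80))

Even offsets of a few milliseconds and a little spread in velocity are enough to keep a loop from sounding perfectly machine-locked, which is exactly the kind of subtle variation I have in mind.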

However, microtiming isn’t the only thing that gives electronic music its soul. Another major factor is the emotion it evokes in listeners and the culture surrounding it: Berlin-style underground techno will sound very different from Detroit-style underground techno.

For example:

Ambient techno has a different kind of soul—it’s deep, introspective, and atmospheric.
Hardstyle has an intense, energetic soul, built around distortion and high-energy kicks.
Hardgroove brings a driving, hypnotic pulse that feels more tribal and raw.


Each subgenre carries its own emotional weight, and that emotional impact is just as important as rhythmic complexity. While microtiming can enhance the feel of a track, other elements like sound design, harmonic progression, and energy levels also contribute to the overall experience of the record.

Electronic music doesn’t need microtiming to have soul, but it benefits from it—especially in genres where groove is key.

Microstructures of Feel, Macrostructures of Experience

In this reading, the author mentions that emotions play a huge part in music. He questions how something as complex as emotion can be conveyed through something as simple as a drumbeat. This reminds me of the famous quote:

“To play a wrong note is insignificant; to play without passion is inexcusable” – Ludwig van Beethoven.

My music mentors over the years have emphasized how important body language is; one of them would always tell us that people don’t come to shows only to listen to music but also to watch, so your body language has to be performative and in sync with your music. By mastering the art of bodily performance, your music gets better. When musicians allow themselves to be immersed in the music, more often than not they unconsciously start adding accents and decorative elements that enhance the music and give it a richer texture. This applies to each instrument in a different way; on the drums, it comes through the motion of the hands and the intensity and speed of contact with the drum.

Another important idea the author mentions is the importance of the rests. I am once again reminded of the famous quote:

“The music is not in the notes, but in the silence in between” – Wolfgang Amadeus Mozart.

The presence of rests creates rhythm and beats. Without rests there would be no music, only noise. Based on my years as an orchestra chair, I can vouch for the importance of rests. If a rest is miscounted, players play at the wrong time, the beat is thrown off, the accents land on the wrong notes, the emotion evoked in the audience is completely lost, and everyone falls out of sync. To create a groovy beat and to evoke feelings in listeners, you have to pay immense attention to the rests. In the context of percussive beats, rests hold the power to make beats more or less interesting; by shifting rests slightly, more grooviness can be achieved. This makes me wonder, though, how it compares to computer-generated music: can we program computers to convey the same emotions? By randomizing rests and slightly desynchronizing beats, could we achieve the same results as human-generated music? This raises bigger questions in my mind about whether human emotions are as simple as calculated tricks that can be programmed into a computer.
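To make that question concrete, here is a small, purely hypothetical Python sketch of the kind of experiment I am imagining: start from a hit on every sixteenth-note step, randomly carve some hits into rests, and nudge the surviving hits off the grid. The rest probability and shift range below are arbitrary assumptions, not a claim about how such music is actually made:

    import random

    STEPS = 16          # one bar of 16th-note steps
    REST_PROB = 0.25    # chance of turning a hit into a rest (arbitrary)
    MAX_SHIFT = 1       # how far a surviving hit may drift, in steps (arbitrary)

    def randomized_groove(seed=None):
        """Start with a hit on every step, then carve out rests and
        shift the remaining hits slightly, returning a step pattern."""
        rng = random.Random(seed)
        pattern = [0] * STEPS
        for step in range(STEPS):
            if rng.random() < REST_PROB:
                continue                      # this step becomes a rest
            shift = rng.choice(range(-MAX_SHIFT, MAX_SHIFT + 1))
            pattern[(step + shift) % STEPS] = 1
        return pattern

    for trial in range(3):
        bar = randomized_groove(seed=trial)
        print("".join("x" if hit else "." for hit in bar))

My hunch is that patterns like these give you rests without intention: randomness can imitate the surface of a shifted rest, but whether it can carry the counted, felt silence of a live ensemble is exactly the question the reading leaves me with.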