Background: What is Pure Data?

Pure Data (Pd) is an open-source visual programming environment primarily used for real-time audio synthesis, signal processing, and interactive media. Programs in Pd, known as patches, are built by connecting graphical objects that pass messages and audio signals between one another. Unlike text-based programming languages, Pd emphasizes signal flow and real-time interaction, allowing users to modify systems while they are running.

Pd belongs to the family of visual patching languages descended from Max, which Miller Puckette originally developed at IRCAM. While Max/MSP later became a commercial platform, Pure Data remained open source and community-driven. This has led to its widespread use in experimental music, academic research, DIY electronics, and new interfaces for musical expression (NIME) projects.

One of Pd’s major strengths is its portability. It can run not only on personal computers but also on embedded systems such as the Raspberry Pi, and even on mobile devices through frameworks like libpd. This makes Pd especially attractive for artists and researchers interested in standalone instruments, installations, and hardware-based performances.

Personal Reflection

I find Pure Data to be a really engaging platform. It feels similar to Max/MSP, which I have worked with before, but with a simpler, more playful interface. The ability to freely arrange objects and even draw directly on the canvas makes the patch feel less rigid and more sketch-like, almost like thinking through sound visually.

Another aspect I really appreciate is Pd’s compatibility with microcontrollers and embedded systems. Since it can be deployed on devices like the Raspberry Pi (I haven’t tried this myself yet), it allows sound systems to exist independently of a laptop. This makes Pd especially suitable for experimental instruments and NIME-style projects.

Demo Overview: What This Patch Does

For my demo, I built a generative audio system that combines rhythmic sequencing, pitch selection, envelope shaping, and filtering. The patch produces evolving tones that are structured but not entirely predictable, demonstrating how Pd supports algorithmic and real-time sound design.

1. Timing and Control Logic

The backbone of the patch is the metro object, which acts as a clock. I send it a tempo $1 permin message to define the speed in beats per minute, allowing the patch to be thought of musically rather than in raw milliseconds.

Each metro tick increments a counter using a combination of f and + 1. The counter is wrapped using the % 4 object, creating a repeating four-step cycle. This cycle is then routed through select 0 1 2 3, which triggers different events depending on the current step.

This structure functions like a step sequencer, where each step can activate different pitches or behaviors. It demonstrates how Pd handles discrete musical logic using message-rate objects rather than traditional code.
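Because Pd saves patches as plain text, this clock-and-counter backbone can be sketched directly in patch-file form. The snippet below is a minimal hand-written reconstruction of just this subsection, not my full patch: the 1 and 0 messages start and stop the clock, the tempo 120 permin message sets the time unit (the 120 BPM value is arbitrary, and the tempo message needs a reasonably recent Pd vanilla), and print objects stand in for the real per-step actions.

```
#N canvas 100 100 440 320 12;
#X msg 20 20 1;
#X msg 60 20 0;
#X msg 110 20 tempo 120 permin;
#X obj 20 70 metro 1;
#X obj 20 110 f;
#X obj 70 110 + 1;
#X obj 20 150 % 4;
#X obj 20 190 select 0 1 2 3;
#X obj 20 240 print step0;
#X obj 90 240 print step1;
#X obj 160 240 print step2;
#X obj 230 240 print step3;
#X connect 0 0 3 0;
#X connect 1 0 3 0;
#X connect 2 0 3 0;
#X connect 3 0 4 0;
#X connect 4 0 5 0;
#X connect 4 0 6 0;
#X connect 5 0 4 1;
#X connect 6 0 7 0;
#X connect 7 0 8 0;
#X connect 7 1 9 0;
#X connect 7 2 10 0;
#X connect 7 3 11 0;
```

Saved as a .pd file and opened in Pd, clicking the 1 message should print step0 through step3 in a repeating four-step cycle, two steps per second at 120 BPM.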

2. Pitch Selection and Control

For pitch generation, I use MIDI note numbers that are converted into frequencies using mtof. Each step in the sequence corresponds to a different MIDI value, allowing the patch to cycle through a small pitch set.

By separating pitch logic from synthesis, the patch becomes modular: changing the melodic structure only requires adjusting the number boxes feeding into mtof, without touching the rest of the system. This reflects Pd’s strength in modular thinking, where musical structure emerges from the routing of simple components.
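For reference, mtof implements the standard MIDI-to-frequency mapping f = 440 · 2^((m − 69)/12), so note 69 yields 440 Hz and note 60 yields about 261.6 Hz. Here is a minimal sketch of this stage, using an arbitrary C-major pitch set (60 64 67 72) rather than the notes in my actual patch:

```
#N canvas 100 100 400 220 12;
#X msg 20 20 60;
#X msg 70 20 64;
#X msg 120 20 67;
#X msg 170 20 72;
#X obj 20 80 mtof;
#X obj 20 120 print freq;
#X connect 0 0 4 0;
#X connect 1 0 4 0;
#X connect 2 0 4 0;
#X connect 3 0 4 0;
#X connect 4 0 5 0;
```

Each message box plays the role of one sequencer step; changing the melody really is just a matter of editing those four numbers.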

3. Sound Synthesis and Shaping

The core sound source is osc~, which generates a sine wave based on the frequency received from mtof. To avoid abrupt changes and clicks, pitch transitions are smoothed using line~, which interpolates values over time.
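A minimal sketch of this smoothing stage follows. The $1 20 message (escaped as \$1 20 in the saved file) tells line~ to glide to each new frequency over 20 ms, a ramp time I picked arbitrarily for illustration, and the 0.1 gain is only there to keep the test tone quiet:

```
#N canvas 100 100 420 340 12;
#X msg 20 20 60;
#X msg 70 20 72;
#X obj 20 70 mtof;
#X msg 20 110 \$1 20;
#X obj 20 150 line~;
#X obj 20 190 osc~;
#X obj 20 230 *~ 0.1;
#X obj 20 270 dac~;
#X connect 0 0 2 0;
#X connect 1 0 2 0;
#X connect 2 0 3 0;
#X connect 3 0 4 0;
#X connect 4 0 5 0;
#X connect 5 0 6 0;
#X connect 6 0 7 0;
#X connect 6 0 7 1;
```

With DSP switched on, clicking the two message boxes alternates between middle C and the C an octave above, without an audible click at the transition.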

I further shape the sound using:

  • expr~ $v1 * -1 and cos~ to transform the waveform
  • lop~ 500 as a low-pass filter to soften high frequencies
  • vcf~ for resonant filtering driven by slowly changing control values

Amplitude is controlled using *~, allowing the signal to be shaped before being sent to dac~. This ensures the sound remains controlled and listenable, even as parameters change dynamically.
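A stripped-down sketch of this signal chain is below. The 220 Hz source, the 800 and 1500 Hz center frequencies, the Q of 3, and the 0.2 gain are placeholder values rather than the ones in my patch; I have also left out the expr~/cos~ waveshaping stage and swapped in phasor~ as the source, so the filters have harmonics to work on:

```
#N canvas 100 100 420 280 12;
#X obj 20 20 phasor~ 220;
#X obj 20 60 lop~ 500;
#X msg 120 60 800;
#X msg 170 60 1500;
#X obj 20 110 vcf~ 3;
#X obj 20 160 *~ 0.2;
#X obj 20 200 dac~;
#X connect 0 0 1 0;
#X connect 1 0 4 0;
#X connect 2 0 4 1;
#X connect 3 0 4 1;
#X connect 4 0 5 0;
#X connect 5 0 6 0;
#X connect 5 0 6 1;
```

Sending a float into vcf~’s middle inlet sets its center frequency, so clicking the two message boxes audibly shifts the resonant peak.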

4. Randomness and Variation

To prevent the output from becoming too repetitive, I introduced controlled randomness using random. The random values are offset and scaled before being sent to the filter and envelope controls, creating subtle variations in timbre and movement over time.
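As a sketch of this offset-and-scale idea (the 500 ms rate, the × 10 / + 300 mapping, and the 400 ms ramp are all arbitrary): every clock tick, random 100 produces an integer from 0 to 99, which * 10 and + 300 map into a 300–1290 Hz range, and line~ then ramps there smoothly. Its output would feed a signal inlet such as vcf~’s center frequency from the earlier sketch:

```
#N canvas 100 100 440 320 12;
#X msg 20 20 1;
#X obj 20 60 metro 500;
#X obj 20 100 random 100;
#X obj 20 140 * 10;
#X obj 20 180 + 300;
#X msg 20 220 \$1 400;
#X obj 20 260 line~;
#X connect 0 0 1 0;
#X connect 1 0 2 0;
#X connect 2 0 3 0;
#X connect 3 0 4 0;
#X connect 4 0 5 0;
#X connect 5 0 6 0;
```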

This balance between structure (metro, counters, select) and unpredictability (random modulation) is central to generative music, and Pd makes this relationship very explicit through visual connections.

Conclusion

Overall, Pure Data feels less like a traditional programming environment and more like a living instrument, where composition, performance and system design happen simultaneously. This makes it especially relevant for experimental music, live performance and research-based creative practice.

(Somehow I can’t record my device’s internal audio… I’ll try recording it with my phone sometime.)

Reading Reflections

I think the author’s understanding of “feel” in music is significant in the context of computational music. If musical feel emerges from microscopic deviations in timing and intensity, then computational systems expose a clear tension: computers prioritize precision, while groove often depends on subtle imprecision. Strict quantization can therefore erase the traces of bodily presence that make rhythms sound relaxed or “in the pocket”, even when they are metrically correct.

This tension becomes especially clear in live coding. While live coding environments often emphasize algorithmic structure and real-time precision, they also introduce a performative and temporal dimension that reopens space for musical feel. Rather than encoding a fixed groove in advance, live coding unfolds in time, making timing decisions visible and responsive. Small changes in code, such as shifting delays, densities, or rhythmic patterns, function less as abstract instructions and more as gestures comparable to microtiming adjustments in embodied performance.


Reading this article made me rethink what live coding really is; I have watched many live coding performances but never thought about it in these terms. I really like the idea that there is no fixed definition: “Live coding is about people interacting with the world, and each other, in real time.” Although I found this explanation somewhat general, it showcases how much inclusivity and possibility live coding can encompass. I also like the author’s point that live coding “asks questions about liveness”, prompting reflection on the fundamentals of computer culture and technology. For me it is breathtaking that a single line of code can create a particle system in Hydra that would take ages to build in p5, and live coding’s straightforward nature makes creating music and visuals easier than ever before. As for “showing the screen” during performances, at least from my own experience watching showcases, seeing the performer change the code on the big screen alongside the visuals is part of witnessing the magic happen, and is exactly what makes the whole thing “live”.