Gibber is a browser-based live coding environment created by Charlie Roberts and JoAnn Kuchera-Morin, working with their team at UCSB, as an accessible platform for live music and audiovisual programming. Existing live coding tools raise barriers for many learners: they must be installed, they often require learning specialized programming languages, and their setup procedures can be intricate. Gibber addresses these obstacles by running entirely in the web browser and using plain JavaScript, so users can start coding immediately with nothing to install. Its concise abstractions over the Web Audio API let users build oscillators, FM synthesis, granular synthesis, audio effects, and musical patterns with minimal code. The timing system offers two complementary notions of time, letting users schedule audio at the sample level or work in musical time values. On top of this sample-accurate foundation, the Seq and Score tools support both freeform musical patterns and organized compositions. The platform also combines visual rendering with real-time collaborative code editing, using CRDTs to keep performers' shared code consistent throughout a collaboration.
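The relationship between the two timing notions described above can be sketched in plain JavaScript. This is not Gibber's actual API, just an illustration of how musical time values (beats) map onto sample-accurate positions; the sample rate and tempo are assumed values:

```javascript
// Illustrative sketch (not Gibber code): converting musical time to
// sample-accurate offsets. 44100 Hz and 120 BPM are assumptions.
const sampleRate = 44100;                      // samples per second
const bpm = 120;
const samplesPerBeat = (60 / bpm) * sampleRate; // 22050 samples per beat

// A "musical" pattern of note durations, expressed in beats...
const pattern = [1, 0.5, 0.5, 2];

// ...turned into sample-accurate onset times for the audio engine:
let cursor = 0;
const onsets = pattern.map(dur => {
  const t = cursor;
  cursor += dur * samplesPerBeat;
  return t;
});
console.log(onsets); // [0, 22050, 33075, 44100]
```

The point of exposing both layers, as Gibber does, is that a performer can think in beats while the engine still schedules events with sample precision.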

Gibber’s design offers several strengths that make it effective both as an educational instrument and as a performance tool. Its simplified syntax lets beginners achieve musical results quickly while still allowing advanced users to experiment in depth. Because it runs in any web browser, it is accessible to everyone, which makes it well suited to classrooms, training programs, and online group performances. Its integrated graphics system lets artists create audio-responsive visuals, interactive drawings, and multimedia shows from a single, unified coding platform. Pattern sequencers, modulation tools, and a range of synthesis options support music across many styles, from rhythmic beat-making to experimental sound design. The collaborative features further distinguish Gibber: multiple performers can code together in real time, sharing musical ideas through their common code rather than synchronizing audio streams. Altogether, this flexible design makes Gibber both a learning platform and a medium for practicing and creating with others in electronic art.

Microtiming Studies

Studying microtiming and other techniques often found in African and African-American music to uncover the patterns that create groove, rhythm, and embodiment felt like looking at the science behind something I’ve always thought of as purely emotional. At the start of the reading I kept questioning whether music, a tool used to convey emotion, can be broken down in technical terms that capture what makes it human and expressive. As someone with a short-lived history in music theory, I was aware that it can all be broken down to uncover what makes up what we hear every day, though I never thought about which part of this technical dissection could point out the humanness of it all. The ‘microscopic sensitivity to musical timing’ that African musicians use to create ‘expressive timing’ in their music made sense once I read about it, yet it was an attribute I had never considered. A human can never reach the mechanical perfection of a machine, which sounds like a flaw until you start thinking of it as the foundation of an expressive and meaningful beat. The emphasis on the fact that these slight shifts aren’t random, that they’re embodied and part of a long cultural practice, made me rethink how much of musical feel comes from the body and not just intention. Applying that to the live coding we will be doing in class helped me understand where our personal expression can enter live coding. Evaluating a line or typing in the code for a beat when it feels right, even if it’s slightly early or late, will contribute to a performance that feels personal and expressive rather than mechanical. It’s not about perfecting what we practiced or what we had in mind; it’s about feeling what we are performing on a level where embodying the music is possible, giving space for the human body to be integrated into our work.
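The idea that expressive timing consists of small, patterned shifts rather than random jitter can be made concrete with a tiny sketch. This is my own illustration, not from the reading; the tempo and the millisecond offsets are invented values:

```javascript
// Illustrative sketch: a mechanically perfect beat grid versus a
// "humanized" one. The offset pattern below is an assumption, meant
// to show that expressive timing is patterned, not random noise.
const beatMs = 500;                             // quarter note at 120 BPM
const grid = [0, 1, 2, 3].map(b => b * beatMs); // [0, 500, 1000, 1500]

// A repeating pattern of early/late pushes, in milliseconds:
const feel = [0, +15, -10, +20];

const humanized = grid.map((t, i) => t + feel[i % feel.length]);
console.log(humanized); // [0, 515, 990, 1520]
```

Because the `feel` pattern repeats, the deviations recur the same way each bar, which is closer to an embodied groove than to random timing error.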

Introduction to Live Coding

“To define something is to stake a claim to its future, to make a claim about what it should be or become. This makes me hesitate to define live coding.” Starting my understanding of live coding with such a statement really set things in perspective for me: why give a non-traditional form of coding and performance a traditional definition? As someone who is drawn to experimentation and to exploring the limitations and possibilities of different programs and techniques, the idea of live coding as something that resists being pinned down feels right. It suggests that live coding is less about arriving at a finished product and more about staying in process. The emphasis on not defining it too rigidly made me think about how often coding is treated as something that must be efficient and goal-oriented rather than exploratory or expressive. Live coding seems to push back against that mindset by valuing risk and improvisation. I also found myself thinking about the idea of “thinking in public” and how vulnerable that can be. Showing the screen and the code as it’s being written means showing uncertainty and mistakes in real time. Instead of hiding behind polished results, live coding invites the audience into the process of creation itself. That feels intimidating but at the same time freeing, especially in contrast to how coding is usually performed behind the scenes.