First of all, I found some of the interpretations of Live Coding interesting. “Live Coding is shaped by different genealogies of ideas and practices, both philosophical and technological,” so one needs a very deep understanding of liveness. The article also notes that liveness refers not only to human performance but also to nonhuman “machine liveness,” which I think is one reason people need a deep understanding of liveness: it demands an understanding of the nonhuman as well.

Secondly, the author states that Live Coding is not about writing code in advance. However, at my current level, it is almost impossible to be completely on the spot. I remember that during the first group performance, our group brought a lot of pre-written code on stage, which was a big challenge for me. In performing, as the article mentions, you can’t just focus on one note; instead, you have to generate from a higher-order process. In the group, I learned a lot from watching Bato write notes very casually, adding more almost at random. What surprised me was that just by putting them together, even without much manipulation, they could sound great. So I don’t think the article’s claim that “technique doesn’t matter” fits Live Coding with music very well. I started learning Live Coding because I saw many Live Coding performances in New York, and both the art form and the logic behind it appealed to me. To be honest, I was drawn less to the limitations and the technology than to the art form itself. My approach is what the article calls “composed improvisation or improvisation with a composed structure.” The liveness of Live Coding is what sets it apart from other forms of code, and it is what makes it most attractive.

Kandinsky is one of my favorite artists. He had synesthesia, a blending of the senses, and could “hear” colors very clearly, which had a major impact on his art. He even titled his paintings “Improvisation” and “Composition,” as if they were not paintings but musical works. One website also ran an experiment with Kandinsky’s work called “What if you could hear the sound?”, designing and composing different music clips for different parts of a Kandinsky painting, so that each part of the work could be heard as well as seen.

The article describes the popular genres and the development of music and painting in different periods. I like its observation that, due to various factors, painting sometimes develops faster than music, music sometimes develops faster than painting, and sometimes the two develop together across disciplines, supplementing each other. At present, developments in AI technology are also impacting the art world, influencing styles of music and painting as well as interdisciplinary trends.

The “Notation” reading classifies music practices as stylism, traditionalism, and restructuralism. Notation in live coding lets us play with parameters, giving people more possibilities for composing music. The author also mentions the ephemeral nature of live coding: some live coders do not save their code after a performance. This led me to ask, since Live Coding is not the same as traditional music production, and since it allows more people to create music, can this form of music production be called “creative”? Recently, Ableton released a new version of Live with substantial AI-assisted music generation: one can write a few notes at random and generate a complete piece of music. I think that although more people can compose music this way, the essence of “creativity” in music disappears. Unlike coding itself, art is something that requires a lot of time; it is good that people are finding shortcuts, but the process of creation should not abandon the “creation” itself. The article makes us consider how live coding challenges traditional understandings of composition and performance. Live Coding questions the nature of creativity.

https://github.com/traaaacy/Composition_LiveCoding

Idea

For the composition project, I’m focusing on the capybara, an animal I find cute and endearingly silly. Their charm has also been captured in a capybara song that I adore.

Audio

Start:

  1. To start, I tried a couple of individual sounds for the hook. The first is the sound of a capybara moving quickly, then a lighter note, then a trumpet sound; after that, the first part begins.
  2. I used a lot of drums in the first part to make the music more energetic.
  3. The main rhythm for part 1 is: d1 $ slow 2 $ s "808ht:12 808ht:23 808ht:32 <808ht:43 808ht:5*2>" # room 0.7 # gain 1.2 (I chose 808ht because its sound makes me think of a capybara).
  4. I use stacking, starting with one rhythm and layering more rhythms on top.
  5. For the transition I used qtrigger and seqP.
  6. After the transition I used drums with a stronger beat, and I added runs so that each beat is followed by a slighter beat.
  7. If I had more time, I would have added more MIDI, and more code to switch the visuals automatically between parts.
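Steps 4 and 5 above can be sketched in TidalCycles roughly like this (the sample names 808bd, 808sd, and drum are my stand-ins, not the sounds I actually used; stack, qtrigger, and seqP are real Tidal functions):

```haskell
-- layering: start with one rhythm, then stack more on top of it
d2 $ stack [ s "808bd*4"      -- base pulse
           , s "808sd(3,8)"   -- euclidean layer added later
           ]

-- transition: qtrigger waits for the next cycle boundary, and seqP
-- plays each (start, stop, pattern) only during that cycle window
d3 $ qtrigger $ seqP [ (0, 4, s "drum*8")
                     , (4, 8, s "drum*4" # speed 2)
                     ]
```

Because seqP patterns stop on their own after the last window, this shape works well for a one-shot transition section rather than a loop.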

Visual

  1. I use initVideo to initialize the capybara video, and grid it using scale. The MIDI sends "2 3 4 5" for the number of rows and columns.
  2. I use initImage to initialize the capybara image, and reverse it. The problem I encountered is that MIDI can only send values larger than 0, but to reverse the image I need scale(1, -1). So for the reversing, the MIDI notes sent are "2 0", and the Hydra code is scale(1, () => 1 - ccActual[1]).
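A sketch of the two Hydra chains described above, assuming the hydra-synth browser environment and the MIDI bridge that fills the ccActual array (the filenames are hypothetical):

```javascript
// source 0: capybara video, tiled into a grid.
// Hydra textures wrap, so scaling down by 1/n repeats the video n times;
// ccActual[0] receives 2, 3, 4, or 5 from MIDI.
s0.initVideo("capybara.mp4");            // hypothetical filename
src(s0)
  .scale(() => 1 / (ccActual[0] || 1))   // guard against an initial 0
  .out(o0);

// source 1: capybara image, flipped on demand.
// MIDI sends 2 or 0, so 1 - ccActual[1] is -1 (mirrored) or 1 (normal).
s1.initImage("capybara.png");            // hypothetical filename
src(s1)
  .scale(1, () => 1 - ccActual[1])
  .out(o1);
```

Passing an arrow function to scale is what makes the value live: Hydra re-evaluates it every frame, so the grid density and the flip respond the moment a new MIDI value arrives.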

For me, the intention of Live Coding can be divided into “knowing what you want to show” and “not knowing what you want to show.” When I come up with a concept first but am constrained by a lack of knowledge, I often fall into self-doubt: “What did I learn in Live Coding?” When there is no concept, I am surprised and happy with whatever I make at random. After a month of learning, I often reflect on whether the things I make are just “to comfort myself.” However, as this article points out, one of the important features of Live Coding is experiment: “Although many live coders acknowledge some formal training in computing, music, or artistic methods, the knowledge of the process required for live coding emerges often through experimentation, through the accumulation of trial and error, and through innumerable versions and iterations, tests, and attempts” (261). In IM, a lot of software requires enough knowledge before you can produce anything, so I am still not entirely comfortable with the learning process of Live Coding. Learning these skills is not about the anxiety of seeing someone else use the same skills and thinking “they are doing it better than me”; it is about the enjoyment of acquiring a skill. As the author says, “Within live coding, the challenge seems less one of responding with learned behavior or an already rehearsed script than of how to harness the potential unique to every contingent situation.”

Gibber is a live coding environment for audiovisual performance that combines music synthesis and sequencing with ray-marching 3D graphics. It was created by Charlie Roberts, a researcher and artist interested in live coding, computer music, and interactive systems. Gibber lets users write JavaScript, and there is nothing to install: you can begin coding directly in your web browser. There are lots of amazing live coding systems out there; a few things that make Gibber different include:

  • Novel annotations and visualizations within the code editing environment.
  • Unified semantics for sequencing audio and visuals.
  • Support for coding with external audiovisual libraries, such as p5.js and hydra.
  • Support for networked ensemble performances.

One of the strengths of Gibber is its intuitive interface and approach to visualizing and managing sequences for each channel.
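To give a feel for that sequencing model, here is a minimal sketch of a Gibber session; it runs only inside the Gibber web editor, and the constructor and preset names below are from my memory of the Gibber docs, so they may differ from the current API:

```javascript
// drums and a melodic instrument share the same .seq() semantics,
// which is Gibber's "unified sequencing" idea in practice
kick = Kick()
kick.trigger.seq( 1, 1/4 )             // hit on every quarter note

bass = Synth( 'bass' )                 // preset name is an assumption
bass.note.seq( [ 0, 7, 14, 7 ], 1/8 ) // scale degrees, eighth notes
```

The appeal is that one mental model (a pattern of values plus a pattern of durations) covers drums, melodies, and, via the same calls, visual properties.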

Gibber can also integrate with visuals: it allows visuals to be manipulated in real time alongside sound, offering a cohesive audiovisual performance. The visuals can react to the music, providing a more immersive and engaging experience for both the performer and the audience.

To sum up, Gibber provides an intuitive platform for users to explore musical concepts, programming, and audiovisual integration through immediate feedback and a visual representation of sequences (even if you don’t know music theory!).

In An Information Theory Based Compositional Model, Laurie Spiegel first explains information theory, a mathematical theory for optimizing signals sent over noisy channels, where communication degrades. She then illustrates a drawback of applying it to music: highly redundant music becomes predictable, and prolonged exposure leads to listener boredom, since people can predict each note before hearing it.

Subsequently, the author delves into the use of noise in music to enhance its functionality. Introducing unpredictability through noise amplifies uncertainty in each note’s resolution, rendering it more musically interesting. This form of random corruption, distinct from random generation, involves replacing explicitly defined information with random data at random times to counteract redundancy and increase entropy in music. The author asserts that “music is self-referential and sensory rather than symbolic,” and defines music as “an art of sound in time expressing ideas and emotions in significant forms through the elements of rhythm, melody, harmony, and color.”
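A toy sketch of this random-corruption idea as I understand it (my own illustration, not Spiegel’s actual model): take a fully determined melody and, with some probability, replace notes with random ones drawn from a scale, trading redundancy for entropy.

```javascript
// Small linear congruential generator so runs are reproducible.
function makeRng(seed) {
  let state = seed >>> 0;
  return () => {
    state = (state * 1664525 + 1013904223) >>> 0;
    return state / 2 ** 32; // uniform in [0, 1)
  };
}

// Replace each note with a random scale degree with probability `amount`.
// amount = 0 leaves the melody untouched; amount = 1 is pure noise.
function corrupt(melody, scale, amount, rng) {
  return melody.map(note =>
    rng() < amount ? scale[Math.floor(rng() * scale.length)] : note
  );
}

const cMajor = [60, 62, 64, 65, 67, 69, 71, 72]; // MIDI note numbers
const theme  = [60, 64, 67, 72, 67, 64, 60, 64]; // predictable arpeggio
const rng    = makeRng(42);

console.log(corrupt(theme, cMajor, 0.25, rng));
```

The key distinction from random generation is visible in the code: the melody stays explicitly defined, and randomness only intervenes occasionally, so the listener can no longer be certain how each note will resolve.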

The concept of randomness has given creators limitless possibilities, and a growing number of music programming applications incorporate this kind of randomization, drawing on a more diverse set of noises so that people can create music even without a background in music theory. Unlike the author’s idea of randomness, though, my idea of “random” is more along the lines of “one can make simple music out of many kinds of clips that already exist.”