I found this reading particularly interesting because of its mathematical side (information theory), but at the same time a little difficult to follow because of the musical terminology. What I found really eye-opening was the difference between random corruption and random generation. For my second assignment, I initially struggled a lot to create something that sounded nice or meaningful because I do not have any musical background and do not know which notes to hit, which sounds to use together, and so on. Because of this lack of musical knowledge, I was just generating whatever random patterns I could think of, and they sounded monotonous after a point. As the author writes, even my music was, “Informative, unpredictable, not conforming to something heard before, but it [fell] short of being a musical composition.”
This reading made me realize how we could create a better, more meaningful musical composition, one that conveys emotions of anticipation, prediction, surprise, disappointment, reassurance, or return, through the use of noise: random corruption of carefully selected notes and sounds. I now understand what Prof. Aaron meant when he said, “Put in some question marks here and there, use random range, degradeBy, sometimesBy, etc.” I had earlier wondered how this could lead to music that is more pleasurable to hear, since it would be so off-pattern, out of sync, and uncertain, with no clear rhythm; now I understand why it matters and why it is a better move than random generation. This reading has pointed me toward a different approach to composing music. After really struggling to create live computational music, I believe I now have a direction I want to explore: replacing defined information with random data at random times, degrading otherwise fully intelligible signals.
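To make this direction concrete for myself, here is a minimal sketch in TidalCycles (assuming that is the environment behind the assignment, since degradeBy, sometimesBy, and the question marks Prof. Aaron mentioned are all Tidal constructs; the pattern itself is just my own illustration, not anything from the reading). It starts from a fully intelligible, repeating drum pattern and then randomly corrupts it:

```
-- the fully intelligible signal: a plain, repeating drum pattern
d1 $ sound "bd sn bd sn"

-- the same signal, randomly corrupted instead of randomly generated
d1 $ sometimesBy 0.25 (# speed 2)  -- ~25% of events play at double speed
   $ degradeBy 0.2                 -- randomly drop ~20% of events
   $ sound "bd sn bd [sn sn?]"     -- '?' makes that last hit appear only sometimes
```

What I like about this framing is that the probabilities are knobs: at 0 the pattern is fully predictable and monotonous, and as they approach 1 it collapses into the random generation I was doing before, so the interesting music presumably lives somewhere in between.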
I do not know if this will make me a better composer or the next assignment a bit easier, but the idea of using information theory in computational music is quite fascinating, and I will definitely look into it more later 🙂