Music has always negotiated a fundamental tension between its language-like elements or structures and its math-like ones. For a long time in the West, the mathier side was subjugated to the language side as a matter of worldview. Within earlier liturgical music, things like melody were ultimately there to illuminate the Word of God, and their temporal patterns were dictated by those of classical poetry. Which is to say, language was a structuring force. This relation was suddenly overturned by the Notre Dame school at the turn of the 13th century, which basically invented, all in one go, notation, polyphony, and the rhythmic modes that were the forerunners of modern rhythm. Actually, they were necessarily invented together: polyphony required the coordinating system of notation and new forms of rhythmic measure. And you can see how, once Léonin and Pérotin (or whoever it really was) put this system in place, the higher complexities of harmony and composition could almost be deduced (not really, but almost) by working out or playing with its inner logic. Like ornamentation in modernist architecture, language was no longer load-bearing or structural. Language and its meanings could be placed or hung onto a bare musical scaffolding — or at least that was the big idea. In reality, though a little flattened, the contours and forces of language in music never went anywhere, and they began once again to popularly rival abstract musical value with the emergence of rap in the 1970s. And since that time, the wide availability of sound-making and sound-editing software (among other things) has created a competitor to the Notre Dame system of musical coordination, one that replaces notation with soundwaves and the traditional rhythmic lattice with a real-number timeline. Freed from the need for either notation or periodicity, language (or any action or event) can now just as easily structure the music by its own “naturally-occurring” shapes and temporalities.
PRACTICE (Video) ft. The Answer