Same old song
Jul 26th 2012, 19:28 by L.R.
An analysis published in Scientific Reports by Joan Serrà of the Artificial Intelligence Research Institute in Barcelona and his colleagues has found that music has indeed become both more homogeneous and louder over the decades.
Dr Serrà began with the basic premise that music, like language, can evolve over time, often pulled in different directions by opposing forces. Popular music especially has always prized a degree of conformity—witness the enduring popularity of cover songs and remixes—while at the same time being obsessed with the new. To untangle these factors, Dr Serrà’s team sifted through the Million Song Dataset, run jointly by Columbia University, in New York, and the Echo Nest, an American company, which contains beat-by-beat data on a million Western songs from a variety of popular genres. The researchers focussed on the primary musical qualities of pitch, timbre and loudness, which were available for nearly 0.5m songs released from 1955 to 2010.
They found that music today relies on the same chords as music from the 1950s. Nearly all melodies are built from the ten most popular chords, whose frequencies follow a pattern familiar from written texts: the most common word occurs roughly twice as often as the second most common, three times as often as the third, and so on, a linguistic regularity known as Zipf’s law. What has changed is how the chords are spliced into melodies. In the 1950s many of the less common chords would chime close to one another in the melodic progression. More recently they have tended to be separated by the more pedestrian chords, leading to a loss of some of the more unusual transitions. Timbre, lent by instrument types and recording techniques, shows similar signs of narrowing after peaking in the mid-1960s, a phenomenon Dr Serrà attributes to experimentation with electric-guitar sounds by Jimi Hendrix and the like.
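For readers who like their regularities spelled out, the Zipfian pattern is easy to sketch. The snippet below is purely illustrative (the chord counts are invented, not drawn from the study): it generates rank-frequency counts proportional to 1/rank and checks the ratios the law predicts.

```python
# Toy illustration of Zipf's law as applied to chord popularity.
# The frequency of the r-th most common item is proportional to 1/r,
# so the top item occurs twice as often as the second, three times
# as often as the third, and so on. Counts are hypothetical.
def zipf_frequencies(n_ranks, top_count=6000.0):
    """Return idealised Zipfian counts for the n most popular chords."""
    return [top_count / rank for rank in range(1, n_ranks + 1)]

counts = zipf_frequencies(10)
ratio_first_to_second = counts[0] / counts[1]
ratio_first_to_third = counts[0] / counts[2]
```

Real chord counts only approximate this curve, of course; the point is that a handful of ranks accounts for nearly all occurrences.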
What music has lost in variety, it has gained in volume. Songs today are on average 9 decibels louder than those of half a century ago, confirming what industry types have long suspected: that record labels engage in a “loudness race” to catch radio listeners’ attention. Since digital audio formats cannot exceed a fixed peak level, as average loudness inches towards that ceiling songs lose dynamic range, becoming ever more uniform.
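The arithmetic behind that squeeze is simple. Digital audio peaks at 0 dBFS (decibels relative to full scale), so any rise in average loudness eats directly into the headroom available for peaks. The figures below are illustrative assumptions, not measurements from the study:

```python
# Sketch of the "loudness race" arithmetic. Digital samples cannot
# exceed 0 dBFS, so the gap between a song's average level and that
# ceiling is the dynamic range left for peaks. Levels are invented.
CEILING_DBFS = 0.0

def headroom(average_loudness_dbfs):
    """Dynamic range (in dB) between the average level and the ceiling."""
    return CEILING_DBFS - average_loudness_dbfs

old_headroom = headroom(-14.0)  # a quieter, 1950s-style master
new_headroom = headroom(-5.0)   # a modern, "hotter" master
lost_range = old_headroom - new_headroom
```

Pushing the average up by 9 dB leaves 9 dB less room for loud passages to stand out from quiet ones, which is precisely the uniformity the researchers measured.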
This homogeneity is not just jarring to melomaniacs. It might confuse the popular algorithms for identifying and recommending tracks, like those used by Spotify and other music services. Many of these rely on timbre measurements to sort songs into genres, for instance. Some musicians are bound to respond by confounding expectations with new sounds. Whether audiences wish to be confounded remains moot.