In the age of MP3s, sound quality is worse than ever. That’s the thesis of a recent Rolling Stone article by Robert Levine.
The argument is well supported by quotes, such as this one from Bernie Grundman, mastering engineer.
I can't tell you how many times someone comes in and plays me something he wants mastered and I'll say, 'Do you want to make it slamming loud or retain some of this great sound?' They'll say, 'We want to keep it really pristine.' Then the next day they'll call me and say, 'How come mine isn't as loud as so-and-so's?'
You might recognize this as a Prisoner's Dilemma. (I recognize many things, perhaps too many things, as such.) The best outcome would be for mastering engineers to "cooperate" by preserving variation in volume. However, music mastered that way would be dominated by music mastered with the volume turned up throughout. Hence engineers tend to "defect" (often on their clients' instructions).
The outcome is suboptimal, even though each party is maximizing his own outcome. What is being maximized is the attention of listeners, not the quality of the music.
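The dilemma can be sketched as a toy payoff matrix. The strategies and the argument's structure come from the paragraphs above; the payoff numbers themselves are illustrative, not from the article:

```python
# Toy Prisoner's Dilemma for the "loudness war" in mastering.
# Strategies: "dynamic" (cooperate: preserve dynamic range) vs.
# "loud" (defect: push everything to maximum volume).
# Payoffs stand for listener attention captured; numbers are illustrative.
PAYOFFS = {
    ("dynamic", "dynamic"): (3, 3),  # both preserve dynamics: best joint outcome
    ("dynamic", "loud"):    (0, 5),  # the loud track dominates the dynamic one
    ("loud",    "dynamic"): (5, 0),
    ("loud",    "loud"):    (1, 1),  # everyone is loud: suboptimal equilibrium
}

def best_response(opponent_choice):
    """Return the strategy maximizing my payoff, given the opponent's choice."""
    return max(["dynamic", "loud"],
               key=lambda mine: PAYOFFS[(mine, opponent_choice)][0])

# Whatever the other engineer does, mastering loud pays more for me...
assert best_response("dynamic") == "loud"
assert best_response("loud") == "loud"
# ...yet mutual "loud" (1, 1) leaves both worse off than mutual "dynamic" (3, 3).
```

Defecting is the dominant strategy for each engineer individually, which is exactly why the collectively worse outcome prevails.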
A secondary point of the article is that listeners are hearing these MP3s through computer speakers or iPod earphones (the latter, curiously, are never mentioned in the article). So what is the point of careful mastering when MP3s are already compressed down from CDs?
I don't think the fact that listeners get much of their music through something far short of hi-fi should be a problem for rock music. For me, the classic channel for such music is radio waves and a transistor radio. Most rock music should sound good on a lo-fi channel – which is not to say it can't sound much better on hi-fi.
There are other points in the article. In fact, my main problem with it is that distinct points get mashed together. But the main point – deliberate lack of dynamic range – is troubling.