Streaming Platforms
So what role did streaming platforms play in the decline of the loudness war? The answer, essentially, is loudness normalisation. The aim of loudness normalisation is to ensure that all tracks play back at the same level, so the listener doesn't have to constantly adjust the volume. While the platforms use slightly different LUFS algorithms and normalise to slightly different levels (see the table below), the upshot is that slamming a track to -8 or -9 LUFS won't make it sound any louder than one sitting around -13 or -14 LUFS. In fact, the quieter master is likely to sound better, as it will be considerably more dynamic and carry far less limiting distortion. As you can see from the table below, most of the major streaming platforms are beginning to align on a normalisation level; this is quite a recent development, and it's encouraging to see a consistent standard starting to form.
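To illustrate why a slammed master gains nothing under normalisation, here is a minimal sketch of target-based playback gain. The -14 LUFS target matches the figure in the text; the function name and loudness values are illustrative, not any platform's actual API.

```python
# Hypothetical sketch: playback-level normalisation to a -14 LUFS target.
TARGET_LUFS = -14.0

def playback_gain_db(track_lufs: float) -> float:
    """Gain (in dB) a platform would apply to hit the target loudness."""
    return TARGET_LUFS - track_lufs

# A slammed master at -8 LUFS is simply turned down by 6 dB...
print(playback_gain_db(-8.0))   # -6.0
# ...while a dynamic master at -14 LUFS plays back untouched.
print(playback_gain_db(-14.0))  # 0.0
```

Both tracks end up at the same perceived level; the only lasting difference is the limiting distortion baked into the louder one.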
Many platforms aren't particularly clear about which algorithms they use or how they process audio to meet these targets; Spotify, however, have been quite open about their process. Gain compensation is applied across a whole album rather than to individual tracks, so softer tracks remain softer relative to the rest and vice versa. Louder tracks are simply turned down and quieter tracks turned up. They explain on their website that when raising the volume of a track they always leave 1dB of headroom for lossy encoding, to minimise errors. This means a track uploaded at -20 LUFS with a -5dB true peak could be raised by 4dB to -16 LUFS / -1dBTP. This suggests to me that uploading ever so slightly above the -14 LUFS threshold at -1dBTP is a sweet spot: your track only gets turned down slightly, and there is no change in audio quality. What is unclear is the processing applied to a track uploaded at, as an extreme example, -20 LUFS and -1dBTP. The only way to raise the average loudness there would be to limit the signal, but Spotify only talk of gaining tracks up and down. Does that mean it would be left untouched, since the maximum true peak has already been reached? In any case, I think it's important to avoid extremely low loudness levels, as what happens to them on social media or streaming platforms is often an unknown quantity and likely to involve some form of limiting.
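Spotify's description suggests gain-only normalisation, with upward gain capped by the 1dB peak headroom they reserve for lossy encoding. A minimal sketch under that assumption (the function and constants are mine, not Spotify's actual implementation):

```python
# Assumed model: gain-only normalisation, positive gain capped so the
# true peak never exceeds -1 dBTP (the headroom Spotify say they leave
# for lossy encoding). No limiting is ever applied.
TARGET_LUFS = -14.0
PEAK_CEILING_DBTP = -1.0

def normalisation_gain_db(loudness_lufs: float, true_peak_dbtp: float) -> float:
    """Gain applied to reach the target loudness without pushing the
    true peak past the ceiling. Quiet, peaky tracks may stay quiet."""
    gain = TARGET_LUFS - loudness_lufs
    if gain > 0.0:
        # Positive gain is limited by the available peak headroom.
        gain = min(gain, PEAK_CEILING_DBTP - true_peak_dbtp)
        gain = max(gain, 0.0)
    return gain

# The article's example: -20 LUFS with a -5 dBTP peak has 4 dB of
# headroom, so it is raised 4 dB to -16 LUFS / -1 dBTP (short of target).
print(normalisation_gain_db(-20.0, -5.0))  # 4.0
# The extreme case: -20 LUFS already peaking at -1 dBTP cannot be
# raised at all under a gain-only model.
print(normalisation_gain_db(-20.0, -1.0))  # 0.0
# Loud tracks are simply turned down, peaks permitting or not.
print(normalisation_gain_db(-9.0, -0.2))   # -5.0
```

Under this model the extreme-case track would indeed be left quiet, which is one plausible reading of Spotify's gain-only wording; whether they actually apply limiting in that situation remains unconfirmed.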
So what does this mean for mastering engineers? In essence, it frees them to make a creative choice: push the loudness of a track, or let it keep its dynamics and breathe more for aesthetic reasons, while still remaining competitive.