In the age of streaming, podcasts and background music, where we consume audio content from many different sources, standardisation has become crucial: it is what keeps the sound consistent.
Imagine yourself listening to a playlist: an intimate jazz piece followed by an explosive rock track. Without normalization, you’d have to constantly adjust the volume. This is precisely what audio normalization avoids.
Contrary to popular belief, it’s not simply a question of ‘turning up the volume’, but rather of optimising audio levels in an intelligent way.
So what is audio normalization and why is it different from audio compression?
Audio normalization is a process that adjusts the volume of an audio track to achieve consistent sound levels. This ensures a balanced listening experience without altering the original dynamics or tonal characteristics.
Audio normalization adjusts the amplitude of an audio signal to a target level, correcting for volume inconsistencies within a single track or across multiple recordings. Unlike equalisation or compression, it only affects volume and preserves the original dynamic range.
To simplify things, let’s imagine a recording as a wave of sound. This wave has peaks (the loudest moments) and troughs (the quietest moments). Normalization adjusts the overall amplitude of this wave so that it corresponds to a defined standard.
| | Peak normalization | Loudness normalization |
| --- | --- | --- |
| Focus | Maximum peak level | Overall perceived sound level |
| Use cases | Distortion prevention | Consistent playback volume |
| Loudness measurement | dBFS (decibels relative to full scale) | LUFS (loudness units relative to full scale) |
Normalization involves analysing the audio to identify peaks and troughs and applying a linear gain adjustment to achieve the target level.
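To make that concrete, here is a minimal sketch of peak normalization in Python. It is an illustration of the principle rather than production code: it assumes the third-party numpy and soundfile packages, a hypothetical `input.wav`, and an arbitrary -1 dBFS target.

```python
# Minimal peak-normalization sketch (illustrative only).
import numpy as np
import soundfile as sf

TARGET_PEAK_DBFS = -1.0  # hypothetical target, leaving 1 dB of headroom

# Read the audio as floating-point samples in the range [-1.0, 1.0].
data, rate = sf.read("input.wav", dtype="float64")

# Find the highest absolute sample value (the peak).
peak = np.max(np.abs(data))
if peak == 0:
    raise ValueError("Silent file: nothing to normalize")

# Convert the target from dBFS to a linear amplitude, then compute the
# single gain factor that moves the current peak onto that target.
target_amplitude = 10 ** (TARGET_PEAK_DBFS / 20)
gain = target_amplitude / peak

# The same gain is applied to every sample, so the relative dynamics
# between loud and quiet moments are preserved.
sf.write("normalized.wav", data * gain, rate)
```

Because a single gain factor is applied to the whole file, the shape of the waveform is untouched; only its overall scale changes.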
Although both deal with volume, they have distinct objectives. Normalization ensures that two tracks in a playlist play back at the same overall volume, while compression evens out volume fluctuations within a single piece of music.
| Feature | Normalization | Compression |
| --- | --- | --- |
| Purpose | Adjusts the overall volume level | Reduces the dynamic range |
| Dynamic range | Preserves the original dynamic range | Modifies the dynamic range (attenuates the loudest parts) |
| Impact on tonality | No effect on equalisation or tonality | Affects punch, sustain and texture |
| Use cases | Consistent volume between tracks | Controlling erratic peaks in a mix |
After normalization, waveforms appear more coherent in terms of amplitude, but unlike compression, the relative dynamics between peaks and troughs remain intact.
In short, normalization ensures volume coherence, while compression shapes dynamics. Both are essential in audio production, but have distinct roles.
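The sketch below places the two operations side by side: a normalization-style constant gain next to a deliberately simplified compressor (no attack or release smoothing, which a real compressor would have). It assumes numpy and floating-point samples in the range [-1, 1]; the threshold and ratio values are arbitrary.

```python
# Illustrative contrast between normalization and compression.
import numpy as np

def normalize(samples: np.ndarray, target_peak: float = 0.9) -> np.ndarray:
    """One constant gain for the whole signal: dynamics are preserved."""
    return samples * (target_peak / np.max(np.abs(samples)))

def compress(samples: np.ndarray, threshold: float = 0.5, ratio: float = 4.0) -> np.ndarray:
    """Signal-dependent gain: anything above the threshold is attenuated,
    so the gap between loud and quiet parts shrinks."""
    magnitude = np.abs(samples)
    over = magnitude > threshold
    out = samples.copy()
    # Above the threshold, only 1/ratio of the excess level is kept.
    out[over] = np.sign(samples[over]) * (threshold + (magnitude[over] - threshold) / ratio)
    return out
```

The key difference is visible in the code itself: `normalize` computes one gain for the entire signal, while `compress` changes its gain depending on the level of each moment.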
The main aim of normalization is to spare the listener from constantly adjusting the volume between tracks or sound sequences. Without it, some files sound too loud while others sound too quiet, which breaks the continuity of listening.
Uniformity of volume is particularly important in public spaces where sound conditions vary (presence of ambient noise, reverberation, etc.). Normalization ensures constant, controlled diffusion, avoiding unpleasant variations.
Audio normalization relies on several units of measurement. These units are used together to define, and then comply with, the levels appropriate to each type of broadcast.
LUFS (Loudness Units relative to Full Scale) measures loudness as perceived by the human ear, using a frequency weighting modelled on auditory sensitivity (equal-loudness contours such as the Fletcher-Munson curves). This unit evaluates the average loudness of an audio signal over a given period more faithfully than purely physical measurements.
Usefulness: Standard in mastering and broadcasting, LUFS help to ensure a consistent listening experience between different audio content and platforms.
RMS (root mean square) measures the average power of an audio signal over a time window (around 300 ms). Unlike LUFS, it does not take human perception into account: it provides a purely physical estimate of signal level.
Usefulness: Mainly used in mixing and recording to balance sound elements and avoid distortion.
Decibels (here dBFS) measure the peak level and instantaneous variations of the audio signal. Monitoring them is essential for avoiding clipping (the distortion caused when the signal exceeds the maximum digital level).
Usefulness: Essential for monitoring sound peaks and ensuring that the signal remains within the technical limits of the audio system.
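As a rough illustration of how these measurements differ in practice, the sketch below computes the peak level and the windowed RMS level of a signal with numpy. Integrated LUFS is deliberately left out: it requires the K-weighting and gating defined in ITU-R BS.1770 and is best handled by a dedicated meter. The 300 ms window and the assumption of a mono, non-silent signal are simplifications.

```python
# Peak (dBFS) and windowed RMS (dBFS) measurements; illustrative only.
import numpy as np

def peak_dbfs(samples: np.ndarray) -> float:
    """Highest instantaneous level, relative to full scale."""
    return 20 * np.log10(np.max(np.abs(samples)))

def rms_dbfs(samples: np.ndarray, rate: int, window_s: float = 0.3) -> np.ndarray:
    """RMS level of consecutive ~300 ms windows, in dBFS (mono signal assumed)."""
    window = int(rate * window_s)
    n_windows = len(samples) // window
    # Cut the signal into whole windows and measure each one.
    trimmed = samples[: n_windows * window].reshape(n_windows, window)
    rms = np.sqrt(np.mean(trimmed ** 2, axis=1))
    # Guard against log(0) on silent windows.
    return 20 * np.log10(np.maximum(rms, 1e-12))
```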
Recommended standards and levels vary according to platform and media. Here is an overview of specific LUFS values and best practices for different contexts:
The main streaming platforms have adopted specific LUFS standards:
These values are subject to change and some platforms offer user-selectable normalization options.
Broadcast standards vary by region:
The aim of these standards is to standardise sound levels between different programs and advertisements.
Although there is no specific standard for background music, it is generally recommended to aim for lower levels than for mainstream streaming, so as not to tire the listener over long periods of listening.
Why has Wavespark chosen the EBU R 128 standard?
The EBU R 128 standard offers several significant advantages for consumers of audio content:
These benefits collectively contribute to a more enjoyable and less stressful audio-visual experience for consumers, particularly when watching television or listening to broadcast content.
The LUFS standard was developed partly as a response to the problems caused by the ‘loudness war’ in the music industry.
LUFS measurements reflect human perception of sound better than traditional measures such as RMS or peak dB values. This allows platforms to normalise the volume of content and avoid sudden variations between tracks or videos, improving the user experience. This standardisation also reduces the auditory fatigue caused by inconsistent or excessively loud levels.
The ‘Loudness War’ is a phenomenon that has profoundly affected the music industry from the 1960s to the present day. The practice consists of progressively increasing the perceived volume of recordings, often to the detriment of sound quality and musical dynamics.
The Loudness War began in the 1960s, when producers noticed that songs cut at higher volumes were chosen more often for jukeboxes. The Beatles, for example, bought a Fairchild compressor/limiter to compete with Motown artists in the burgeoning volume race.
This practice quickly spread to radio and television, which sought to maximise the impact of music and advertising. In the 1990s, with the advent of the CD, the volume war reached its apogee. Producers and mastering engineers pushed the technical boundaries to create ever louder recordings.
The main technique used in the Loudness War is dynamic compression. Compressors reduce the dynamic range of the sound signal by lowering the level of the parts that exceed a certain threshold. This increases the overall perceived volume without exceeding the technical limits of the media.
The Loudness War has had a major impact on the sound quality of the music produced. The constant quest for sound power has led to excessive compression and volume limitation, resulting in a loss of nuance and subtlety in the music. Recordings often become ‘flat’ and lack dynamics, which can be fatiguing for the listener in the long term.
Faced with these excesses, initiatives have emerged to counter the Loudness War. For example, mastering engineer Ian Shepherd founded ‘Dynamic Range Day’ to raise awareness of the importance of musical dynamics. More recently, streaming services such as Spotify, iTunes, Tidal and YouTube have introduced audio normalization features. This technology normalises the volume of tracks to around -14 LUFS (Loudness Units relative to Full Scale), reducing the incentive to produce excessively compressed masters.
Although excessive compression allows music to be heard clearly in noisy environments such as discotheques or public transport, it has major drawbacks. It can lead to faster hearing fatigue and a loss of the original musical experience intended by the artists.
The Loudness War remains a subject of debate in the music industry, with a growing awareness of the importance of preserving the dynamics and sound quality of recordings. New streaming standards and better education of listeners could help to put an end to this Loudness War, in favour of a richer listening experience that is more faithful to the original artistic intentions.
There are several reasons why audio normalization is essential.
Normalization makes it possible to standardize the sound volume between different tracks, which is particularly important for playlists and podcasts. For example, Wavespark uses the EBU R 128 standard, which ensures the original work is respected and that background music and announcements remain well balanced.
Whether in the car, on headphones or in a shop, normalization avoids sudden changes in volume that can be unpleasant for the listener. This makes listening more comfortable without having to constantly adjust the volume.
Streaming platforms such as Spotify, YouTube and Apple Music impose normalization standards. For example, Spotify adjusts tracks to -14 LUFS, with options for premium listeners to choose between different normalization levels (-11, -14 or -19 LUFS).
Unlike compression, which alters the dynamic range of an audio file, normalization adjusts the overall volume level without affecting the sound dynamics. Normalization is therefore a more transparent and less intrusive processing method, preserving the natural sound quality of the recording.
Peak normalization changes the overall level but not the dynamics, whereas loudness normalization (based on perceived volume) may involve limiting or compression when the target cannot be reached without clipping, which does modify the dynamics. Unlike compression, normalization is not intended to correct or improve the dynamic characteristics of an audio file.
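As an illustration of loudness normalization in practice, here is a short sketch that measures integrated loudness and applies the corresponding gain. It assumes the third-party pyloudnorm and soundfile packages and a hypothetical `input.wav`; the -14 LUFS target simply mirrors the Spotify level mentioned above, and no limiting is applied, so very quiet material pushed up to a loud target could still clip.

```python
# Loudness (LUFS) normalization sketch; illustrative only.
import soundfile as sf
import pyloudnorm as pyln

TARGET_LUFS = -14.0  # mirrors the streaming level mentioned above

data, rate = sf.read("input.wav")

# Measure integrated loudness per ITU-R BS.1770 / EBU R 128.
meter = pyln.Meter(rate)
loudness = meter.integrated_loudness(data)

# Apply the gain needed to reach the target loudness (no limiting here).
normalized = pyln.normalize.loudness(data, loudness, TARGET_LUFS)
sf.write("normalized_-14LUFS.wav", normalized, rate)
```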
Audio normalization is a crucial process for ensuring that audio files have a consistent volume and comply with broadcast standards. This avoids volume discrepancies between different songs or tracks, ensuring a consistent audio experience for the listener. Here’s a comparison of the most popular free and paid audio normalization software, along with their advantages and disadvantages.
Comparison of free and paid software
| Software | Price | Description | Pros | Cons |
| --- | --- | --- | --- | --- |
| Audacity | Free | Versatile open-source audio editing software. | | |
| MP3Gain | Free | Specialised tool for normalizing MP3 files. | | |
| Auphonic | Paid (free trial) | Automates normalization and sound-level adjustment. | | |
| Sound Normalizer | Paid (free trial) | Compatible with MP3 and WAV. | | |
| Sound Forge Pro | Paid (one-off purchase or subscription) | Professional suite for recording, editing, sound design and mastering. | | |
| WaveLab Pro | Paid (one-off purchase) | Comprehensive audio editing environment with advanced measurement and analysis tools and modular, flexible output. | | |
| Adobe Audition | Paid (Creative Cloud subscription) | Part of the Adobe ecosystem. | | |
Audio normalization is an essential process for balancing the sound levels of your files. Here’s a detailed guide to normalizing audio using a range of popular software.
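One simple, scriptable route is FFmpeg’s loudnorm filter, which implements EBU R 128 loudness normalization. The sketch below calls it from Python as an illustration only (FFmpeg is not one of the tools in the comparison above); it assumes FFmpeg is installed and on the PATH, uses a hypothetical `input.wav`, and the target values shown are common examples rather than a recommendation.

```python
# Run FFmpeg's loudnorm filter from Python; illustrative only.
import subprocess

subprocess.run(
    [
        "ffmpeg", "-i", "input.wav",
        # I = integrated loudness target (LUFS), TP = true-peak ceiling (dBTP),
        # LRA = loudness range target (LU).
        "-af", "loudnorm=I=-16:TP=-1.5:LRA=11",
        "normalized.wav",
    ],
    check=True,
)
```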
Audio normalization is a powerful tool, but its use must be adapted to the type of content and the desired result. Here’s an in-depth guide to special cases, common mistakes and alternatives to normalization.
These elements mainly concern the mixing and mastering phases. Content curators will take them into account when choosing tracks, enabling them to optimise post-normalization with the aim of playing music at a uniform volume, in a restaurant for example.
Normalization is a key element in guaranteeing a smooth and pleasant sound experience, whatever the medium or listening environment. Applying good practice and complying with loudness standards improves not only listener comfort but also the coherence and intelligibility of audio messages.
In public spaces, shops and museums, where the sound environment plays a crucial role, intelligent sound management is essential. Thanks to optimised normalization, our solutions guarantee controlled sound reproduction adapted to each environment.
By integrating audio normalization into your projects, you offer your audience an optimal and effortless listening experience. Discover how Wavespark can support you in this process by facilitating sound management in your professional spaces.