
Audio normalization explained: a complete guide to balanced sound


In the age of streaming, podcasts and background music, where we consume audio content from many different sources, normalization has become crucial: it ensures a consistent sound level from one source to the next.

Imagine yourself listening to a playlist: an intimate jazz piece followed by an explosive rock track. Without normalization, you’d have to constantly adjust the volume. This is precisely what audio normalization avoids.

Contrary to popular belief, it’s not simply a question of ‘turning up the volume’, but rather of optimising audio levels in an intelligent way.

So what is audio normalization and why is it different from audio compression?

What is audio normalization?

Definition

Audio normalization is a process that adjusts the volume of an audio track to achieve consistent sound levels. This ensures a balanced listening experience without altering the original dynamics or tonal characteristics.

Audio normalization adjusts the amplitude of an audio signal to a target level, correcting for volume inconsistencies within a single track or across multiple recordings. Unlike equalisation or compression, it only affects volume and preserves the original dynamic range.

To simplify things, let’s imagine a recording as a wave of sound. This wave has peaks (the loudest moments) and troughs (the weakest moments). Normalization is used to adjust the overall amplitude of this wave so that it corresponds to a defined standard.

Basic principles

  1. Dynamic range: The difference between the loudest and softest parts of an audio signal.
  2. Peak normalization: Adjusting the audio so that the loudest peak reaches a chosen reference level (e.g. 0 dBFS). Avoids clipping but can leave quieter sections uneven.
  3. Loudness normalization: Using perceived loudness (measured in LUFS) to balance the average volume of an entire track. Favoured by streaming platforms for uniform playback.

 

|                      | Peak normalization          | Loudness normalization        |
|----------------------|-----------------------------|-------------------------------|
| Focus                | Maximum peak level          | Overall perceived sound level |
| Use cases            | Distortion prevention       | Constant playback volume      |
| Loudness measurement | dBFS (full-scale decibels)  | LUFS (loudness units)         |

Normalization involves analysing the audio to identify peaks and troughs and applying a linear gain adjustment to achieve the target level.
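
To make this concrete, here is a minimal Python sketch of peak normalization, assuming the audio has already been loaded as floating-point samples in a NumPy array (the -1 dBFS target and the generated test tone are arbitrary illustrative values, not figures prescribed by any platform):

```python
import numpy as np

def peak_normalize(samples: np.ndarray, target_dbfs: float = -1.0) -> np.ndarray:
    """Apply one linear gain so that the loudest sample reaches target_dbfs."""
    peak = np.max(np.abs(samples))
    if peak == 0:
        return samples  # pure silence: nothing to scale
    target_linear = 10 ** (target_dbfs / 20)  # -1 dBFS is roughly 0.891 in linear terms
    return samples * (target_linear / peak)   # every sample gets the same gain

# Example: a quiet 440 Hz tone peaking at 0.25 (about -12 dBFS)
quiet = 0.25 * np.sin(2 * np.pi * 440 * np.linspace(0, 1, 44100))
louder = peak_normalize(quiet, target_dbfs=-1.0)
print(np.max(np.abs(louder)))  # ~0.891, i.e. about -1 dBFS
```

Because the same gain factor is applied to every sample, the relative dynamics between peaks and troughs are left untouched, exactly as described above.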

What is the difference between normalization and compression?

Although they both deal with volume, they have distinct objectives. Normalization ensures that two tracks in a playlist play back at the same overall volume, while compression evens out volume fluctuations within a single piece of music.

| Feature            | Normalization                         | Compression                                        |
|--------------------|---------------------------------------|----------------------------------------------------|
| Purpose            | Adjusts overall volume level          | Reduces dynamic range                              |
| Dynamic range      | Preserves original dynamic range      | Modifies dynamic range (attenuates the louder parts) |
| Impact on tonality | No effect on equalisation or tonality | Affects punch, sustain and texture                 |
| Use cases          | Consistency of volume between tracks  | Controlling erratic peaks in a mix                 |

 

How are sound waves adjusted?

  • Peak normalization: amplifies or attenuates the entire waveform until the highest peak reaches the target level (e.g. 0 dBFS). This ‘lifts’ (or lowers) every part of the waveform by the same amount.
  • Loudness normalization: based on the auditory perception of volume. More modern and more relevant, it evaluates the intensity perceived by the human ear: it analyses the integrated loudness (averaged over time) and adjusts the gain to match a LUFS target (for example, -14 LUFS for Spotify), so a quiet track receives more gain than an already-loud one. This is the method favoured by streaming platforms (a minimal gain calculation is sketched after this list).
  • True peak normalization: adjusts the maximum level of the signal while taking inter-sample peaks (clipping that can occur between digital samples) into account, to avoid distortion during digital-to-analogue conversion.
    For example, if you normalise to -1 dB, the highest peak in your audio is brought to -1 dB and the rest of the signal is scaled proportionally.
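
As a rough illustration of the loudness-based method, the sketch below computes the gain needed to reach a -14 LUFS target. The measured loudness is passed in as a number because a real measurement requires an ITU-R BS.1770 meter; the -20 LUFS figure and the random buffer are purely hypothetical stand-ins:

```python
import numpy as np

def loudness_normalize(samples: np.ndarray, measured_lufs: float,
                       target_lufs: float = -14.0) -> np.ndarray:
    """Shift the whole signal by (target - measured) dB of gain."""
    gain_db = target_lufs - measured_lufs      # e.g. -14 - (-20) = +6 dB
    gain = 10 ** (gain_db / 20)                # convert dB to a linear factor
    return samples * gain

# Hypothetical example: a track measured at -20 LUFS integrated
# is raised by 6 dB to sit at the -14 LUFS streaming target.
track = np.random.uniform(-0.1, 0.1, 44100)   # stand-in for real audio
normalized = loudness_normalize(track, measured_lufs=-20.0)
```

In practice you would also check that the resulting true peak stays below about -1 dBTP, for the reasons given in the third bullet.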

After normalization, waveforms appear more coherent in terms of amplitude, but unlike compression, the relative dynamics between peaks and troughs remain intact.

In short, normalization ensures volume coherence, while compression shapes dynamics. Both are essential in audio production, but have distinct roles.

Why normalization: to make the perceived volume consistent and avoid abrupt variations

The main aim of normalization is to avoid the need for the listener to constantly adjust the volume according to different tracks or sound sequences. Without normalization, some files may seem too loud while others seem too soft, which is detrimental to listening continuity.

Uniformity of volume is particularly important in public spaces where sound conditions vary (presence of ambient noise, reverberation, etc.). Normalization ensures constant, controlled diffusion, avoiding unpleasant variations.

LUFS, dB, RMS: explanation of units and standards

Audio normalization is based on several units of measurement. These different units are used in conjunction with each other to define and comply with levels appropriate to each type of broadcast.

  • LUFS (Loudness Units relative to Full Scale):

LUFS measures the loudness perceived by the human ear, weighting the signal for the ear’s frequency-dependent sensitivity (via the K-weighting filter defined in ITU-R BS.1770, inspired by equal-loudness contours such as the Fletcher-Munson curves). This unit is used to evaluate the average loudness of an audio signal over a given period, with greater precision than other measurements.

Usefulness: Standard in mastering and broadcasting, LUFS help to ensure a consistent listening experience between different audio content and platforms.

  • RMS (Root Mean Square):

RMS measures the average power of an audio signal over a time window (around 300 ms). Unlike LUFS, it does not take human perception into account but provides a physical estimate of loudness.

Usefulness: Mainly used in mixing and recording to balance sound elements and avoid distortion.

  • dB (decibels):

Decibels measure the peak level or instantaneous variations of the audio signal; in digital audio this is expressed in dBFS, where 0 dBFS is the maximum value before clipping. They are essential for avoiding clipping (distortion caused by exceeding that ceiling).

Usefulness: Essential for monitoring sound peaks and ensuring that the signal remains within the technical limits of the audio system.
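
For readers who like to see the numbers, here is a minimal sketch of how peak level (dBFS) and RMS level can be computed from a buffer of floating-point samples. LUFS is deliberately left out because it requires the full K-weighting and gating defined in ITU-R BS.1770, which dedicated meters implement:

```python
import numpy as np

def peak_dbfs(samples: np.ndarray) -> float:
    """Highest instantaneous level, relative to digital full scale."""
    return 20 * np.log10(np.max(np.abs(samples)))

def rms_dbfs(samples: np.ndarray) -> float:
    """Average power of the signal, expressed in dBFS."""
    return 20 * np.log10(np.sqrt(np.mean(samples ** 2)))

# A full-scale sine wave peaks at about 0 dBFS but has an RMS of about -3 dBFS.
sine = np.sin(2 * np.pi * 1000 * np.linspace(0, 1, 48000))
print(round(peak_dbfs(sine), 1), round(rms_dbfs(sine), 1))
```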

Recommended standards and levels for audio normalization

Recommended standards and levels vary according to platform and media. Here is an overview of specific LUFS values and best practices for different contexts:

Streaming (Spotify, YouTube, Apple Music)

The main streaming platforms have adopted specific LUFS standards:

  • Spotify: -14 LUFS
  • YouTube: approximately -14 LUFS (commonly reported between -13 and -15 LUFS)
  • Apple Music: -16 LUFS
  • Amazon Music: -9 to -13 LUFS
  • Deezer: -14 to -16 LUFS
  • Wavespark: -23 LUFS (EBU R 128 by default) – customisable

These values are subject to change and some platforms offer user-selectable normalization options.
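
To see what these targets mean in practice: a track mastered at -9 LUFS integrated would typically be turned down by about 5 dB when streamed on Spotify (target -14 LUFS) and by about 7 dB on Apple Music (target -16 LUFS), because the platform simply applies the difference between the measured loudness and its target as playback gain.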

Radio and television

Broadcast standards vary by region:

  • Europe (EBU R 128): -23 LUFS (±1 LU tolerance)
  • United States (ATSC A/85): -24 LUFS (±2 LU tolerance)

The aim of these standards is to standardise sound levels between different programmes and advertisements.

Background music

Although there is no specific standard for background music, it is generally recommended to aim for lower levels than for mainstream streaming, so as not to tire the listener over long periods of listening.

Why has Wavespark chosen the EBU R 128 standard?

The EBU R 128 standard offers several significant advantages for consumers of audio content:

  • Consistent listening experience: It harmonises sound levels between different programmes and broadcast channels, thus avoiding sudden and unpleasant volume variations.
  • Improved comfort: listeners no longer need to constantly adjust the volume of their source, which considerably improves their listening comfort.
  • Better perceived sound quality: By using LUFS (Loudness Units relative to Full Scale), which takes into account human perception of volume, the standard provides a more faithful representation of how listeners actually hear sound.
  • Reduction of excessive sound peaks: The standard limits maximum sound levels, thus avoiding excessively loud sounds that can be unpleasant or surprising for listeners.
  • Preservation of sound dynamics: Unlike previous practices of excessive compression, EBU R 128 encourages better use of the dynamic range, which can improve the overall sound quality for certain types of content.

These benefits collectively contribute to a more enjoyable and less stressful audio-visual experience for consumers, particularly when watching television or listening to broadcast content.

Want to normalize your background music?

Wavespark saves you precious time with its integrated normalization.

Why do platforms use LUFS?

The LUFS standard was developed partly as a response to the problems caused by the ‘loudness war’ in the music industry.

LUFS measurements better reflect human perception of loudness than traditional measures such as RMS or peak dB values. This allows platforms to normalise the volume of content and avoid sudden variations between tracks or videos, thereby improving the user experience. In addition, this standardisation reduces the auditory fatigue caused by inconsistent or excessively high sound levels.

Loudness War

The ‘Loudness War’ is a phenomenon that has profoundly affected the music industry from the 1960s to the present day. The practice consists of progressively increasing the perceived volume of musical recordings, often to the detriment of sound quality and musical dynamics.

Origins and development of the Loudness War

The Loudness War began in the 1960s, when producers noticed that songs recorded at higher volumes were chosen more often for jukeboxes. The Beatles, for example, bought a Fairchild compressor/limiter to compete with Motown artists in the burgeoning volume race.

This practice quickly spread to radio and television, which sought to maximise the impact of music and advertising. In the 1990s, with the advent of the CD, the volume war reached its apogee. Producers and mastering engineers pushed the technical boundaries to create ever louder recordings.

Techniques used

The main technique used in the Loudness War is dynamic compression. Compressors reduce the dynamic range of the sound signal by lowering the level of the parts that exceed a certain threshold. This increases the overall perceived volume without exceeding the technical limits of the media.

Consequences for sound quality

The Loudness War has had a major impact on the sound quality of the music produced. The constant quest for sound power has led to excessive compression and volume limitation, resulting in a loss of nuance and subtlety in the music. Recordings often become ‘flat’ and lack dynamics, which can be fatiguing for the listener in the long term.

Industry response

Faced with these excesses, initiatives have emerged to counter the Loudness War. For example, mastering engineer Ian Shepherd founded ‘Dynamic Range Day’ to raise awareness of the importance of musical dynamics. More recently, streaming services such as Spotify, iTunes, Tidal and YouTube have introduced loudness normalization features. These normalise playback volume to targets of around -14 to -16 LUFS (Loudness Units relative to Full Scale), reducing the incentive to produce excessively compressed masters.

Impact on the listening experience

Although excessive compression allows music to be heard clearly in noisy environments such as discotheques or public transport, it has major drawbacks. It can lead to faster hearing fatigue and a loss of the original musical experience intended by the artists.

The Loudness War remains a subject of debate in the music industry, with a growing awareness of the importance of preserving the dynamics and sound quality of recordings. New streaming standards and better education of listeners could help to put an end to this Loudness War, in favour of a richer listening experience that is more faithful to the original artistic intentions.

Why normalize an audio signal?

There are several reasons why audio normalization is essential.

Ensuring consistent listening

Normalization makes it possible to standardize the sound volume between different tracks, which is particularly important for playlists and podcasts. For example, Wavespark uses the EBU R 128 standard, which ensures the original material is respected and keeps a good balance between background music and announcements.

Avoid unpleasant volume variations

Whether in the car, on headphones or in a shop, normalization avoids sudden changes in volume that can be unpleasant for the listener. This makes listening more comfortable without having to constantly adjust the volume.

Conforming to streaming platform standards

Streaming platforms such as Spotify, YouTube and Apple Music impose normalization standards. For example, Spotify adjusts tracks to -14 LUFS, with options for Premium listeners to choose between different levels of normalization (-11, -14 or -19 LUFS).

Comparison with audio compression

Unlike compression, which alters the dynamic range of an audio file, normalization adjusts the overall volume level without affecting the sound dynamics. Normalization is therefore a more transparent and less intrusive processing method, preserving the natural sound quality of the recording.

Effects on sound dynamics

Peak normalization changes the overall level but not the sound dynamics, whereas loudness normalization may be combined with compression or limiting (for instance when a platform raises a quiet track), which does modify the dynamics. Normalization itself, unlike compression, is not intended to correct or improve the dynamic characteristics of an audio file.
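
The distinction is easy to see in code. In the minimal sketch below, normalization applies one gain to everything, while a deliberately simplified static compressor (the -20 dB threshold and 4:1 ratio are arbitrary illustrative values, not a production design) reduces only the samples that exceed its threshold:

```python
import numpy as np

def normalize(samples: np.ndarray, gain_db: float) -> np.ndarray:
    """Normalization: one gain for the whole signal, dynamics preserved."""
    return samples * 10 ** (gain_db / 20)

def compress(samples: np.ndarray, threshold_db: float = -20.0, ratio: float = 4.0) -> np.ndarray:
    """Toy static compressor: anything above the threshold is reduced by the ratio."""
    level_db = 20 * np.log10(np.maximum(np.abs(samples), 1e-12))
    over_db = np.maximum(level_db - threshold_db, 0.0)   # how far above the threshold
    gain_db = -over_db * (1 - 1 / ratio)                 # level-dependent reduction
    return samples * 10 ** (gain_db / 20)

loud, quiet = np.array([0.5]), np.array([0.05])
print(normalize(loud, -6.0), normalize(quiet, -6.0))  # both lowered by the same 6 dB
print(compress(loud), compress(quiet))                # only the loud sample is reduced
```

The normalized pair keeps its original 20 dB difference, whereas the compressed pair ends up closer together, which is exactly the reduction in dynamic range described above.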

How do I apply audio normalization?

Audio normalization is a crucial process for ensuring that audio files have a consistent volume and comply with broadcast standards. This avoids volume discrepancies between different songs or tracks, ensuring a consistent audio experience for the listener. Here’s a comparison of the most popular free and paid audio normalization software, along with their advantages and disadvantages.

The best software and tools for audio normalization

Comparison of free and paid software

| Software | Price | Description | Pros | Cons |
|---|---|---|---|---|
| Audacity | Free | Versatile open-source audio editing software. Offers a sound normalization function with customizable options. | Open source, many audio editing features; customisable normalization function. | Dated interface; learning curve for beginners. |
| MP3Gain | Free | Specialised tool for normalizing MP3 files. Uses the ReplayGain algorithm to adjust volume without affecting file quality. | Specialises in MP3 files; preserves original quality; easy to use. | Limited to MP3 files; basic and outdated interface. |
| Auphonic | Paid (free trial) | Automates normalization and sound levels. | Effective automation; ideal for podcasts and dialogue. | Requires a subscription for intensive use; limited functionality outside normalization. |
| Sound Normalizer | Paid (free trial) | Compatible with MP3 and WAV. Allows you to adjust volume levels manually or automatically. | Compatible with several formats; manual and automatic normalization options. | Outdated interface; features limited to supported formats. |
| Sound Forge Pro | Paid (one-off purchase or subscription) | Professional software package for recording, editing, sound design and mastering. | Professional audio editing tools; numerous effects and plug-ins. | Can be expensive to buy; requires some expertise to fully exploit its features. |
| WaveLab Pro | Paid (one-off purchase) | Comprehensive audio editing environment. Advanced measurement and analysis tools; modular and flexible output. | Reference mastering software; advanced loudness management. | Primarily focused on mastering, less versatile for other tasks; can have a steep learning curve. |
| Adobe Audition | Paid (Creative Cloud subscription) | Part of the Adobe ecosystem. Powerful audio restoration tools and a user-friendly interface for Adobe users. | Integrated into the Adobe ecosystem; powerful audio restoration tools. | Requires a Creative Cloud subscription; may be less suitable for highly specialised mastering workflows. |

Wavespark streamlines the management of background music.

Intuitive interface, automatic normalization.

Tutorial: how do I normalise an audio file?

Audio normalization is an essential process for balancing the sound levels of your files. Here’s a detailed guide to normalizing audio using a range of popular software.

Audacity

  1. Open Audacity and import your audio file.
  2. Select the entire track (Ctrl + A).
  3. Go to the ‘Effect’ menu and choose ‘Loudness Normalization’.
  4. In the window that opens, select ‘perceived loudness’ as the measurement.
  5. Set the desired target level (generally -23 LUFS for European broadcasting).
  6. Click ‘Apply’ to normalise the track.

Reaper

  1. Import your audio file into Reaper.
  2. Select the track to be normalized.
  3. Use the keyboard shortcut Ctrl + Shift + N to open the normalization window.
  4. Choose the desired level of normalization (generally -23 LUFS for European broadcast).
  5. Click ‘Apply’ to normalise the track.

Adobe Audition

  1. Open your audio file in Adobe Audition.
  2. Select the entire waveform.
  3. Go to ‘Effects’ > ‘Amplitude and Compression’ > ‘Normalization’.
  4. Choose the level of normalization you want (for example, -14 LUFS for YouTube).
  5. Click ‘Apply’ to normalise the audio.

Wavespark

  1. Go to the profile menu.
  2. Select ‘Settings’, then ‘Organisation settings’.
  3. Select the audio normalization settings.
  4. Define your parameters.
  5. Go to ‘Titles’.
  6. Import your files from your computer.
  7. Normalization is applied automatically.
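
If you prefer to script the process, for example to batch-normalize a folder of files, the following Python sketch shows one possible approach. It assumes the third-party soundfile and pyloudnorm packages are installed and uses the EBU R 128 target of -23 LUFS mentioned above; the file names are placeholders:

```python
import soundfile as sf        # reads/writes WAV, FLAC, etc.
import pyloudnorm as pyln     # ITU-R BS.1770 loudness meter

data, rate = sf.read("input.wav")                 # float samples + sample rate
meter = pyln.Meter(rate)                          # create a BS.1770 meter
loudness = meter.integrated_loudness(data)        # measured integrated loudness (LUFS)
normalized = pyln.normalize.loudness(data, loudness, -23.0)  # gain to -23 LUFS
sf.write("input_normalized.wav", normalized, rate)
```

As with any normalization step, check the resulting peaks afterwards: raising a very quiet file to -23 LUFS can push some samples above full scale.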

Advanced normalization and mistakes to avoid

Audio normalization is a powerful tool, but its use must be adapted to the type of content and the desired result. Here’s an in-depth guide to special cases, common mistakes and alternatives to normalization.

Special cases (podcasts, films, voice-overs, music)

  • Podcasts: aim for around -16 LUFS. This ensures good audibility on most devices while preserving vocal dynamics. Keep the true peak below -1 dBTP to avoid distortion when encoding to MP3.
  • Films: for film and television audio, follow the ATSC A/85 standard (-24 LUFS, American television) or EBU R 128 (-23 LUFS, European broadcasting). These standards guarantee consistent levels between programmes and commercials.
  • Voice-overs: a level of around -18 LUFS is generally appropriate. It offers a good balance between clarity and dynamics while leaving room for background music or sound effects.
  • Music: for music intended for streaming, aim for about -14 LUFS integrated; Spotify, for example, normalizes to this level. Make sure the true peak remains below -1 dBTP to avoid distortion during transcoding.

Common mistakes and how to avoid them

  • Over-normalization: Avoid systematically pushing the level to the maximum. This can lead to a loss of dynamics and a ‘crushed’ sound.
  • Loss of naturalness: Over-aggressive normalization can alter the natural character of the sound. Use your ears in addition to technical measures.
  • Distortion: Be sure to leave a headroom of at least 1 dB to avoid clipping, especially when transcoding to compressed formats.

Alternatives to normalization (compression, advanced mixing)

  • Multiband compression: Rather than normalizing, use multiband compression to control dynamics more precisely across frequencies.
  • Advanced mixing: A good mix can make normalization less necessary. Work on level balance and equalisation to achieve a coherent sound.
  • Peak limiter: To control peaks without affecting overall dynamics, a peak limiter may be more appropriate than normalization.
  • Parallel compression: This technique adds density to the sound while preserving transients and natural dynamics.
  • Volume automation: In some cases, manual volume automation can give better results than automatic normalization, particularly for voice-overs and podcasts.

These elements mainly concern the mixing and mastering phases. The content curator will take them into account when choosing tracks, which makes post-normalization easier and helps broadcast music at a uniform volume, in a restaurant for example.

Audio normalization is much more than a simple volume adjustment

It is a key element in guaranteeing a smooth and pleasant sound experience, whatever the medium or listening environment. Applying good practice and complying with loudness standards not only improves the comfort of listeners, but also the coherence and intelligibility of audio messages.

In public spaces, shops and museums, where the sound environment plays a crucial role, intelligent sound management is essential. Thanks to optimised normalization, our solutions guarantee controlled sound reproduction adapted to each environment.

By integrating audio normalization into your projects, you offer your audience an optimal and effortless listening experience. Discover how Wavespark can support you in this process by facilitating sound management in your professional spaces.

Is audio normalization essential to your business?

Discover our solution, in 30 minutes you'll know everything.

F.A.Q

What is the difference between normalization and compression?

Normalization adjusts the overall volume of an audio file, while compression reduces the dynamic range by attenuating the loudest passages (often with make-up gain that brings the quieter passages forward).

Should I normalise before or after mixing?

It is preferable not to normalize before mixing, as this can reduce track dynamics and complicate gain staging, which is essential for a balanced mix. Normalization is more appropriate after mixing, during mastering, to adjust final levels or meet broadcast requirements. Prioritise good gain staging during mixing, leaving enough headroom to avoid saturation.

Does normalization degrade audio quality?

Appropriate normalization should not degrade audio quality. Excessive normalization, however, can lead to clipping or a loss of dynamic range.

Is it possible to normalise lossless FLAC files?

FLAC files can be normalized without loss of audio quality using methods such as ReplayGain (e.g. foobar2000). ReplayGain is a non-destructive method that does not alter the audio data. It simply adds metadata to the FLAC file to indicate the level of volume adjustment to be applied during playback.