This post originally appeared on www.dbr-training.eu
Audio has always been something I’ve been passionate about. When I was a student, I even started a sound engineering curriculum, and I still work regularly as a live mixing engineer.
This post is the first in a series in which I would like to share the way I use Adobe Audition to produce audio assets for my Captivate projects. In this particular post, I’ll explain what the normalization process of an audio clip is.
Normalizing a Sound Clip in Adobe Captivate
“Normalize” is the label of one of the radio buttons found in the "Adjust Volume" dialog box of Captivate. This dialog box is a well-hidden gem of Captivate. To access it, you have to click the "Adjust Volume" button found in the Object audio or Slide audio dialog box.
For the average person not involved in audio, the expression “Normalizing a sound clip” probably doesn’t mean anything. That’s why Adobe provides a nice little explanation, as seen in the above screenshot. According to it, normalizing is “Selecting the best volume”.
If you want to know more about it... keep reading!
What is Normalization?
According to Wikipedia, “Audio normalization is the application of a constant amount of gain to an audio recording in order to bring the average or peak amplitude to a target level (the norm)”
I think this is a pretty good definition.
First, it explains where the term "Normalize" comes from. When normalizing, we define a certain decibel level that we consider to be the norm, and we bring the level of a given audio clip to that norm. Notice that if the original level of a given audio clip is higher than the defined norm, the normalization process will reduce the audio level of that sound clip. In other words, normalizing does not always boost the sound.
The second thing I like about this definition is that it introduces the fact that there are different kinds of normalization (“...in order to bring the average or peak amplitude to a target level”).
Let’s quickly review the different kinds of normalization available.
Peak normalization changes the volume of an entire audio clip in order to bring its highest peak to the norm. Most of the time, we set that norm to the highest level a digital audio system can handle (0 dB).
The following screenshot shows the sound wave of a raw Adobe Audition recording, with no audio processing applied. The red arrow points to the highest peak, that is, the point that is the farthest away from the middle line (which represents silence). Note that this point is not necessarily above the middle line; it can be below it as well. What matters is that it is the point farthest from the middle line, regardless of direction (up or down).
Take some time to listen to this audio file.
Let’s say, for example, that the highest peak of the sound clip reaches a level of -9 (minus 9) decibels. That means 9 decibels are missing to bring that peak to the highest possible level (0 dB), so a boost of 9 decibels is applied to the whole sound clip. This ensures that the sound clip uses the whole available dynamic range without any distortion!
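The arithmetic above can be sketched in a few lines of Python. This is a generic illustration of the technique, not Audition's actual implementation; the sample values are made up, with the loudest sample chosen so it sits at roughly -9 dBFS:

```python
import math

def db_to_gain(db):
    """Convert a decibel change to a linear amplitude factor."""
    return 10 ** (db / 20)

def peak_normalize(samples, target_db=0.0):
    """Scale all samples so the highest absolute peak reaches target_db.
    0 dBFS (full scale) corresponds to 1.0 in this float representation."""
    peak = max(abs(s) for s in samples)
    peak_db = 20 * math.log10(peak)   # e.g. 0.355 -> about -9 dB
    boost_db = target_db - peak_db    # the decibels "missing" to reach the norm
    gain = db_to_gain(boost_db)
    return [s * gain for s in samples]

# A clip whose loudest peak sits at roughly -9 dBFS:
clip = [0.1, -0.355, 0.2, 0.05]
normalized = peak_normalize(clip)
print(round(max(abs(s) for s in normalized), 3))  # the peak now sits at 1.0 (0 dBFS)
```

Note that the same gain is applied to every sample, so the shape of the waveform, and therefore the sound itself, is unchanged; only the overall level moves.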
The following screenshot shows the same file as above with peak normalization applied and the norm set to 0 dB (as seen in Adobe Audition). Notice that the highest peak has been brought to 0 dB.
Note that if the highest peak is already at (or close to) the norm, the normalization process has little to no effect on your audio clip.
Take some time to listen to this version of the audio clip. Pay particular attention to the overall volume of the sound clip.
In Audition, select the whole clip and use the Effects > Amplitude and Compression > Normalize (process) menu item.
Loudness normalization also changes the volume of an audio clip, but this time we bring the average volume of the clip to the norm, not its highest peak.
For example, let’s say that a given sound recording has an average audio level of -6 decibels, and we want to apply loudness normalization so that the average level of that sound clip reaches -3 dB (we define the norm as being -3 dB). The normalization process then applies a boost of 3 dB to the entire clip.
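The same kind of sketch works for loudness normalization. Loudness can be measured in several ways (RMS, LUFS, and so on); the sketch below uses a simple RMS average as the "average level", which is an assumption for illustration, not necessarily the measure Audition uses:

```python
import math

def rms_db(samples):
    """Average (RMS) level of a clip, in dB relative to full scale."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

def loudness_normalize(samples, target_db=-3.0):
    """Bring the clip's average (RMS) level to target_db.
    Unlike peak normalization, this ignores where the peaks end up."""
    boost_db = target_db - rms_db(samples)
    gain = 10 ** (boost_db / 20)
    return [s * gain for s in samples]

# A clip averaging about -6 dBFS gets a ~3 dB boost to reach -3 dBFS:
clip = [0.5, -0.5, 0.5, -0.5]   # RMS = 0.5, i.e. about -6 dB
louder = loudness_normalize(clip, target_db=-3.0)
print(round(rms_db(louder), 1))  # -3.0
```

The key difference from the peak version is what gets measured before the gain is computed: the average level rather than the single loudest sample.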
The following screenshot shows the same audio file with a Loudness normalization applied. Notice that the overall audio level of the clip is higher than the original raw file, but the highest peak is not at the maximum.
Once again, take some time to listen to this version of the audio. You should notice a sound level slightly lower than in the previous example, but still higher than in the first one.
Now, let’s pretend that the above sound clip (with an average level of -6 dB) has a peak level of -2 dB. By applying the above loudness normalization to the file, we would bring the peak level to +1 dB, which is 1 dB above the maximum possible level. Consequently, clipping occurs and the quality of the sound clip is degraded.
The following screenshot shows the same waveform with an excessive amount of loudness normalization applied. The average audio level of the resulting clip is so high that many peaks go beyond the maximum level and clipping occurs.
To listen to this audio clip, you probably want to reduce the audio volume of your loudspeakers. You should clearly notice the distortion generated by the excessive amount of loudness normalization.
In Adobe Audition, loudness normalization can be applied using the Effects > Match Volume menu item.
The first thing I always do after recording a voice-over audio clip for one of my Captivate projects is to apply peak normalization with the norm set to 100% (or 0 dB). By doing so, I am sure I use the whole available dynamic range without adding any distortion to the sound. Normalizing such an audio clip has two major advantages:
- Because I know that I will normalize the file in Audition afterwards, I can set the sensitivity of my microphone a bit lower than usual during the recording in order to avoid any clipping. Doing so adds a bit of background noise, but there are simple techniques to reduce, and even suppress, that noise. I'll detail these techniques in an upcoming post.
- By making the clip use the whole available dynamic range, I make sure that subsequent audio processing (filtering, compression, ...) will work much better.
I use Adobe Audition for this processing, but the Normalization ("select best volume") command of Captivate can do the job as well.
In the next post of this series, I will show you how I use the graphic equalizer to filter the audio clip. This will allow us to avoid the proximity effect of the microphone and get a clearer sound.