
Normalizing Is Bad?


sweepstakes


 

Normalizing Is Fine.

This.

 

I mean... this thread, you guys... I... damn.

Well, that's my view too, but I just wanted to get it out in the open and survey opinions without making anyone feel stupid.

 

 



Yeah, exactly. It's just a volume boost to a set level; it shouldn't affect the actual audio quality at all. If you're hearing more noise afterwards, it was probably always there and you're only noticing it now because the whole thing is louder.
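(For anyone who wants to see how little is going on under the hood, here's a minimal sketch of peak normalization, assuming float samples in [-1, 1]; the function name is mine, not from any particular editor.)

```python
import numpy as np

def peak_normalize(x: np.ndarray, target_dbfs: float = -1.0) -> np.ndarray:
    """Scale the whole signal so its highest sample peak lands at
    target_dbfs. It's a single gain factor applied to every sample,
    so the dynamics and the signal-to-noise ratio are unchanged."""
    peak = np.max(np.abs(x))
    if peak == 0.0:
        return x  # silence: nothing to scale
    target_linear = 10.0 ** (target_dbfs / 20.0)
    return x * (target_linear / peak)
```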



It's not actually as simple as it sounds. You're still basically recalculating the entire waveform any time you change the volume.


Anyway, I'm pretty much in the same camp as Sweepstakes and Chesney with regard to noise. I'm just pointing out that normalization isn't actually transparent (it isn't going to be audible if you do it once, but do it enough times and it will eventually start to make audible changes), and it's hardly ever necessary, so I don't really see any reason to do it personally.
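(A toy illustration of the "not transparent when repeated" point: at a fixed bit depth, every gain change re-quantizes the samples, and the rounding errors slowly pile up. Purely synthetic numbers with 16-bit-style rounding; nothing here is from a real editor.)

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-0.5, 0.5, 48_000)         # one second of fake "audio"
q = lambda s: np.round(s * 32767) / 32767  # 16-bit-style quantization

down, up = 10 ** (-3 / 20), 10 ** (3 / 20)  # -3 dB down, then +3 dB back
y = q(x)
for _ in range(100):                        # 100 attenuate/normalize cycles
    y = q(y * down)                         # each pass re-quantizes...
    y = q(y * up)                           # ...and the error accumulates

err_db = 20 * np.log10(np.max(np.abs(y - q(x))))
print(f"peak accumulated error after 100 round trips: {err_db:.1f} dBFS")
```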

I forget which big-name, old-school producer said it back in the '90s when CD was really catching on, but when I worked at a record shop a long time ago we had a big poster taped up with the quote "life has surface noise" on it, and it's true.

I'm almost always doing this stuff for myself, but sometimes I actually do it for work, and there have been times when a client has a specific peak level they need everything to hit; that's a good candidate for normalizing. These days everyone I do anything for seems to have figured out that as long as you aren't clipping, peaks don't mean much, and it's RMS or LUFS that you should really be using as your scale if you're trying to get things consistently loud, so it has been a long time since I've had to worry about a specific target peak beyond "stay below -0.2 dBFS" or "stay below -1 dBFS" or something. Crest factor is a whole other thing, but peaks on their own, in isolation, don't really mean too much as long as they aren't clipping.
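(If the peak-versus-RMS distinction is fuzzy, here's a hedged sketch of both measurements; LUFS builds on the RMS idea with K-weighting and gating, which this deliberately omits.)

```python
import numpy as np

def peak_dbfs(x: np.ndarray) -> float:
    """Sample peak in dBFS: only the single loudest sample matters."""
    return 20 * np.log10(np.max(np.abs(x)))

def rms_dbfs(x: np.ndarray) -> float:
    """RMS level in dBFS: average energy, a far better proxy for how
    loud the material actually sounds than the peak is."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))
```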

 

I used to follow dBFS specs for broadcast (usually -2 dBFS) until I realized they really don't care as long as you're below it. So now I have a -5 dBFS limiter on my master, which leaves me headroom to gain the whole mix up a few dB to hit -24 LKFS. If anyone in quality control actually gave me grief for not hitting -2 dBFS, I would need to call them and explain the basics of what their job entails. I've had QC come back to say that I barely had anything in the LS, RS and LFE of a 5.1 mix of a docu-series that was wall-to-wall dialogue, narration and music, where I had one day for all the editing, SFX and mixing. I had to explain that using the surrounds and LFE is a creative option, not a prerequisite for passing their broadcaster's specs. I'm not going to put stuff in the surrounds and LFE 'just because' if it's not needed. Sometimes QC departments are just drones who follow rules without knowing why.
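(For anyone who wants to check a mix against a -24 LKFS target themselves, the pyloudnorm package does a BS.1770 integrated-loudness measurement in a few lines. A sketch, assuming a stereo file called "mix.wav"; the filename is made up.)

```python
import soundfile as sf       # pip install soundfile pyloudnorm
import pyloudnorm as pyln

data, rate = sf.read("mix.wav")             # hypothetical pre-master mix
meter = pyln.Meter(rate)                    # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)  # integrated loudness in LUFS/LKFS
print(f"integrated loudness: {loudness:.1f} LKFS")

# Apply a plain gain so the whole mix lands on the -24 LKFS broadcast target.
normalized = pyln.normalize.loudness(data, loudness, -24.0)
```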


Normalizing also corrects the DC offset, which definitely screws with your waveform.

 

The Leveller effect in Audacity is the more destructive one; unlike plain Normalize, it messes with the noise floor. Worth messing around with.

 

Squee, it's honestly a subject worth discussing at length. Don't act tough.


Normalizing also corrects the DC offset, which definitely screws with your waveform.

Another potentially stupid question: is there a time when you'd actually want to keep the DC offset? There are certainly ways to control CV equipment by amplifying large DC offsets (as long as your DAC doesn't have one of those DC-filtering caps, I guess), where you'd definitely want to keep it. For normal usage it just needlessly consumes headroom, right...?


Normalizing also corrects the DC offset, which definitely screws with your waveform.

 

Far from all audio editors do that! (I've actually never seen it.) They're usually two separate commands in most editors I've seen, and if you've got DC offset and you normalize without fixing it first, the offset error just grows... (and eats headroom, just as sweepstakes guesses).

 

Nowadays I rarely need to correct DC offset; I guess AD/DA stages are better now.
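(For completeness, a minimal sketch of what a DC-offset fix amounts to; real editors often use a DC-blocking high-pass filter instead, so a slowly drifting offset gets caught too. The function name is mine.)

```python
import numpy as np

def remove_dc(x: np.ndarray) -> np.ndarray:
    """Remove a constant DC offset by subtracting the signal's mean.

    Why it matters for headroom: audio swinging +/-0.5 around a +0.3
    offset already peaks at 0.8, so normalizing before removing the
    offset wastes 0.3 of the available range on something inaudible."""
    return x - np.mean(x)
```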


 

Sometimes QC departments are just drones who follow rules without knowing why.

 

 

 

Yeah, the head engineer at the audiobook publisher I used to do a lot of work for (before they got bought out) had never used a DAW other than Pro Tools, had never even heard of offline rendering before like 2015, and didn't understand why anyone would ever need it.

 

Anyway, as far as normalizing goes, it's not going to do any big harm in practice, but in principle it's unnecessary, and if you're sending your stuff out to someone else to master, they're definitely going to prefer it if you don't normalize.


 

Normalizing also corrects the DC offset, which definitely screws with your waveform.

is there a time when you'd actually want to keep the DC offset?

 

DC offset isn't Fine.

if you're sending your stuff out to someone else to master, they're definitely going to prefer it if you don't normalize.

OK, so this right here is where my understanding gets woolly. To me, audio caveman that I am, if I send something off to someone to master, what they're basically getting is my stuff with random-ass peak levels. Which seems almost rude of me to do, but hey, maybe there's some clue hidden in that peak level, or maybe they just don't care about the peak level and are going to vibe on where it should be or something.

 

TL;DR - Can I get some clarification on why it's actually preferable to leave it at whatever volume it happens to be at before submitting the end product for mastering? If you want to scoff, that's totally cool, just please explain; if I'm acting the fool here and the net result is a better understanding, it's 100% worth it.


TL;DR - Can I get some clarification on why it's actually preferable to leave it at whatever volume it happens to be at before submitting the end product for mastering?

 

Mastering engineers prefer to process the audio files themselves. Depending on the material, they will usually do some kind of normalizing at the end anyway - but they want to be the ones who do it.

(If they're delivered a file that is not just normalized but also compressed or limited, or even clipped and damaged, there is not much that can be done to repair that damage. There are "de-clipping" plug-ins, but I don't think they can perform magic.)


 

TL;DR - Can I get some clarification on why it's actually preferable to leave it at whatever volume it happens to be at before submitting the end product for mastering?

 

Mastering engineers prefer to process the audio files themselves.

 

 

That's the gist of it, yeah. Headroom for EQ, band saturation, gain stage coloring, limiter coloring, etc.

 

sweepstakes, there are no stupid questions! We can't know everything in audio engineering without playing around or asking questions. A lot of it is learned through experience, and some of those ideas then get passed around without the reason or context behind them. And some of those ideas are insignificant or just wrong!

 

A good mastering engineer should be able to tell when a mix has simply been gained up close to 0 dB and when it's been limited to 0 dB. I usually won't say anything if the mix just seems to be bumped up (or normalized) to 0 dB; I'll simply bump it back down to -6 dB or so. But if it's obviously already squashed by a limiter, I can't do much other than a bit of EQ, and a bit of saturation if it works well with the material. Sometimes saturation just sounds like clipping instead of adding warmth if the mix is already limited.
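(One rough, non-authoritative heuristic for that "gained up versus limited" call is the crest factor, the peak-to-RMS ratio: a plain gain change leaves it untouched, while heavy limiting shrinks it. A sketch:)

```python
import numpy as np

def crest_factor_db(x: np.ndarray) -> float:
    """Peak-to-RMS ratio in dB. Transparent gain changes (including
    normalization) leave this unchanged; heavy limiting pushes it
    down, since the peaks get shaved while the RMS climbs."""
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(peak / rms)
```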

 

Ideally, a final mixdown sent for mastering should already be a pristine, great-sounding product. Turning up a really well-mixed track that peaks around -8 to -6 dBFS, on high-end balanced monitors in a well-treated room, should sound balanced, dynamic, clear and spacious. Then all the mastering engineer needs to do, if anything, is bring the loudness in line with the pool of music being released, without losing any of the quality of the 'perfectly mixed' track.

 

But all the different mixing environments sometimes create imbalances in volume and frequencies, noises, clipping, glitches, skips, etc., so the mastering engineer needs to fix the problems where possible (sometimes remixing is the only solution), enhance the strengths, and add their own personal touch with their favorite tools. Headroom gives them the means to do that.


Guest Chesney

Yeah^^^.

 

I actually cheat, and it's probably not the right thing to do, as Paranerd asked me to take the limiter off my track for the Monomachine comp even though there was nothing on it, haha.

I basically compress and process every track while making the music. Then, when I'm sort of happy with the arrangement and whatnot, I set up some basic mastering bits on the master and fine-tune with automation and mixing, so the final product is pretty much what I hear. Then I take off the master chain, and the level should be just right, maybe a tad hotter than typical pre-master tracks. It's rare that I send tracks to people, so I usually send this to tape, then through some valve pres and back into the DAW for simple mastering. I'm pretty happy with my results, but I need to fine-tune the process for any upcoming releases.


Track and mix at 24-bit (we're talking about 144 dB of dynamic range here...), leave some nice headroom when you record and process audio, and voilà, no need to normalize ever again.
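(For anyone checking that figure: quantization gives roughly 6.02 dB of dynamic range per bit, so 24 × 6.02 ≈ 144.5 dB in theory, versus about 96 dB at 16-bit.)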

