
Recommended Posts

Genuine question! The way I see it, gain is a budget, and normalizing just makes you start the game with lots of money in the bank.

 

I get it: you also raise the noise floor and the resolution is diminished a bit, but is that REALLY that big of a deal? Does that signal loss really matter that much? I thought the whole point of having much higher bit-depth effects processing was that you had an order of magnitude fewer things to give a shit about, as long as the levels sounded right (and of course your monitoring situation was reasonably well-calibrated).

 

Is this just a pristine-audio-from-soup-to-nuts kind of thing? Is that what everyone is shooting for these days, unless you're doing some intentionally over-the-top lo-fi witch-house-ecco-vape-goth-rave-seinfeld thing or whatever the kids are calling it these days?

 

P.S. Not trying to pick on anyone here with whom I might have happened to have a germane discussion. This is one of those nagging "Am I just really stupid(*), or is this that thing where everyone is afraid of asking the same question?" questions.

 

P.P.S. Mods, feel free to merge... this was the most relevant thread I could find: https://forum.watmm.com/topic/70902-normalising-tracks 

 

* Also if I am just being stupid please explain. I will not get butthurt about being stupid, I promise.


I personally haven't noticed any negative effects from normalising, at least within reasonable parameters (i.e., not recording so quietly that the noise floor is almost as loud as the intended audio). Then again, I don't really care too much about my recordings being 100% pristine: most of my recent tracks have been made with a Game Boy, FFS lol


then again, I don't really care too much about my recordings being 100% pristine […]

lol see, that's the thing. You and I have both at least dipped our toes in chip music, so I'm wondering if there's a kind of crustier DIY aesthetic that comes with that, maybe even a culture gap between that and some other prefectures of the electronic isles?

 

Most or all of the featured artists here seem to be more "mid-fi", and I like all of them, so I guess I find their generally more off-the-cuff aesthetic preferable, or at least more comfortable. Which is not to say that it's the "right" or "best" or "recommended" approach. I guess it depends what your goals are with the thing, and whether you even have any beyond enjoying yourself.


Depends on the source material and in what context you're normalizing it. Generally I think I would normalize, because then at the next stage of processing I will have the most to work with. I can always turn down the gain later on, but if the source clip is too quiet, turning gain up by 1dB will have less effect than with a normalized clip.

I kind of think about it as gain staging - starting from the sound source you want to have the best signal-to-noise ratio without clipping (unless you're specifically going for a digital lo-fi bitcrushing aesthetic), so normalizing almost always makes sense to me. However, if the noise floor is too high, then it's maybe more useful to use compression, to accentuate the parts of the sound that are more important to hear. In a pseudomathematical sense, normalizing is a non-conditional operation - every part of the sound is amplified - but compression is conditional - it lets you decide (with limitations) what parts of the sound you want amplified.
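The conditional/unconditional distinction can be sketched in a few lines of Python (a toy illustration with made-up sample values and hypothetical helper names, not anyone's actual DSP code):

```python
# Toy sketch: normalization applies ONE gain to every sample,
# while a compressor's gain depends on each sample's level.

def normalize(samples, target_peak=1.0):
    """Unconditional: every sample gets the same gain."""
    peak = max(abs(s) for s in samples)
    gain = target_peak / peak
    return [s * gain for s in samples]

def compress(samples, threshold=0.5, ratio=4.0):
    """Conditional: only the parts above the threshold are turned down."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

print(normalize([0.1, -0.25, 0.5]))  # whole clip scaled so the peak hits 1.0
print(compress([0.1, 0.9]))          # quiet sample untouched, loud one reduced
```

So normalizing preserves the ratios between loud and quiet parts (noise floor included), while compression changes them.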

Edited by thawkins

I never ever ever normalise.  Before reading this thread it had never occurred to me to do it lol.  I just make sure I have a good signal level to begin with, nice headroom, and away I go.  Considering I try to mix to around -10 dBFS anyway, I'd only have to be constantly turning shit down.  From a gain point of view, I mix from silence upwards, and not from 0dB down, if that makes sense; whacking everything to the max level would annoy me I think.


I personally haven't noticed any negative effects from normalising […]

+1

It's not bad if it sounds OK. If the noise is cool, it's OK. If the noise is annoying, it's not. Working with tools like normalization can get creative or useful just like any tool.

Personally, I don't use normalization, simply because of the terrible results I heard decades ago, and I've never bothered to revisit the idea. I work a lot like b born droid said above: record and mix around -10 dB and polish it in mastering.

Guest Chesney

I rarely normalise; I never need to unless it's from another source, i.e. a field recording. It's rare that something is so low you cannot get it in the mix with just the level. I do, however, put comp on nearly everything, and I don't care if that is considered bad form. I like to get every track to a certain level in my head and then automate it however high or low in the mix it needs to be.

I don't see a problem with using normalisation if the source is really low consistently. 

 

Also, I feel that noise, hum, and general stuff that is considered bad is the best thing. Music sounds and feels much more real and alive when it comes with a boatload of nasties. It's the reason why I feel no need to get a better interface or record at higher rates. Cleanliness sounds clinical and sparse. Gaps between all the track elements might show great production and skill, but it does not sound like music that transcends human manufacture. All the gaps should be filled with anomalies and noise to build unique timbre.

 

In my opinion of course.

Edited by Chesney

If it sounds good to you then do it! It would be an easy way to start mastering your track, and if it doesn't sound good, try a different school of thought. I was taught to master using limiters and to avoid compression and normalization. I don't know if it's a good way to do it, but it's what I've been doing forever now; I had the idea put in my head at an impressionable age. One school of thought is to take your final bounce down to -10 and set the input on the limiter to +10. The result can be similar to a gentler version of normalizing. I typically bounce my final stereo track through limiters multiple times and gently get the signal louder. I use compression on individual tracks but try to avoid it on the final stereo mix.


I normalize my samples when I'm doing decibel SPL layering, to make sure that they're all roughly at the same volume for a reference point, or if one of my songs' peak amplitude is less than -3 dB.

Edited by Entorwellian

Isn't normalizing different from leveling (Audacity's leveling)?

 

Pretty sure normalizing just brings the loudest sound up to 0dB. It's literally no different from gaining it yourself. Normalizing has no gain reduction, no compression. At least Audacity's normalize, which is the only one I use.



It seems so these days. But there was a time, in some audio editor somewhere, probably Cool Edit or Sound Forge, where normalizing tried to bring everything in a sample to the same volume like it was extremely limited. This is what I remember and the reason why I've always avoided it.


 


I feel like Sound Forge had this odd behavior.


 

 


Weird... never heard of this.

Pretty sure normalizing just brings the loudest sound up to 0dB […]

Normalizing brings the peak amplitude up to a specified level (not necessarily 0 dB) and/or balances the left/right channels. It's levelling with a limit.

Edited by Entorwellian

Yup the way I see it (which I suppose could be wrong!) is that it scans the whole waveform to find the loudest bit. Then it makes it as loud as possible (aka 0dB). So you get maximum headroom out of downstream gain staging; you only have to make things quieter and not louder (the latter potentially introducing noise).
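That scan-then-scale behavior is simple enough to sketch. The following is my own minimal illustration of the common editor behavior described above (`peak_normalize` is a hypothetical helper, not any particular editor's code):

```python
import math

# Sketch of what "normalize" does in most editors: scan the whole
# waveform for the loudest sample, then apply one gain so that
# peak lands exactly on a target level (default 0 dBFS).

def peak_normalize(samples, target_dbfs=0.0):
    peak = max(abs(s) for s in samples)    # find the loudest bit
    target = 10 ** (target_dbfs / 20.0)    # dBFS -> linear amplitude
    gain = target / peak
    return [s * gain for s in samples]

# A clip peaking at 0.25 (about -12 dBFS) gets roughly +12 dB of gain:
loud = peak_normalize([0.05, -0.25, 0.1], target_dbfs=0.0)
print(max(abs(s) for s in loud))  # 1.0
```

Everything else in the clip, noise floor included, rides along at the same ratio, which is why it's equivalent to turning the gain up yourself.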


The only possible reason I can think of to skip normalizing is to keep a bunch of stuff from the same source together (like if you were making a sample kit) where you want to retain the natural fluctuations in volume. But even in that case, I would probably record anywhere from 5 seconds to several minutes of audio (hardware jams, field recordings, random garbage on Netflix) to my H1, and then I would still probably normalize the whole shebang before I chop it up.

 

That reminds me, do any popular sample kits (disclaimer: I haven't bought one since buying the lifetime plan of kb6.de classic drum machine WAVs) advertise a certain amount of headroom? In which case they are kind of just saying the samples are quieter than maximum. Although I guess you could argue that fewer surprises is usually better.

Edited by sweepstakes

Also I feel that noise, hum, general stuff that is considered bad is the best thing. […]

I actually agree with the whole 2nd paragraph for the most part. I do think that sometimes that can be perceived as sloppy or as a copout, but some of my favorite music is dusted with noise and is better for it.

 

I do like to normalize at the lowest scale where it possibly makes sense. I guess, to me, almost every element in the chain has some kind of volume knob, and I want the most potential expression out of every volume knob. It's kind of like buying groceries and then freezing some of them I guess. You do a little extra work and you can have more later, even if the freshness is somewhat penalized.

 

Also my monitors hum like a motherfucker and I'm always paranoid about leaving any (undesirable) noise below the floor of that hum ... HMM maybe that's the real problem :)


It may have already been covered (don't have time to read the whole thread), but at least one place where you don't want to normalize is a mix that's going to be mastered.  If you've normalized your premaster to just below zero, there's no headroom to work with when it gets mastered, so the first step in the mastering chain is going to be turning it down a few decibels anyhow.  In that situation, normalizing does absolutely nothing other than performing some unnecessary math on your audio. If you're a pristine-sound sort, that can theoretically have a negative effect on it, and even if you aren't, it's still a waste of a few seconds of your time. I'm not much of a pristine-sound person myself - in fact, if anything, I deliberately try to avoid that - but once I've got stuff in the computer through whatever cheap pedal or piece-of-crap home stereo equipment I was running it through on the way, I don't like it to be degraded any more unless it's deliberate.  I like the DAW itself to more or less stay out of the way.

This video is a plugin demo, not specifically about normalization or anything, but there's a bit at around 7:30 where he runs a track through a whole bunch of stock Logic gain plugins set to give unity gain at the end of the chain, so the signal is theoretically not being changed, but the actual result is pretty noticeable - just like if you had a good analog mixer and patched a signal through a bunch of channels in series, all set flat, so nothing was nominally being changed but the signal was still going through the circuits; only in this case it's mathematical operations instead of circuits.  Pretty eye-opening, really, and easy to replicate in your own DAW: https://youtu.be/wIxnVboa1Fo?t=7m30s

 

If you're mastering without any kind of brickwall limiting or clipping, I don't see any reason why it wouldn't make sense to master with a comfortable amount of headroom and then normalize your tracks at the very end. (In that case it would probably make sense to save the master as 32-bit float and normalize it right before you convert it to 24- or 16-bit fixed point, to avoid an extra pass of dithering: if you save your master as, say, a 24/96 file you'll want to dither, but then when you normalize that file you'll want to dither again before you save it.)
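That float-then-quantize-once workflow might look roughly like this (a sketch under assumptions: plain TPDF dither and a hypothetical `to_int16_with_dither` helper; real mastering tools offer noise-shaped variants):

```python
import random

# Sketch: normalize in the float domain, then quantize to 16-bit with
# TPDF dither in a single pass, so the signal is only dithered once.

def to_int16_with_dither(samples_float):
    peak = max(abs(s) for s in samples_float)
    normalized = [s / peak for s in samples_float]   # float-domain normalize
    out = []
    for s in normalized:
        dither = random.random() - random.random()   # TPDF noise, +/-1 LSB
        v = round(s * 32767 + dither)
        out.append(max(-32768, min(32767, v)))       # clamp to 16-bit range
    return out

master = [0.0, 0.5, -1.0, 0.25]                      # stand-in for a float master
print(to_int16_with_dither(master))
```

The point is ordering: one float normalize plus one dithered quantize, instead of dither-save-normalize-dither again.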

 

Pretty much the only things I ever find myself needing to normalize anymore are samples, mostly individual drum samples or short one-shot things.  Not because I think there's something bad about normalizing, just because there aren't many situations where it's needed.  I'm almost always doing this stuff for myself, but sometimes I actually do it for work, and there have been times when a client has had a specific peak level they need everything to hit; that's a good candidate for normalizing.  Seems like these days everyone I do anything for has figured out that, as long as you aren't clipping, peaks don't mean much, and it's RMS or LUFS that you should really be using as your scale if you're trying to get things consistently loud, so it has been a long time since I've had to worry about a specific target peak beyond "stay below -0.2 dBFS" or "stay below -1 dBFS" or something.  Crest factor is a whole other thing, but peaks on their own in isolation don't really mean too much as long as they aren't clipping.
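The peak-vs-loudness point is easy to demonstrate numerically. A toy sketch (contrived signals; real LUFS adds K-weighting and gating on top of an RMS-like core, which this does not implement):

```python
import math

# Two clips with identical peaks but very different RMS "loudness".

def peak_dbfs(samples):
    return 20 * math.log10(max(abs(s) for s in samples))

def rms_dbfs(samples):
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

spiky = [1.0] + [0.01] * 99   # one transient, then near-silence
dense = [1.0, -1.0] * 50      # constantly loud, square-ish

print(peak_dbfs(spiky), peak_dbfs(dense))  # both 0.0 dBFS
print(rms_dbfs(spiky), rms_dbfs(dense))    # roughly 20 dB apart
```

Same peak, wildly different perceived level, which is why a peak target alone says so little.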


 

yeah exactly. it's just a volume boost to a set level. it shouldn't affect the actual audio quality at all. if you're hearing more noise after, it was probably always there and you're only noticing it now cus the whole thing is louder.

 

 

It's not actually as simple as it sounds; you're still basically recalculating the entire waveform any time you change the volume.

 

 

Anyway, I'm pretty much in the same camp as Sweepstakes and Chesney with regard to noise, just pointing out that normalization isn't actually transparent (it isn't going to be audible if you do it once, but if you do it a lot it will eventually start to make audible changes) and also hardly ever necessary so I don't really see any reason to do it personally.
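The cumulative-error point can be shown with a quick experiment (my own illustration, assuming a worst-case 16-bit fixed-point round trip; a 32-bit float DAW path would drift far less):

```python
# Each gain change scales the samples and re-quantizes them, like saving
# a 16-bit file after every edit. One pass rounds harmlessly; many
# passes let the rounding errors accumulate into audible changes.

def apply_gain_16bit(samples, gain):
    # Scale, round, and clamp back to signed 16-bit integers.
    return [max(-32768, min(32767, round(s * gain))) for s in samples]

original = list(range(-1000, 1000, 7))   # stand-in waveform
processed = original[:]
for _ in range(50):                      # 50 "unity gain" round trips
    processed = apply_gain_16bit(processed, 10 ** (-3 / 20))  # -3 dB
    processed = apply_gain_16bit(processed, 10 ** (3 / 20))   # +3 dB

drift = max(abs(a - b) for a, b in zip(original, processed))
print(drift)  # nonzero: the nominally unity chain changed the samples
```

So a single normalize is inaudible, but stacking many quantized gain operations is not actually transparent, which is the point above.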

 

I forget which big-name, old-school producer said it back in the '90s when CD was really catching on, but when I worked at a record shop a long time ago, we had a big poster taped up with the quote "life has surface noise" on it, and it's true.

Edited by RSP
