
The pros and cons of mastering


ZoeB


Yeah, and it's called normalization in both cases (peak and average).

 

 

This is what I read about it.

 

https://motherboard.vice.com/en_us/article/ywgeek/why-spotify-lowered-the-volume-of-songs-and-ended-hegemonic-loudness

 

 

 

Unlike RMS, another measure used to determine the average volume of audiovisual productions, LUFS ignores low frequencies, instead focusing on average and high measures above 2 kHz—the most sensitive region for our ears. A scream, for example, carries more volume sensation than a double bass might, although RMS indicates higher numbers for the instrument (and basses weigh heavily with the old measure). This is because the human voice is in the middle region.

 

So in essence, it's more than just normalization. It focuses on a particular part of our hearing spectrum, which is more like filtering. It accounts for the Fletcher-Munson curve.
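To make the distinction concrete, here's a minimal sketch of measuring the same file both ways. It assumes the third-party soundfile and pyloudnorm packages are installed, and "track.wav" is just a placeholder filename:

```python
# Minimal sketch: unweighted RMS vs. K-weighted integrated loudness (LUFS).
# Assumes the third-party soundfile and pyloudnorm packages; "track.wav"
# is a placeholder filename.
import numpy as np
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("track.wav")              # float samples in [-1.0, 1.0]

# Plain RMS in dBFS: every frequency counts equally, so heavy sub-bass
# pushes this number up even if the track doesn't sound any louder.
rms_db = 20 * np.log10(np.sqrt(np.mean(data ** 2)))

# Integrated loudness per ITU-R BS.1770: the signal is K-weighted
# (a high-pass plus a high-shelf boost) before averaging, so lows count
# for less and the upper mids/highs count for more.
meter = pyln.Meter(rate)
lufs = meter.integrated_loudness(data)

print(f"RMS:  {rms_db:.1f} dBFS")
print(f"LUFS: {lufs:.1f}")
```

A bass-heavy track will typically show a higher RMS figure relative to its LUFS figure than a mid-heavy one, which is the point the article is making.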


So in essence, it's more than just normalization. It focuses on a particular part of our hearing spectrum

Yes, but the spectrum weighting is _only_ used to determine the LUFS. The entire spectrum is then reduced or boosted by the same factor to bring the "loudness" to the target average.

 

What Spotify does amounts to the same thing as turning the volume knob on your stereo. No filtering or compression is involved.

 

Heavily compressed music tends to be mid-heavy, and LUFS is designed to punish this practice and encourage more dynamic mixes/masters.


If it disregards some frequencies, then it isn't the entire spectrum.


It's not JUST normalization. You're not just boosting the highest peak up to -14.


What I'm saying is that it IS a filter.


Either way, they treat the sound differently, which is worth taking note of: if leaving that option on is a widely adopted practice, then you may want to account for it in your master.


If it disregards some frequencies, then it isn't the entire spectrum.

 

It disregards some frequencies _only_ when calculating the LUFS. This is where filtering is involved, to determine the perceived loudness to the human ear.

 

It's not JUST normalization. You're not just boosting the highest peak up to -14.

 

No, you're normalizing the average to -14 LUFS, and those 14 dB then become your headroom.

 

Heavily compressed music has a low peak to average ratio and thus will use a relatively low portion of the available headroom - in addition to being reduced in overall volume due to a higher LUFS.

 

This is how this solution "punishes" mad compression and discourages the loudness war from continuing.

 

If you have an extremely dynamic master, I suppose Spotify will not boost the average to the target, as that would result in clipping.
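For illustration, this is roughly how that kind of average-based normalization could be sketched. The -14 LUFS target and the clipping guard are as described above; the measured loudness is assumed to come from a BS.1770-style meter (e.g. pyloudnorm), and the function name is just for the example:

```python
# Rough sketch of average-based loudness normalization as described above:
# one gain value for the whole track, no filtering or compression. The
# measured loudness is assumed to come from a BS.1770-style meter; the
# -14 LUFS target matches the figure discussed in this thread.
import numpy as np

def normalize_to_target(data: np.ndarray, measured_lufs: float,
                        target_lufs: float = -14.0) -> np.ndarray:
    gain_db = target_lufs - measured_lufs          # one number per track
    peak_db = 20 * np.log10(np.max(np.abs(data)))  # sample peak in dBFS

    # If boosting a very dynamic master would push the peak past 0 dBFS,
    # cap the boost rather than clipping (or limiting) the audio.
    if gain_db > 0:
        gain_db = min(gain_db, -peak_db)

    # The same linear factor hits every sample at every frequency:
    # it's a volume-knob move, nothing more.
    return data * 10 ** (gain_db / 20)
```

A loud, heavily compressed master gets turned down a lot; a dynamic master gets turned up only as far as its peaks allow.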


Well, I wouldn't mind some proper €100 per track mastering.

If what I do can be called mastering, then yes, I do it as a separate stage/process.

 

Also, mixing into a 2-buss compressor from the start is a never-ending discussion at Gearslutz.

The thing about compression in electronic music is that it often adds to the musicality of things. When you make it "pump" to a beat, that's not something you can just ask a mastering engineer to do for you and get it to sound how you envisioned it. I usually turn my compressor on in my master mixer insert (I don't multitrack) after I have started to develop the beat/melody I want, then do a few takes into my stereo recording device. Then, when pulling a project together, I do some final EQing and leveling and maybe some limiting. I think it's an important part of being an electronic musician to have some control over the dynamics of your music before a mastering engineer touches it.
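As a toy illustration of that "pumping" idea (not anyone's actual workflow or plugin, just a made-up gain envelope synced to an assumed tempo):

```python
# Toy illustration of beat-synced "pumping": duck the whole mix at each
# beat and let it swell back before the next hit. The tempo, depth and
# release values are invented for the example, not a recipe.
import numpy as np

def pump(data: np.ndarray, rate: int, bpm: float = 130.0,
         depth_db: float = 6.0, release_s: float = 0.3) -> np.ndarray:
    n = len(data)
    beat_len = int(rate * 60 / bpm)                # samples per beat
    gain = np.ones(n)
    for start in range(0, n, beat_len):
        end = min(start + int(release_s * rate), n)
        t = np.linspace(0.0, 1.0, end - start)
        # Drop by depth_db at the hit, then ramp back up over the release.
        gain[start:end] = 10 ** (-depth_db * (1.0 - t) / 20)
    return data * gain if data.ndim == 1 else data * gain[:, None]
```

A real sidechained compressor reacts to the kick itself rather than a fixed grid, but the audible result is the same kind of rhythmic ducking.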


If it disregards some frequencies, then it isn't the entire spectrum.

It's not JUST normalization. You're not just boosting the highest peak up to -14.

What I'm saying is that it IS a filter.

 

It's non-destructive. It's no different than turning the volume knob on your stereo. Spotify is just using a special set of ears to decide where to turn the volume knob.


If you have audio that's run through Spotify, you're running it through what is, essentially, a filter. It's altering audio in a certain frequency band that would not have been altered otherwise. If you just turned your volume knob up, it would have also impacted the lower frequency band, which their process does not.


Braintree, I think what psn is saying here is that lower frequencies are discarded when the system *listens to* the audio to determine its average perceived volume, but left alone when it *alters* the volume to compensate. So if you make a particularly bassy track, it will be technically louder than the others in terms of its literal energy, but won't sound it, due to how psychoacoustics work. As in, all the system does to your audio is globally change the volume of each track, but it does more complex things to determine what that volume should be in the first place.


 

Unlike RMS, another measure used to determine the average volume of audiovisual productions, LUFS ignores low frequencies, instead focusing on average and high measures above 2 kHz—the most sensitive region for our ears. A scream, for example, carries more volume sensation than a double bass might, although RMS indicates higher numbers for the instrument (and basses weigh heavily with the old measure). This is because the human voice is in the middle region.

 

Is that quote right?  I didn't think LUFS ignored frequencies, just applied a weighted filter that accounts for the difference between perceived loudness and actual measurements.  I thought LEQ(m) is the form of measurement that discounts everything below 2kHz, done by Dolby for cinema adverts and trailers.

 

*sends up Paranerd batsignal*


 

 

Is that quote right?  I didn't think LUFS ignored frequencies, just applied a weighted filter that accounts for the difference between perceived loudness and actual measurements.  I thought LEQ(m) is the form of measurement that discounts everything below 2kHz, done by Dolby for cinema adverts and trailers.

 

 

I can't say for sure. I haven't dived that deeply into LUFS (AKA LKFS) and LEQ(m). All I know is to get my Dorroughs and Dolby Media Meter within a certain mean for broadcast to keep them happy. I mainly focus on RMS and Dorroughs for mastering, but I don't let it dictate too much how loud something is. There's a world of difference in how to treat an Arvo Pärt-like track and a Merzbow-like track.

 

There is frequency recognition in LKFS metering, where it mainly focuses on frequencies in the dialogue range, hence the term DialNorm. When there's little information the meter deems to be dialogue, it won't be 'anchored' to that frequency range and will be metering, I assume, most if not all of the frequency range.


 

 

Is that quote right?  I didn't think LUFS ignored frequencies, just applied a weighted filter that accounts for the difference between perceived loudness and actual measurements.  I thought LEQ(m) is the form of measurement that discounts everything below 2kHz, done by Dolby for cinema adverts and trailers.

 

 

Yeah, I've never heard of that either? As a matter of fact, it makes no sense...

For instance, I often receive crappy recordings that are full of rumbling low-frequency material, and such a recording might clock in at x LUFS. But if I clean it up and remove everything below 50-60 Hz, I'll end up with a recording that measures at a lower LUFS than before. The *actual* perceived loudness is the same, though. Now *that* makes perfect sense.

 

It makes sense for the LEQ(m) measurement, though.


Thank you everyone, some really useful and thought provoking advice here!

 

When using compression on a whole group or mix, I'll turn the volume down to compensate, and focus on whether that "pumping" sound is better or worse for that context, ignoring the volume gain you can get from reducing the dynamic range.

 

It sounds like these new standards might end the loudness wars, so what was louder music will just become constant-comfortable-volume music.  I can see how more aggressive styles might still like that, but it's nice to think that more dynamically expressive music will rise above, as well as dip below, consistently aggressive music.  I can see an argument that portable music players allow people to listen to music in harsh environments that encourage reducing dynamic range, so you don't have to ride the volume to hear the quiet bits -- maxed-out tracks to listen to on the subway, sausages for the tube as it were -- but that's more specific to the listener's circumstances than to the music itself.  I'm guessing players will tend to get their own limiters in the future as a result...

 

For now, I'll stick to mastering myself, such as it is, as the small numbers I release my music in don't really warrant the budget, which could be spent on remixes or advertising...  But I've now gone digging through my recent-ish tracks, turned off their mastering chains, and rendered them out as 24-bit premasters, alongside the existing 16-bit self-mastered versions that I'm releasing.  So should anyone want to release them more professionally in the future, I'll be able to give them everything they need.  That was a pretty bad oversight on my part to have for so long!

 

Thanks again everyone!


To be honest, I usually use compressors more like an equalizer, for controlling the tonal balance, than for messing with the dynamics.

 

Do you mean that you'd add compression to some mid-frequency sound if it sounds weak compared to the bass and high frequencies?


I got my best mastering advice from Terminal 11: group similar-sounding tracks together so people keep listening.


