
mastering discussion 2013


auxien


i mean, regardless of whether he thinks sampling rates higher than 44.1kHz are important or not, his resume is ridiculously impressive. I've been interested in him ever since I heard Rushup Edge


Guest Rambo

For years i thought that mastering was like the music version of owning people in games. I actually thought people were being arrogant when they said they were mastering. Never realised it was a process.


Guest cult fiction

I'm always shocked to hear how many stupid things can come out of golden 'pro' mouths!

This Mr. Coituston said that compared to a 44.1k file, a 176.4k digital file sounds more 'natural' because

it's >>>high definition<<<.

What frequencies or wave shapes below 22.05k (a 30-year-old can hear up to 18k max) can't a 44.1k digital file reproduce that a 176.4k file can?!

No knowledge of sampling theory whatsoever! He's just another marketing parrot.

Hah...professionals... :facepalm:

 

 

 

There are a whole host of reasons why the Nyquist frequency doesn't tell the whole story: it only truly applies to systems for which the beginning/end conditions are known and each frequency's waveform contains at least two sample points. Obviously this is not true for any recorded audio, as you are always starting to record in the middle of a wave.

 

Frequencies above the Nyquist frequency can introduce aliasing artifacts into the perceived frequencies below it; you can read here for a description:

 

http://www.daqarta.com/dw_0haa.htm
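You can see the folding for yourself with a few lines of numpy (a minimal sketch; the 25kHz test tone and one-second buffer are arbitrary choices of mine, not anything from the linked page):

```python
import numpy as np

fs = 44100                            # sample rate in Hz
f_in = 25000                          # tone above the 22050 Hz Nyquist limit
t = np.arange(fs) / fs                # one second of sample times
x = np.sin(2 * np.pi * f_in * t)      # sample the 25 kHz sine

# The strongest frequency in the sampled data is not 25 kHz:
spectrum = np.abs(np.fft.rfft(x))
f_peak = np.argmax(spectrum) * fs / len(x)
print(f_peak)                         # ~19100 Hz, i.e. fs - f_in folded down
```

After sampling, the 25kHz tone is indistinguishable from a 19.1kHz one; that folded-down image is the aliasing distortion being described.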

 

If your source recording contains frequencies above 22kHz, they will introduce some amount of audible distortion. You will also frequently produce frequencies above 22kHz when multiple waveforms interact, again introducing audible distortion.

 

A raw 96kHz or higher master file should sound "better", with less distortion to trained ears, than a 44.1kHz file. However, you can filter out frequencies above 22kHz, or cover them up with dithering, to make the two sound much, much closer.

 

On top of that, you have to take into account that digital sample data is converted to an analog signal to push a speaker in/out.

 

Consider a full-amplitude sine wave at 1kHz. If we're working in 16-bit, the values are going to sweep the full range of 2^16 levels over each 0.001-second cycle. At 44.1kHz this equates to ~44 samples per cycle. Obviously you cannot capture 2^16 unique values in 44 samples. No matter what analog conversion method you employ, you will never be able to reconstruct the original signal from 44 samples in the general case.
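Here's a minimal sketch of those numbers (assuming 44.1kHz, 16-bit, and numpy; the code is mine, not the poster's):

```python
import numpy as np

fs = 44100                   # sample rate
f = 1000                     # 1 kHz sine
samples_per_cycle = fs / f
print(samples_per_cycle)     # 44.1 samples per 1 ms cycle

# Quantize one cycle to 16-bit and count the distinct levels actually hit:
t = np.arange(44) / fs
x = np.round(np.sin(2 * np.pi * f * t) * 32767).astype(np.int16)
print(len(np.unique(x)))     # a few dozen values, nowhere near 2**16
```

Note that, as the xiph.org article quoted further down argues, hitting every quantization level within one cycle is not actually required for lossless reconstruction; the sketch only confirms the raw sample counts.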

 

Furthermore, were the sine wave to be offset such that the peak fell between two of the digital samples, the sampled amplitude would be slightly reduced. Increasing the sample rate does a better job of capturing the signal. Whether a human ear can tell is a bit up in the air, but there's no question that the data is different, and that the higher sample rate does a better job at capturing even frequencies much lower than 22kHz.
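A quick way to see that raw-data difference (a sketch; I picked 44kHz instead of 44.1kHz so every cycle samples the same phases, and the half-sample phase offset is arbitrary):

```python
import numpy as np

fs, f = 44000, 1000                 # exactly 44 samples per cycle
t = np.arange(fs) / fs
for phase in (0.0, np.pi / 44):     # zero vs. half a sample step of offset
    x = np.sin(2 * np.pi * f * t + phase)
    print(round(x.max(), 5))        # 1.0 vs ~0.99745: the largest raw sample
                                    # depends on where the true peak falls
                                    # between sample points
```

Whether that difference survives proper reconstruction is exactly what the next post disputes.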


 

Consider a full-amplitude sine wave at 1kHz. If we're working in 16-bit, the values are going to sweep the full range of 2^16 levels over each 0.001-second cycle. At 44.1kHz this equates to ~44 samples per cycle. Obviously you cannot capture 2^16 unique values in 44 samples. No matter what analog conversion method you employ, you will never be able to reconstruct the original signal from 44 samples in the general case.

 

 

This is the most thorough (and to me, authoritative) article I have read in the debate about higher sample rates and bit depths:

http://people.xiph.org/~xiphmont/demo/neil-young.html

 

Your statement above contradicts this one:

 

The most common misconception is that sampling is fundamentally rough and lossy. A sampled signal is often depicted as a jagged, hard-cornered stair-step facsimile of the original perfectly smooth waveform. If this is how you envision sampling working, you may believe that the faster the sampling rate (and more bits per sample), the finer the stair-step and the closer the approximation will be. The digital signal would sound closer and closer to the original analog signal as sampling rate approaches infinity.

 

(...)

 

All signals with content entirely below the Nyquist frequency (half the sampling rate) are captured perfectly and completely by sampling; an infinite sampling rate is not required. Sampling doesn't affect frequency response or phase. The analog signal can be reconstructed losslessly, smoothly, and with the exact timing of the original analog signal.

 

Please explain how there can be two completely different views on something as fundamental as this - or why you are right and Monty is wrong.
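For what it's worth, Monty's claim is easy to check numerically with Whittaker-Shannon (sinc) interpolation, which is the ideal reconstruction the article describes (a minimal sketch; the 18kHz tone and the finite window are my own choices):

```python
import numpy as np

fs = 44100
n = np.arange(-2000, 2000)            # sample indices around t = 0
f = 18000                             # well below the 22050 Hz Nyquist limit
x = np.sin(2 * np.pi * f * n / fs)    # the sampled signal

# Reconstruct the waveform at an instant halfway BETWEEN two samples:
t = 0.5 / fs
estimate = np.sum(x * np.sinc(t * fs - n))
print(estimate, np.sin(2 * np.pi * f * t))   # agree to three or four decimal
                                             # places; the residual gap is
                                             # truncation error from using a
                                             # finite window of samples
```

The reconstruction recovers the value between samples, including a peak that falls between them, which is why the stair-step picture is misleading.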

 

 

 

Furthermore, were the sine wave to be offset such that the peak fell between two of the digital samples, the sampled amplitude would be slightly reduced. Increasing the sample rate does a better job of capturing the signal. Whether a human ear can tell is a bit up in the air, but there's no question that the data is different, and that the higher sample rate does a better job at capturing even frequencies much lower than 22kHz.

 

 

This seems intuitively correct to me, but doesn't this require a "theoretical" speaker (and signal chain) that is able to shift direction/phase at the speed of the sample rate?

 

In other words, doesn't inertia in the actual speaker cone make sure that the "sine wave" like properties of the original analog signal are recreated?


Guest cult fiction

Thanks for the article, it was a good read.

 

The article is talking about the "end result" file, whereas I was (mostly) talking about the production stage. As an example of this difference, in the case of 16-bit vs. 24-bit the article explicitly calls out that 24-bit is needed during the production stage due to accumulation of error.

 

When it comes to sample rate, the article admits that aliasing distortion is a problem:

 

So the math is ideal, but what of real world complications? The most notorious is the band-limiting requirement. Signals with content over the Nyquist frequency must be lowpassed before sampling to avoid aliasing distortion; this analog lowpass is the infamous antialiasing filter. Antialiasing can't be ideal in practice, but modern techniques bring it very close. ...and with that we come to oversampling.

He goes on to talk about how oversampling in the digital-to-analog converter at the end of the chain deals with this. However, suppose you have two source recordings with frequency content above the Nyquist frequency that you're mixing together. Whatever anti-aliasing method you use is going to change the signal slightly, and you are accumulating that error/distortion.

 

Where I was wrong was the idea that a higher sample rate file should sound better. In an ideal world this would be the case, because higher frequency content would stay high without distorting the lower frequencies. But due to the real-world construction of speakers, ultrasonics are apparently a major issue. At the mixing phase, however, by mixing at a higher sample rate you are still producing fewer aliasing artifacts. When you downsample (with dithering) to 44.1kHz you get the best of both worlds: higher precision when mixing the sources together, but a final set of frequencies that play well with real-world speakers.
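Here's a sketch of that processing-stage argument (assuming a squaring nonlinearity as a stand-in for whatever mixing or processing step generates ultrasonics; the 15kHz tone is arbitrary):

```python
import numpy as np

def strongest_frequencies(x, fs, count=2):
    """Frequencies of the largest peaks in the magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(x))
    bins = np.argsort(spectrum)[-count:]
    return sorted(bins * fs / len(x))

f = 15000                                 # a 15 kHz tone
for fs in (44100, 176400):
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * f * t) ** 2    # squaring creates a 30 kHz component
    print(fs, strongest_frequencies(x, fs))
```

At 44.1kHz the 30kHz product has nowhere to go and folds down to an audible 14.1kHz; at 176.4kHz it sits harmlessly at 30kHz, where a filter can remove it before the final downsample.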

 

 

Furthermore, were the sine wave to be offset such that the peak fell between two of the digital samples, the sampled amplitude would be slightly reduced. Increasing the sample rate does a better job of capturing the signal. Whether a human ear can tell is a bit up in the air, but there's no question that the data is different, and that the higher sample rate does a better job at capturing even frequencies much lower than 22kHz.

This seems intuitively correct to me, but doesn't this require a "theoretical" speaker (and signal chain) that is able to shift direction/phase at the speed of the sample rate?

 

In other words, doesn't inertia in the actual speaker cone make sure that the "sine wave" like properties of the original analog signal are recreated?

 

I need to read up on digital-to-analog conversion a bit more, I guess. It seems like for waveforms that aren't sine waves it becomes a bit more ambiguous. For instance, while it's true that a square wave can be represented as the sum of an infinite number of sine waves, you are sampling at a lower rate. How can a 44.1kHz discrete capture of a square wave come anywhere near a 96kHz capture? The on/off "edge" of the square wave is a much higher frequency (infinite), so it seems like 96kHz comes in handy there; and certainly, were you to mix two square waves together, you could imagine mixing them at 96kHz doing a better job.
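To make the "sum of sines" point concrete, here's a sketch of a band-limited square wave, i.e. the version of a square wave that a given sample rate can actually hold (the 1kHz fundamental is my arbitrary choice):

```python
import numpy as np

def bandlimited_square(f, fs, seconds=0.01):
    """Sum only the square wave's odd harmonics that fit below Nyquist."""
    t = np.arange(int(fs * seconds)) / fs
    x = np.zeros_like(t)
    k = 1
    while k * f < fs / 2:                  # stop at the Nyquist frequency
        x += np.sin(2 * np.pi * k * f * t) / k
        k += 2                             # square waves: odd harmonics only
    return 4 / np.pi * x, k - 2            # waveform, highest harmonic kept

for fs in (44100, 96000):
    _, highest = bandlimited_square(1000, fs)
    print(fs, highest)                     # 21st vs 47th harmonic of 1 kHz
```

The 96kHz capture does keep more of the edge, but every extra harmonic it keeps lies above 21kHz; whether any of that is audible is the question raised below.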


Whatever anti-aliasing method you use is going to change the signal slightly, and you are accumulating that error/distortion.

 

Just think of it as a feature of your music :cisfor:, like some trackers using different or no interpolation settings to emulate a certain lo-fi sound.


 

Where I was wrong was the idea that a higher sample rate file should sound better. In an ideal world this would be the case, because higher frequency content would stay high without distorting the lower frequencies. But due to the real-world construction of speakers, ultrasonics are apparently a major issue.

 

Yeah, the intermodulation that ultrasonics cause in anything less than ideal (i.e., insanely expensive and well-engineered) playback equipment introduces a greater problem (distortion) than the one it is supposed to solve (fidelity).

 

 

It seems like for waveforms that aren't sine waves it becomes a bit more ambiguous. For instance, while it's true that a square wave can be represented as the sum of an infinite number of sine waves, you are sampling at a lower rate. How can a 44.1kHz discrete capture of a square wave come anywhere near a 96kHz capture? The on/off "edge" of the square wave is a much higher frequency (infinite), so it seems like 96kHz comes in handy there; and certainly, were you to mix two square waves together, you could imagine mixing them at 96kHz doing a better job.

 

 

Is it possible for physical sound to go from an amplitude of zero to an audibly louder amplitude in an infinitely small time? Can a speaker cone/air waves/the ear drum move discretely? Or won't there always be a gradual shift/lag when amplitude changes?

 

If I'm correct in that assessment, then a real-world square wave will always have sloping "on/off" edges. And since sampling maintains the correct phase of the original signal, will there be an audible difference between this slope/lag being represented at a higher sample rate and it being implicitly recreated by the physical interpolation of the speaker/air/ear chain?
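You can put a number on that slope (a sketch assuming an ideal 22.05kHz band limit; the edge of an ideally lowpassed step is the sine integral, available in scipy):

```python
import numpy as np
from scipy.special import sici

nyquist = 22050                       # band limit of a 44.1 kHz capture
t = np.arange(-500, 500) / 1e7        # +/- 50 us on a 0.1 us grid

# The band-limited step edge: 0.5 + Si(2*pi*B*t)/pi
edge = 0.5 + sici(2 * np.pi * nyquist * t)[0] / np.pi

t10 = t[np.argmax(edge > 0.1)]        # first crossing of 10% of the swing
t90 = t[np.argmax(edge > 0.9)]        # first crossing of 90%
print((t90 - t10) * 1e6)              # ~19 microseconds: sloped, not vertical
```

So even a "perfect" 44.1kHz square wave edge takes around 19 microseconds to swing, before the speaker and the air add any lag of their own.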

 

I think this is where you can start to talk about the "golden ears" problem. :)


Guest Adam

I don't know what any of this bit stuff means.

Really? That is really bad. It is IMPORTANT to know all this bit stuff because it makes a HUGE difference to your music. Go read some audio engineering books. It is really important to know these things.


 

Why master when you can make it sound good from the start?

 

there has virtually never ever been a professional record that didn't warrant mastering

 

 

I wonder if there's any proper mastering on Namlook's more epic ambient stuff? There's little or no mix compression, that's for sure.


Higher sampling rates can be better only when processing files with low-quality plug-ins, because aliasing can be more easily introduced, and THAT'S ALL. About plug-in aliasing, try searching over at gearslutz.com... This subject is too big for my time and patience, sorry. The rest is marketing!

But that wasn't something the mastering guy was talking about at all.

 

The net is contaminated with lots of shit, be it political or technical. Places like gearslutz.com are also in most cases contaminated, don't be fooled, but there are lots of smart people over there.

 

...and for almost 100% of 'non-classical' music, 16-bit is enough too.

 

Not some kind of proof of CD sufficiency, but remember that vinyl playback is equivalent to about 12-13 bits of digital dynamic range in the theoretical best-possible situation, and cassettes were 6-7 bits, so go figure.
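The rough rule behind those bit figures is about 6dB of dynamic range per bit (a quick check of the numbers, assuming the standard 20*log10(2^n) conversion; the fractional bit counts are just the midpoints of the ranges above):

```python
import math

for name, bits in (("CD", 16), ("vinyl, best case", 12.5), ("cassette", 6.5)):
    print(name, round(20 * math.log10(2 ** bits), 1), "dB")
# CD 96.3 dB, vinyl ~75 dB, cassette ~39 dB
```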

 

Marketing or ignorance! Choose for yourself.


Guest chunky

before CDs and MP3s came about, mastering was important for vinyl recordings as the record could skip and waste tonnes of money for the record label and piss off customers etc. for putting out mp3s on bandcamp it's not such a big deal, it's not as if the mp3 will make your ipod skip? but there's still the aesthetic side of things to consider. wall bird already posted a link to a good book by bob katz. if i cared about mastering i would try to go and visit a mastering studio and watch a guy who does it every day go about his work. for music like we make on this site i don't think it's so important, it's all a bit overrated. if your music was that great it would be put out by a decent label and get mastered by a pro. if your music ain't so great that people want to pay for it then mastering ain't going to help.


Yeah, but if you are trying to squeeze a really nice sound out of a DAW, it seems like EQing the different elements is kind of important?


