
FM Synthesis (techniques, anecdotes)



Guest skibby

 

maybe they got embarrassed cause none of their FM synths were technically FM, but PM

afaik, this is so you can do feedback to get sawtoothy sounds. goes funky with fm. it's well accepted that pm is essentially the exact same sound, though.
Oh I agree, but it's technically a misnomer because of the implementation, that's all. More of a trademark, right?
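On the feedback-for-sawtooth point above, here's a minimal Python sketch (illustrative only, not any particular synth's implementation): a single sine operator whose previous output sample is fed back into its own phase, which skews the spectrum toward a saw-like shape as the feedback amount rises.

import numpy as np

SR = 44100  # assumed sample rate

def feedback_pm(freq, feedback, n_samples):
    """One sine operator with phase feedback: the previous output sample
    nudges the current sample's phase. Higher feedback gives a brighter,
    more saw-like spectrum."""
    out = np.zeros(n_samples)
    prev = 0.0
    for i in range(n_samples):
        out[i] = np.sin(2 * np.pi * freq * i / SR + feedback * prev)
        prev = out[i]
    return out

saw_ish = feedback_pm(freq=110.0, feedback=1.3, n_samples=SR)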

  • 4 weeks later...
Guest skibby

Lazer bass is fun for fm

 

 

yeah, what is the secret behind that thing? why does it sound so smooth?


  • 2 weeks later...
Guest skibby

recap, update, thru zero thoughts:

 

Digital phase modulation / frequency modulation in DSP is achieved by turning a -1 to 1 floating-point value (perhaps from another audio signal) into a delay amount applied to the signal you want to affect, pushing and pulling the sound in time, as best illustrated by a 100% wet chorus effect. Phase modulation is the physical process, and the unit of measurement for any modulation rate is frequency, so technically what is occurring is phase modulation at a frequency of x. Frequencies can be compared using ratios if you like, but not all signals have a static frequency ratio; a full-spectrum audio stream, for example, doesn't. I personally don't believe that frequency modulation is the best term to describe what's happening.
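Here's a minimal Python sketch of that idea, assuming a single-cycle sine wavetable (names and parameter values are made up for illustration): the -1 to 1 modulator value simply pushes and pulls the carrier's read position in the table.

import numpy as np

SR = 44100
TABLE_SIZE = 2048
table = np.sin(2 * np.pi * np.arange(TABLE_SIZE) / TABLE_SIZE)  # single-cycle sine

def pm_osc(carrier_hz, mod_hz, mod_index, n_samples):
    """Phase modulation: a -1..1 modulator offsets the carrier's read
    position in the wavetable, pushing/pulling it in time."""
    t = np.arange(n_samples) / SR
    carrier_phase = (carrier_hz * t) % 1.0       # carrier phase, 0..1
    modulator = np.sin(2 * np.pi * mod_hz * t)   # the -1..1 modulating signal
    read_phase = (carrier_phase + mod_index * modulator) % 1.0
    idx = (read_phase * TABLE_SIZE).astype(int)  # plain truncation, no interpolation
    return table[idx]

out = pm_osc(carrier_hz=220.0, mod_hz=440.0, mod_index=0.3, n_samples=SR)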

I think through-zero is essentially a small delay (1/2 of the total modulation period) applied to the dry signal, so that the phase modulation can virtually go "back in time" 90 degrees rather than simply adding the delay amount to the signal. The through-zero effect might be a dramatic one, because phase makes a big difference when the wet and dry signals get mixed back together.
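And a speculative sketch of that through-zero guess, reading "half the modulation period" as half the modulation depth: the dry path gets a constant centre delay, and the bipolar modulator then swings the read point both earlier and later than that centre. This only illustrates the idea above, not how any particular synth implements it.

import numpy as np

def thru_zero_delay(dry, modulator, max_delay_samples):
    """Modulated delay with a constant centre offset of half the depth, so
    the read point can swing both "back in time" and forward relative to
    that centre. Linear interpolation between neighbouring samples."""
    n = len(dry)
    centre = max_delay_samples / 2.0
    out = np.zeros(n)
    for i in range(n):
        delay = centre + modulator[i] * centre   # swings between 0 and max
        pos = i - delay
        if pos < 0:
            continue                             # not enough history yet
        j = int(pos)
        frac = pos - j
        nxt = dry[j + 1] if j + 1 < n else dry[j]
        out[i] = dry[j] * (1.0 - frac) + nxt * frac
    return out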

Correct me if I'm wrong.

Edited by skibby

That could make sense, yeah. Maybe it's not necessary to have a delay at all; for example, if you have a single-cycle wave lookup table you can move the pointer backwards just as easily as forwards.

Still have no idea how it sounds tho lol.
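On the "just move the pointer backwards" idea, a quick illustrative sketch: a phase accumulator whose per-sample increment is allowed to go negative walks the wavetable in reverse thanks to the modulo wrap, with no delay line involved.

import numpy as np

SR = 44100
TABLE_SIZE = 2048
table = np.sin(2 * np.pi * np.arange(TABLE_SIZE) / TABLE_SIZE)

def tz_pointer_osc(increments):
    """Phase accumulator that accepts negative increments: the modulo wrap
    works the same way forwards and backwards, so "negative frequency" is
    just the pointer walking the other direction."""
    phase = 0.0
    out = np.empty(len(increments))
    for i, inc in enumerate(increments):
        out[i] = table[int(phase * TABLE_SIZE) % TABLE_SIZE]
        phase = (phase + inc) % 1.0  # Python's % keeps this in 0..1 even for negatives
    return out

# sweep the increment through zero: forwards, momentarily stopped, then backwards
incs = np.linspace(440 / SR, -440 / SR, SR)
out = tz_pointer_osc(incs)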


Guest skibby

If the signal isn't interpolated, it will sound like static; in other words, mega aliasing.

I never fully understood oversampling. I think I get the gist of it (you have to sample at twice the highest frequency to avoid aliasing...?), but I never understood the nuts and bolts of it.

 

Like FM synthesis, it's probably one of the many things I'll never get around to fully understanding.


Guest skibby

Actually, interpolation is most needed when a waveform is stretched, because the samples in between need to be created; again, I'm talking about DSP and digital PM. Aliasing happens inevitably with PM because the points in the waveform will appear as noise to the DAC.
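For what it's worth, the difference in sketch form (illustrative Python, not any specific engine): a truncated table read versus a linearly interpolated one that creates the samples in between.

import numpy as np

TABLE_SIZE = 2048
table = np.sin(2 * np.pi * np.arange(TABLE_SIZE) / TABLE_SIZE)

def read_truncated(phase):
    """Nearest-sample read: the fractional part of the position is thrown
    away, which is where the gritty/aliased character comes from."""
    return table[int(phase * TABLE_SIZE) % TABLE_SIZE]

def read_interpolated(phase):
    """Linear interpolation: blend the two neighbouring samples according
    to the fractional position, filling in the samples in between."""
    pos = phase * TABLE_SIZE
    i = int(pos) % TABLE_SIZE
    frac = pos - int(pos)
    j = (i + 1) % TABLE_SIZE
    return table[i] * (1.0 - frac) + table[j] * frac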

Edited by skibby

What I don't get about phase modulation is how phase translates to delay. A 90-degree phase shift is going to represent half as much delay for a 200 Hz signal as for a 100 Hz signal, right? And that being the case, calling a delay on a complex signal (i.e. more than one harmonic) a uniform phase shift seems totally inaccurate.
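The arithmetic in that question checks out: for a single sine at frequency f, a phase shift of θ degrees corresponds to a delay of (θ/360)/f seconds.

# delay implied by a 90-degree phase shift at two different frequencies
for f in (100.0, 200.0):
    delay_ms = (90.0 / 360.0) / f * 1000.0
    print(f"90 deg at {f:.0f} Hz -> {delay_ms:.2f} ms")
# 90 deg at 100 Hz -> 2.50 ms
# 90 deg at 200 Hz -> 1.25 ms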


Limpy, just think of oversampling as increasing the resolution. It won't be a perfect correlation of oversampling to quality, but it definitely has a lot of benefits. It might seem impossible to "force" a signal to a higher resolution, but when you do it while applying some effects it can sound quite a bit better. You'll want to make sure the upsampling algorithm is decent or you could be doing more harm than good. You're basically doing "extra" processing, so chances are you'll get some increase in quality.
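A rough sketch of what doing the processing "at a higher resolution" looks like in code, assuming SciPy is available (the tanh waveshaper and the 4x factor are just placeholders):

import numpy as np
from scipy.signal import resample_poly  # polyphase up/down-sampling

def distort_oversampled(x, factor=4):
    """Run a nonlinearity at a higher internal rate: upsample, shape, then
    filter and decimate back down. The extra harmonics the shaper creates
    now have headroom above the original Nyquist instead of folding straight
    back into the audible band."""
    up = resample_poly(x, factor, 1)         # upsample (includes anti-imaging filter)
    shaped = np.tanh(3.0 * up)               # placeholder nonlinear processing
    return resample_poly(shaped, 1, factor)  # band-limit and return to the original rate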

 

 

this is making me feel dumb

i prolly just need to read up on it

 

 

Okay, so, D/A converters have an LPF at 20 kHz or thereabouts, right?

So why do you ever need to oversample beyond 2x Nyquist (i.e. ~40 kHz)?

Edited by LimpyLoo

Usually that's because a perfect filter at 20 kHz is very difficult/expensive to make. If you go up to 40 kHz you can use a much shittier filter and still get good results.

 

I don't think that really applies to synths, though; with synthesis it's more a question of smoothing out the jagged edges/clicks.

Edited by th555

If recording through a non-oversampling converter at 44.1k, the signal has to be low-passed at 20-22 kHz to prevent aliasing. An LPF at this frequency will cause audible phase problems and/or audible high-end rolloff, which is why recording at 44.1/48 sounds worse than 88.2/96 on a non-oversampling (older) converter. By oversampling, the LPF can be set at a higher frequency where its artifacts are inaudible.

 

But for plug-ins it's a different issue. Saturation and other modeling algorithms can create harmonics that go above the Nyquist point and cause aliasing. By oversampling, the plugin designer can avoid having to low-pass the audio (causing artifacts) or leave the aliasing audible. The actual amount of oversampling depends on the specific design of the plugin, so it might need to be 2x, 4x, or more to avoid aliasing.
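A quick way to see that folding with made-up example numbers: compute where harmonics added by a saturator land once they reflect around Nyquist at 44.1 kHz.

SR = 44100.0

def aliased(f, sr=SR):
    """Frequency a component folds back to when sampled at sr (real signal)."""
    f = f % sr
    return sr - f if f > sr / 2 else f

# harmonics a saturator might add to a 9 kHz tone
for h in range(1, 6):
    f = 9000.0 * h
    print(f"harmonic {h}: {f / 1000:.1f} kHz -> {aliased(f) / 1000:.2f} kHz")
# the 3rd harmonic (27 kHz) folds down to 17.1 kHz; at 4x oversampling the
# internal Nyquist is 88.2 kHz, so it stays put and can be filtered off
# before decimating back to 44.1 kHz.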


Guest skibby

What I don't get about phase modulation is how phase translates to delay. A 90-degree phase shift is going to represent half as much delay for a 200 Hz signal as for a 100 Hz signal, right? And that being the case, calling a delay on a complex signal (i.e. more than one harmonic) a uniform phase shift seems totally inaccurate.

 

The only way to create a ratio of harmonics is if the oscillators used for the carrier and modulator have a fixed frequency. Non-modulated phase shift works best with a signal that has a static frequency, since complex audio signals don't have a defined wave cycle. Modulated phase shift works on its own depth/amount, which is arbitrary. With PM/FM synthesis, static waveforms are used, and they have a fixed frequency in Hz that can be changed, sometimes as a ratio and sometimes by setting an arbitrary frequency. A 90-degree phase shift, for example, can be dialed in on some FM synths like the TG77, and this only defines the beginning/end of the sampled waveform before it reaches the tuning and ratio stage.
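For the fixed-ratio case, a minimal two-operator PM sketch (illustrative Python, not the TG77's actual engine): the modulator tracks the carrier at a set ratio, so the sidebands stay harmonically related as the note changes pitch.

import numpy as np

SR = 44100

def two_op_pm(base_hz, ratio, index, seconds=1.0):
    """Two-operator PM: modulator frequency = carrier frequency * ratio,
    and index sets the modulation depth (how far the phase gets pushed)."""
    t = np.arange(int(SR * seconds)) / SR
    modulator = np.sin(2 * np.pi * base_hz * ratio * t)
    return np.sin(2 * np.pi * base_hz * t + index * modulator)

note = two_op_pm(base_hz=110.0, ratio=2.0, index=1.5)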

