
EQing Reverb and more



Guest mollekula

Hi guys, I've been making some soundscapes lately after a long break, and a question came up that I've always been curious about. I've made some pads, atmospheres, drones, cellos etc. The ambient soup comes out very nice, but I can hear a lot of annoying frequencies in there, mostly in the mid and high range. I try to EQ the sounds themselves, with an EQ as the first plugin in each track's FX chain. I cut some highs, but I don't want to kill the brightness and end up with a dull mix with no air, yet I still want that smooth velvet feel à la Steve Roach (haha, I must be out of my mind).

 

So, my question is: is it normal to put an EQ after the reverb, whether on sends or on inserts, to remove the frequencies that destroy the velvet feel of the mix, or to boost some others? I'm not a mixing genius and don't have the necessary knowledge, but while experimenting that's the only way I've found so far to remove the irritating artifacts of the reverberated sounds. Also, when I make cuts on many sounds in the 300-450 Hz area the mix feels more pleasant to listen to; if you have tips on issues of this kind I'd appreciate those too.

 

To sum up, do you guys (especially those of you who make ambient environments) use any signal processing after reverb, like EQ/dynamics/panning/LFO or anything else (chorus/flange???), especially when it's hooked up on sends, where you have more freedom to process the signal? If yes, a few words about where and why would be highly appreciated. If you have other approaches, please share them if you can. Thanks in advance.


I cut a lot of the high and low frequencies of my reverb (using the native filter in the Ambience VST). I really don't like the way high frequencies sound with reverb, and low frequencies just make things muddy. You don't really lose as much as you'd expect, this way.
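If you are doing the same thing with a stock EQ on the reverb return instead of the reverb's own filter, the idea is just a high-pass plus a low-pass on the wet signal. Here is a rough Python/NumPy sketch of that; it has nothing to do with Ambience's actual filter, and the corner frequencies are arbitrary starting points, not a recommendation:

import numpy as np
from scipy.signal import butter, sosfilt

def trim_reverb_return(wet, sr, low_cut=250.0, high_cut=6000.0):
    # High-pass to clear the mud, low-pass to tame the fizz, on the wet return only.
    sos_hp = butter(2, low_cut, "highpass", fs=sr, output="sos")
    sos_lp = butter(2, high_cut, "lowpass", fs=sr, output="sos")
    return sosfilt(sos_lp, sosfilt(sos_hp, wet))

# Quick test on a decaying-noise stand-in for a reverb tail
sr = 44100
tail = np.random.randn(2 * sr) * np.exp(-np.linspace(0, 6, 2 * sr))
trimmed = trim_reverb_return(tail, sr)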

 

As for post-reverb effects, I'm a sucker for filter sweeps and phaser after reverb. Phaser in particular ties everything together into a nice psychedelic wash.

Oh, and a nice subtle post-reverb detune LFO is something I abuse quite often as well.
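In case it helps to see what a post-reverb detune LFO boils down to: it's basically a short delay line whose length gets wobbled by a slow LFO, mixed back against the unmodulated signal. A minimal NumPy sketch, with rate and depth values that are just a starting point rather than anyone's preset:

import numpy as np

def detune_lfo(x, sr, rate_hz=0.3, depth_ms=2.0, base_ms=10.0):
    # Fractional delay whose length is modulated by a slow sine LFO.
    n = np.arange(len(x))
    delay = (base_ms + depth_ms * np.sin(2 * np.pi * rate_hz * n / sr)) * sr / 1000.0
    read_pos = np.clip(n - delay, 0, len(x) - 1)
    return np.interp(read_pos, n, x)

sr = 44100
pad = np.sin(2 * np.pi * 220 * np.arange(2 * sr) / sr)   # stand-in for a reverbed pad
wobbled = 0.5 * pad + 0.5 * detune_lfo(pad, sr)          # subtle detune against the original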


As long as it sounds good you can do whatever you want to.

But most reverbs have high and low-cut sliders so you shouldn't have to put an EQ before the reverb... but of course, sometimes it's easier to control an EQ than the high- and low-cut on a reverb plugin... sooooo whatever floats your boat, I guess.

 

But as for your question about putting an EQ after the reverb - I wouldn't do that. I would put it before the reverb plugin, so you don't fuck up the signal coming from the reverb.

 

 

Also, I would never ever use any kind of flanger on any of my ambient tracks, because I want them to sound as natural and organic as possible. And if there's one thing you can say about a flanger, it's that it doesn't sound natural at all.


Guest mollekula

Thanks a lot for the replies, folks. I used to EQ before the reverb, and simultaneously tweak the reverb's native EQ controls if necessary, which is what most people I've asked told me they do. But I've always sensed a different feeling when listening to ambient soundscapes by Steve Roach or Robert Rich, for example. For God's sake, in no way am I making comparisons, and these ambient monsters obviously use other equipment, hardware and FX racks. I'm just trying to take a lesson here and understand what's going on from the mixing side of things.

 

Sometimes, just to experiment, I removed all the high frequencies from a sound/atmosphere (pre-reverb), and it did reduce the irritating post-reverb frequencies, but I always felt the sound wasn't smooth enough and stayed very bright compared to what I hear from Roach and Rich. There's no doubt a great deal of good mastering involved, but good mastering needs a good mix first, so I guess all the frequencies are solid before these guys master their music. Without having read anything about it, acting purely on instinct, I tried putting a parametric EQ after the reverb, applying a high-shelf/high-cut curve and making cuts in various areas of the low/mid/high spectrum. I know it needs careful treatment so the sound isn't killed in the end, but I find that both pre-reverb and post-reverb EQing is necessary to sculpt a really pleasing sound, unless I'm doing something really wrong when making the sounds in the first place.

 

So post-reverb EQing is actually used and isn't taboo?


Guest ryanmcallister

I'd say it really depends on how you're using the reverb. If you're using something like Altiverb for high-quality, realistic room simulation, anything done after the verb will be tweaking something that has (supposedly) already been tweaked to perfection. If your sound is too bright in a room you're otherwise happy with, EQ the sound pre-reverb to preserve the high-frequency content coming from the reverb itself. If the room happens to have some stuff you don't like, the best corrective work is done within the reverb itself, but if that's not an option, post-EQ can work too. Consider bussing it in a way that lets you EQ the wet reverb signal independently of the dry signal.
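To illustrate that last routing point, here is a rough sketch of a send setup where the EQ only ever touches the wet return, with a synthetic decaying-noise impulse response standing in for a real reverb. The function names, send level and cutoff are all made up for the example:

import numpy as np
from scipy.signal import butter, sosfilt, fftconvolve

def fake_ir(sr, seconds=2.0):
    # Decaying noise as a stand-in impulse response; a real one would come from the reverb.
    t = np.linspace(0, seconds, int(sr * seconds))
    return np.random.randn(len(t)) * np.exp(-3 * t)

def send_with_wet_eq(dry, sr, send_level=0.4, wet_high_cut=5000.0):
    wet = fftconvolve(dry * send_level, fake_ir(sr))[:len(dry)]   # the send into the "reverb"
    sos = butter(2, wet_high_cut, "lowpass", fs=sr, output="sos")
    return dry + sosfilt(sos, wet)                                # EQ the wet only, dry untouched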

 

On the flip side, if you do what I often do, using reverb as a sound design tool with no boundaries like realism or whatnot, then do whatever you like. I always throw a huge reverb on my pads to stretch them out, or to smooth out some heavily timestretched audio, and that becomes part of the sound source itself. I treat stuff like this as an oscillator on a synth: throw a filter in afterwards and subtract whatever you want. Sometimes I even resample that and put it into a sampler, where the entire reverb is on a par with a waveform in a synth. You can even use two reverbs for this, one for creating your pads and a second one for putting them in a space... ya feel me?

 

Long story short, it's very common to EQ after a reverb, and anyone telling you otherwise is just citing personal preference. I'm on board with you though: I rarely enjoy a full reverb, I like things dampened quite a bit to smooth them out.


Glad I pulled up this thread, it actually answered a few things I have been wondering myself.

 

 

I don't want to start a new thread, so I'll just ask this here: can someone explain to me how to effectively apply compression to rogue frequencies? I understand how compression works now, but I'm still inexperienced with actually applying it, if that makes sense.


Guest mollekula

Can you give examples of the specific sound you're trying to make?

 

 

Just uploaded two tracks, guys; this is something I've been working on and the sound I'm trying to achieve. They're works in progress and the mixing isn't anything special, you might even notice some undesired frequencies. I might try to give these tracks a release in the future once reworked, so any feedback would be highly appreciated.

http://soundcloud.com/mollekula


Guest ryanmcallister

Glad I pulled up this thread, it actually answered a few things I have been wondering myself.

 

 

I don't want to start a new thread, so I'll just ask this here: can someone explain to me how to effectively apply compression to rogue frequencies? I understand how compression works now, but I'm still inexperienced with actually applying it, if that makes sense.

Take a look into multiband compression. This will allow you to select a range of frequencies and apply compression independently of the rest of the spectrum.
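If it helps, here is a bare-bones sketch of what a multiband compressor is doing under the hood. It has nothing to do with any particular plugin, the 300-450 Hz band is just the range mentioned earlier in the thread, and the three-way split here is a naive one (see the crossover discussion further down):

import numpy as np
from scipy.signal import butter, sosfilt

def band_split(x, sr, lo=300.0, hi=450.0):
    # Naive three-way split; note it will not sum back perfectly flat.
    low = sosfilt(butter(4, lo, "lowpass", fs=sr, output="sos"), x)
    band = sosfilt(butter(4, [lo, hi], "bandpass", fs=sr, output="sos"), x)
    high = sosfilt(butter(4, hi, "highpass", fs=sr, output="sos"), x)
    return low, band, high

def compress(x, sr, thresh_db=-30.0, ratio=4.0, attack=0.01, release=0.2):
    # Bare-bones envelope-follower compressor: no knee, no lookahead, no makeup gain.
    x = np.asarray(x, dtype=np.float64)
    a_att, a_rel = np.exp(-1.0 / (attack * sr)), np.exp(-1.0 / (release * sr))
    env, level = np.zeros_like(x), 0.0
    for i, s in enumerate(np.abs(x)):
        coeff = a_att if s > level else a_rel
        level = coeff * level + (1 - coeff) * s
        env[i] = level
    over = np.maximum(20 * np.log10(np.maximum(env, 1e-9)) - thresh_db, 0.0)
    return x * 10 ** (-over * (1 - 1 / ratio) / 20)

def tame_band(x, sr):
    low, band, high = band_split(x, sr)
    return low + compress(band, sr) + high   # only the rogue band gets squeezed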


Oh yeah, one thing that's nice about EQing reverb is when the only low- and high-cut controls the reverb gives you are damping controls rather than straight cuts. It's actually kind of a bonus, because then you get regular EQ control on top of the damping behaviour, so you can shape the overall spectral content of the reverb as well as how it moves over time.

 

Slightly off-topic: I guess the consensus is that the primary use of EQ is targeting different parts of the body like head, chest, groin, etc., as well as the obvious stuff: eliminating annoying frequencies, reducing muddiness, balancing, that kind of thing. But lately I like to think of the low end as describing a sort of mass, and the rest as describing distance or detail. Anyone else ever think of it this way?


Guest ryanmcallister

or get a filterbank and use a regular compressor

Definitely, but this is sort of makeshift. I'm not sure of the math behind it, but keep in mind that you'll have to be fairly specific with the crossover frequencies and filter slopes to avoid dips and/or bumps in your frequency spectrum. In my experience, splitting the frequencies by hand has always altered the sound before any effects other than the EQs were even applied to it.

 

[Image: Linkwitz-Riley vs. Butterworth crossover comparison]

 

On the other hand, using a multiband compressor you'll find that, before dialing in any compression, the output sounds identical to the input. In Ableton I use the multiband compressor flat, with no compression dialed in, just to split the signal into frequency bands for separate processing chains.
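The dips-and-bumps point is easy to check numerically: sum the two halves of a crossover and look at the level around the crossover frequency. A plain 2nd-order Butterworth pair cancels hard right at the crossover, while a Linkwitz-Riley 4 (each half is the same Butterworth cascaded twice) sums back to flat, which is roughly what the image above is comparing. A quick scipy sketch, with an arbitrary 1 kHz crossover:

import numpy as np
from scipy.signal import butter, sosfreqz

fs, fc = 44100, 1000.0
freqs = np.linspace(20, 20000, 2000)

_, h_lp = sosfreqz(butter(2, fc, "lowpass", fs=fs, output="sos"), worN=freqs, fs=fs)
_, h_hp = sosfreqz(butter(2, fc, "highpass", fs=fs, output="sos"), worN=freqs, fs=fs)

butterworth_sum = h_lp + h_hp        # 2nd-order Butterworth halves: deep notch at fc
lr4_sum = h_lp ** 2 + h_hp ** 2      # LR4 = each half cascaded twice: sums flat

def db(h):
    return 20 * np.log10(np.maximum(np.abs(h), 1e-12))

i = np.argmin(np.abs(freqs - fc))
print("Butterworth pair at crossover: %.1f dB" % db(butterworth_sum)[i])   # big dip
print("Linkwitz-Riley 4 at crossover: %.1f dB" % db(lr4_sum)[i])           # roughly 0 dB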


Guest ryanmcallister

But lately I like to think of the low end as describing a sort of mass, and the rest as describing distance or detail. Anyone else ever think of it this way?

I like this way of thinking as well. I've done a little film sound design work, and it's quite amazing how EQ can be used for placement on a sound stage. If a sound has more low and high end content, it sounds much closer. By automating the EQ to sweep out the highs and lows, you can literally hear the sound moving away from you.
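A crude static version of that distance trick, if anyone wants to try it: render the same sound "near" and "far" by rolling the top end (and a touch of the bottom) off with distance and dropping the level a little. The curve values below are invented for illustration; in practice you would automate them over time rather than switch between two snapshots:

import numpy as np
from scipy.signal import butter, sosfilt

def at_distance(x, sr, far=0.0):
    # far = 0.0 is up close, far = 1.0 is pushed well back.
    high = 16000.0 * (2500.0 / 16000.0) ** far   # top end rolls off with distance
    low = 40.0 * (160.0 / 40.0) ** far           # a little of the bottom goes too
    sos = butter(2, [low, high], "bandpass", fs=sr, output="sos")
    return (1.0 - 0.4 * far) * sosfilt(sos, x)   # and it gets a bit quieter

sr = 44100
pad = np.sin(2 * np.pi * 220 * np.arange(2 * sr) / sr)   # any mono source works here
near, distant = at_distance(pad, sr, 0.0), at_distance(pad, sr, 1.0)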

 

Interestingly enough, something a lot of guys aren't aware of is that you can actually make sound move in a 3D field: obviously left to right through panning, and front to back through EQ and reverb, but also up and down. Our ears naturally filter the sound around us differently depending on the direction it's coming from; that's how we hear the difference between something behind us and something in front of us. This can be mimicked by EQing a sound if you know those frequency response curves, and it's how those "3D sound" tricks work. In everyday practice it's quite difficult to move a sound all the way behind you, but moving it up and down in front of you is quite attainable. All thanks to clever EQ. ;-)


Guest mollekula

But lately I like to think of the low end as describing a sort of mass, and the rest as describing distance or detail. Anyone else ever think of it this way?

Interestingly enough, something a lot of guys aren't aware of is that you can actually make sound move in a 3D field: obviously left to right through panning, and front to back through EQ and reverb, but also up and down. [...] All thanks to clever EQ. ;-)

 

 

David Gibson's The Art of Mixing video talks about a similar approach; after watching it I understood a lot about how sound sits in a 3D stereo field. What you say sounds very interesting. Could you please offer some more details and examples, or any links to material worth reading? That would be great.


What you say sounds very interesting. Could you please offer some more details and examples, or any links to material worth reading?

 

Yes, please!


I always use reverb sends and I usually filter out a lot of the lows and tame the highs a bit.

 

Flangers and Choruses are pretty standard but neat. Try automating the gain and/or tying it to an LFO. Reverbs are great for adding subtle supportive rhythms.
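The gain-tied-to-an-LFO part is about as simple as it sounds: a slow sine riding the level of the send or return (or the effect's own gain). A minimal sketch, with an arbitrary rate and depth:

import numpy as np

def lfo_gain(x, sr, rate_hz=0.25, depth=0.5):
    # depth = 0 leaves the level alone; depth = 1 swings between silence and full level.
    t = np.arange(len(x)) / sr
    lfo = 0.5 * (1.0 + np.sin(2 * np.pi * rate_hz * t))   # 0..1
    return x * ((1.0 - depth) + depth * lfo)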

 

Rad thread so far.


Concerning the 3D audio thing, it reminds me of binaural recording, which is basically recording audio with microphones placed in the ears of a head (or dummy head), thus capturing the filtering from the head, outer ears and so on. This can also be simulated digitally:

http://en.wikipedia.org/wiki/Head-related_transfer_function

http://www.fluxhome.com/products/plug_ins/ircam_hear

Of course there are many other spatializing techniques and devices, but this seems the most realistic one to me.
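For the simulated version, the basic move is just convolving a mono source with a measured left/right head-related impulse response pair. The filenames below are hypothetical placeholders; real HRIR sets come from databases like the ones linked above:

import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Hypothetical HRIR pair for one direction (say, 30 degrees to the left at ear level).
_, hrir_l = wavfile.read("hrir_left_az-30_el0.wav")
_, hrir_r = wavfile.read("hrir_right_az-30_el0.wav")

sr, mono = wavfile.read("pad_mono.wav")          # the mono sound to place in space
mono = mono.astype(np.float64)

left = fftconvolve(mono, hrir_l.astype(np.float64))
right = fftconvolve(mono, hrir_r.astype(np.float64))
out = np.stack([left, right], axis=1)
out /= np.max(np.abs(out))                       # normalize before writing
wavfile.write("pad_binaural.wav", sr, (out * 32767).astype(np.int16))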


Guest mollekula
Flangers and Choruses are pretty standard but neat. Try automating the gain and/or tying it to an LFO

 

Could you please give some more details if possible?

 

 

 

Reverbs are great for adding subtle supportive rhythms

 

Very interesting, what do you usually do within your workflow to achieve this?
