
Autechre production methods speculation


zlemflolia


Guest Morphy

haven't listened to elseq yet, but feed1 sounds like it maybe has oscillator-driven delay modulation, which is something you can do to try and make atonal sound sources more in tune. I recall hearing Karplus-Strong synthesis on Exai. I think a lot of the granular stuff you hear is just delay-line wrangling. Overall I think delay-based DSP is common throughout.


feed1 sounds like it maybe has oscillator driven delay modulation which is something you can do to try and make atonal sound sources more in tune. i recall hearing karplus strong synthesis on Exai. I think a lot of the granular stuff you hear is just delay line wrangling. overall I think the delay based dsp is common throughout.

Yeah, they have been doing this kind of delay modulation on almost every release as far back as Tri Repetae. The control of it has just become a lot more sophisticated over the years.
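For anyone wondering what oscillator-driven delay modulation actually looks like, here's a minimal Python sketch (nothing Ae-specific; all parameter names and values are made up): sweeping a delay line's read position with an audio-rate oscillator imposes pitched sidebands on whatever goes in, which is one way to pull atonal material toward a perceived pitch.

```python
import math

def osc_modulated_delay(signal, sr=44100, base_delay=0.010,
                        depth=0.005, mod_hz=110.0, feedback=0.5):
    """Feedback delay line whose delay time is swept by an audio-rate
    sine oscillator. At audible mod rates this imposes pitched sidebands
    on the input. Parameters are illustrative, not anyone's real patch."""
    buf = [0.0] * len(signal)   # stores the wet output for feedback reads
    out = []
    for n, x in enumerate(signal):
        # the oscillator modulates the delay time around base_delay
        d = base_delay + depth * math.sin(2 * math.pi * mod_hz * n / sr)
        idx = n - int(d * sr)
        delayed = buf[idx] if idx >= 0 else 0.0
        y = x + feedback * delayed
        buf[n] = y
        out.append(y)
    return out
```

With `depth=0` this collapses to a plain feedback delay; cranking `depth` and pushing `mod_hz` into the audio range is where the pitch-smearing effect lives.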


So do AE use Max/MSP for sound synthesis, or just for controlling external peripherals? I've been learning Max/MSP and reading up on how to make high-quality-sounding synths, and apparently it's riddled with issues, including aliasing, scheduler slop dropping events, and things like that. Is this the case? And if so, how are these problems alleviated? Clever programming?

 

Found this snippet from the interview Rob did with The Quietus right after L-event was released:

 

"This time we decided to try and get the synthesis in-board as opposed to outboard, with a big overhaul of the system and a re-design or a rethink, and that took us a lot of time. We got really busy with it, really deep."

 

Does "in-board" here mean in Max, or just software-based? I dunno, but I'm pretty sure at one point recently they said everything was Max now; can't remember where, though. Also, wasn't there a point in the AAA where someone said the synths on recent releases sound really thick and analogue, and either Sean or Rob replied that that was a compliment, because it was all digital but they had tried to make it sound analogue? Something along those lines, I think.

 

 

feed1 sounds like it maybe has oscillator driven delay modulation which is something you can do to try and make atonal sound sources more in tune. i recall hearing karplus strong synthesis on Exai. I think a lot of the granular stuff you hear is just delay line wrangling. overall I think the delay based dsp is common throughout.

Yeah, they have been doing this kind of delay modulation on almost every release as far back as Tri Repetae. The control of it has just become a lot more sophisticated over the years.

 

 

artov chain is a perfect example of this. The delay just completely envelops the track.


 

feed1 sounds like it maybe has oscillator driven delay modulation which is something you can do to try and make atonal sound sources more in tune. i recall hearing karplus strong synthesis on Exai. I think a lot of the granular stuff you hear is just delay line wrangling. overall I think the delay based dsp is common throughout.

Yeah, they have been doing this kind of delay modulation on almost every release as far back as Tri Repetae. The control of it has just become a lot more sophisticated over the years.

 

 

Funny. You can track their delay-abuse evolution through the Boss RSD-10. On Tri Repetae's "Rsdio" they used it just like a plain delay module, without using the pitch input.

"Pen Expers" — an Oberheim DMX goes into the RSD-10, and then some of the signal goes back into the pitch input.

"6IE.CR" — according to the AAA, a Nord plays tones into the RSD-10's pitch input, and the CR-8000 is fed through the RSD-10.


So they use Max/Gen psn? From what I've gathered online Gen is a way to compile plugins that then work outside of Max.

I believe in the past Max would deal with things at the buffer level, whereas Gen allows you to tweak things at the sample level. So it's mainly just a bit lower-level, and thus offers even greater flexibility (I believe it'd be analogous to Reaktor objects vs. Core level).

  • 2 weeks later...

Funny. You can track their delay abuse evolution with boss rsd-10. In Tri Repetae's rsdio they used it just like plain delay module without pitch input usage.

Ha, until you mentioned it I never realised that RSDIO == RSD10


I believe that they use this one weird trick a lot on elseq:

samples of stereo reverb'd sounds as a source for granular processing.

It's basic, but it's an easy explanation for the spatial-sounding stuff.

 

I mean, they probably have the world's largest collection of recordings featuring mad synth into immense reverb.

Easy to throw into some granular thing, profit guaranteed.
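The "grains from a pre-reverb'd sample" idea is simple enough to sketch. Below is a hedged Python toy (all names and numbers are illustrative): short windowed grains are picked from a source sample at one set of positions for the left channel and a different set for the right, and that per-channel dissimilarity is what reads as width.

```python
import math
import random

def grain_cloud(sample, grain_len=512, n_grains=40, seed=7):
    """Scatter Hann-windowed grains picked from `sample` (imagine a
    pre-recorded 'synth into immense reverb' take) across a stereo pair.
    Picking different grains per channel decorrelates the two sides.
    Purely illustrative, not anyone's actual patch."""
    rng = random.Random(seed)           # seeded, so the cloud is reproducible
    out_len = len(sample)
    left, right = [0.0] * out_len, [0.0] * out_len
    window = [0.5 - 0.5 * math.cos(2 * math.pi * i / grain_len)
              for i in range(grain_len)]  # Hann window avoids clicks
    for chan in (left, right):
        for _ in range(n_grains):
            src = rng.randrange(out_len - grain_len)  # where the grain is read
            dst = rng.randrange(out_len - grain_len)  # where it lands in time
            for i in range(grain_len):
                chan[dst + i] += window[i] * sample[src + i]
    return left, right
```

Same seed, same cloud every time; changing the seed, grain length, or density per channel is where the "modulate wideness by increasing dissimilarity" idea comes in.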


I believe that they use this one weird trick a lot on elseq:

samples of stereo reverb'd sounds as a source for granular processing..

it's basic, but it's an easy explanation for the spatial-sounding stuff..

 

I mean they must probably have the world's largest collection of recordings featuring mad synth into immense reverb.

can easily throw into some granular thing, profit guaranteed.

 

Yeah, I was thinking this morning that a lot of it sounds like taking only the wet reverb signal and applying an envelope to it, or using it as granular source material like you said.

 

And that the extreme stereo widening in parts comes from dissimilarities between the grain clouds in each stereo channel, so they can modulate the width by increasing the dissimilarity.

 

(Sorry if this is obvious, I'm still new to this.)

 

Side thought: Sean said they never use randomness because it sucks, but what's wrong with using random LFOs for subtle modulations, or randomness in general within timbres as opposed to sequencing?


My theory is that with randomness, if you get a sequence you like and haven't recorded it, then you're stuck, whereas with a deterministic algorithm the sequence is reproducible.


Actually, if you leave the seed (https://en.wikipedia.org/wiki/Random_seed) unchanged, a pseudorandom algorithm keeps generating exactly the same sequence of numbers. Only once you change the seed does the output change as well.
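The seed point is easy to demonstrate in a few lines of Python (whose `random` module is a Mersenne Twister PRNG, not C's `rand()`, but the seeding behaviour is the same idea):

```python
import random

# Two generators with the same seed emit identical "random" sequences...
a = random.Random(42)
b = random.Random(42)
seq_a = [a.random() for _ in range(5)]
seq_b = [b.random() for _ in range(5)]
print(seq_a == seq_b)   # True: fully reproducible

# ...while a different seed gives a different sequence.
c = random.Random(43)
print([c.random() for _ in range(5)] == seq_a)   # False
```

So "random but reproducible" is the default behaviour of any seeded PRNG; you only lose the sequence if you never note the seed.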

It'd be interesting to know whether Max/MSP, presumably written in C, uses rand() in its source code. For those not familiar with programming: rand() is a C standard-library function that doesn't return perfectly uniform results, so the output is somewhat biased, especially when its range is reduced with a modulo operation. This guy tells it as it is:



He later says there's a new random-number library in C++11 whose output is far better distributed, yet the sequences it produces are still reproducible as long as the seed stays unchanged: https://youtu.be/LDPMpc-ENqY?t=17m00s

Actually, if you leave the seed (https://en.wikipedia.org/wiki/Random_seed) unchanged, a random algorithm keeps generating an exactly identical sequence of (pseudo)random numbers. Once you change the 'seed', the output changes as well.

 

It'd be interesting to know whether Max/MSP, presumably written in C, uses rand() in its source code. For those not familiar with programming, rand() is a C function which doesn't really return "perfectly uniform results", so the output is somewhat biased - especially when used with a modulo operation. This guy tells it as it is:

 

He later says there's a new random distribution algorithm in C++11 whose output cannot be distinguished from true randomness, but yet these random number sequences are reproducible as long as the seed stays unchanged - here: https://youtu.be/LDPMpc-ENqY?t=17m00s

 

 

 

There are ways to capture loops of random sequences and play them back, then reseed and let the sequence wander, then capture a chunk of it again, and repeat.

 

An easy example is a Turing Machine module in Eurorack.

 

This demo lays it out in a pretty straightforward way. I'm guessing someone proficient in Max could build something like this into a performance patch.
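For reference, the Eurorack Turing Machine mentioned above is essentially a looping shift register with a probability knob. A rough software sketch of that behaviour (hypothetical names, not the actual module's firmware):

```python
import random

class TuringShiftRegister:
    """Sketch of a Turing Machine-style locking shift register: a
    fixed-length loop of bits recirculates, and `chance` sets the
    probability that the recycled bit gets flipped on each step.
    chance=0 locks the loop; chance=1 makes it wander constantly."""
    def __init__(self, length=16, chance=0.0, seed=0):
        self.rng = random.Random(seed)   # seeded, so runs are reproducible
        self.bits = [self.rng.randrange(2) for _ in range(length)]
        self.chance = chance

    def step(self):
        bit = self.bits.pop(0)           # oldest bit leaves the loop...
        if self.rng.random() < self.chance:
            bit ^= 1                     # ...and is occasionally mutated
        self.bits.append(bit)            # ...before recirculating
        return bit
```

With `chance=0` the output repeats with a period equal to `length`, which is the "capture a loop" state; nudging `chance` up and back down is the reseed-wander-relock move described above.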

 


  • 5 weeks later...

Trigger bangs increasing in frequency, in phase with a rising sawtooth envelope in the range [0, 1) applied to pitch and to an inverse low cutoff.

 

A sharp falling cutoff on the squelchiness of the bass, in sync with a logarithmically rising pitch on a low bass drum.

 

Background shimmery ambience generated by using the wet signal of a global reverb as source material for a stereo-widened grain cloud, with an inverse-exponential dry-signal mix-in as the rhythm scheduling grows more complex.

 

Each sequence segment can be seen as a fixed-length sequence of events (envelopes). Apply subtle transitions to each envelope over time. Lock the envelope transitions of some segments into phase with each other. Create a generic Markov-chain enumerator FSM whose state changes dynamically based on user input. FSM transitions map to the next event (or event sequence) based on an input domain of the past N events, for some N; higher N means closer adherence to the syntactic structure of the input sequencing, lower N means more sporadic output.
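The order-N idea above is just a standard Markov-chain sequencer. A minimal Python sketch of it, under the stated assumption that events are plain values (note numbers, drum hits, whatever) and that everything is seeded rather than truly random:

```python
import random
from collections import defaultdict

class MarkovSequencer:
    """Order-N Markov sequencer: transitions are learned from a training
    sequence, keyed on the previous N events. Higher N sticks closer to
    the input's syntax; lower N wanders more. Illustrative sketch only."""
    def __init__(self, events, n=2, seed=0):
        self.n = n
        self.rng = random.Random(seed)   # seeded: reproducible output
        self.table = defaultdict(list)
        for i in range(len(events) - n):
            key = tuple(events[i:i + n])
            self.table[key].append(events[i + n])

    def generate(self, length):
        state = self.rng.choice(list(self.table))  # start at a known context
        out = list(state)
        while len(out) < length:
            followers = self.table.get(tuple(out[-self.n:]))
            if not followers:            # dead end: restart from a known context
                followers = self.table[self.rng.choice(list(self.table))]
            out.append(self.rng.choice(followers))
        return out[:length]
```

Sweeping `n` per section would be one concrete way to realise the "adherence vs. sporadicness" metaparameter described above.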

 

Give all of the system's audio and sequencing modules a generic "X" metaparameter inlet, which modulates whatever type of patch it is to be more or less intense across its parameters. These modules include rhythmic and melodic sequence generators, waveform generators, and signal functions (arbitrary functions mapping one input signal to another). All of these modules are modulated not only via X but via manual override parameters as well. Potentially also include generic X, Y, and Z metaparameter inlets to allow finer-grained control by the generative sequencer.
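One way the generic-metaparameter idea could look in code; this is a speculative sketch of the scheme described above (the class, mapping, and names are all hypothetical), where one normalized X inlet scales each named parameter between a base and a maximum, and manual overrides bypass X entirely:

```python
class XModule:
    """Hypothetical module with a generic 'X' metaparameter inlet plus
    manual overrides, as described in the post above. X in [0, 1] maps
    each parameter linearly across its declared range."""
    def __init__(self, param_ranges):
        # param_ranges: name -> (value at X=0, value at X=1)
        self.ranges = dict(param_ranges)
        self.overrides = {}
        self.x = 0.0

    def set_x(self, x):
        self.x = max(0.0, min(1.0, x))   # clamp metaparameter to [0, 1]

    def override(self, name, value):
        self.overrides[name] = value     # manual override wins over X

    def value(self, name):
        if name in self.overrides:
            return self.overrides[name]
        lo, hi = self.ranges[name]
        return lo + (hi - lo) * self.x   # linear; could be any curve per param
```

Because every module exposes the same `set_x`, a single generative sequencer output can drive wildly different patches through one number.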

 

Create a 2D grid that you can drag a dot over. Assign each area of the grid a "color", with each color representing a range or set of numerical limits on the global constants the generative sequencer uses as inputs to its metaparameter inlets.

 

Note: these modules can be connected in very complex ways through the manual parameter-override inlets (whose behavior can also be controlled via metaparameters modulated by the generative sequencing engine).

 

Now a track consists of a line drawn through this 2D grid at a constant speed. Simply pause in one location for a while to keep the track generating the same way, or scribble around for a moment for a quick insane passage. It won't sound insane, though, because NO randomness will be used; this is very important. Note that less is more, and level-1 intensity moments should be rare. Tracks should sit primarily in the [0, 0.3) intensity range, with higher values during sharp attacks on some elements and lower values on others. They should peak into [0.3, 0.5) during more intense subsequence transitions, and only reach [0.5, 1.0) during intense whole-track transitions or peak events. Small upward dips in preparation for such a track-changing event can rise to around 0.75 midway through that interval. Generally, a track's intensity profile should look like a skip list allocated randomly over a decent stretch of time.

 

When generating live music, keep an event and signal logging file (this will end up large at a decent resolution; who cares, hard drives are cheap). Make a simple tool to edit these files, to fix any weird live anomalies that don't work musically when making studio tracks; maybe Max/MSP paused for a moment when it shouldn't have, this kind of thing happens. Also make a visualizer that shows all the metaparameters on the patches, to help catch the exact initial conditions that led to an anomaly, in case it's a really cool weird one you want to keep.

 

 

--------------

 

Those are as far as my current thoughts have gotten.  

The only exceptions for randomness are in sound generator modules for noise impulses, and in subtle (note: subtle is important here) LFO signal modulators.  

 

Needless to say, most of this would be best implemented as C externals. Max must have a decent API for input and output from externals; that's kind of the whole point of them. Presumably externals can work at the raw buffer level, so this also allows sequencing at the signal level, enabling more abstract sequence generation and signal processing than would easily be possible with only stock Max/MSP objects. Then again, either Sean or Rob said in the WATMM AMA that "zl" (which does list processing) was their current favorite object; tbh I could see Max/MSP handling scheduling well with list processing, with the functionality probably being really similar to Lisp-style list processing.

 

Allow all of the system's parameters and inputs to be overridable by live input from drum-pad or synth MIDI events or whatever, and feed this input data back into the generative sequencer's data in real time so the sequence can be sculpted over time. The creator can then also sculpt the X, Y, and Z metaparameters in real time with only three knobs or sliders, or even a 2D touchscreen grid for X and Y and a slider or knob for Z.

 

At its core this is simple, though, so simple modulations to the generation of the track aren't enough, even with metaparameters. The sequencing modules themselves have to modify their own behavior as well, independent of the metaparameter-driven changes, based on their past observations of sequence transitions. This can itself modulate what a module's metaparameters do, allowing a much higher level of abstraction as well as an OOP-style encapsulation of functionality.

 

The cool thing about this method is that you should be able to mix and match modules anywhere you want, even swapping them out in the generative sequencer in real time. Since their input and output inlet interfaces are identical, this should necessarily be possible, but the results may not be pleasing. Then again, that might be a good thing.


In sequencing, also make sure there isn't too much overlap in the modules' ownership of the frequency spectrum, for pre-emptive mastering purposes.

 

Make sure you don't let the Markov chains get too wild. Keep the vocabulary of previous input sequences relatively small, to maintain a flavor throughout sections of the track. Note that "sequencers" the way I use the term covers both rhythmic and melodic sequencing, as well as metaparameter envelope shaping (with occasional raw parameter overrides by the live artist, which are fed back into the sequencers).

 

Keep a nice symmetry between the metaparameters of the ambience and the percussive rhythm. Background ambience has a large psychoacoustic effect despite being less often consciously noticed by the listener.


The Markov-chain-based generative sequencer above can be generalized into a generic sequencer containing many different types of sub-sequencers. As long as they follow a generic interface similar to the signal-control modules mentioned above, they should be easy to swap in and out. They should follow the format "output sequence = sequencer(time, input sequence)"; this lets simple step sequencers be controlled through the same interface as other sequencers like Markov-chain ones, or even cellular-automata-based sequencers (a more expansive list is below). Allow higher-ranking sequencers to be masters of slave sequencers (all those of lower rank), with override taking effect upon toggling a control signal, each communicating with the others. This allows a buildup of complexity depending on how the generative sequencer controls things, but with simpler defaults at the bottom (once again, note that these sequencers apply to both rhythmic and melodic sequencing):

  • Step sequencers mapping time to events
  • Markov chain sequencers
  • Cellular automata sequencers
  • TODO: moar sequencers
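The "output sequence = sequencer(time, input sequence)" interface above is easy to mock up. Here a step sequencer and a cellular-automaton sequencer (Rule 110, chosen arbitrarily) answer to the same call shape, so either could be slotted into the same patch point; all names are illustrative:

```python
def step_sequencer(steps):
    """Simplest case of the generic interface: a looping step sequencer
    that ignores its input sequence and just indexes by time."""
    def seq(t, _input_seq):
        return steps[t % len(steps)]
    return seq

def rule110_sequencer(width=16, seed_cell=8):
    """Cellular-automaton sequencer under the same interface: evolve a
    Rule 110 row for t generations, then reduce the row to one event
    per step (here, simply the count of live cells)."""
    def seq(t, _input_seq):
        row = [0] * width
        row[seed_cell] = 1
        for _ in range(t):
            patterns = [((row[(i - 1) % width] << 2) |
                         (row[i] << 1) |
                         row[(i + 1) % width]) for i in range(width)]
            row = [(110 >> p) & 1 for p in patterns]  # Rule 110 lookup
        return sum(row)
    return seq
```

Since both return a plain `sequencer(time, input)` callable, a master sequencer could hot-swap one for the other without the rest of the system noticing, which is exactly the interchangeability argued for above.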

In the above design each sound generator is isolated, which would get boring. So it needs more riddim, achieved by allowing parameters to be shared between generators globally and arbitrarily. This will let you be more creative. But keep in mind most of that should be done internally, and the parameter sharing should be isolated and encapsulated within some maximum number of parameters (maybe 3, to match the X, Y, and Z metaparameter scheme, so it's symmetrical and more recursive), adding a little parameter-distribution module there.

 


