cyanobacteria

Autechre production methods speculation


Can the parameter distributor also distribute Frito-Lay® brand chips and snacks?


You tried to implement any of this in your own music zeff? Would be very interested to hear


Can the parameter distributor also distribute Frito-Lay® brand chips and snacks?

 

Yes, Frito-Lay®, Fritos®, Lays®, even Tostitos®.  Anything under the PepsiCo® name.


You tried to implement any of this in your own music zeff? Would be very interested to hear

 

Nothing as robust as what I described earlier, but I've made some generative + live-user-input stuff in Max/MSP. I will make that other thing once I get off my ass, and it will be the ultimate Max/MSP generative framework. Earlier I threw some stuff together without following the formal framework, and this method of tying otherwise unrelated parameters together seems to get interesting results so far, requiring little to no user input. This track here was made by switching between about 6 button objects live as I pleased (and choosing a couple notes on the keyboard during the main repeated sequence). It's boring but kind of cool

 

 

 

Edited by Zeffolia
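The "tying otherwise unrelated parameters together" idea translates naturally to text code. A minimal sketch in Python (Max/MSP is graphical, so this is only an illustration; the parameter names and ranges are hypothetical):

```python
def make_parameter_distributor(mappings):
    """Fan one shared control signal out to otherwise unrelated
    parameters, each scaled into its own range.
    mappings: parameter name -> (low, high) output range."""
    def distribute(control):
        # control is expected in [0, 1]
        return {name: lo + control * (hi - lo)
                for name, (lo, hi) in mappings.items()}
    return distribute

# Hypothetical targets: one gesture moves filter cutoff, grain size
# and delay feedback together, so they behave as a single parameter.
distribute = make_parameter_distributor({
    "cutoff_hz": (200.0, 8000.0),
    "grain_ms": (5.0, 80.0),
    "feedback": (0.0, 0.9),
})
```

One knob (or one generative signal) driving all three at once is what makes unrelated parameters feel like a single gesture, with little to no user input needed.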


Another cool idea

 

Sean said that they made their system so playing music is almost like playing a game, like playing GTA and fucking around with the controls and seeing what you can make the game do and glitch it out

 

Well, take this generative sequencing engine further and add a visual control aspect too. Have the central generative sequencer display its upcoming sequencing decision paths to the user visually on the screen, and they can just move a dot around in realtime towards the upcoming areas they want the music to focus on. Not like "choose this one instead of that one" but more fine-grained: they could mix in between incoming decision streams and get a blend of both upcoming generative sequencing changes, or stay right in the middle of an upcoming tube and make it 100% that stream.

Each patch could have its own color, and parameters such as LFOs could look like wobbly lines coming towards you, like Guitar Hero lmfao but a more complex stream of data coming at you, with all relevant signals grouped into the same sort of tube moving forward. The user could define certain visual events that occur during certain combinations of sequencing events, to make the visual interface more robust and context-aware of the specific track you're generating. Okay, this idea's pretty dumb but whatever lol
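The "mix in between incoming decision streams" part is basically a crossfade weighted by the dot's position. A toy sketch, assuming each stream is just a list of upcoming parameter values:

```python
def mix_streams(stream_a, stream_b, position):
    """Crossfade two incoming generative decision streams.
    position 0.0 = fully stream A, 1.0 = fully stream B; anything
    in between is the dot hovering partway between the two tubes."""
    return [(1.0 - position) * a + position * b
            for a, b in zip(stream_a, stream_b)]
```

In the visual interface described above, `position` would come from the dot's screen location relative to the two tubes.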

 

I'm just trying to think of the best ways to offload sequencing and generation to the computer, creating high levels of abstraction for the user to interact with so they can generate the music at a high level, both rhythmically and melodically, and with regard to envelopes too for metaparameter modification and modulation, with hands alone, so they can focus on high-level musical elements rather than little things like ADSR parameters

Edited by Zeffolia
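The metaparameter idea (one high-level control standing in for a pile of low-level ones) can be sketched as a single macro that drives a whole ADSR, so the player never touches attack or decay directly. The curves below are invented purely for illustration:

```python
def macro_envelope(intensity):
    """One high-level macro in [0, 1] drives all four ADSR values.
    The specific curves here are made-up assumptions, not anyone's
    actual mapping."""
    a = max(0.0, min(1.0, intensity))  # clamp to [0, 1]
    return {
        "attack_ms": 1.0 + 200.0 * (1.0 - a),    # intense -> snappier
        "decay_ms": 50.0 + 400.0 * (1.0 - a),
        "sustain": 0.8 - 0.5 * a,                # intense -> less sustain
        "release_ms": 100.0 + 900.0 * (1.0 - a),
    }
```

The same shape works for any cluster of low-level parameters you want to collapse into one playable control.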


 

Can the parameter distributor also distribute Frito-Lay® brand chips and snacks?

 

Yes, Frito-Lay®, Fritos®, Lays®, even Tostitos®.  Anything under the PepsiCo® name.

 

Excellent. I will configure my Markov chains for maximum crunch and flavor.


 

 


Excellent, there's nothing like a maximally crunchy and flavorful bass fart (sequenced by a Markov chain™)

Edited by Zeffolia


 

 

 


Now I just need to EQ out all this Cheeto powder off my fingers. 
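Joking aside, Markov chains really are a standard tool for exactly this kind of generative sequencing. A minimal first-order note chooser with a toy transition table (the notes and weights are arbitrary):

```python
import random

def markov_next(transitions, state, rng):
    """Pick the next note from a first-order Markov chain.
    transitions: state -> list of (next_state, weight)."""
    choices, weights = zip(*transitions[state])
    return rng.choices(choices, weights=weights, k=1)[0]

# Toy transition table over three notes.
TABLE = {
    "A": [("C", 2), ("E", 1)],
    "C": [("A", 1), ("E", 1)],
    "E": [("A", 3), ("C", 1)],
}

def generate(start, length, seed=0):
    """Walk the chain deterministically given a seed."""
    rng = random.Random(seed)
    seq, state = [start], start
    for _ in range(length - 1):
        state = markov_next(TABLE, state, rng)
        seq.append(state)
    return seq
```

Reweighting the table is the "configure for maximum crunch" step: the chain's character lives entirely in those weights.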


Anyway I think the core idea is to structure the music architecturally, as Rob has said in interviews

 

Rhythm is delineation

Timbre is building materials

Melody is proportion

 

Make it so that if the music were represented in some visual way, mapping various parameters to shapes, arrangements of shapes, and colors, or something of that nature, it would be visually appealing as well.  This is a loose requirement and it is dependent upon personal biases in aesthetics, but it can still somewhat be applied


 

man the visual interface of this is so satisfying to watch


 

 

awesome!

 

So is this one:


In general each of their individual timbres seems to be either single-layered or so heavily layered that it creates the illusion of being single-layered. I've not noticed too many timbres with simple multilayered relationships; they tend to opt for multiple layers being individually sequenced instead of output at the same time, with one main layer being the focus at a time, and with this focus and mix being modulatable via some parameter which is enveloped. None of this even begins to get into the actual sound generation methods, which are very far beyond me at this point; people with more knowledge of FM and granular synthesis may be of help on that. I assume granular synthesis, when done well, must be done with C externals to Max/MSP. Once decent core FM and granular synthesis engines are made you can reuse them forever, so it would be worth investigating how to make one great from the beginning.

Edited by Zeffolia
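The "one main layer being the focus at a time, modulatable via an enveloped parameter" observation can be sketched as a gain function: a continuous focus index sweeps across the layers, and neighboring layers crossfade as it moves. The triangular window here is an arbitrary choice for illustration:

```python
def layer_gains(n_layers, focus, width=1.0):
    """Per-layer gains where focus (a continuous index in
    [0, n_layers - 1]) picks the dominant layer; neighbors within
    'width' of the focus fade in proportionally."""
    return [max(0.0, 1.0 - abs(i - focus) / width)
            for i in range(n_layers)]
```

Enveloping `focus` over time is then what makes a heavily layered patch read as one evolving timbre.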


 

Well, it's more about the visuals and the idea of playing music like an arcade game that fascinated me there, rather than the music itself, which is quite simple. I have no idea how Max/MSP, Pure Data, C and the like work, but I'm pretty sure people wouldn't bother to learn these complex things if it wasn't worth the effort

 

 

Anyway I think the core idea is to structure the music architecturally, as Rob has said in interviews

 

Rhythm is delineation

Timbre is building materials

Melody is proportion

 

Make it so that if the music were represented in some visual way, mapping various parameters to shapes, arrangements of shapes, and colors, or something of that nature, it would be visually appealing as well.  This is a loose requirement and it is dependent upon personal biases in aesthetics, but it can still somewhat be applied

 

Makes sense to me. I also tend to visualize what I hear. Synaesthesia is an important aspect of listening to music and creating it

Edited by darreichungsform


run all yr gear through th farfisa organ controller pls

Edited by Redruth


AE_LIVE is by far the best collection of Autechre music for dissecting and attempting to reverse engineer their methods, since the sets share common motifs and track structures but are also wildly different. I suspect the differences are the result of modifications to the parameter ranges (for both timbre and timing) allowed by the generative sequencer, as well as manifestations of different user inputs. I'm having a hard time deciding whether realtime user input should generate events directly, or whether all user input should simply modify the parameters and see what happens. I'm leaning toward the latter

 

This whole generative sequencing engine I've described so far in this thread can really apply to any genre of music whatsoever.  Orchestral, rock, traditional boring electronic, and advanced weird electronic like Autechre and others.  It all depends which sound generation modules you use.

 

The cool thing about this system is that, theoretically, if you replicated the timbral and sequencing-based functionality of your sound generation modules for another music genre's timbral sound palette, you could make an orchestral version of AE_LIVE, or a version using only pan flutes or dustbin samples. It wouldn't sound great but it can be fucked around with. You just have to follow some flexible, generic public API you've designed for each module, and they can be swapped in and out for these purposes
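The swappable-module idea boils down to every sound generator honoring one small interface. A sketch of such a hypothetical API in Python (a stand-in for Max abstractions with standardized inlets; the class and module names are invented):

```python
class SoundModule:
    """Hypothetical generic module API: pitch in, a description of
    the rendered event out. Any timbre source honoring this
    interface can be dropped under the same generative sequencer."""
    name = "generic"

    def play(self, pitch):
        return f"{self.name}:{pitch}"

class PanFlute(SoundModule):
    name = "panflute"

class Dustbin(SoundModule):
    name = "dustbin"

def render(sequence, module):
    """The sequencer never cares which module it's driving."""
    return [module.play(p) for p in sequence]
```

Swapping `PanFlute()` for `Dustbin()` under the same sequence is exactly the "orchestral AE_LIVE" thought experiment.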

 

 

Sorry for sperging out I just love Autechre and am trying to think about their methods and how to apply them to my own music in the future

 


The idea is to create emulated instruments, kind of like the goal of physical modelling synthesis. I said "pan flute samples" earlier, and what I really meant is "a large collection of pan flute samples and modifiers, accessible through a simple public API of generic metaparameter inlets X, Y, and Z". You could maybe assign attack (blow strength) to X, sustain (blow consistency) to Y, and pan flute size and resonant properties to Z. And of course have trigger, pitch, and other generic inlets as well, which are always required, and a sound output signal. This could even be done without much software sound synthesis, literally just indexing to find the right samples

 

Record yourself playing a pan flute like 500 times, organize each by attack style (sharp blow, slow rising blow) and sustain type as well, and select the right sample to play given the metaparameter values currently set, described above.  
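That sample-indexing scheme is essentially nearest-neighbor lookup in metaparameter space. A sketch, with a hypothetical three-sample library annotated by (attack, sustain, size) coordinates in [0, 1]:

```python
def nearest_sample(samples, x, y, z):
    """Pick the recording whose annotated (attack, sustain, size)
    coordinates sit closest to the requested metaparameters.
    samples: list of (filename, (x, y, z)) tuples."""
    def dist(coords):
        return sum((a - b) ** 2 for a, b in zip(coords, (x, y, z)))
    return min(samples, key=lambda s: dist(s[1]))[0]

# Hypothetical annotated recordings (the 500 real ones would each
# get tagged like this after recording):
LIBRARY = [
    ("sharp_short.wav", (0.9, 0.1, 0.5)),
    ("soft_long.wav", (0.1, 0.9, 0.5)),
    ("mid.wav", (0.5, 0.5, 0.5)),
]
```

The module's X/Y/Z inlets then just feed this lookup on every trigger.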


I've also noticed quite a bit that there will be a melody held together by multiple different timbres at once. Say there's a melody 1 2 3 4 5 6 7 8 9; then [1, 4] may be played by patch1, 6 may be played by patch2, [7, 8, 9] may be played by patch3, etc. And the ADSR parameters of each patch will be wildly different, and the melody may be transposed into higher or lower octaves as well. But thematically it maintains that melody, while sounding way more complex and layered
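That melody-splitting idea is easy to state as data: one note list plus an assignment of indices to patches. A sketch (using 0-based indices rather than the 1-based numbering above):

```python
def split_melody(notes, assignment):
    """Route each index of one melody to a named patch.
    assignment: patch name -> set of note indices.
    Returns patch -> list of (index, note) it plays; the original
    melody survives as the union of the parts."""
    parts = {patch: [] for patch in assignment}
    for patch, indices in assignment.items():
        for i in sorted(indices):
            parts[patch].append((i, notes[i]))
    return parts
```

Per-patch octave transposition and ADSR would then be applied inside each patch, leaving the shared melody intact.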


Then there could be abstract underlying melodic guidelines that could be set and followed or ignored as a patch chooses, based upon timing. Maybe seconds [0, 1.5] of a sequence need to be an A, but (1.5, 1.7] = C and (1.7, 1.9] = B, and then the note is determined independently of rhythm, so wherever a note trigger event happens, its note is predetermined by when it happens. These ranges of notes could be modified each sequence by a sequence generator, maybe a simple FSM of some sort, or a genetic or cellular-automaton-based function
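The time-indexed note rule can be sketched as a simple range lookup: the trigger's timestamp, not the rhythm generator, decides the pitch. Boundary handling below just takes the first matching range:

```python
def note_at(schedule, t):
    """The note is determined by *when* the trigger lands.
    schedule: list of (start, end, note) ranges, first match wins."""
    for start, end, note in schedule:
        if start <= t <= end:
            return note
    return None  # trigger fell outside the melodic guideline

# The example ranges from the post: [0, 1.5] -> A, (1.5, 1.7] -> C,
# (1.7, 1.9] -> B.
SCHEDULE = [(0.0, 1.5, "A"), (1.5, 1.7, "C"), (1.7, 1.9, "B")]
```

A per-sequence generator (FSM, cellular automaton, whatever) would rewrite `SCHEDULE` on each repetition.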

 

I need to stop describing this shit and actually try it and see if it sounds good.  Will some day


I dunno I've rather enjoyed reading this all, Zeff. A few things made sense with what seemed obvious to me, and some of it is shit I'd never have imagined, some I've barely understood, and some is genius. That second to last bit about a different note of a melody being created using different 'instruments' seems obvious (common in much music, even electronic), but isn't something I think I've ever seen implemented in software. But that's just one small example...but whatever, keep talking dude. I'm down.

Edited by auxien


Generating motifs that are actually interesting/engaging is something that needs some clever thinking.


zeffolia, this is very interesting, pls continue.

 

btw...would it be easier to implement AND manipulate all of this in max environment or by using something like supercollider?



I don't know anything about SuperCollider, but ae do it all in Max/MSP with a few C externals, from what I've gathered in all the interviews I've read. I mean, this can definitely be done in any proper language; it's just a matter of knowing the language and knowing the best way to do it all.

 

The only slightly hard thing to implement, out of everything I've described above, would be the generic generative sequencer. As far as I know Max/MSP is really weird and not amazing at storing mutable states of arbitrary data structures. As I mentioned before, Sean said their current favorite Max/MSP object is "zl", which is for list processing: compare, delace, ecils, group, iter, join, lace, len, lookup, median, mth, nth, queue, reg, rev, rot, scramble, sect, slice, sort, stack, stream, sub, sum, thin, union, or unique. These are extremely powerful, probably even Turing complete in themselves, so Max/MSP can necessarily implement anything. It may be a hassle, and slow as well, but if it works for ae it works for anyone else.
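For anyone who hasn't used Max, several of the zl operations have obvious one-line analogues in an ordinary language. Rough Python approximations (the exact Max semantics, e.g. rotation direction, may differ slightly):

```python
def zl_rot(xs, n):
    """Roughly [zl rot]: rotate a list by n places."""
    n %= len(xs)
    return xs[-n:] + xs[:-n] if n else list(xs)

def zl_delace(xs):
    """Roughly [zl delace]: split an interleaved list into two."""
    return xs[0::2], xs[1::2]

def zl_lace(a, b):
    """Roughly [zl lace]: interleave two lists."""
    out = []
    for x, y in zip(a, b):
        out.extend([x, y])
    return out
```

Chaining operations like these over note and parameter lists is the core of a zl-style sequencer, with or without Max.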

 

Current probably obvious ideas but I will list them anyway:

-Allow the melodic sequencers to have generic interfaces for next-note-choosers, so you could swap in and out next-note-choosers which are either based on a hard coded template that the user determined for their own melody, or are chosen from a mode for example.  Could be interesting to use arbitrary tunings (is that the word?) instead of just the standard A440
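The swappable next-note-chooser plus arbitrary-tuning idea can be sketched as two small pieces: an equal-tempered frequency function parameterized by base pitch and divisions per octave, and a chooser function you can replace wholesale (the cyclic mode walk below is just one interchangeable example):

```python
def freq(step, base_hz=440.0, divisions=12):
    """Equal-tempered frequency for a step offset from the base.
    divisions != 12 or base_hz != 440 gives an alternate tuning."""
    return base_hz * 2.0 ** (step / divisions)

def mode_chooser(mode_steps):
    """A swappable next-note-chooser: walks a mode cyclically.
    Any function with the same (index -> step) shape could be
    dropped in instead: a hard-coded template, a random walk, etc."""
    def choose(i):
        return mode_steps[i % len(mode_steps)]
    return choose
```

The sequencer only ever calls `choose(i)` and `freq(step, ...)`, so both the melody source and the tuning can be swapped without touching it.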

 

One other obvious thing I've really been noticing a lot more recently is subtle frequency envelopes. Maybe they don't use them and I'm just hearing things, but I never really noticed them much until a while ago. Everything after Oversteps strategically makes use of a sound starting out as one note and really quickly switching to another, to kind of imply a background melody made up of these barely present notes, as well as background drones starting at one note and slowly gliding up to another. This results in that effect where you can hum a fully fleshed-out melody in your head along with the track, but half of the melody you're humming isn't even in the track, so you think wtf? But it actually is there, scattered over time, so your brain pieces it together and fills in the gaps on future repetitions.


Another thing is that ae's music always has room to breathe.  It makes good use of silence and pauses in layers.  You don't feel overwhelmed and this always prevents it from sounding noisy (except in a few tracks lol) despite having so much shit going on.  The shit is organized into regions of each repetition, and if the regions played together too much it'd start sounding noisy but it doesn't

 

This next thing is a bit of a tenuous observation as well, but I think it really applies to all good music, quite frankly. The way I think of it, every single sound is either an "up" or a "down" compositionally. If you have too many insane moments where you think "wtf", that's too many ups, and it leaves you stuck on a plateau once things level out a bit. You can't do that; the piece sounds too meandering, like they're just throwing shit at the wall hoping it's cool. There have to be some "downs", like pretty ambient background noises that shine through once the crazy "up" moment ends, or a thick but clear bass tone that takes over and is the only sound for a little moment, or something. Maybe someone else knows what I mean and could elaborate more.

 

Can't let the music be a full sprint, it has to slow down and catch its breath sometimes or else it feels fake and unrealistic

Edited by Zeffolia

