Everything posted by zlemflolia

  1. The barbells falling off, then the black guy staring at him, was gold.
  2. OH MY GOD THIS EPISODE WAS SO GOOD. "I do kickers, I do fuckin twisters" fucking lol
  3. You're supposed to get a poster with the cassette from the Planet Mu Bleep store?
  4. Vodka is clearly a remix of this song, or at least uncannily similar. Probably already obviously known (I recognized it from Happy Gilmore).
  5. The tracks are named and tagged wrong: the "track" fields and filenames don't match up.
  6. At the end of the day, threads like this are relatively useless. It's like studying guitar academically and expecting to be able to play it the way (insert guitar player here) plays it, just because you can speculate on the individual techniques they use (and are probably way off anyway). Still fun, though.
  7. Then also granular on samples of transients: with extra bursts of release drum-echo sound and reverb on the last sharp transient grain before the release of the tone, it gives the mental illusion that the preceding sounds were thicker and fuller when you hear it end. And granular in general makes it extremely easy to modulate stereo width without dumb tricks. You can literally just make the selection of grains diverge in each channel, and it becomes maximally stereo-widened without any weird EQ- or phase-based tricks. You can do this really subtly with envelopes on stereo width, where a synth stab starts as mono, sharply increases in stereo width, and fizzles out into a thick reverbed and echoed tail release. This seems really common in recent ae_live/elseq, unless it's a different method I'm hearing.
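The grain-divergence trick above can be sketched in a few lines. This is a toy sketch (assuming numpy); the function name and parameters are illustrative, not ae's actual method:

```python
import numpy as np

def granular_stereo(sample, n_grains=64, grain_len=256, width=0.0, seed=0):
    """Render a grain cloud to stereo by letting the two channels' grain
    read positions diverge by up to `width` of the sample length.
    width=0 -> both channels read identical grains (mono);
    width=1 -> fully decorrelated grain positions (maximally wide)."""
    rng = np.random.default_rng(seed)
    hop = grain_len // 2
    max_start = len(sample) - grain_len
    base = rng.integers(0, max_start, n_grains)   # grain positions shared by both channels
    drift = (rng.integers(-max_start, max_start, n_grains) * width).astype(int)
    env = np.hanning(grain_len)                   # grain window to avoid clicks
    out = np.zeros((2, n_grains * hop + grain_len))
    for i in range(n_grains):
        left = base[i]
        right = int(np.clip(base[i] + drift[i], 0, max_start))
        t = i * hop
        out[0, t:t + grain_len] += sample[left:left + grain_len] * env
        out[1, t:t + grain_len] += sample[right:right + grain_len] * env
    return out
```

An envelope on `width` over successive calls would give the mono-to-wide synth-stab effect described above, with no EQ or phase tricks involved.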
  8. Lots of sounds seem like granular synthesis on a sample of a transient, compressed so it becomes like a drone of transient grains, or a farty noise. You can then very easily modulate this sound by moving the location in the sample from which the grains are being selected forward in time, to be less farty and more droney. A subtle LFO on the grain-selection location would produce more depth. Then add in really subtly mixed reverb or resonance. I've also noticed that really sharp, heavily compressed attacks quickly fading into the normal waveform can give the illusion of really heavy synth slabs of sound. But this is obvious.
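The transient-drone idea above, with the forward scrub and the subtle LFO on grain-selection location, might look like this (a minimal sketch assuming numpy; all names and parameter values are made up for illustration):

```python
import numpy as np

def transient_drone(transient, seconds=1.0, sr=44100, grain_len=512,
                    scrub=(0.0, 0.6), lfo_hz=0.3, lfo_depth=0.02):
    """Drone built from a short transient recording: densely overlapped
    grains are read from a scrub position that moves forward through the
    sample over time (farty -> droney), with a subtle LFO wobbling the
    read position to add depth."""
    hop = grain_len // 4                    # heavy overlap -> sustained tone
    n = int(seconds * sr) // hop
    env = np.hanning(grain_len)
    out = np.zeros(n * hop + grain_len)
    span = len(transient) - grain_len
    for i in range(n):
        u = i / max(n - 1, 1)               # progress 0..1 through the drone
        pos = scrub[0] + (scrub[1] - scrub[0]) * u                      # forward scrub
        pos += lfo_depth * np.sin(2 * np.pi * lfo_hz * i * hop / sr)    # subtle LFO
        start = int(np.clip(pos * span, 0, span))
        out[i * hop:i * hop + grain_len] += transient[start:start + grain_len] * env
    return out
```

Compression on the result (not shown) would flatten the grain envelopes into the continuous farty/droney texture described.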
  9. There are lots of sounds that sound like deep clicking noises, or like something grinding over a washing rack, used all over c7b2. I'm assuming this is just heavily distorted low frequencies as opposed to repeatedly triggered sounds? Anyone know what I'm talking about?
  10. Not as cool, but it kind of reminds me of this iPad app called Musyc. It lets you do lots of physics-based stuff, but alas with preset sounds. They had said MIDI support was coming, but that was a while ago. https://www.youtube.com/watch?v=MKhEDEeAP-Q Things like this are really cool. Making the graphics more intense could turn it into real-time generation of abstract music videos to accompany the sounds. I think they're too chaotic to create interesting music, though, and they don't allow enough control over what's going on.
  11. Actually, sorry, that additional-layer method is kind of redundant: you can just be really picky about which motif-generator parameter vectors you label as true (good-sounding) when training the initial SVM, and you'd get the same result of good-soundingness.
  12. Yeah, this is an issue. My thinking is that to start out you could create a main melody, then manually create a motif you want as well. Then create a motif generator which generates that motif in a generic way that can be applied to other melodies too, use it on top of a different melody, and see if the result is desirable. You could even turn this into a machine learning classification problem (this one's a little advanced):
      - Design a generalized motif generator that is controllable in how it generates motifs via N main parameters.
      - Use a support vector machine (SVM) binary classifier taking vectors of length N. Let's call this SVM A. (Note: a binary-classifier SVM is a method of classifying input vectors of data as belonging to one of two categories. You give it vectors of values tagged category1 or category2, and after sufficient training it should be able to decide for itself whether a new vector is category1 or category2.) We will use this to train a motif generator.
      - Take M melodies you made by hand, and for each one: generate N random motif-generator parameters and apply the motif generator with those parameters to the melody.
      - For each output, listen to the result; if it's pleasing to your ears give it a pass (true), and if it's not give it a fail (false).
      - Train SVM A on each vector of N motif-generator parameters, tagged with your true/false designation (category).
      Keep doing this over and over on more input melodies and more random generations of parameters, and presumably this SVM will learn to categorize motif-generator parameters as good or bad, based on your earlier manual listening. I won't go into more detail, but basically you can keep doing this in more layers: take accepted motif-generator parameters from SVM A and feed them into another SVM B, this time being more critical of which motif generators you accept, filtering out the worse ones each generation. This is kind of a genetic-algorithm method and it's probably really roundabout (people with more knowledge of ML could easily refine this into a better system, probably using other classifiers), but in the end you will get motif generators that should work on arbitrary input melodies and sound good. Careful not to over-train, though, or they will become generic and bland. (This is called over-fitting, which I also referenced earlier when suggesting you avoid letting your Markov chains have too small a vocabulary and too high a degree, because then they just start copying your input melodies exactly. Let them be a bit crappy for cool unexpected results.)
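The train-SVM-A-on-your-own-taste loop above can be sketched with scikit-learn's `SVC` (assuming numpy and scikit-learn are available). The `listen_and_judge` function is a hypothetical synthetic stand-in for the real step, which is you listening and passing/failing each render:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical stand-in for "listen and give it a pass/fail": a synthetic
# taste oracle that calls a parameter vector good when its mean sits in a
# sweet spot. In practice this is you, with your ears.
def listen_and_judge(params):
    return 0.4 < params.mean() < 0.6

rng = np.random.default_rng(1)
N = 5                                            # motif-generator parameter count
X = rng.random((300, N))                         # random parameter generations
y = np.array([listen_and_judge(p) for p in X])   # your true/false labels

svm_a = SVC(kernel="rbf").fit(X, y)              # SVM A learns your taste

# Now screen fresh random parameter vectors before ever rendering audio:
candidates = rng.random((1000, N))
keep = candidates[svm_a.predict(candidates)]     # parameters SVM A predicts you'd like
```

Feeding `keep` into a second, stricter labeling pass would give the SVM B layer described above.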
  13. One other obvious thing I've really been noticing a lot more recently is subtle frequency envelopes. Maybe they don't use them and I'm just hearing things, but I never really noticed them much until a while ago. Everything after Oversteps strategically makes use of starting out as one note and really quickly switching to another, to kind of imply a background melody made up of these barely-present notes, as well as background drones starting at one note and slowly sliding up to another. This results in that effect where you can hum a fully fleshed-out melody in your head along with the track, but half of the melody you're humming isn't even in the track, so you think wtf? But it actually is there, scattered over time, so your brain pieces it together and fills in the gaps in future repetitions. "Not sure I understand this bit. You got any specific examples?" The broken-apart melody thing is really apparent on Draft and Oversteps, especially in st epreo, and subtle frequency envelopes seem prevalent on O=0. I'm having trouble finding concrete examples; sometimes it's easy to be under the illusion of a frequency envelope when it's really just cutoff. But I know I heard tons when high last month.
  14. Another thing is that ae's music always has room to breathe. It makes good use of silence and pauses in layers; you don't feel overwhelmed, and this keeps it from sounding noisy (except in a few tracks lol) despite having so much shit going on. The shit is organized into regions of each repetition, and if the regions played together too much it'd start sounding noisy, but it doesn't. This next thing is a bit of a tenuous observation as well, but I think it really applies to all good music, quite frankly. The way I think of it, every single sound is either an "up" or a "down" compositionally. If you have too many insane moments where you think "wtf," that's too many ups, and it leaves you stuck on a plateau once things level out a bit. You can't do that; the piece sounds too meandering, like they're just throwing shit at the wall hoping it's cool. There have to be some "downs": either pretty ambient background noises that shine through once the crazy "up" moment ends, or a thick but clear bass tone that takes over and is the only sound for a little moment, or something. Maybe someone else knows what I mean and could elaborate more. The music can't be a full sprint; it has to slow down and catch its breath sometimes or else it feels fake and unrealistic.
  15. I don't know anything about SuperCollider, but ae do it all in Max/MSP with a few C externals, from what I've gathered in all the interviews I've read. This can definitely be done in any language really, as long as it's a proper language; it's just a matter of knowing the language and knowing the best way to do it all. The only slightly hard thing to implement, out of everything I've described above, would be the generic generative sequencer. As far as I know, Max/MSP is really weird and not amazing at storing mutable states of arbitrary data structures. As I mentioned before, Sean said their current favorite Max/MSP object is "zl", which is for list processing: compare, delace, ecils, group, iter, join, lace, len, lookup, median, mth, nth, queue, reg, rev, rot, scramble, sect, slice, sort, stack, stream, sub, sum, thin, union, or unique. These are extremely powerful, probably even Turing-complete in themselves, so Max/MSP can necessarily implement anything. It may be a hassle, and slow as well, but if it works for ae it works for anyone else. Current (probably obvious) ideas, but I'll list them anyway:
      - Allow the melodic sequencers to have generic interfaces for next-note-choosers, so you could swap next-note-choosers in and out, whether based on a hard-coded template the user determined for their own melody, or chosen from a mode, for example. Could be interesting to use arbitrary tunings (is that the word?) instead of just the standard A440.
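The swappable next-note-chooser interface suggested above is easy to sketch: a chooser is just a callable from the previous note to the next one, so the sequencer never cares where notes come from. All names here are illustrative, not from any real sequencer:

```python
import random

def mode_chooser(scale, seed=0):
    """Chooser that picks randomly from a mode/scale (MIDI note numbers)."""
    rng = random.Random(seed)
    return lambda prev: rng.choice(scale)

def template_chooser(template):
    """Chooser that cycles through a hand-made melody template."""
    state = {"i": -1}
    def choose(prev):
        state["i"] = (state["i"] + 1) % len(template)
        return template[state["i"]]
    return choose

def run_sequencer(chooser, length, start=60):
    """Generic melodic sequencer: only depends on the chooser interface,
    so choosers can be swapped in and out freely."""
    notes, prev = [], start
    for _ in range(length):
        prev = chooser(prev)
        notes.append(prev)
    return notes
```

A Markov-chain chooser or a tuning-aware chooser (returning frequencies instead of MIDI numbers) would plug into `run_sequencer` the same way.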
  16. Then there could be abstract underlying melodic guidelines that could be set, and followed or ignored as a patch chooses, based upon timing. Maybe seconds [0, 1.5] of a sequence needs to be an A, but (1.5, 1.7] = C and (1.7, 1.9] = B, and then the note is determined independently of rhythm: wherever a note-trigger event happens, its note is predetermined based on when it happens. These ranges of notes could be modified each sequence by a sequence generator, maybe a simple FSM of some sort, or a genetic or cellular-automaton-based function. I need to stop describing this shit and actually try it and see if it sounds good. Will some day.
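The time-to-note guideline above is just a lookup table, which makes the rhythm/pitch decoupling concrete (a minimal sketch; note names and ranges are the ones from the post):

```python
# The guideline as plain data: (start_seconds, end_seconds, note).
# A trigger firing at time t gets its pitch from this lookup, independently
# of whatever rhythm generator produced the trigger. A sequence generator
# (FSM, genetic, cellular automaton) could rewrite this table each pass.
guideline = [(0.0, 1.5, "A"), (1.5, 1.7, "C"), (1.7, 1.9, "B")]

def note_at(guideline, t):
    """First matching range wins, so a boundary time like t=1.5 resolves
    to the earlier range (matching [0, 1.5] = A, (1.5, 1.7] = C above)."""
    for start, end, note in guideline:
        if start <= t <= end:
            return note
    return None   # outside every range: the patch is free to choose (or stay silent)
```

Returning `None` outside the ranges leaves room for the "followed or ignored as a patch chooses" part.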
  17. I've also noticed quite a bit that there will be a melody held together by multiple different timbres at once. Say there's a melody 1 2 3 4 5 6 7 8 9: then [1, 4] may be played by patch1, 6 may be played by patch2, [7, 8, 9] may be played by patch3, etc. The ADSR parameters of each patch will be wildly different, and the melody may be transposed up into higher or lower octaves as well. Thematically it maintains that melody, but it sounds way more complex and layered.
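The one-melody-many-timbres split above is simple to express as data. The patch names and the assignment are hypothetical, taken from the example in the post:

```python
melody = [1, 2, 3, 4, 5, 6, 7, 8, 9]     # the melody from the post

# Hypothetical assignment of melody slices (by step index) to patches:
assignment = {
    "patch1": range(0, 4),   # notes 1..4
    "patch2": range(5, 6),   # note 6
    "patch3": range(6, 9),   # notes 7..9
}

def split_melody(melody, assignment, transpose=None):
    """Return {patch: [(step_index, note), ...]}. One melody, several
    timbres; `transpose` maps a patch to an octave offset so a patch
    can restate its fragment higher or lower."""
    transpose = transpose or {}
    return {
        patch: [(i, melody[i] + transpose.get(patch, 0)) for i in idxs]
        for patch, idxs in assignment.items()
    }
```

Each patch then plays only its `(step, note)` pairs with its own wildly different ADSR, and the gaps (note 5 here) read as part of the phrasing.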
  18. The idea is to create emulated instruments, kind of like the goal of physical modelling synthesis. I said "pan flute" samples earlier, and what I really meant is "a large collection of pan flute samples and modifiers, accessible through a simple public API of generic metaparameter inlets X, Y, and Z". You could maybe assign attack (blow strength) to X, sustain (blow consistency) to Y, and pan flute size and resonant properties to Z. And of course have trigger, pitch, and other generic inlets as well, which are always required, and a sound output signal. This could even be done without much software sound synthesis, literally just indexing to find the right samples: record yourself playing a pan flute like 500 times, organize each sample by attack style (sharp blow, slow rising blow) and sustain type, and select the right sample to play given the metaparameter values currently set, as described above.
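The "indexing instead of synthesis" idea above amounts to nearest-neighbor lookup in metaparameter space. A minimal sketch (the library entries, filenames, and tag values are all made up for illustration):

```python
import math

# Hypothetical sample library: each recording tagged with the metaparameter
# point (X=attack/blow strength, Y=sustain/blow consistency, Z=size/resonance)
# it best represents.
library = [
    {"file": "panflute_001.wav", "X": 0.9, "Y": 0.2, "Z": 0.5},  # sharp blow
    {"file": "panflute_002.wav", "X": 0.1, "Y": 0.8, "Z": 0.5},  # slow rising blow
    {"file": "panflute_003.wav", "X": 0.5, "Y": 0.5, "Z": 0.9},  # big resonant flute
]

def pick_sample(x, y, z):
    """'Synthesis' by indexing: return the library entry whose tagged
    metaparameters sit nearest the requested (x, y, z) point."""
    return min(library,
               key=lambda s: math.dist((x, y, z), (s["X"], s["Y"], s["Z"])))
```

With 500 recordings instead of 3, sweeping X continuously would step through attack styles, which is the whole public API the post describes: generic inlets in, the right sample out.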
  19. AE_LIVE is by far the best collection of Autechre music for dissecting and attempting to reverse-engineer their methods, since the sets share common motifs and track structures but are also wildly different. I suspect the differences are the result of modifications to the parameter ranges (parameters for timbre and timing) allowed by the generative sequencer, as well as manifestations of different user inputs. I'm having a hard time deciding whether realtime user input generating events is a good idea, or whether all user input should simply be modifications of the parameters, then seeing what happens. I'm thinking the latter. This whole generative sequencing engine I've described so far in this thread can really apply to any genre of music whatsoever: orchestral, rock, traditional boring electronic, and advanced weird electronic like Autechre and others. It all depends which sound-generation modules you use. The cool thing about this system is that, theoretically, if you replicated the timbral and sequencing functionality of your sound-generation modules for another genre's timbral sound palette, you could make an orchestral version of AE_LIVE, or a version using only pan flutes or dustbin samples. It wouldn't sound great, but it could be fucked around with. You just have to follow some flexible, generic public API you've designed for each module, and they can be swapped in and out for these purposes.
  20. In general, each of their individual timbres seems to be either single-layered or so heavily layered that it creates the illusion of being single-layered. I haven't noticed many timbres with simple multilayered relationships; they tend to opt for multiple layers being individually sequenced rather than output at the same time, with one main layer being the focus at a time, and that focus and mix modulatable via some enveloped parameter. None of this even begins to get into the actual sound-generation methods, which are very far beyond me at this point; people with more knowledge of FM and granular synthesis may be of help there. I assume granular synthesis, when done well, must be done with C externals to Max/MSP. Once a decent core FM and granular synthesis engine is made you can reuse it forever, so it would be worth investigating how to make one great from the beginning.
  21. I didn't like this last episode much; it was kind of boring. It didn't have the heavy impact of the others, it just fell flat. When it ended I just thought "that's all? where are the good skits?"
  22. Anyway, I think the core idea is to structure the music architecturally, as Rob has said in interviews:
      - Rhythm is delineation
      - Timbre is building materials
      - Melody is proportion
      Make it so that if the music were represented in some visual way (mapping various parameters to shapes, arrangements of shapes, and colors, or something of that nature), it would be visually appealing as well. This is a loose requirement, dependent upon personal biases in aesthetics, but it can still somewhat be applied.
  23. Yes: Frito-Lay®, Fritos®, Lay's®, even Tostitos®. Anything under the PepsiCo® name. Excellent. I will configure my Markov chains for maximum crunch and flavor. There's nothing like a maximally crunchy and flavorful bass fart (sequenced by a Markov chain™).