
Sean talks on technology at The Simply Superior podcast


Guest 2062


After getting some fresh air (sorry, I am grumpy because Spring has not yet Sprung here) I entertained the possibility that I had, in fact, been a bit of a twat in this thread.

 

Allow me to indulge in BoC-subforum levels of didacticism for a moment, which may ultimately prove even more twatty, but which I think could lead the discussion in more interesting directions:

 

Here's what we know, as far as I understand:

1) Sean said he's spent the last year "programming".

1a) He said he hasn't done this much "programming" in a while, or hasn't programmed this deeply before. Can't remember which, because I haven't listened to the podcast since the day it was released. LOL, so much for didactic rigor.

2) Andrew discussed his past life "programming" for a security firm. I think it's safe to assume that he meant the type of programming that involves a general purpose programming language.

3) Sean also mentioned apps, specifically Jasuto and SunVox (which incidentally are on both iPhone and Android), and how they reminded him of the Spectrum days.

4) He did also mention Max/MSP patching, though I can't remember what he said exactly except that -

4a) - I seem to remember "encapsulation" being used in that context, as xox indicated on page 4.

(5) I believe - NOT certain on this - Autechre used Max/MSP on the last tour. I remember them talking in Oversteps-related interviews about their homemade sequencers, and how they had made these before in the past (I think they used these all over LP5 through Confield and possibly Gantz Graf though I understood Draft to be more Digital Performer-oriented) but that they were unstable and not suitable for live use, and that they'd just recently figured out how to make their homebrew sequencers stable. I also believe they used the word "built" instead of "programmed" to describe the creation of these sequencers but I could be wrong.

 

#1 combined with #4 seems to be why a lot of people think that he was referring to Max/MSP when he said "programming".

#1a combined with #2 and more so #(5) is why I don't think he was. If he hasn't done programming that deeply or in a long time, but they used Max as the centerpiece of their last tour, I believe he means programming using a general purpose language, or at least something other than Max.

#3 is why I mentioned Objective C (for making iPhone apps) and Java (for making Android apps). Not because I'm a cool kid. Trust me. I am a fat, hairy loser and I am not in denial about it.

 

Hope this clears things up a little.

 

P.S. Sorry that I said Max & Reaktor aren't "real" programming languages. I still think that's accurate but I could've come up with a less rude way to say that.


Guest RadarJammer

He said you could do things with programming that you can't do with chameleon software (like Max), so it's pretty self-evident that he was talking about code rather than graphical programming. Also, the podcast was recorded in early December 2011, so he has supposedly been working on tracks for 4+ months now, so maybe we will get a new album this year.


I'm not THAT fat, but I still wear black shirts a lot, because they're very slimming. Maybe pick up some tony black Polo shirts and pop the collar a little, you'll feel like a million bucks.

 

P.S. Sorry that I said Max & Reaktor aren't "real" programming languages. I still think that's accurate but I could've come up with a less rude way to say that.

 

By the way, I don't necessarily disagree with this statement, I just feel like, yeah. I dunno. A lot of times when people point out shit like this it's all about getting Internet arguing points and not really because it's an interesting or rewarding direction to take the discussion. I agree that they were talking about "real" programming, not patching.

 

I think this talk rubbed me the wrong way because I had just listened to the podcast, and this sort of overly-prescriptive "well how DARE you misuse some tool to do something it's not THE BEST ELITE CODE BRO TOOL for doing!" attitude was SPECIFICALLY something they complained about in the very podcast we are discussing. Subvert your tools, live dangerously, etc. This was a pretty solid theme of the discussion in the mp3.

 

I think there's something interesting to this discussion though. My point of view is just: like you don't have to drop 3k on a full Machinedrum & Monomachine & Apogee setup to make good tracks, you also don't HAVE to use a real "big boy" programming language to get funky with DSP. Hell, I think it's sort of the opposite (and this too was a theme of the podcast)—fuck learning "the basics" just for the sake of learning the basics, and fuck spending 6 months building up some streaming library API for yourself just so you can get C++ to play a sine wave. Use the tool that hits the sweet spot of flexibility and ease-of-use that you personally desire and don't worry about whether it makes you a "REAL GUY" or not.

 

On the subject of "you can't easily do X or Y in Max or Reaktor", I totally agree. But for almost every "real" programming language you could come up with such degenerate cases where it's a super huge pain in the ass to do X. I guess I just don't see a solid distinction here other than "cred". I mean, someone was dissing SuperCollider earlier. Wat? SC is basically a fully object-oriented language that makes it so you don't have to do all the annoying work of talking to your audio hardware.

 

I'm not a programmer, so I'm probably just bullshitting. What the fuck is a phasor by the way? Why would one want one?


I think there's something interesting to this discussion though. My point of view is just: like you don't have to drop 3k on a full Machinedrum & Monomachine & Apogee setup to make good tracks, you also don't HAVE to use a real "big boy" programming language to get funky with DSP. Hell, I think it's sort of the opposite (and this too was a theme of the podcast)—fuck learning "the basics" just for the sake of learning the basics, and fuck spending 6 months building up some streaming library API for yourself just so you can get C++ to play a sine wave. Use the tool that hits the sweet spot of flexibility and ease-of-use that you personally desire and don't worry about whether it makes you a "REAL GUY" or not.

 

 

i like this attitude and i completely agree. a lot of the time when i'm making max patches i'll get in over my head with some dsp shit and realize i'm probably not doing it the "right" way at all any more, but if i keep going in the direction i've started, and balance further research with further experimentation (always with an emphasis on just getting nice sounds), i usually end up with something i love.

 

i started reading this DSP textbook today and i loved this quote in the intro: "Learning digital signal processing is not something you accomplish, it's a journey you take." i think this is really important to remember when doing any kind of programming or patching - you're on a journey and constantly gaining new ground, and it's often foolish to assume you've gotten as far as you can get in any one direction.

 

haven't listened to the podcast yet but i'm about to put it on and go for a walk. cheers everybody :beer:


I'm not THAT fat, but I still wear black shirts a lot, because they're very slimming. Maybe pick up some tony black Polo shirts and pop the collar a little, you'll feel like a million bucks.

LOL thanks for the pro tip, cuz. Good lookin' out! :cool:

 

Yeah I think all those environments are great and I totally agree that it's way better to just hit the ground running than do it the "right" way. I didn't mean to discourage people from doing it the "right" way, although I see how what I said came across that way. But I also think it's important to consider context in these situations: you would use a different word to describe the same activity in one conversation than in another. When talking to a coder, building in Max/MSP is probably "patching", and maybe to a non-coder you might say "programming", although I personally wouldn't. Maybe "farting around on the computer trying to find some cool sounds", etc.

 

What the fuck is a phasor by the way? Why would one want one?

 

I think I picked up this terminology when I was messing around w/ PD in 2003/2004. It may not be standard. Some documentation used this to describe a ramp/saw wave. The difference between a phasor and a normal ramp is that a phasor is not interpolated or bandlimited, because it's not intended for use directly as an audio signal (they sound kinda nasty) but rather as a modulation source, especially to read a data table.
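For what it's worth, the ramp I'm describing can be sketched in a few lines of Python. The function name and arguments are mine, purely illustrative:

```python
# Minimal sketch of a phasor: a non-bandlimited ramp that climbs from
# 0.0 to 1.0 once per cycle and wraps back around. No interpolation,
# no bandlimiting - it's a modulation source, not an audio signal.

def phasor(freq_hz, sample_rate, n_samples, phase=0.0):
    """Generate n_samples of a 0..1 ramp at freq_hz."""
    out = []
    inc = freq_hz / sample_rate       # phase increment per sample
    for _ in range(n_samples):
        out.append(phase)
        phase = (phase + inc) % 1.0   # wrap to 0 at the top of each cycle
    return out

# At 4 Hz with a 16 Hz sample rate, one cycle spans 4 samples:
# [0.0, 0.25, 0.5, 0.75, 0.0, ...]
```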

 

Think about how samples are read from a table or array - this is a very useful thing.

 

Modulate the playback read "head" position by the ramp, and its slope (a function of both amplitude and frequency, not either by itself) determines the speed of playback. Assuming the ramp's default amplitude rises steadily from 0 to 1 and then returns to 0 each cycle, you can multiply the value of the ramp by the difference between the loop start and end point, add it to the loop start point, and this value will describe exactly what position in the array/table needs to be read.
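That scale-and-offset math is just one line; here it is as a tiny Python sketch (names are mine, just for illustration):

```python
# Map a 0..1 phasor value onto an absolute table position:
# scale the ramp by the loop length, then offset by the loop start.
# The result may be fractional - that's handled by interpolation on read.

def read_position(phase, loop_start, loop_end):
    """Convert a 0..1 phasor value to a (possibly fractional) table index."""
    return loop_start + phase * (loop_end - loop_start)
```

So halfway through the ramp, a loop running from sample 100 to sample 200 reads position 150.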

 

You can even use floating points here to weigh/mix between adjacent table values, which I think is what Reaktor does automatically.
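A rough sketch of that fractional-index read, assuming plain linear interpolation (which is, I believe, roughly what Reaktor's interpolated table read does for you):

```python
# When the read position lands between two table slots, weight the two
# neighbouring values by the fractional part of the position.

def interp_read(table, pos):
    """Linearly interpolate between table[i] and table[i+1] at fractional pos."""
    i = int(pos)                  # integer part: left neighbour
    frac = pos - i                # fractional part: mix amount
    j = (i + 1) % len(table)      # wrap at the table boundary
    return table[i] * (1.0 - frac) + table[j] * frac
```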

 

In fact this was exactly what I was trying to do in Reaktor: to emulate my old Tascam 424mkII 4-track by creating a 4-channel sampler/recorder/player with variable speed. But the interpolation got fucked up when I tried to write to the table faster than the sample rate. I am not aware of any solution to this in Reaktor. So I did sorta wipe my frustration at Reaktor on this thread. My bad, yo.


In fact this was exactly what I was trying to do in Reaktor: to emulate my old Tascam 424mkII 4-track by creating a 4-channel sampler/recorder/player with variable speed. But the interpolation got fucked up when I tried to write to the table faster than the sample rate. I am not aware of any solution to this in Reaktor. So I did sorta wipe my frustration at Reaktor on this thread. My bad, yo.

 

I've been listening to a lot of Prince lately and it's sort of become a pet peeve of mine that there isn't awesome, rock-solid "varispeed" built into like, every audio application. But I see what you're driving at.

 

I try to avoid audio tables because as you hint at they are kind of a pain in the dick. But I guess yeah you'd need to latch the audio write to whatever "effective sample rate" the tape deck was currently spinning at. And you'd only get "effective sample rate"s worth of fidelity when recording to a track on the deck.

 

I find Reaktor's "core level" somewhat incomprehensible, but if you haven't tried it out recently, I'd recommend it. You can definitely get multiple "cpu cycles" worth of calculation inside a single "sample cycle" of a core cell; in fact, one of the huge pains in the dick of it is latching the cell boundary back out at the sample rate. One of the very first core tutorials is very similar to what you're saying, btw: building a rough un-bandlimited saw oscillator using a bunch of flops and stuff. It really made me think of things differently.

 

I'd be interested to know if you could build the deck with some core cells on the front end. You could even probably build some sort of oversampling into a front end so that you are always sampling at 44.1 (or whatever) and then interpolating down to the "effective sample rate" that the table is currently running at. That would probably get closer to a true analog varispeed. Otherwise you're basically downsampling with no interpolation, and it's going to sound like a decimator if it's running at a speed that is not a multiple of the current project sample rate.

 

But for real, fuck audio tables.


Yeah I've just resigned myself to using the actual 424 itself for varispeed. I'm using it as my main mixer anyway (yeah, my setup probably looks incredibly wack to most peeps). Ironically my intense interest in varispeed tapered off after I realized I couldn't do it in Reaktor. Even more ironically, I just realized I could actually do varispeed in Reaper, which has been my main DAW since like 2007.

 

This tapering off of interest tends to happen after I figure out how to do some cool shit. Maybe I am thinking of music too much like a goal/accomplishment/destination and not like a journey. The journey thing seems cool.

 

Maybe that core shit will actually help solve my problem. Worth a shot!


I hear ya. One thing I find encouraging about people like Ae (and "makers" in general, to use a really shitty Boing Boingy term) is that if you do enough cool shit, eventually your brain will start making good interesting cross-disciplinary connections and then you can start being really interestingly creative. That's the theory I guess.

 

Yeah you could probably do it without core, I reckon. I mean the issue is that there's no such thing as an "interpolated write", right? You have to write to an integer location in the table? So you could just write at the nearest location for whatever sample rate is currently running and sound all decimated and shit, or you'd have to do some sort of interpolation of your own and then write sort-of weighted values on demand. Which is a land of somewhat-serious DSP knowledge. It'd be a fun challenge though.

 

I'm curious as to how you think a "real" programming language would have solved this problem though. There's still limited resolution in your audio buffer and you still have to decide, if your interface is at sample 44,100, but your "tape deck" is currently helpfully playing sample 120,643.72861, where to write the incoming sample to. You still have a finite audio buffer in either Reaktor or C++ or whatever. What specifically about Reaktor gave rise to this problem?


I hear ya. One thing I find encouraging about people like Ae (and "makers" in general, to use a really shitty Boing Boingy term) is that if you do enough cool shit, eventually your brain will start making good interesting cross-disciplinary connections and then you can start being really interestingly creative. That's the theory I guess.

 

For sure. This is why I like noise, it's like a wide open field for the cross-referencing part of my brain to roam free.

 

Being "multidisciplinary" can lead to dilettantism, though, I think. That's where I feel I've been for a while. I guess this isn't really a bad thing, except for the existential malaise and the lack of satisfaction you'd get from actually accomplishing something impressive. Spending time on WATMM isn't helping though :P

 

you could just write at the nearest location for whatever sample rate is currently running and sound all decimated and shit

 

I actually got this working and that's exactly what it did when I tried to write faster than the sample rate. It was crusty and sloppy in a cheap sounding way that I didn't like. Sort of "sandy" I guess.

 

Yeah you could probably do it without core, I reckon. I mean the issue is that there's no such thing as an "interpolated write", right?

 

I don't see why not. I mean if you can do an interpolated read, you should be able to do an interpolated write.

I'm curious as to how you think a "real" programming language would have solved this problem though. There's still limited resolution in your audio buffer and you still have to decide, if your interface is at sample 44,100, but your "tape deck" is currently helpfully playing sample 120,643.72861, where to write the incoming sample to. You still have a finite audio buffer in either Reaktor or C++ or whatever. What specifically about Reaktor gave rise to this problem?

 

I don't think the floating points are the problem, I think that is actually the easier thing to deal with and in fact Reaktor tries to take care of that already.

 

The problem is, when recording at speeds faster than the table is supposed to be read at, what do you do with the in-between values? I think the simplest way is linear interpolation. You just treat each position indicated by the phasor like the endpoint of a line, so that for each new phasor value a line is drawn from the previous position to the current one, creating a linear transition between the samples represented by those points. This is something you can't do in Reaktor as far as I can tell.

 

In a programming environment that offered for-loops, that's exactly what I'd use - I'd iterate (wrapping around the table end, of course) between the previous table position and the current table position, creating a linear interpolation. I'm sure I'd have to do something funky with floating point values, but I just had a beer and I don't want to think too hard about it.
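Something like this, I'd imagine - a rough Python sketch of that for-loop write, assuming the write head only moves forward (direction detection is a separate problem, see below) and rounding positions to the nearest slot; all names are mine:

```python
# "Interpolated write": when the write head jumps more than one table
# slot between samples, fill every skipped slot with a value on the
# straight line from the previous sample to the current one.

def interp_write(table, prev_pos, pos, prev_val, val):
    """Write val at pos, ramping linearly from prev_val over skipped slots."""
    i0, i1 = int(round(prev_pos)), int(round(pos))
    steps = i1 - i0
    if steps == 0:
        table[i1 % len(table)] = val      # head didn't move a full slot
        return
    for k in range(1, steps + 1):
        t = k / steps                          # 0..1 along the gap
        v = prev_val + t * (val - prev_val)    # linear transition
        table[(i0 + k) % len(table)] = v       # wrap at the table end
```

E.g. jumping the head from slot 0 to slot 4 while the signal rises from 0.0 to 1.0 fills slots 1-4 with 0.25, 0.5, 0.75, 1.0 instead of leaving them stale.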

 

Another problem that just occurred to me today is that if the phasor's frequency is faster than Nyquist, there's no way to tell which direction it's going in, and not only that, you could end up overwriting huge swaths of samples if you miscalculate the direction. So you could just set Nyquist as the maximum phasor frequency, or better yet Nyquist-1 or something.

 

The only reason that I bitched about doing this in Reaktor is that I had actually tried it in Reaktor because I had the impression that it would be a good environment to try this in.


Oh, RJ with the fanciness. I really need to start keeping up with the Reaktor UL again.

 

While you were doing that, I was curious so I checked. The write pointer of the audio table always rounds down. So I imagine if you used it as a varispeed it would be like decimating your signal on input if it was at a weird pitch? But maybe not, I should try it.


Guest mafted

yeah, sorry, but Max/MSP and Csound are definitely considered programming.

 

That is like saying that C++ and Delphi aren't programming.

 

looks funny coming from that avatar.. like totally dude.. lol


Second listening of the podcast brought me a different view on the programming subject.

 

Sean did criticize object-oriented programming in general AND encapsulation WITHIN Max.

He also mentioned how surprised he was by the amount of work required for just one oscillator or filter.

 

All that makes me wonder even more what language he uses... Could it be something like C? Is he that crazy?!


Guest sickboy

said he was having problems with (memory) leakage at one point, so yeah, almost certainly

why is it crazy


why is it crazy

 

 

Too much work I guess! Nothing else.

 

Yeah, isn't C what you generally use to make externals for max/msp ?

 

 

Yes. Good point!


  • 11 months later...

After listening to it again... Interesting talk. Sean mentioned "proper DSP programming", memory leakage, and something about how he's surprised how much work is required just for basic things like an oscillator "...like 500 sentences..." so he's definitely NOT talking about Max, SuperCollider or Csound. But what then? We can only guess... Not that I have to know.


This podcast had potential: I'm very interested to hear about my favorite artist's non-musical side-projects, but that old man was fucking annoying. I hate when people talk *at* you, and he clearly does this to everyone around him. I accidentally listened to the 03 September 2011 episode (because it mentioned Autechre), and he did the same thing there, too.

 

"I was there and saw Hillary Clinton!" For fucking fucker's fuck face sake!


Guest pixelives

It's just nice to hear a Sean that is very different than the apprehensive guise he puts up in most interviews (which is understandable considering how journalists can twist words). Has lots to say, that one, quality.

