
XHK - XH HX


auxien


XHK_3 (tracks 81-120, 1hr 4min) - this one starts off very well, both sound/visuals in 81-90.

 

[image: tumblr_pi9ob7y97m1xq095ko1_400.png]

I keep coming back to this one so far, couple listens. #96 :catbed: takes me back to ye old oz soap.

 

*for those loath to waste their time decoding images who would much rather just listen to these research notes. but hats off to you fellas, keep on geekin' on

Edited by Roo

 

> XHK_3 (tracks 81-120, 1hr 4min) - this one starts off very well, both sound/visuals in 81-90.
>
> [image: tumblr_pi9ob7y97m1xq095ko1_400.png]
>
> I keep coming back to this one so far, couple listens. #96 :catbed: takes me back to ye old oz soap.
>
> *for those loathe to waste their time decoding images and would much rather listen to these research notes, but hats off to you fellas, keep on geekin' on

#98

:datboi:


#108 :music: I think that is one of the keepers.

 

The longer tracks seem longer for a reason (#90, a quality one, is the previous track to go over 4 mins), as if there is some sort of manual routing capture going on (we'll let this groovy one play out a bit longer, tread it out casual like). Or at least they offer more immersion.


> in addition to the encrypted images, it would be interesting to discover the relationship between these and the sound: there are obvious sync points, and since I doubt the editing was done by hand for thirteen hours, I am inclined to believe that the sound reacts to the images thanks to some mysterious algorithm...

 

i don't believe one is the direct source of the other, they are most likely just generated by the same patch with some interconnection.

 

i mean what correlations image->sound do we have for sure? some kinda UPIC thing is out of the equation, colors don't seem to be crucial, neither is pixel density or distribution.

 

on the other hand sound obviously isn't generating images directly either as these are from external sources. so the question is how does it affect the way those clips are played back?

 

well volume is turning them on / off obviously but apart from that... it's hard to say.

 

sometimes it seems like more drastic / sudden frequency changes affect image selection (like 94 - 0:50), but other times sound is going wild and the movie seems to just go on like before...
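One way to actually test the "sudden frequency changes trigger clip switches" idea would be to compute spectral flux on the audio and line its peaks up against the visible cuts. A toy detector, purely a hypothetical analysis tool and not part of the release:

```python
import numpy as np

def spectral_flux(signal: np.ndarray, frame: int = 256) -> np.ndarray:
    """Per-frame sum of positive magnitude changes between FFT frames."""
    n = len(signal) // frame
    mags = [np.abs(np.fft.rfft(signal[i * frame:(i + 1) * frame])) for i in range(n)]
    flux = [0.0]
    for prev, cur in zip(mags[:-1], mags[1:]):
        flux.append(float(np.maximum(cur - prev, 0.0).sum()))
    return np.array(flux)

# A signal that jumps pitch halfway through produces a flux spike at the jump.
t = np.arange(4096) / 44100.0
sig = np.where(t < t[2048], np.sin(2 * np.pi * 200 * t), np.sin(2 * np.pi * 1200 * t))
flux = spectral_flux(sig)
# flux.argmax() points at the frame containing the pitch change.
```

Frames where flux spikes but no clip change happens on screen (or vice versa) would be evidence for or against the theory.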


> sometimes it seems like more drastic / sudden frequency changes affect image selection (like 94 - 0:50), but other times sound is going wild and the movie seems to just go on like before...

If the original video soundtrack is involved then that could explain the latter, but could you point to an example or two where it happens? I haven't really seen it, but wasn't looking for it either.


 

>> sometimes it seems like more drastic / sudden frequency changes affect image selection (like 94 - 0:50), but other times sound is going wild and the movie seems to just go on like before...
>
> If the original video soundtrack is involved then that could explain the latter, but could you point to an example or two where it happens? I haven't really seen it, but wasn't looking for it either.

 

like 3 - 0:23: there's a rather "stabby" section, but those individual spikes don't seem to be recognized, just the initial switch into said section. also, only when they completely stop does another clip seem to be loaded (yellowish, blue before), despite the fact that the same cluster as before seems to be playing. of course it could also be the same clip as before that just changed scenery in the meantime... dunno.

 

i think changes in the lower end have the most striking impact... can't check properly tho atm.


https://www.youtube.com/watch?v=9ixjqKB31t8&index=248&t=0s&list=PL5ow3ZyXAhopA4NUdSIuAfTCvWTemihut

 

the first few seconds of this one just rocked my boat quite majorly.

 

also, another example that's quite lively and irregular in parts without apparent visual correspondence.

 

3 days of listening to hardly anything else. death, pierce me.


 

>> in addition to the encrypted images, it would be interesting to discover the relationship between these and the sound: there are obvious synchrons, and since I doubt editing has been done for thirteen hours, I am inclined to believe that the sound reacts to images thanks to some mysterious algorithm...
>
> i don't believe one is the direct source of the other, they are most likely just generated by the same patch with some interconnection.
>
> i mean what correlations image->sound do we have for sure? some kinda UPIC thing is out of the equation, colors don't seem to be crucial, neither is pixel density or distribution.
>
> on the other hand sound obviously isn't generating images directly either as these are from external sources. so the question is how does it affect the way those clips are played back?
>
> well volume is turning them on / off obviously but apart from that... it's hard to say.
>
> sometimes it seems like more drastic / sudden frequency changes affect image selection (like 94 - 0:50), but other times sound is going wild and the movie seems to just go on like before...


you have expressed my question clearly, point by point.

We know the images have the 80s videos as their source, so we must exclude any purely generative process along the lines of a Windows media player visualizer, so to speak. At the same time, the video variations are often perfectly coordinated with variations in the sound, even very short accents. For that to happen, in theory, the original source (the 80s videos) would have to be edited so that frames and colors change in unison with the sound, but nobody wants to believe there is a human being on this earth who assumed such a burden for 13 hours. It would be a shocking revelation about the profound nature of the universe; I would come out of it changed. It is also true that, from time to time, the video freezes, as if paused, while the sonic storm goes on.

For my part, even if I understood how the color beams were generated, I still cannot understand how you managed to extract frames that contained the entire sequence, to be contracted until a complete, meaningful sign appeared. The way you managed to reverse the process leaves me in the darkest mystery. And it is even less clear to me how, from the barely decipherable relics of a frame, you were able to trace the original videos: infinitesimal details of a compilation of advertising almost forty years old that not only Google, but even HAL9000 and its twins, would not be able to intercept. Along with the images, do you get any codes? This fact amazes me more than the videos themselves. I feel the knowledge of the obsolete (me).


 

 

> I feel the knowledge of the obsolete (me).

 

 

I know that feel too AE35unit.

 

 

:shrug: all I can do is wait for someone to reveal the arcana to me at the end of the story; in the meantime, to distract myself from being useless, I listen with a grim expression to these 13 Hours of Autechre with a touch of 7 Minutes Of Nausea :music:


 

 

 

> you have expressed my question clearly, point by point.
>
> We know the images have the 80s videos as their source, so we must exclude any purely generative process along the lines of a Windows media player visualizer, so to speak. At the same time, the video variations are often perfectly coordinated with variations in the sound, even very short accents. For that to happen, in theory, the original source (the 80s videos) would have to be edited so that frames and colors change in unison with the sound, but nobody wants to believe there is a human being on this earth who assumed such a burden for 13 hours. It would be a shocking revelation about the profound nature of the universe; I would come out of it changed. It is also true that, from time to time, the video freezes, as if paused, while the sonic storm goes on.
>
> For my part, even if I understood how the color beams were generated, I still cannot understand how you managed to extract frames that contained the entire sequence, to be contracted until a complete, meaningful sign appeared. The way you managed to reverse the process leaves me in the darkest mystery. And it is even less clear to me how, from the barely decipherable relics of a frame, you were able to trace the original videos: infinitesimal details of a compilation of advertising almost forty years old that not only Google, but even HAL9000 and its twins, would not be able to intercept. Along with the images, do you get any codes? This fact amazes me more than the videos themselves. I feel the knowledge of the obsolete (me).

 

 

 

Maybe the sound is the video data. Not entirely unreasonable to assume some Max/MSP magic could handle the rigorous manual labour of what you suggested: the video signal (as a stream of numbers) sent simultaneously to Jitter for video processing and to MSP for audio processing. Similar atonal noise is made by loading images into Audacity as raw data.
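The Audacity observation is easy to reproduce in code. A minimal sketch (hypothetical, just to illustrate the idea): treat an image's raw pixel bytes as 8-bit unsigned PCM samples, the same thing Audacity's "Import Raw Data" does.

```python
import numpy as np

def image_bytes_to_audio(pixels: np.ndarray) -> np.ndarray:
    """Flatten uint8 pixel data into float samples in [-1, 1]."""
    flat = pixels.astype(np.float64).ravel()
    return (flat - 128.0) / 128.0  # center 8-bit data around zero

# A toy "image": a horizontal gradient repeated over 64 rows.
img = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
audio = image_bytes_to_audio(img)
# Structured images give periodic, buzzy tones; noisy images give hiss.
```

Repeating row structure becomes a periodic waveform, which is why images "played" this way sound like harsh atonal buzz rather than pure noise.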

Edited by Embers

Yeah I'm definitely still interpreting this as scan lines from the videos controlling parameters of the sound, if only because we know for a fact the video isn't synthesized from scratch but the audio might be.

 

In post #61 jaderpansen cites number 3 as an example where sound and image diverge but I don't see it. The brighter sounds starting at 00:23 seem tied to the onset and offset of the part where the line goes dashed (and the smeared pattern becomes striped). Whereas the solid blocks of color seem to "pull out all the stops" or almost act like a dry/wet mix knob.
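If scan lines really are controlling parameters of the sound, the mapping could be as simple as per-line statistics driving a synth. A purely speculative sketch (the function and the mapping are invented here, not Autechre's patch): average brightness sets amplitude, and contrast along the line spreads the pitch.

```python
def scanline_to_params(row, base_hz=110.0):
    """Map one pixel row (0-255 values) to a (amplitude, frequency) pair."""
    mean = sum(row) / len(row)                        # overall brightness
    var = sum((p - mean) ** 2 for p in row) / len(row)  # contrast along the line
    amp = mean / 255.0                                # brighter line -> louder
    freq = base_hz * (1.0 + var / 5000.0)             # busier line -> higher
    return amp, freq

flat_row = [30] * 64          # uniform dark line
striped_row = [0, 255] * 32   # high-contrast dashed line
a1, f1 = scanline_to_params(flat_row)
a2, f2 = scanline_to_params(striped_row)
# The striped (dashed) line yields a louder, higher parameter pair.
```

Under a mapping like this, the "line goes dashed" moment at 00:23 would indeed coincide with brighter, busier sound, consistent with what's described above.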


> For my part, even if I understood how the color beams were generated, I still cannot understand how you managed to extract frames that contained the entire sequence, to be contracted until a complete, meaningful sign appeared. The way you managed to reverse the process leaves me in the darkest mystery. And it is even less clear to me how, from the barely decipherable relics of a frame, you were able to trace the original videos: infinitesimal details of a compilation of advertising almost forty years old that not only Google, but even HAL9000 and its twins, would not be able to intercept.

modey's realization that it was a slit-scan type thing was very clever, but once he figured that out it was all pretty straightforward - extracting the images is relatively trivial, and it obviously said SUPERSUN HOLIDAYS 84 in there, so you don't really need a Google-level AI to figure out it's old TV
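For anyone curious, the slit-scan decoding described here amounts to stacking one pixel column per frame: each video frame carries a single vertical slice of the hidden still, so laying successive slices side by side rebuilds it. A toy reconstruction, with frame sizes, the chosen column, and the data all invented for illustration:

```python
import numpy as np

def decode_slitscan(frames: list[np.ndarray], column: int) -> np.ndarray:
    """Stack column `column` of each frame into a (height, n_frames) image."""
    return np.stack([f[:, column] for f in frames], axis=1)

# Fake "video": 100 frames of height 50, where column 60 of frame t
# carries column t of a hidden gradient image.
hidden = np.linspace(0, 255, 100)[None, :].repeat(50, axis=0)
frames = [np.zeros((50, 120)) for _ in range(100)]
for t in range(100):
    frames[t][:, 60] = hidden[:, t]

recovered = decode_slitscan(frames, column=60)
# recovered matches the hidden image exactly in this idealized case.
```

The real videos smear and distort the slices, so the actual extraction presumably needed more cleanup than this idealized version, but the core stacking step is this simple.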


> Yeah I'm definitely still interpreting this as scan lines from the videos controlling parameters of the sound, if only because we know for a fact the video isn't synthesized from scratch but the audio might be.
>
> In post #61 jaderpansen cites number 3 as an example where sound and image diverge but I don't see it. The brighter sounds starting at 00:23 seem tied to the onset and offset of the part where the line goes dashed (and the smeared pattern becomes striped). Whereas the solid blocks of color seem to "pull out all the stops" or almost act like a dry/wet mix knob.

 

Yeah, on second thought the images controlling the audio to a certain degree makes a lot more sense from a working-process perspective.

 

"Yo, so we got this organ noise generator thingy, let's feed it with data from our old VHS collection for a while."

 

seems more likely than

 

"Yo, we just did 444 random organ noise jams, let's check how they all look in this jitter patch of ours."

 

But my point still stands: it only seems to control things in a rather crude / macro manner (or the audio has a rather tight set of parameters), since different images seem to lead to similar results, while at other times the audio goes wildly out of the ordinary while the images remain somewhat stagnant.

 

Dunno, did you check out 247 from above? Another good example in my book.

 

Anyway the "smearing" seems to be happening as an effect just on top of it all.

Edited by jaderpansen

^yeah

 

My initial thoughts are that the effects (reverb, delays, panning, etc.) are pretty well directly tied to the visuals, but the underlying synths are being generated separately as part of the program. There are moments where busy melodies seem to have no relation to the visuals. And then there are 'glitchy' parts where the effects are constraining the underlying synths/tones, and the glitchy parts seem to happen only when there's stark visual contrast: things like lines and bars of color on white, shit like that.

 

I hope that makes sense.


it's possible that the sound is mostly the original audio of the source videos being processed in an extreme way.

so just as the original video is being stretched & smeared, the original audio is being stretched & smeared as well. some sort of extreme granular / vocoder sort of effect, with the source audio being vocoded against their organ patch, which is itself reacting to the color / brightness of the original vids.
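The vocoder half of this theory can be sketched as a single channel-vocoder step: impose the modulator's coarse per-band spectral envelope on a carrier. Everything here (band count, block size, the toy signals) is an assumption made up for illustration, not how the actual videos were produced.

```python
import numpy as np

def vocode_block(carrier: np.ndarray, modulator: np.ndarray, bands: int = 8) -> np.ndarray:
    """Rescale each frequency band of `carrier` to the modulator's band energy."""
    C = np.fft.rfft(carrier)
    M = np.fft.rfft(modulator)
    edges = np.linspace(0, len(C), bands + 1, dtype=int)
    out = C.copy()
    for lo, hi in zip(edges[:-1], edges[1:]):
        env = np.abs(M[lo:hi]).mean()                 # modulator band energy
        norm = max(np.abs(C[lo:hi]).mean(), 1e-12)    # avoid divide-by-zero
        out[lo:hi] *= env / norm                      # rescale carrier band
    return np.fft.irfft(out, n=len(carrier))

# Toy carrier (the "organ patch") and a noisy modulator (the "source audio").
t = np.arange(1024) / 44100.0
organ = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
speech_like = np.random.default_rng(0).normal(size=1024)
smeared = vocode_block(organ, speech_like)
```

Run block by block over the source audio, with overlap and long analysis windows, a step like this would produce exactly the stretched, smeared quality described above.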

