Clocolan - Incide (August 17, 2023)


01. hexis 04:13
02. projectors-introjectors 03:28
03. perfect little tulpas 01:17
04. recurse 05:29
05. thought forms 02:08
06. i want 04:20
07. protoGAN 02:16
08. simulacrimosum 02:03
09. rascal 03:34
10. making a mind 01:34
11. the Chinese room 01:13
12. phenomenal states 01:11
13. hypercathexis 03:44
14. archetypes += n 04:18
15. love me back 00:53
16. tensor[girl] 03:04
17. i am not here 04:30
18. blackbox 04:45
19. ex sui 02:15
20. meta-egregore 02:56
21. agency 00:42
22. if we can still feel 04:10
23. incide 01:02

https://clocolan.bandcamp.com/album/incide


This release is available in full if you purchase it through Bandcamp.

 

Bleep writeup:

clocolan presents his latest album Incide, an active internal landscape generated by the friction between human and machine, and the increasingly blurred line separating the two. Having explored the deepest crevices of the mind and psyche through conceptual and narrative releases, clocolan turns his sights towards the equally curious other that has defined this decade’s conversations so far.

Similar to clocolan’s previous album Empathy Alpha, a hybrid sci-fi soundtrack / tense audio play, eerie spoken word passages slink throughout the myriad tracks. Yet on Incide, the machine sings: operatic vocals wail and fade into the night on the opening ‘hexis’, introducing a rusty, dystopian atmosphere where obliterated shards of rhythm and the lyrics “I am you and you are me” glitch in unison. The songs are interspersed with vignettes of cold ambience and lurching sound design, and with vulnerable piano pieces like ‘perfect little tulpas’ that seem to detune as they play, ranging from romantic to funereal.

The intriguing world clocolan has developed is embedded deep within every sound: stuttering machine babbles and industrially sparking beats, AI whalesong recorded in pixelated depths, and illegible whispers sneaking beneath a mixture of robotic beatboxing, bells, and chimes. Sighing vocals manifest machine dreams on ‘i want’, with intricately detailed percussion chopped on a factory line. ‘rascal’ features heaving metallic rhythms, chirping electronics, and fragments of audio material converging while ominous tones disperse.

Through abstracted synths, energetic rhythms, and virtual AI voices, clocolan makes his visceral cyborgian visions known.


  • 2 weeks later...

sgood no? smell teh glass?
windowlicker // i want dele/to/i back elektronauts teh delete toyca toise meh toule mond hwha? how fandom can de internets not be? sfumn


It's definitely a darker, more discordant record. He posted some interesting info on social media the other day:

Quote
'incide' was a look at how #AI invents identity but also blurs and corrupts it through iteration: prompt an #AI engine with an input and you get an output; prompt it again using that output and you get another, slightly different output—a variation. Which one is the real McCoy?
Likewise, our virtual identities are also inventions, endlessly blurred and corrupted through iteration and variation. Those identities feel strangely artificial—in the same way that #AI generations do.
What is "real"? (And does it matter whether the inventor is real or not?)
OK, not to get all Academia on this...
My initial strategy was to explore [then still nascent] #AI technology and hopefully find novel ways to express that idea: to use it as a sound treatment tool that might somehow obfuscate the "identity" of real voices and instruments.
That took me down several rabbit holes...
The first discovery was Magenta DDSP, a DAW "timbre transfer" plugin for which you could train your own models on Google Colab. Training models is tedious, trial-and-error work, but I began noticing some behaviors in the "errors".
The little audio artifacts and playback oddities ended up being useful as a way to control performance (more on this below). Crucially, I got good at sussing out what would make a blah model vs an interesting one: high vs low variation in the training data was key.
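As a rough illustration: a DDSP-style timbre-transfer model is conditioned on pitch and loudness curves extracted from the input audio, and glitches in those curves are one place the interesting "errors" come from. A minimal Python sketch of that analysis step using librosa (the filename is hypothetical, and this is not the Magenta DDSP API itself):

```python
# Sketch: the control curves a DDSP-style timbre-transfer model is
# conditioned on -- fundamental frequency (f0) and loudness -- extracted
# with librosa. Illustrative only; not the Magenta DDSP API.
import librosa
import numpy as np

y, sr = librosa.load("dialog_take.wav", sr=16000, mono=True)  # hypothetical file

# f0 via probabilistic YIN; NaN wherever a frame is unvoiced
f0, voiced_flag, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

# Frame-wise loudness as RMS in dB
rms = librosa.feature.rms(y=y)[0]
loudness_db = librosa.amplitude_to_db(rms, ref=np.max)

# A trained model maps these curves onto the *target* timbre; unstable
# f0 tracking is exactly where the glitchy artifacts creep in.
print(f0.shape, loudness_db.shape)
```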
I wrote a lot of dialog about identity and reality for the album. Those recordings produced a model that I used on several tracks. On 'hexis' and 'i want', rather than standard vocal doubling, I used the AI model, tweaking pitch, tone, and timing.
I discovered that you could control how the plugin responded to incoming audio: routing a Moog into it while altering attack, tone, and vibrato. The plugin reacted to changes by generating predictable elements in the model—vocoder-like. See 'if we can still feel'.
This was also used as a rhythmic device: by first routing the incoming audio through a filter with a square wave LFO, the plugin would react to changes in tone at tempo. This led me to create a model using the Moog itself and applying it as a rhythmic element (same track).
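A minimal sketch of that tempo-synced square-wave gate, in plain NumPy rather than the actual DAW routing (the filename and tempo are assumptions):

```python
# Sketch: gate audio with a tempo-synced square-wave LFO (16th notes at
# 120 BPM) so whatever listens downstream hears rhythmic tone changes.
# Plain NumPy illustration, not the actual plugin routing.
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("moog_drone.wav", sr=None, mono=True)  # hypothetical file

bpm = 120
rate_hz = bpm / 60 * 4                      # 16th-note rate in Hz
t = np.arange(len(y)) / sr
gate = (np.sin(2 * np.pi * rate_hz * t) >= 0).astype(float)

# Smooth the gate edges a little so they click less
kernel = np.hanning(64)
gate = np.convolve(gate, kernel / kernel.sum(), mode="same")

sf.write("moog_gated.wav", y * gate, sr)
```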
Concurrently, I had been experimenting with audio spectrograms, first converting audio to image via @convertingnow's Audio Spectrogram Creator and back again with this converter: https://nsspot.herokuapp.com/imagetoaudio/. Essentially a way to recast audio with accumulated errors.
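A rough sketch of that round trip, assuming librosa and soundfile: 8-bit quantization stands in for the lossy image step, and Griffin-Lim re-estimates the phase the image never stored (filenames are placeholders; the web tools above do their own variants of this):

```python
# Sketch: audio -> "image" -> audio. The 8-bit quantization stands in for
# saving a spectrogram as an image; Griffin-Lim re-estimates the phase
# that the image never stored. Filenames are placeholders.
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("source.wav", sr=22050, mono=True)

S = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))

# Lossy image step: squash magnitudes to 8-bit greyscale and back
S_img = np.round(255 * S / S.max()).astype(np.uint8)
S_back = S_img.astype(np.float32) / 255.0 * S.max()

# Phase was discarded, so estimate it
y_back = librosa.griffinlim(S_back, hop_length=512)
sf.write("source_degraded.wav", y_back, sr)
```

Each pass through the loop compounds the quantization and phase errors, which is where the accumulated variations come from.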
I used the spectrogram output in several ways. One was to append all its variations to the original recordings and have Kontakt chop it up during playback. 'projectors-introjectors' is a good example of this.
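A sketch of the append-and-chop idea, done offline in NumPy as a stand-in for Kontakt's slicing (filenames are placeholders):

```python
# Sketch: append the degraded variations to the original, then play back
# random fixed-length slices. Offline NumPy stand-in for Kontakt slicing.
import numpy as np
import librosa
import soundfile as sf

rng = np.random.default_rng(17)
files = ("orig.wav", "variation1.wav", "variation2.wav")  # hypothetical files
pool = np.concatenate([librosa.load(f, sr=22050, mono=True)[0] for f in files])

slice_len = 22050 // 4  # quarter-second chunks
starts = rng.integers(0, len(pool) - slice_len, size=32)
chopped = np.concatenate([pool[s:s + slice_len] for s in starts])
sf.write("chopped.wav", chopped, 22050)
```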
But one of the most interesting applications was using @harmonai_org's #DanceDiffusion Colab to interpolate the spectrograms with the original audio—essentially isolating shared frequencies. (Reminded me of the spectral shapers in Tom Erbe's Soundhack.)
Interpolation was a chore and most of it was unusable, but the speech elements in 'archetypes += n' and 'hypercathexis' are all interpolation using this method.
(The opening bars use timbre transfer between FM tones and a tenor voice model.)
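The "shared frequencies" effect can be approximated in plain DSP terms by keeping only the energy present in both sources, i.e. a bin-wise minimum of the two magnitude spectrograms. A sketch of that analogy only (not the Dance Diffusion interpolation itself; filenames are placeholders):

```python
# Sketch of the "shared frequencies" analogy: keep only energy present in
# BOTH sources via a bin-wise minimum of their magnitude spectrograms,
# reusing the first source's phase. Not the Dance Diffusion process.
import numpy as np
import librosa
import soundfile as sf

a, sr = librosa.load("original.wav", sr=22050, mono=True)        # placeholders
b, _ = librosa.load("spectro_variant.wav", sr=22050, mono=True)
n = min(len(a), len(b))

A = librosa.stft(a[:n], n_fft=2048, hop_length=512)
B = librosa.stft(b[:n], n_fft=2048, hop_length=512)

shared = np.minimum(np.abs(A), np.abs(B)) * np.exp(1j * np.angle(A))
sf.write("shared.wav", librosa.istft(shared, hop_length=512), sr)
```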
The 'archetypes' example above used another element that became central to 'incide'. I had latched onto the idea of using piano resonance as a way to color instruments and reverb. I also recorded variations on these: singing into the piano soundboard while holding cluster chords.
All of the piano works use one or more of the piano resonance IRs, but 'simulacrimosum' is an example of the vocal/soundboard resonance effect (triggered here by snippets of dialog).
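Applying a resonance IR is ordinary convolution reverb under the hood; a minimal sketch with scipy (filenames are placeholders):

```python
# Sketch: coloring a sound with a piano-resonance impulse response is
# plain FFT convolution, the same operation a convolution reverb performs.
import numpy as np
import librosa
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = librosa.load("dialog_snippet.wav", sr=48000, mono=True)
ir, _ = librosa.load("piano_resonance_ir.wav", sr=48000, mono=True)

wet = fftconvolve(dry, ir)
wet /= np.max(np.abs(wet))  # normalize to avoid clipping

sf.write("dialog_resonant.wav", wet, sr)
```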
#DanceDiffusion also became my noise generator: plugging in noise samples like hiss, vinyl, or just white noise produced very interesting and usable material. Without a noise model the engine seemed to default to the closest matching timbres it could find.
These would most often be approximations/combinations of rain, hail, wind, water turbulence, ice, fire, electronic glitches, and unidentifiable ones. If #AI is good at anything it's generating noise with no discernible identity.
Using the output as the input was effective here.
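A sketch of that output-as-input loop; generate() below is a hypothetical stand-in for a model inference call, not a real Dance Diffusion API:

```python
# Sketch of the output-as-input feedback loop. generate() is a
# hypothetical stand-in for a model inference call (NOT a real Dance
# Diffusion API); here it just blurs and perturbs the signal so the
# loop runs end to end.
import numpy as np

def generate(audio, rng=np.random.default_rng(0)):
    kernel = np.hanning(32)
    smeared = np.convolve(audio, kernel / kernel.sum(), mode="same")
    return smeared + 0.01 * rng.standard_normal(len(audio))

def feedback(seed, passes=4):
    out, history = seed, []
    for _ in range(passes):
        out = generate(out)        # each pass drifts further from the seed
        history.append(out.copy())
    return history

variants = feedback(np.random.default_rng(1).standard_normal(22050))
```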
Musika by @marco_ppasini initially seemed like a plaything rather than a creative tool, but as I worked with it I discovered that drum sounds generated by #AI have a thickness and fullness like old analog tape. It got me thinking about rhythm applications.
Then I stumbled on @tensorpunk's MACE. #AI's facility with noise and the sound quality of generated drum samples made an #AI-powered drum sampler the perfect vehicle—sonically and narratively. A few compound meters later and I knew it was right for 'incide'.
I struck up a conversation with Jordan Davis, the brains behind MACE, and we talked #AI and music. Jordan's approach was to favor uniqueness over fidelity; do we want *accurate* percussion and effects or *interesting* ones? MACE is all over this album.
Likewise, I soon discovered that Harmonai's engine was very good at generating interesting drum samples. Thick kick drums and noisy snares with impossible tails ('if we can still feel'). And variations on piano resonance IRs.
Tuning was another approach for obfuscating identity: Is the note an A or an Ab? Or something in between? I used discrete, scale-based detuning across the board, most noticeable in the piano+Moog interludes and pads.
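In numbers, discrete scale-based detuning means each scale degree gets its own fixed cent offset, so a note can land somewhere between A and Ab. A small sketch (the offsets are invented for illustration):

```python
# Sketch of discrete, scale-based detuning: each scale degree gets its own
# fixed cent offset, so a pitch can sit between two named notes.
A4 = 440.0

def midi_to_hz(m, cents=0.0):
    # 100 cents = 1 semitone; offset shifts the note by a fraction of one
    return A4 * 2 ** ((m - 69 + cents / 100.0) / 12.0)

# Per-degree offsets in cents for C major (C D E F G A B), made up
offsets = {0: 0, 2: -14, 4: 9, 5: -22, 7: 4, 9: -31, 11: 17}

for midi in (60, 62, 64, 65, 67, 69, 71):
    c = offsets[midi % 12]
    print(midi, round(midi_to_hz(midi), 2), "->", round(midi_to_hz(midi, c), 2))
```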
In @bleep's write-up for the album they write "on 'incide', the machine sings".
And the machine is just as real.
Honorary mention: The #minidisc release of 'incide' wouldn't really have been possible without Stefano Brilli's (@thecybercase) excellent Web MiniDisc Pro interface which made duplication almost effortless. Grazie!

 


  • 11 months later...

 

'monday kid' is a selection of early sketches, alternate recordings, and unrealized ideas from 'incide'.


9 hours ago, silentvision said:

 

'monday kid' is a selection of early sketches, alternate recordings, and unrealized ideas from 'incide'.

Thanks, will check this out!

