justanotheruser

Members
  • Posts: 57
  • Joined
  • Last visited


  1. Hello everyone! I've created a bunch of experimental digital oscilloscopes, which I tend to run through Stable Diffusion for more intricate and interesting results. Here are a couple of demonstrations for you to enjoy. You can access these bad boys to embellish your productions through my Patreon page: https://linktr.ee/uisato
  2. A few small fragments of live improvisations, testing the new real-time audioreactive pointclouds I've been working on. [Which, btw, you can access as of today through my Patreon profile. Nine (9) new presets in total.] For more experiments, project files and tutorials, you can head over to: https://linktr.ee/uisato
  3. Bridging the realms of sound, live performance and literature, I've created a system in TouchDesigner that transforms real-time audio data into unique AI-generated poetry, recited also in real-time. Through the use of AI and data-sonification techniques, this artistic installation aims to give life to a digital bard that accompanies musical compositions in a hopefully-beautiful and meaningful way. I've been working on this for the past several months and, let me tell you, there have been quite a few interesting moments while using this system alongside my live performances. Hope both examples shown in this post give you a glimpse of what I'm writing about. In short, the process happens as follows: Real-time Audio Source ➨ Frequency Analysis ➨ Poetic Element Mapping ➨ Poem Generation [ChatGPT API] ➨ TextToSpeech [ElevenLabs API] Filled with excitement, I can't stop thinking about possible cool uses for a system with these characteristics: from using it as a member of a live band, to a 24/7 online radio, to a beautiful interactive installation. PS: I recently got my plane tickets to Europe for this fall, and I'd love to bring Auratura with me. If you're interested in hosting this installation, or know of cool places/venues, please do get in touch. For more experiments, tutorials, and project files, you can head over to: https://linktr.ee/uisato
  4. Hey. Just wanted to share my latest system with you. https://www.youtube.com/watch?v=gZprUqsLOyQ https://www.youtube.com/watch?v=z8wgWWOHiqM A couple of examples of what happens when you combine the last particle oscilloscope I shared with Stable Diffusion. I still can't believe how good this tool is. We're living in crAIzy times, folks. Be sure to fasten those seat belts. You can access this oscilloscope, plus quite a few more TouchDesigner project files, tutorials and experiments, through: https://linktr.ee/uisato
  5. Hey! I think some of you might find this topic interesting. Here are a couple of experiments of my own with Stable Diffusion and video recordings [some of my own, some well-known video clips]: 1) Radiohead - Lotus Flower 2) Stimulus [recording of my own] What do you think? I'd love to read your thoughts on these.
  6. I've just made available my newest experimental oscilloscope, in addition to my latest MIDI generative system [DALL-E 2 generated images to MIDI] and a completely free sample pack. Enjoy! https://linktr.ee/uisato https://www.youtube.com/watch?v=rSvhhgd-gxw
  7. Through the use of OpenAI's DALL-E 2 API and TouchDesigner, I've managed to create a, let's say, MIDI sequencer that captures RGB data from AI-generated images in real-time and uses it to trigger MIDI events in Ableton Live. Those signals then feed, among other things, two wavetables in the first two examples, and my lovely new Prophet 6 in the others. In short: the idea was to have a system that, given certain concepts (a.k.a. prompts), generates chord progressions from the RGB data of the resulting DALL-E images. Concept [prompt] ➨ AI-generated image [DALL-E 2] ➨ capturing RGB data in real-time [TouchDesigner] ➨ using that data to trigger MIDI events [Ableton Live] For more experiments, tutorials, and project files, you can head over to: https://linktr.ee/uisato
  8. A couple of example results from my latest system, a video player with an audioreactive playhead. There are a couple more things to it, but I'll leave those for my patrons to explore. You can access this system, plus many more, on my Patreon profile: https://linktr.ee/uisato
  9. I've made a little walkthrough for Samplebrain, Aphex Twin's new sound design tool. Hope you all enjoy it.
  10. Hello everyone. I present before you my latest work: Transforming NASA's asteroid data in real-time into MIDI to feed my Pulsar 23:
  11. A couple of hopefully-interesting patches for the Pulsar 23. I also took the opportunity to dust off my real-time ASCII filter [with which I've been preparing some cool things for next month]. You can freely access the entire WAV files of these sessions [+ some bonuses] for sampling here.
  13. Hello everyone! I'd like to share with you a few excerpts of me playing my new chaotic pointcloud system, in which the particles are audioreactive to the incoming audio signal. You can find these and quite a few more project files on my Patreon page: https://linktr.ee/uisato
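The audio-to-poetry pipeline described in post 3 (Real-time Audio Source ➨ Frequency Analysis ➨ Poetic Element Mapping ➨ Poem Generation ➨ TextToSpeech) could be sketched roughly like this. The band thresholds, mood keywords, and function names below are illustrative assumptions, not the author's actual TouchDesigner network:

```python
# Hedged sketch of the frequency-analysis and poetic-mapping stages.
# Thresholds and mood labels are made up for illustration.
import numpy as np

def dominant_frequency(signal, sample_rate):
    """Return the dominant frequency (Hz) of a mono audio buffer via FFT."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

def poetic_mapping(freq_hz):
    """Map a dominant frequency to an (assumed) poetic mood keyword."""
    if freq_hz < 250:
        return "brooding"   # bass-heavy passages
    elif freq_hz < 2000:
        return "lyrical"    # midrange melodies
    return "luminous"       # bright, high-frequency content

def build_prompt(mood):
    return f"Write a four-line poem in a {mood} tone, recited aloud."

# Example: a 440 Hz sine tone should land in the midrange mood.
sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)
mood = poetic_mapping(dominant_frequency(tone, sr))
prompt = build_prompt(mood)
# In the real system, the prompt would go to the ChatGPT API and the
# returned poem to a text-to-speech service (ElevenLabs, per the post).
```

The API calls themselves are deliberately left out; this only shows how continuous audio features could be discretized into prompt material.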
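The RGB-to-MIDI step from post 7 (Concept ➨ DALL-E 2 image ➨ RGB capture ➨ MIDI events) might look something like the mapping below. The note ranges and scaling are hedged guesses, not the author's patch:

```python
# Hypothetical sketch: map a sampled pixel to a three-note MIDI chord.
def rgb_to_midi(r, g, b):
    """Map an (r, g, b) pixel (0-255 each) to a three-note MIDI chord.

    Red picks the root, green nudges the third between minor and major,
    and strong blue flattens the fifth, so a given image region yields
    a consistent chord voicing.
    """
    root = 36 + round(r / 255 * 48)        # roughly C2..C6
    third = root + 3 + round(g / 255 * 2)  # minor-to-major third
    fifth = root + 7 - (b > 200)           # flatten fifth on strong blue
    return [root, third, fifth]

# e.g. a pure-red pixel:
chord = rgb_to_midi(255, 0, 0)  # [84, 87, 91]
# These note numbers would then be sent as MIDI note-on events into
# Ableton Live (e.g. via a virtual MIDI port).
```

Sampling a new pixel per beat would produce the kind of prompt-driven chord progressions the post describes.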
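Post 10 mentions turning NASA asteroid data into real-time MIDI for the Pulsar 23. One minimal, assumed mapping (the formula is mine, not the author's): larger diameters give lower pitches, higher velocities give louder notes.

```python
# Hedged sketch of one way to sonify asteroid data as MIDI.
import math

def asteroid_to_midi(diameter_m, velocity_kms):
    """Map an asteroid's estimated diameter and relative velocity to a
    MIDI (note, velocity) pair: bigger rocks -> lower pitch,
    faster approach -> louder note. Constants are illustrative."""
    note = max(24, min(96, round(96 - 12 * math.log10(max(diameter_m, 1.0)))))
    velocity = max(1, min(127, round(velocity_kms * 4)))
    return note, velocity

# e.g. a 100 m asteroid approaching at 20 km/s:
note, vel = asteroid_to_midi(100, 20)  # (72, 80)
```

In a live setup, values like these could be polled from a near-Earth-object feed and streamed as MIDI note-ons to the synth.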