thawkins

Posts posted by thawkins

  1. What I tell people for street cred: "Oh, don't buy a Prophet, I broke one when cleaning the PCB with alcohol."

    What actually happened: "I finally managed to make a sound that was not a fucked up helicopter, got carried away and spilled craft beer all over the ADSR section."

     

    Edit: obligatory GAS: CF to SD adapter and fat pads for the MPC1000.

  2. On 8/10/2022 at 1:50 PM, mcbpete said:

    Was gonna make a joke about sending bang messages with Metro and Uzi ..... but it was a shitter... Just imagine it was something witty.

    How about some realtime sysexin, moses?

    Thanks, this may be useful. I feel like the problem with any kind of alternative communication method is that within Ableton, audio and MIDI are generally handled as first-class citizens - nobody would be using Live if audio or MIDI could lose connection inside Live itself during playing. But hand-rolled M4L devices - especially workarounds like this - are not first class. Setting up a TCP communication server or other side channels seems kind of sketchy to me, even though it probably works well in 99% of cases. My day job is troubleshooting and debugging connection problems, and I want none of that in the music hobby part of my life.

    By the way, I did a test drive with the HarmoTools Chord-ChordFilter devices on the live stream yesterday and I have mixed feelings.

    Initially the devices worked fine, and it's probably my own fault that my chord knowledge is so sparse that I messed up and did not play the correct chords. However, at some point I think the devices lost connection with each other and the "controller" device got stuck in C maj no matter what I played. This may be related to some stuck MIDI notes on that channel, but it's annoying that there is no simple "clear state" button on the device itself.

    Also, my idea is to be more chord/scale agnostic: I play a chord of some 4-5 notes and the other tracks just repitch their notes to the nearest matching ones (with the exception of the bass part, which should be capped below some pitch to keep the lows in their place).

    I think I need some more experimentation to see whether I can just use those devices or whether I need to roll my own.

  4. 14 hours ago, psn said:

    I did a workaround once for the MIDI limitation of the M4L devices.

    I think I ended up running a separate Max patch outside of Ableton, which was exchanging MIDI with the M4L device via TCP/IP. It was all running on the same computer, so it was low latency and easy to set up via localhost. 

    Not at the computer at the moment, so that's just the gist of it from memory.

    Yeah, if it's a separate patch, it's already close to the standalone Pure Data patch I had for turning note information into CC.

    10 hours ago, auxien said:

    This looks like something I could use, thanks!

  5. Hello Max heads, I am trying to figure out a new paradigm-changing M4L device (yeah, right :eyeroll motion:), so I thought I would ask this thread for tips.

    The basic idea is that I need to have a MIDI device running on a track in Ableton that is looping some MIDI (or just accepting input from a keyboard or whatever source). Let's say this is the "carrier" track.

    Then I need to have that MIDI device (on the "carrier signal" track) receive incoming MIDI from another Live track. Let's call this one "modulator".

    The practical use case I have in mind is recording a MIDI loop on the "carrier" track, and then transposing/forcing all the pitches to be what I play on the "modulator" track. I have hacked something like this together with a janky Pure Data thing - and it's super fun to play - but now it's time to up the level and do it all in M4L.

    I know that an M4L device can only accept MIDI from one channel, so I guess the main technical blocker for this thing is figuring out some way to have the "modulator" and the "carrier" devices communicate with each other. Well, as a dumb workaround, I think the "modulator" could just send the pitch values in a reserved range like the C-2 octave, or I could figure out some hacky hack to convert the incoming note-ons to CC messages that the "carrier" decodes back into a set of pitches.
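
    Just to make that CC workaround concrete, here is a rough Python sketch of the encoding I have in mind (the CC numbers and function names are completely made up, not any kind of standard):

        # Sketch of the "modulator -> CC -> carrier" idea.
        # CC numbers 20-29 are an arbitrary reserved block, nothing standard.
        BASE_CC = 20          # first CC slot used to carry chord pitches
        MAX_CHORD_SIZE = 10   # one CC slot per possible chord note

        def encode_chord_to_cc(pitches):
            """Turn the set of held MIDI pitches into (cc_number, value) pairs."""
            messages = []
            for slot, pitch in enumerate(sorted(pitches)[:MAX_CHORD_SIZE]):
                messages.append((BASE_CC + slot, pitch))
            for slot in range(len(pitches), MAX_CHORD_SIZE):
                messages.append((BASE_CC + slot, 0))   # 0 = empty slot
            return messages

        def decode_cc_to_chord(cc_state):
            """cc_state is a dict {cc_number: value} that the carrier keeps updated."""
            return {value for cc, value in cc_state.items()
                    if BASE_CC <= cc < BASE_CC + MAX_CHORD_SIZE and value > 0}

        print(encode_chord_to_cc({60, 64, 67}))   # C major triad -> three CCs plus empty slots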

    The second difficult part to figure out is how to repitch the notes on the "carrier" track. In my previous experiments I used the Scale device to force everything onto the "correct" notes. I do not really want to reimplement that device in M4L, so I wonder if it's possible to somehow pair my "carrier" device with a Scale device by way of some automation mapping hooks or something.
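
    And for the repitching, the logic I'm after is basically pitch quantizing to whatever the modulator is holding - a minimal Python sketch of that (purely illustrative, the function name and the bass cap are my own invention):

        def repitch(note, allowed_pitch_classes, bass_cap=None):
            """Move a MIDI note to the nearest allowed pitch class, staying near its octave."""
            if not allowed_pitch_classes:
                return note
            octave_base = note - (note % 12)
            candidates = [octave_base + pc + shift
                          for pc in allowed_pitch_classes
                          for shift in (-12, 0, 12)]
            best = min(candidates, key=lambda c: abs(c - note))
            if bass_cap is not None:
                while best > bass_cap:   # keep the bass part below its cap
                    best -= 12
            return best

        # incoming loop note 62 (D) forced onto a C major chord {0, 4, 7}
        print(repitch(62, {0, 4, 7}))    # -> 60 or 64, both two semitones away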

    I am also like 99% sure someone has already built a thing like this, but I can't find anything online (and of course I have not really tried very hard either).

     

    Thanks in advance for any pointers.

  6. On 7/19/2022 at 10:18 AM, dcom said:

    That would be a useful feature, and as a programmer I started to immediately think about how something like that could be implemented; the easiest way would be to define the length of the placeholder, then define the group of subsequences to use, and the method of insertion (sequential, random, whatever). If the subsequence's steps don't match the placeholder length, shorter subsequences could loop (polymeter) or stretch (polyrhythm), longer ones could cut (to length) or squeeze (polyrhythm). Things start to get a bit harder when you have nested subsequences, but not at all impossible.

     

    On 7/19/2022 at 2:47 PM, TubularCorporation said:

    Yeah, I didn't make it clear but I meant specifically a hardware sequencer.  CSound can do everything I talked about easily, too, and thinking about it now that's probably where I first got the idea from. 

     

    The way I envisioned it is basically the same as how you make pattern chains on the Octatrack/303/many other step sequencers. Back then I'd had an MPC for less than a year and had never actually owned or used a hardware step sequencer, only a couple of linear hardware sequencers and software, so I didn't really have a frame of reference; the main thing I was thinking about was a very fast workflow. No display at all, just an x0x-style row of buttons. Press any two steps and immediately open a new sub-sequence, with a single button press of some kind to return to the previous level. Now I realize that would be TOO fast and you'd be doing it by accident all the time, and it would get too complex to navigate quickly, so I'm thinking the fastest workflow would be:

    -One button press to enter the sub-sequence mode

    -Select a range in the current sequence (or press any key in an existing range to open a menu that lets you change its size, delete it, or open its associated sequence)

    -A list of all sequence locations opens on the display, where you can select any existing or empty sequence for editing

    -Some kind of simple one- or two-button command to return to the parent sequence (not necessary, but it would probably be useful live)

     

    That's it.  It would really just be a pair of pointers in a sequence, one telling it to jump to a new child, the other telling it to return to the parent sequence after one repetition of the child sequence, and the distance between the two determining the speed that the child sequence would play at.

     

    That's it.  Could be added to just about any step sequencer UI (and could probably work just about as well with no display: instead of a list on a display it could go into the sequencer's pattern select mode, whatever that was, immediately after you select a range, and you could choose a pattern to be your sub-sequence (I should probably have been saying "pattern" instead of "sequence" in retrospect, but too late now)). Maybe a bit easier to get lost that way but it'd work.

     

     

    I guess on a more big picture scale I just want a hardware step sequencer with the depth you can get out of something like a monome+norns, except with a less abstract UI.

     

    If I could program I'd try to fork the MIDIbox SEQ and add what I'm talking about to it, since it's already pretty close to everything I want in a step sequencer, but I've got way too many other things I'd rather do at this point in my life. It actually already has sub-sequences a bit like what I'm talking about implemented, but I still haven't messed with them and IIRC you have to call them manually.  They definitely don't scale to a range, so they're more like fills than anything we're talking about, but it's a start.

    You should take a trip into the TidalCycles thread, because at least at the software level this sort of thing is already implemented there.

    However, as a programmer, I found out that I can't stand to look at code in my free time.
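
    (OK, against my own better judgment: the squeeze behaviour described in the quote above - a child pattern compressed to fit the slot its two pointers mark out - boils down to something like this toy Python sketch. Names and structure are just illustration, not how Tidal or the MIDIbox SEQ actually do it.)

        # Toy model of the pointer-pair idea: a nested child pattern takes up one
        # parent slot and is squeezed (played faster) to fit inside it.
        def render(pattern, start=0.0, length=1.0):
            """Flatten a nested pattern into (time, step) events within [start, start+length)."""
            events = []
            slot = length / len(pattern)
            for i, step in enumerate(pattern):
                t = start + i * slot
                if isinstance(step, list):
                    events.extend(render(step, t, slot))   # recurse into the child
                else:
                    events.append((round(t, 3), step))
            return events

        # "bd [sn sn] bd hh" - the two snares share one slot at double speed
        print(render(["bd", ["sn", "sn"], "bd", "hh"]))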

  7. Yeah, that's why the industry standard solution is to build a RAID system where the data is automatically duplicated across multiple HDDs. Regular SMART checks ensure that if a drive starts to fail, you get a warning and can replace the failing drive in the array. The system handles rebalancing the data automatically. If two drives fail simultaneously you might be screwed, but a) that's much rarer than one drive failing and b) you can set up a RAID level that tolerates two drive failures.

    Also a good rule of thumb is: if you did not test recovering from your backups, you might not really have backups once shit hits the fan.

    And yeah, echoing the strategy of keeping backups physically separate, with a few plan Bs.

  8. I think the point to keep in mind for SSD long-term retention is that if you do weekly or daily backups, that's likely a good measure against this degradation. On the other hand, old-school spinning drives are much, much cheaper for the size, so setting up a managed NAS with RAID will get you a nice reliable backup solution that will keep on trucking and let you know if some disk is about to fail.

    Or you can just pay a service like Backblaze to take care of it.

  9. I feel like if you have a reasonably modern computer, the samples that you actively use in the project will be loaded into memory, so there's no need to set up a special hard drive for samples and libraries.

    Basically for me the best way boils down to this: if you have an SSD system disk and an external SSD (Thunderbolt or USB 3), then it makes sense to assign the external drive to audio recording and file caching.

    All the other setups are probably more useful if you have an absolutely massive User Library and your system disk is not big enough to have space for it.

    And finally, yeah, backups are always important, but this can be handled by your operating system too - just get a solid Western Digital external HDD and have Windows or macOS handle the rest.

  10. If this is a standalone patch, you could just run it in Max and integrate it with Live by routing MIDI and audio inside your computer.

    What operating system are you on? For macOS, the keywords for doing this are "IAC driver" (built-in virtual MIDI routing) and Soundflower (routing audio between apps using a kernel extension).

    This is what I would do anyway if I had a standalone Max or Pure Data patch that is too much hassle to port into M4L.
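
    For the MIDI half, the sending side can be as simple as this little Python sketch (assuming mido with the python-rtmidi backend, and an IAC bus that I'm calling "IAC Driver Bus 1" here - check the actual name on your system):

        import time
        import mido   # pip install mido python-rtmidi

        # Once the IAC driver is enabled in Audio MIDI Setup, its buses show up
        # as regular MIDI ports that Live (or anything else) can listen to.
        print(mido.get_output_names())          # confirm the exact port name

        with mido.open_output("IAC Driver Bus 1") as out:
            out.send(mido.Message("note_on", note=60, velocity=100))
            time.sleep(0.5)
            out.send(mido.Message("note_off", note=60))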

  11. 3 minutes ago, dcom said:

    I do mostly that, but lately I've had a bad habit of checking out new(ish) hardware. Today the ratio was 7:1 in favour of music.

    Sometimes it's inspiring to check out used hardware too, even if you don't end up getting anything; browsing used listings in the sub-100-200€ range can reveal some amazingly crappy but inspiring stuff.

  12. 10 hours ago, dcom said:

    Oh my, I think I do need (not really, but WANT) to get them all. It's not like I'm doing heroin, right? I really should not listen to synth reviews/tutorials in the background while working.

    Listening to inspiring music beats listening to gear reviews IMO. Especially if you already have a pile of gear that can do a lot of stuff.

  13. 1 hour ago, TubularCorporation said:

    I didn't think of it earlier, but I bet the improved bandwidth and timestamping will make wireless MIDI a lot more practical than it is right now.

     

    Imagine something like the CME WIDI line, except instead of only being a Bluetooth device it could just connect to any wifi network, transmitting sample-accurate MIDI with insignificant latency, with the ability to route MIDI between any other dongle or computer on the same network. Maybe a deluxe version that also offered MIDI event processing. USB dongles for instruments with a host port. Maybe a multi-port entry point with a few DIN and USB connectors, so if you have a home studio and a smaller live rig, for example, you could use physical cables to connect all of your studio hardware to one entry point and all your live hardware to a second entry point; that way you could have your live rig set up in a case, ready to take out when you had a show, but also fully integrated into your studio, and it would automatically connect as soon as it was in range.

    All of the configuration could be done over wifi or Bluetooth from whatever phone or computer or tablet you wanted to use. Assignable device numbers, so you could use it completely standalone. Each dongle you owned would have a pair of device numbers (one for input, one for output), and all devices with matching IDs would automatically connect directly over Bluetooth, so you could just power up all your gear and it would all connect however you had its IDs set.

    So it would be like having a full hardware MIDI patchbay, except all happening on a little Bluetooth mesh network instead of a separate hub device, with as many nodes as the network could handle (which would be a LOT, since MIDI isn't exactly a bandwidth hog). Basically the sort of stuff CME is already doing, but with a feature set more like audio-over-network protocols such as AVB and Dante (except for MIDI).

     

    If something like that, with the bandwidth and timing of MIDI 2.0 (or at least not introducing any additional jitter or latency when used with MIDI 1.0 hardware), came to market and worked well, I would definitely start buying them and converting my whole studio over to wireless MIDI. With MIDI 1.0 I don't want to add any more variables since it's shaky enough as it is (and I also wouldn't want to invest that kind of money in 1.0).

     

    But if nothing else, I've just convinced myself I should get a pair of those current CME WIDI Master dongles onto my MPC and a free pair of ports on one of the old MOTUs I use as MIDI patchbays, so I can use the MPC for sequencing from anywhere in the room without having to worry about any cables except power.  If that works well I'll do the same with the Octatrack. 

    I think current WiFi technology can handle the latency and throughput, but Bluetooth is definitely a standard with a mostly bad rep when it comes to reliable implementations, connection quality and pairing. I think the CME WIDI will work, but the latency probably won't be comparable to a physical cable.

     

    This 2-year-old video is proving me badly wrong though...

    And here I am with my wireless Apple Magic Mouse that can't keep a connection over 1 meter.

  14. 23 hours ago, xox said:

    Y'all mad?

    5000 words about power conditioners and obscure midi shit

    Who do you think will be spared from execution in the coming water wars - the guy who knows how to make a techno sound on a Monomachine using param locks or the guy who can sync two Akai Timberwolves over TRS MIDI? Choose wisely!

  15. Also, we might be at a point where an increasing amount of old gear will either a) be copycatted and re-released by Behringer (with the newest tech additions) or b) virtualized/modeled entirely from the circuit board up, so it's more and more possible to just have your things talk in whatever way necessary, and MIDI and CV are just the barest fallback you have when using physical cables.

  16. 18 hours ago, Taupe Beats said:

    This all sounds nice but I'd definitely not call it "easy" in any sense. This is all a ton of manual work to code all that translation, which would end up being time-consuming and/or expensive (or someone would be a saint and share their own work).

    And to the 2nd paragraph, I promise that's not true. There may not be a ton of outliers, but I promise there are plenty of synths that use all the "standard" CCs for stuff that has nothing to do with the intended use.

    It needs to be a foundational change for future implementation first. Then the idea of retrofitting is more realistic. We have learned to be patient for MIDI 2.0 so I'm more than happy to wait and keep my fingers crossed that the new spec is greatness.

    I don't doubt that it will be a lot of manual work, but I feel it is a more realistic scenario to expect volunteers around the world to share mappings for their gear (just look at the public Max for Live patch library for an example) than to expect manufacturers to correctly implement a complicated specification (just look at Bluetooth audio, for example).

    Having said that, my knowledge of MIDI 2.0 is 2 minutes of googling and seeing that the official spec mentions protocol handshakes, which to me means it's not a simple one-way serial protocol, and that's already way more complicated than existing MIDI. Ditto for the other advanced functions.

    The main issue is probably that there aren't a lot of MIDI enthusiasts who care about this AND don't already have their gear sorted out in a way that works for them. So why should manufacturers care about implementing the new standard in a compatible way?

    I feel like big synth makers today are more likely to adopt deliberately incompatible standards so that KORG has some fancy thing on all their gear that Roland does not and vice versa.

    Happy to be proven wrong though.

  17. On 6/15/2022 at 5:28 PM, Taupe Beats said:

    Now, you're going to tell me, "But Taupe Beats, that's exactly what standard MIDI files are designed to do!" While I understand this is the theoretical process, it is not a realistic one. Not even 7/10/74/71 are truly universal MIDI CC's, to use the most obvious examples. You get where I'm going...

    If the main blocker is harmonizing MIDI implementations between different types of gear, I think this is easily technically solvable by having some interpreter layer(s) where you say "OK this MIDI file that I am about to load is meant for a Korg MS2000, but what I have is a Roland JV1080" and then it'll translate the CCs to rough equivalents. This won't ever 100% work for stuff like SysEx, but it will definitely get to the same ballpark.

    It will get murky anyway at the point where one synth has 2 filters/oscillators and another has 3, but from messing around with my gear I have the impression that basic CCs like volume, pan, attack, release, cutoff and resonance are quite standard.
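
    Just to sketch what I mean by an interpreter layer, something like this tiny Python thing would already cover the common cases (the per-synth tables are invented placeholders - only the generic numbers like 7/10/71/74 are real common assignments):

        # Parameter name -> CC number. GENERIC uses the common MIDI assignments;
        # the synth-specific overrides below are made-up placeholders, NOT real
        # MS2000 or JV-1080 values.
        GENERIC = {"volume": 7, "pan": 10, "resonance": 71, "cutoff": 74,
                   "attack": 73, "release": 72}

        SYNTH_MAPS = {
            "ms2000": {**GENERIC},
            "jv1080": {**GENERIC, "cutoff": 102, "resonance": 103},  # placeholders
        }

        def translate_cc(cc, value, src="ms2000", dst="jv1080"):
            """Translate one CC message from the source synth's map to the target's."""
            by_number = {num: name for name, num in SYNTH_MAPS[src].items()}
            name = by_number.get(cc)
            if name is None or name not in SYNTH_MAPS[dst]:
                return None   # no rough equivalent - drop it or pass it through untouched
            return (SYNTH_MAPS[dst][name], value)

        print(translate_cc(74, 90))   # source cutoff -> (102, 90) on the target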

  18. 9 hours ago, th555 said:


     

    As I probably mentioned before, I used three of these before I had an actual mixer. If you have 2 you can make stereo mixes.

    That's pretty much what the Koma Field Kit does already, except it has some extra stuff too. I really should set it up somewhere, because I have such a wonderful pile of small gear to play with: Volca Sample, Yamaha QY100, Bastl Kastle, nanoloop (on an iPhone). I also have a crappy Zoom bass multieffects unit to add some atmosphere to all of it.


  19. 6 minutes ago, dcom said:

    Looks like an interesting piece of kit, but if I get a proper mixer it'll be a relatively big desk, at least 8+ mono inputs and a couple of stereos, with two AUX sends or more.

    The StudioLive 24R is basically something like that, except not in desk format. It's just the inputs and outputs; for everything else you have to mess around in the Universal Control application or get an iPad like all the pro sound guys. So it's great for setting up mixes and then playing forever, but not for messing around with faders and EQs while playing.

  20. Bringing this topic back into GAS mode though, I kind of want a standalone mixer that would give me a dialled-in submix and make it easier to hook up all the small loosely joined things. I think this would be nice to have for improvised jam sessions.

    I think I owe it to the Koma Field Kit to try it out as a mixer like this. It's got that perfect small form factor and some other fun stuff too. And since it only has a mono output, it can't get too complicated either. Let's see if I get further with this plan or whether it just remains a post I made.

  21. 5 minutes ago, dcom said:

    Nope, those tracks are over 20 years old, made with then-named Fruity Loops, no hardware. With the exception of the AIRA TR-8 and TB-3, everything in the picture has been acquired during the last couple of months. I wanted many small things loosely joined, so I can vary the setup and the connections (sync, MIDI, DAW, or any combination thereof).

    That's cool. My experience with small things loosely joined is that most of the small things end up in the cupboard gathering dust, while the two things I do use are hooked up permanently. Every 6 months I make a half-assed attempt to recable something, but then the setup usually ends up back like it was: a Roland XV5080 and a Korg MS2000R rack unit going into a PreSonus StudioLive 24R, with all the MIDI coming from Ableton Live.
