Everything posted by vkxwz

  1. Good point, it seems like the only way is to treat AI-generated images as if they are "sampling" everything in the dataset; that makes sense to me. https://en.wikipedia.org/wiki/China_brain I should have linked this in my other reply, oops. I feel like it's difficult to discuss this without a good definition of understanding. I think it's best defined as having a model of the thing you understand, one that is predictive of its future states, and in that sense these ML models do have a type of understanding. The early layers of the visual cortex take in receptive fields that are like "pixels" from the retina and detect lines of different orientations; each subsequent layer uses the features computed by the previous layer to detect more complex features (you can read about the functions of the different areas here: https://en.wikipedia.org/wiki/Visual_cortex). The same thing can be observed in the filters of convolutional neural networks:
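As a toy sketch of that oriented-filter idea (hand-written for illustration, not taken from any trained network): a vertical-edge kernel fires strongly on a vertical boundary in a tiny image, while a horizontal-edge kernel ignores it entirely, which is the orientation selectivity described above.

```python
# Toy illustration of orientation-selective filters, the kind early
# conv layers (and early visual cortex) tend to respond with.

def conv2d_valid(image, kernel):
    """Plain 2D cross-correlation with 'valid' padding, pure Python."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# 4x4 image: left half dark (0), right half bright (1) -> a vertical edge.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]

# Sobel-style kernels for the two orientations.
vertical_edge = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]
horizontal_edge = [
    [-1, -1, -1],
    [0, 0, 0],
    [1, 1, 1],
]

v_response = conv2d_valid(image, vertical_edge)    # fires everywhere: [[3, 3], [3, 3]]
h_response = conv2d_valid(image, horizontal_edge)  # silent: [[0, 0], [0, 0]]
```

Each filter only "sees" edges of its own orientation, and a deeper layer could then combine these responses into corners, textures, and so on.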
  2. Isn't the human who uses the system to copy accountable? Where is there ever not a human in the loop in this process? If you put in a prompt that specifies the style of a particular artist and then make money off of posting the result, the accountability is on you; it's just a tool. How are we so different from a Chinese room? Individual neurons are not conscious. IMO the takeaway from the Chinese room is that consciousness and intelligence are substrate independent, not that there is some magical property of biological neurons that imbues the signal processing with true understanding. As for the black box argument, I don't think that's really the case anymore: we have a pretty good understanding of the fundamental principles of how these models work, and the tools for investigating how trained neural networks work internally are a lot better now (partly thanks to Max Tegmark, I think), especially for convolutional NNs, which break data down into low-level and high-level features just like our visual cortex.
  3. Can you elaborate on this? And how is this situation different from humans copying another artist's style, and why are those differences such a big problem?
  4. Not sure if you're serious, but one of the researchers high up at OpenAI speculates that these sorts of systems will eventually become very useful for therapy. I would liken it more to a person who has the first 10 prime numbers memorized than to searching Google. It was trained on text from the internet but no longer has access to that data; all the knowledge used to produce its output is stored in the weights of the neural network. How much it really understands is an open question, though, since it's seen so much text that you could argue almost every question it's been asked, it's seen asked on the internet before. But it's able to do things that prove it's doing something more impressive than that, IMO, such as summarizing large amounts of text, extracting conclusions from parts of scientific papers that weren't in its training data, etc.
  5. So it's not actually running the code; the output it produced was answered in a similar way to how it'd answer if you asked it for those prime numbers directly (it's seen them enough times online to know them). For the virtual machine part, I think the best way to conceptualise it is that ChatGPT is dreaming. It's not actually running a real VM, and it's not actually accessing the internet or itself through the internet; it's really just guessing what output should follow the commands you give it. And it's so convincing because it's a really sophisticated model that has learnt from reading pretty much all the text on the internet.
  6. I'm surprised nobody has mentioned ChatGPT here; it came out 10 days ago and it's really impressive. I've already used it for understanding topics I'm not familiar with, summarizing large pages of complicated text full of jargon, etc. It has the potential to be a very useful resource for a lot of people. Of course people will find a couple of things that any human can do and that it can't, and call it useless, but that's completely missing the point. https://chat.openai.com/auth/login And it can also do this: https://www.engraved.blog/building-a-virtual-machine-inside/ Which is awesome. About the doom predictions: on one hand, too much of the media is just spouting stupid doomsday shit to get clicks while clearly understanding very little about the current state of machine learning. But on the other hand, some of the stuff coming out is actually scary good, even surprising the experts in these fields, and I don't think anyone can comprehend how the world will change when we get the first AGI. It's something that definitely scares me.
  7. I think the only way this could lead to an improvement of Twitter is if going private somehow gives Twitter the freedom to do something useful that it wouldn't have had the freedom to do as a public company, which feels plausible, but this is a very rocky start, so who knows which way it will go. Importing talent from OpenAI could also help. And don't worry, that hoax is still getting cable TV news shows that should really know better. This whole thing is very entertaining but slightly concerning to me. It's good to see he at least has a line drawn *somewhere* in the sand, not allowing Alex Jones back on the platform. And it's not clear to me which is worse for society, Trump being on Twitter or Trump having his own platform, but I guess he'll have both now...
  8. You remember? From what? And yeah, it doesn't matter how you get there in the end, as long as you are editing it to match what you want it to be. It seems like Sean from Ae does a lot of "aimless" experimentation, and that curation is a big part of how he gets to the final product.
  9. Yeah, I've had this issue before too, but now I think the solution is to just listen to tracks in all different situations; then your mind will differentiate the musical content from whatever else is going on in your life. And agreed about it being straight-up dramatic. I actually found it jarring at first with a few tracks, especially F7, but I love it now.
  10. Still growing on me. I gave SIGN a full listen-through the other day and it's better than ever; track 1 is still my favourite by a decent margin. Although I feel like it's something I can only put on when I have the time to give it my full attention, so that's limiting my number of listens. What's this Autechre effect you're talking about?
  11. Well, if you're humming stuff in your head that's 12-TET, then the dissonance is just your expression, no problem there. And if it's an unintentional effect of forcing your expression to conform to 12-TET, then there's always microtonality for that (easier said than done, obviously).
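For reference, 12-TET just means the octave is split into 12 equal frequency ratios, so any note is the reference pitch times 2^(n/12); a quick sketch (function names made up for illustration) also shows how the same formula generalises to microtonal n-TET systems:

```python
# 12-TET: each semitone multiplies frequency by 2**(1/12), so a note
# n semitones from the reference is ref * 2**(n / 12).

def tet_freq(steps, divisions=12, ref=440.0):
    """Frequency `steps` equal-tempered steps from `ref`, with the
    octave split into `divisions` equal parts (12 = standard tuning)."""
    return ref * 2 ** (steps / divisions)

a5 = tet_freq(12)              # one octave above A4 -> 880.0 Hz
c5 = tet_freq(3)               # three semitones up -> ~523.25 Hz
quarter_tone = tet_freq(1, 24) # 24-TET: half a semitone above A4
```

Changing `divisions` is all it takes to leave 12-TET, which is why microtonal tunings are easy to compute even if they're hard to compose in.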
  12. There is evidence for learned information being passed from one generation to the next through epigenetics, so that sort of thing isn't inconceivable. I think it's possible that each of us can be seen as an entity that switched bodies at the time of conception and only retained the most "general" information known before that changeover; call this information deep truths about reality, or the priors for more general intelligence, or patterns of energy, whatever. Anyway, it's just an idea. I tend to try to match the strength of my beliefs to the strength of the evidence for them. About reading music: I don't think standard musical notation is the only way or the best way to do this. The goal is compression of information, from sound in your head to "something" that is easily remembered and translated into your medium of choice, and if you're working with synths then musical notation can't really store much info about patches. I think this can be done in a lot of ways, and maybe doing it intuitively is best; practicing getting something into and out of this compressed format should help more than any amount of learning what minims and crotchets are.
  13. I agree that this intuitive side of generating music in your own mind is very useful; it's a much faster and more effective way to come up with material you enjoy, compared to just banging on keys and recording any parts that sounded interesting. But of course this requires becoming skilled at recreating something you've heard internally. I think there are a lot of different possible techniques for getting this going, but starting by looping something simple in your mind and evolving it over time can get some really interesting results, and it's often frustrating that it's so difficult to recreate in a DAW or otherwise. One technique I've found effective is this: start with the first few seconds of a composition, listen to it, and then imagine how you want it to go after that small part ends. Try to recreate that, and once you get the added part right, move on to listening to the whole track including the new part (it's important to hear the whole thing). Just keep looping this process until you're happy with the length of the track. This should result in pretty much a whole track that's come straight out of your imagination, in the same way you can get those imagined tracks going internally. Rolli, can you elaborate on various lifetimes, please?
  14. Yeah it is, and it's a pretty cool project, but IMO it's not very interesting to listen to after a few minutes. From the videos explaining how it works, it seems to run without human input and has a lot of randomness; I'm talking about an approach with no randomness.
  15. I've been thinking more about generative processes for composition, but I've been very unimpressed with most of what I've heard: most of it is soulless, aimless, and generally not very interesting. However, Autechre seem to have solved these issues completely, so I have attempted to figure out how this is possible. There are three main issues I see.

Firstly, randomness: to add variation, a lot of people introduce randomness. This results in music that at least has variation and can morph over time, but there is no good reason for any of these changes, so it's meaningless and uninteresting.

The second issue is that most of the generation I've seen is only loosely tied to what happened before. People create large systems of routed LFOs and MIDI tools that spit out sounds, but it seems rare that the logic creating the sound for the current time step is actually reacting directly to the output of recent time steps, which is what we humans do when we create music.

The third issue is human input. Systems with no human input seem particularly uninteresting, but incorporating that input is a big challenge: if it's in real time, the user must effectively learn to play their system like an instrument, so traditional sequencing should be an option, but this seems rare too.

The solution to these problems is to effectively create an automaton, which takes input from the past and uses a set of rules to generate the current time step. The user's input should fill the place of the starting state, and a sequencer should be included so that the user can micro-edit the substrate that the automaton reads from and writes to. The next question is what exactly the substrate should be; my proposal would be a continuous timeline which can be filled with abstract "event" objects. For example, a drum hit would be an event, as would a synth note, a parameter change, etc.

With a system like this you should be able to construct deterministic worlds by setting up the rules of the automaton, and then effectively create stories within those worlds by setting the starting conditions and adding specific events along the way. So my questions are: has anyone here experimented with this approach? If so, what were the results? Are there examples of this online that have been explained or open-sourced? What software would be best for constructing such a system?
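A minimal sketch of what such an automaton might look like, with every name and rule made up for illustration: the "substrate" is a timeline of event objects, the seed events are the user input, and a deterministic rule reads recent events and writes new ones, so the same starting conditions always produce the same piece.

```python
# Minimal sketch of a deterministic event automaton for composition.
# All names and the toy rule are invented for illustration. Events live
# on a shared timeline; each step, rules read the past and may write new
# events back to the substrate. No randomness anywhere.

from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    time: int        # step on which the event occurs
    kind: str        # e.g. "note", "kick", "param_change"
    value: int = 0   # e.g. MIDI pitch for notes

def echo_rule(timeline, now):
    """Toy rule: any event from 4 steps ago returns transposed up 2."""
    return [Event(now, e.kind, e.value + 2)
            for e in timeline if e.time == now - 4]

def run(seed_events, rules, steps):
    """Advance the automaton step by step; seed = the starting state."""
    timeline = list(seed_events)
    for now in range(steps):
        new = []
        for rule in rules:
            new.extend(rule(timeline, now))
        timeline.extend(new)  # automaton writes back to the substrate
    return sorted(timeline, key=lambda e: e.time)

seed = [Event(0, "note", 60)]              # a single C4 to set things off
piece = run(seed, [echo_rule], steps=16)   # -> notes at t=0, 4, 8, 12,
                                           #    rising 60, 62, 64, 66
```

Because everything downstream is a pure function of the seed and the rules, micro-editing any seed event (or inserting one mid-timeline via a sequencer) reshapes the rest of the "story" deterministically, which seems to be the property the post is after.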
  16. I used to feel that way about Acroyear2; I thought the melody was cheesy, in a similar way to how I saw the melody in Autriche. But now it feels more like an essential part of the track, one that I love. It's like an anchor for the percussive / noisy / timbre-shifting melody that expresses so much. They seem to reinforce and synergize with each other in a way that wasn't obvious to me early on. Great track. And of course Rae is great, though it's quite a depressing track to me: it starts out with so much energy and then seems to get sickly and crippled, something like that anyway.
  17. They seem to have deleted their Bandcamp stuff as well as their SoundCloud, which is strange, but anyone who wants the FLACs can find them on slsk. I'm a huge fan of this release; bird compartment is the highlight for me, and the whole album is filled with really good stuff. Sweet melodies, very original, with lots of personality. Highly recommended. https://youtu.be/h4bFCOzm0e8
  18. Only 5:1? Seems like 50:1 for me. I agree though.
  19. Could it be Paul Nicholson? The guy who created the Aphex logo and danced on stage at his live shows. Edit: just read his AMA again and it's definitely not him.
  20. Probably. I actually thought that final scene might lead straight into Walt walking in and meeting Saul for the first time, but I went back and checked, and in Breaking Bad Saul's outfit in that first meeting is different from the one at the end of this episode.
  21. If anyone is actually affected by this issue, Google sci-hub.
  22. "Rob's a big geology fiend, always talking about rocks" Really enjoyed this ama, was nice how he actually answered questions pretty much in order as they came in the chat, not skipping any, at least early on.
  23. This sounds like a threat