
vkxwz

Members Plus
  • Posts: 175
  • Joined
  • Last visited

1 Follower

Profile Information

  • Location: The Internet
  • Interests: music

Previous Fields

  • Country: Not Selected

Recent Profile Visitors

1510 profile views

vkxwz's Achievements

Collaborator (7/14)

Recent Badges

  • Very Popular (Rare)
  • Dedicated
  • Reacting Well
  • First Post
  • Collaborator

Reputation

112

  1. I started playing a couple of years ago on chess.com too; it was more fun than I remembered, but I'm still too lazy to learn any real openings. It's a beautiful game imo
  2. Are most people around here confident in who they think it is? Lyff acid is clearly Aphex to me but the rest of the tracks feel more like Squarepusher. If they are truly all made by the same person I am confused. Oh and boxenergy > mangle. Only just though, box is more fun and mangle has a more interesting atmosphere.
  3. I don't think it's better than a human at explaining a topic when the human is an expert on that topic and a good teacher, but I think it's already extremely useful. I was copying large chunks of text from machine learning papers into it and asking it to briefly summarize those sections. It did a great job of this, but one time the summary it gave me included a complicated concept I had never heard of before, so I asked it to explain it. The first response was correct but not super detailed, so I asked for more detail on a specific part of the explanation it gave; you can do this recursively on exactly the parts you want explained further. Once I'd had all my questions answered, I typed out my understanding of the concept and asked it to tell me if I was getting it right, and it did this well too. Next, I asked it to explain how that concept fits into the original summary of the paper, and it did so. This was on a fairly niche and not so simple topic, and it actually helped me understand something quicker than if I had googled it. (A rough sketch of this summarize-then-drill-down loop, done through the API instead of the chat UI, is after this list.)
  4. Good point, seems like the only way is to have AI-generated images be treated as if they are "sampling" everything in the dataset, that makes sense to me. https://en.wikipedia.org/wiki/China_brain I should have linked this in my other reply, oops. I feel like it's difficult to discuss this without having a good definition of understanding. I think it's best defined as having a model of the thing you understand, one which is predictive of its future states, and in this way these ML models do have a type of understanding. The early layers of the visual cortex take in receptive fields that are like "pixels" from the retina and detect lines of different orientations; each subsequent layer uses the features computed by the previous layer to detect more complex features (you can read about the functions of the different areas here https://en.wikipedia.org/wiki/Visual_cortex). The same thing can be observed in the filters of convolutional neural networks (see the sketch after this list).
  5. Isn't the human that uses the system to copy accountable? Where is there ever not a human in the loop for this process? If you put in a prompt that specifies the style of a particular artist and then you make money off of posting the result, then the accountability is on you; it's just a tool. How are we so different from a Chinese room? Individual neurons are not conscious. Imo the takeaway from the Chinese room is that consciousness and intelligence are substrate independent, not that there is some magical property of biological neurons that imbues the signal processing with true understanding. As for the black box argument, I don't think that is really the case anymore: we have a pretty good understanding of the fundamental principles of how they work, and the tools for investigating how trained neural networks work internally are a lot better now (partly thanks to Max Tegmark I think), especially for convolutional NNs, which break data down into high-level and low-level features just like our visual cortex.
  6. Can you elaborate on this? And also, how is this situation different from humans copying another artist's style, and why are those differences such a big problem?
  7. Not sure if you're serious, but one of the researchers high up in OpenAI speculates that these sorts of systems will eventually become very useful for therapy. I would liken it more to a person that has the first 10 prime numbers memorized than to searching Google. It was trained on text from the internet but no longer has access to that data; all the knowledge used to produce its output is stored in the weights of the neural network. It is an open question how much it really understands, though, since it's seen so much text that you could argue almost every question it's been asked, it's seen asked on the internet before. But it's able to do things that prove it's doing something more impressive than that imo, such as summarizing large amounts of text, extracting conclusions from parts of scientific papers that weren't in its training data, etc.
  8. So it's not actually running the code; the output it produced was arrived at in a similar way to how it'd answer if you asked it normally for those prime numbers (it's seen them enough times online to know). For the virtual machine part, I think the best way to conceptualise it is that ChatGPT is dreaming: it's not actually running a real VM, it's not actually accessing the internet or itself through the internet, it's really just guessing what output should follow the commands you give it. And it's so convincing because it is a really sophisticated model that has learnt from reading pretty much all the text on the internet.
  9. I'm surprised nobody has mentioned ChatGPT here; it came out 10 days ago and it's really impressive. I've already used it for understanding topics I'm not familiar with, summarizing large pages of complicated text full of jargon, etc. It has the potential to be a very useful resource for a lot of people. Of course people will find a couple of things that any human can do and that it can't, and call it useless, but that's completely missing the point. https://chat.openai.com/auth/login And it can also do this: https://www.engraved.blog/building-a-virtual-machine-inside/ which is awesome. About the doom predictions: on one hand, too much of the media is just spouting stupid doomsday shit to get clicks while clearly understanding very little about the current state of machine learning. But on the other hand, some of the stuff coming out is actually scary good, even surprising the experts in these fields, and I don't think anyone can comprehend how the world will change when we get the first AGI. It's something that definitely scares me.
  10. I think the only way this could lead to an improvement of Twitter is if somehow having it private gives Twitter the freedom to do something useful that it wouldn't have had the freedom to do when it wasn't private, which feels plausible, but this is a very rocky start so who knows which way it will go. Importing talent from OpenAI could also help. And don't worry, that hoax is still getting onto cable TV news shows that should really know better. This whole thing is very entertaining but slightly concerning to me; it's good to see he at least has a line drawn *somewhere* in the sand, not allowing Alex Jones back on the platform. And it's not clear to me which is worse for society: Trump being on Twitter, or Trump having his own platform. But I guess he will have both now...
  11. You remember? From what? And yeah, it doesn't matter how you get there in the end, as long as you are editing it to match what you want it to be. Seems like Sean from ae does a lot of "aimless" experimentation, and that curation is a big part of how he gets to the final product.
  12. Yeah, I've had this issue before too, but now I think the solution is to just listen to tracks in all different situations; then your mind will differentiate the musical content from whatever else is going on in your life. And agreed about straight up dramatic, I actually found it jarring at first with a few tracks, especially F7, but I love it now.
  13. Still growing on me. Gave sign a full listen-through the other day and it's better than ever; track 1 is still my favourite by a decent margin. Although I feel like it's something I can only put on when I have the time to give it my full attention, so that's limiting my number of listens. What's this autechre effect you are talking about?
  14. Well, if you're humming stuff in your head that's 12tet, then the dissonance is just your expression; no problem there. And if it's an unintentional effect of forcing your expression to conform to 12tet, then there is always going microtonal for that (easier said than done, obviously).
  15. There is evidence for learned information being passed from one generation to the next through epigenetics, so that sort of thing isn't inconceivable. I think it's possible that each of us can be seen as an entity that switched bodies at the time of conception and only retained the most "general" information known before that changeover; call this information deep truths about reality, or the priors for more general intelligence, or patterns of energy, whatever. Anyway, it's just an idea; I tend to try to match the strength of my beliefs to the strength of the evidence for them. About reading music: I don't think standard musical notation is the only way or the best way to do this. The goal is compression of information, from sound in your head to "something" that is easily remembered and translated into your medium of choice, and if you're working with synths then musical notation can't really store much info about patches. I think this can be done in a lot of ways, and maybe doing it intuitively is best; practicing getting something into and out of this compressed format should help more than any amount of learning what minims and crotchets are.
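Re: post 3, here is a rough illustrative sketch of that summarize-then-drill-down loop. The post describes doing it by hand in the ChatGPT web UI; this does the same thing through the OpenAI Python client instead. The model name, the prompts, and the ask() helper are placeholders made up for the sketch, not anything from the original post.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask(history, question):
        # append the question, get the model's reply, and keep both in the
        # conversation so follow-up questions have the full context
        history.append({"role": "user", "content": question})
        reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
        answer = reply.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        return answer

    paper_excerpt = "...a large chunk of a machine learning paper..."
    history = []

    # 1. brief summary of the pasted section
    summary = ask(history, "Briefly summarize this section:\n\n" + paper_excerpt)

    # 2. drill down (recursively, if needed) on whatever part is still unclear
    detail = ask(history, "Explain the unfamiliar concept you mentioned in more detail.")

    # 3. check your own understanding against the model's answer
    check = ask(history, "Here is my understanding of that concept: ... Am I getting it right?")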
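Re: posts 4 and 5, a minimal sketch of the layer-by-layer feature idea using a tiny convolutional stack in PyTorch. The layer sizes and the TinyConvNet name are arbitrary illustrative choices, not anything from the posts; the point is just that each conv layer consumes the features computed by the layer before it, and that the first-layer filters can be pulled out and inspected directly (after training they tend to end up as oriented edge detectors, much like early visual cortex).

    import torch
    import torch.nn as nn

    class TinyConvNet(nn.Module):
        def __init__(self):
            super().__init__()
            # layer 1 works on raw "pixels" and, once trained, tends to learn
            # simple oriented edge / colour detectors
            self.conv1 = nn.Conv2d(3, 16, kernel_size=7, padding=3)
            # layer 2 combines layer-1 edges into corners, textures, simple shapes
            self.conv2 = nn.Conv2d(16, 32, kernel_size=5, padding=2)
            # layer 3 combines those into higher-level parts
            self.conv3 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
            self.relu = nn.ReLU()
            self.pool = nn.MaxPool2d(2)

        def forward(self, x):
            x = self.pool(self.relu(self.conv1(x)))
            x = self.pool(self.relu(self.conv2(x)))
            x = self.pool(self.relu(self.conv3(x)))
            return x

    net = TinyConvNet()
    features = net(torch.randn(1, 3, 64, 64))  # (1, 64, 8, 8) high-level feature maps
    # each first-layer filter is a 3x7x7 patch you can render as a tiny image
    first_layer_filters = net.conv1.weight.detach()  # shape: (16, 3, 7, 7)
    print(first_layer_filters.shape)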