Everything posted by vkxwz

  1. For me at least it's more like I have to be in the right mood for it, and when I am then the complexity just feels like depth of expression, all form and function rather than throwing the kitchen sink in just for the sake of it. When not in the mood it's not as satisfying, and if I zone out for a few seconds or start thinking about other things I miss how everything connects. Ofc someone who makes music can appreciate how technically impressive the music is more so than someone who doesn't, but I don't think that's important or the point of the music.
  2. Why? Without the intention to sell or show off to other people the price is irrelevant imo.
  3. I suppose it's all taste, isn't it. I still find these sorts of criticisms strange though, "too much drums", as if the ratio of different elements is more important than the form they create together. It's like reading a book and complaining that the frequency of occurrence of different words wasn't to your taste. Also, there is the ambient version, and you're asking for 1 or 2 drum machines; I see a solution for you...
  4. I know it's just an opinion but come on hahaha. At least the way I hear it, every bit of percussion fits together to create something greater than the sum of its parts etc. It seems strange to pick out specific parts and go "I don't like that, it's unnecessary", like looking at a portrait and going "I don't like mouths, he shouldn't have painted a mouth".
  5. Great video, I think what she's saying is spot on and it's definitely amplified by social media
  6. What are you referencing here? But yeah, imo neuroscience has so little understanding of what's really going on in brains that it's laughable. Source: I did a neuroscience degree. Neuroscience is like trying to understand how birds fly by looking at feathers under a microscope. And most machine learning is based on a simple abstraction of a neuron from neuroscience way back in the 50s; anyone who says neural networks are doing the same thing as the brain is insane.
  7. 99% of media content related to machine learning is dogshit. There is a lot of demand from the public for information right now and very little expertise in the media, so they are just pumping out whatever bullshit they can.
  8. I was thinking the other day that because of this they could be doing all their live shows remotely, from home. I'm just imagining Rob on the couch with his laptop while fans lose their minds in the darkness thinking he and Sean are in the room with them, when they're actually on the other side of the world.
  9. Fuckers opened ticket sales 5min early...
  10. I'll admit I don't know much about the topic; my views are formed pretty much entirely from talking to people that actually experience gender dysphoria, but I recognize that the reasons are probably different for different people, so I can't generalise to everyone. I am curious about what your experience is/was but don't want to pry; the few trans people I do know irl don't want to discuss this kind of thing with strangers. I agree it doesn't matter, and I don't know about the social contagion theory, but it seems quite plausible to me that these things are largely nurture, not nature. If you feel like I'm presenting incorrect information, I would like to know how and be proven wrong if possible. This "misinformation" is just my best understanding, which I don't claim is the ultimate truth, and I'm not gonna just adopt whatever opinion is the flavour of the month unless I fully work through it and understand it. If it'll make you happy I'll add a disclaimer: I'm an idiot who's not trans or an academic in this area, I'm mainly just curious about how the whole thing works, and I care about the issue because people close to me have had struggles related to the topic. And I'd argue it isn't all figured out and that there is actually still mystery remaining, but if that's not the case, could you provide links to some information?
  11. This whole topic is a mess imo. @Summon Dot E X E I understand where you're coming from completely and agree in a lot of ways in terms of the age of consent stuff. And to @zlemflolia: the lack of "normal" healthcare is obviously a bad thing, and the bigotry that's rooted in disgust/religious beliefs/resistance to things that go against one's world view needs to go. If I'm talking about what I think should be allowed in terms of transitioning in the current society, I think it really is something with pros and cons that need to be weighed up: the risk of regret does exist to some extent (although small) and there are health risks involved too, but the other side of the story is the positives you get from it, being the alleviation of dysphoria and generally higher quality of life and life satisfaction in many cases. For a long time I did feel there was just something wrong about transness, since I see gender as something similar to sex stereotypes, and then transitioning is identifying with the stereotype of the sex that doesn't match your own and then modifying your body to match that sex more (which is very odd to me if you use analogies to explain the concept). But in the end, for many people in our current society, transitioning actually seems like a good option when you weigh the pros and cons. The really interesting part to me is what actually causes the dysphoria and gives people the motivation to want to transition. My opinion is that it's a societal/cultural issue where people that don't match the stereotype of their sex are viewed in a negative way. There will always be a wide range of behaviours, personalities, and bodies within each sex, and viewing certain kinds of configurations as wrong or bad is where the issue stems from. But also you can do whatever you want with your body, so even if the culture somehow "fixes" this issue I think people should still be allowed to cut off their balls etc.
  12. The neural networks are just chains of matrix multiplication, fairly straightforward math, so we understand what operations are being done. But the amount of computation happening is so massive, and the training process creates emergent structures in the operations (by modifying them automatically, gradually over time). The issue is that it's quite difficult to identify and understand this emergent structure, even though we can see that it works and know every step of the math done to compute the output. It's like trying to understand how a human brain generates thoughts based only on the firing patterns of all the neurons in the brain at all times. There is a whole science around how to set up the neural networks as an environment that the emergent functions can "evolve" in, but much less understanding of the nature of those functions.
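To make the "chains of matrix multiplication" point concrete, here's a toy two-layer forward pass in plain Python (the weights and sizes are made up for illustration, not from any real model):

```python
# A tiny 2-layer "network": each layer is a matrix multiply
# followed by a simple nonlinearity (ReLU).
def matmul(vec, mat):
    # multiply a row vector by a matrix, column by column
    return [sum(v * w for v, w in zip(vec, col)) for col in zip(*mat)]

def relu(vec):
    return [max(0.0, v) for v in vec]

# Hand-picked toy weights, purely illustrative.
W1 = [[0.5, -1.0], [1.0, 0.25]]   # 2 inputs -> 2 hidden units
W2 = [[2.0], [-0.5]]              # 2 hidden units -> 1 output

def forward(x):
    return matmul(relu(matmul(x, W1)), W2)

print(forward([1.0, 2.0]))  # -> [5.0]
```

Every step is transparent arithmetic; in a real model there are billions of these numbers, and training nudges them into structure that nobody wrote down by hand.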
  13. But why do you think that brains in particular rely on those properties (assuming they can't be simulated, which I disagree with)? Obviously some things that happen in physical space can be simulated digitally at the level of abstraction needed to accomplish the task. For example, we humans can multiply numbers together with pen and paper, and this process can be simulated digitally at a level of abstraction that still gets the same end result. Why is intelligence not something similar to the multiplication in that example? As for the simultaneous processing part, it doesn't really matter if you're doing it step by step: an algorithm with zero parallel computation (though modern computers do compute in parallel anyway) can still compute the next state in a way you might call simultaneous processing. Think of something like the Game of Life, where two gliders in different locations are updated simultaneously as you go from one step to the next. And if you argue that simulating in steps like this cannot yield intelligence, I think you're going to need a better reason to support that claim than the fact that we don't fully understand particle physics. I think we are on the same page about the complexity of the brain though. When people claim it's digital, they are talking about neurons either firing or not firing, but even when this is the case the timing isn't "digital", so the firing frequency is again an analogue thing, the phase of a signal matters, and you get interference etc.
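A minimal sketch of that Game of Life point: the loop below visits cells one at a time, but every cell's new value is read from the *old* grid, so the whole grid updates simultaneously even though the computation is purely sequential (toy example with wrap-around edges):

```python
# One Game of Life step, computed sequentially but applied simultaneously.
def step(grid):
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # count live neighbours in the OLD grid (wrap-around edges)
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            # birth on exactly 3 neighbours, survival on 2 or 3
            new[r][c] = 1 if n == 3 or (n == 2 and grid[r][c]) else 0
    return new

# A "blinker": three live cells in a row flip to a column and back.
blinker = [[0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 1, 1, 1, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0]]
print(step(blinker))
```

Running `step` twice returns the original grid: the oscillation only works because no cell's update "sees" another cell's update from the same step.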
  14. What's your reasoning for this? All the things we've called intelligent up until this AI stuff are analogue, obviously, but I don't see any clear reason why we wouldn't be able to create it digitally. If you believe that intelligence arises from the interactions of physical matter, then if we were able to adequately simulate that physical matter, would that not yield intelligence? Anyway, this is another case of not having a definition for the word intelligence.
  15. Great video, I was thinking of linking some of this guy's stuff before. I think stating your own definitions like he does is pretty important for this topic, otherwise you just end up disagreeing about what words mean, like a lot of this thread.
  16. I feel like you could say this bit about attributing nothing to the AI itself about most other technology. Why attribute anything to jet engines? Their creation rests on all these manufacturing and resource-gathering processes, so they don't really do anything themselves. But jet engines still produce something we value, flying, and I think it's the same with LLMs: a lot has gone into them, but they produce something valuable just like flying is. I think the issue with our discussion here is that we obviously don't agree on a definition of intelligence. My personal definition is the ability to make models of a domain that are predictive, and ChatGPT has definitely acquired models that allow it to solve the tasks required to predict the next word, based on a huge dataset. Also, the comparison to Markov chains is super reductive; these systems compute the "meaning" of words in context in a recursive kind of way and then use this to compute a new algorithm to guess the next word. The only way it matches a Markov chain is that it calculates probabilities of the next state (word) based on its current state (prompt / all the previous tokens). I would argue that humans can be reduced to this sort of definition too: we simply compute the next state (the next contents of conscious experience + actions) based on the current state (current contents of conscious experience), like a state machine. There is definitely a valid argument here about whether or not text is enough, but I think it's clear you can learn to model a domain even if you have a super limited and compressed stream of input from it (see OthelloGPT, humans). People are putting these models in observation, thought, action loops to give them agency, which is pretty cool and possibly scary: (green is GPT generated)
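For contrast, here's what a literal Markov chain over words looks like (a bigram model; the toy corpus is my own invention, purely illustrative):

```python
from collections import Counter, defaultdict

# A literal Markov chain: the "state" is just the previous word, and
# next-word probabilities are raw co-occurrence counts from the corpus.
corpus = "the cat sat on the mat the cat ran".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_probs(word):
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

print(next_word_probs("the"))  # 'cat' with p=2/3, 'mat' with p=1/3
```

A chain like this is nothing but a lookup table of surface co-occurrences, with no representation of context or meaning, which is the sense in which I think the comparison undersells what LLMs compute.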
  17. From that article: "it's still a computational guessing game. ChatGPT is, in technical terms, a 'bullshit generator'. If a generated sentence makes sense to you, the reader, it means the mathematical model has made sufficiently good guess to pass your sense-making filter. The language model has no idea what it's talking about because it has no idea about anything at all." This is a bad take that seems to be spreading among people with little to no understanding of the field. There is strong evidence that these systems contain internal models of the world their training data describes; here's a nice article on one example of this: https://www.lesswrong.com/posts/nmxzr2zsjNtjaHh7x/actually-othello-gpt-has-a-linear-emergent-world "But despite the model just needing to predict the next move, it spontaneously learned to compute the full board state at each move - a fascinating result. A pretty hot question right now is whether LLMs are just bundles of statistical correlations or have some real understanding and computation! This gives suggestive evidence that simple objectives to predict the next token can create rich emergent structure (at least in the toy setting of Othello). Rather than just learning surface level statistics about the distribution of moves, it learned to model the underlying process that generated that data." But I do think it's very important to acknowledge that the GPT models do hallucinate; using them in place of Google is a bad idea. They're better used as a sort of reasoning engine.
  18. ChatGPT is only in a primitive stage, but I personally have learnt a lot from it, especially using it in conjunction with a text on the topic you want to learn about. You can engage in a back and forth conversation where you ask for more detail on whatever you don't understand and also ask it to verify whether your understanding (as you explain it) is correct, like a 1-on-1 tutor. I'm certain it'll be an extremely valuable learning tool in the future as it improves. As for the therapist part, I agree it feels dystopian, but I've already witnessed someone use it in a therapy-like way when they were in a dark place, and it genuinely helped them when they had no access to a professional (there is a shortage of them where I live). Imo it's in extremely early stages and not meant to be used for therapy at this point. I'd like to hear you expand on what you're saying about manufactured decisions though.
  19. I think that in the medium term the promise of AI is A) a great teacher that is an expert in every topic at the level of the best human expert, B) an excellent therapist on par with some of the best in the world, and C) able to help you as an individual make more informed and thought-out decisions in general life choices through its own reasoning and information-gathering skills, all for a low price compared to the ways you'd currently get a tiny fraction of these benefits. While this doesn't directly work on societal issues such as universal housing, climate change etc, giving people this power will indirectly lead to those areas being improved: everyone already working on these issues will be more effective. AI can be thought of as the meta-problem, problem-solving applied to problem-solving; if you have strong AI, then your capacity to solve other problems is magnified. And while I agree there is a lot of bullshit surrounding the safety discussion, I wouldn't rule out the risks of AGI; if you have a simple solution to the alignment problem, let us know. OpenAI is capped-profit and controlled by a non-profit; obviously this doesn't mean that what you're talking about can never happen, but it helps. And if it does go that way, it's not a zero-sum game: the general population can still get huge benefit from this stuff even if the "owning class" gets access to better models / sooner.
  20. GPT-4 is very good at teaching things and is also quite good at coding (as good as the average human on LeetCode, a website where users submit small programming challenges for others to solve). So I've been using it for writing a video game I've been working on for a couple of years, and honestly I haven't been this motivated to write code in a long time: it writes decent code quickly to your specification, can debug issues with it, and will suggest libraries and how to install them and integrate them in the project. The other night I spent a couple of hours working on a piece of code without ever running it, just responding to its output with "ok good, now I want it to be able to do this, and also can you change this other thing because I want xyz". And when I'm writing code and want to know more about the library functionality I'm using, I can just ask it and get good explanations. I also told it to write a python script to count the lines of code in all files of a certain type in a folder, and it worked first time. It's like having a tutor / pair programmer that never tires and is always up for more exploration/teaching/code writing. And this is a great article about what these large language models actually are, comparing them to a simulator (or a dreaming engine imo) rather than thinking of them as an "agent": https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators
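That line-counting script looks roughly like this (my own sketch of the kind of script I described, not the exact code GPT-4 produced):

```python
import sys
from pathlib import Path

# Count lines in every file with a given extension under a folder,
# recursively. Folder and extension come from the command line.
def count_lines(folder, extension):
    total = 0
    for path in Path(folder).rglob(f"*{extension}"):
        with open(path, encoding="utf-8", errors="ignore") as f:
            total += sum(1 for _ in f)
    return total

if __name__ == "__main__" and len(sys.argv) == 3:
    folder, ext = sys.argv[1], sys.argv[2]
    print(f"{count_lines(folder, ext)} lines in {ext} files under {folder}")
```

Usage would be something like `python count_lines.py src .py`.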
  21. Haven't seen GPT-4 mentioned here yet: more powerful than GPT-3.5, can write code pretty well in simple cases, and is less prone to lying. It can pass the bar exam and a lot of other exams like AP Biology. At what point does this stuff start disrupting the workforce? I'd say pretty soon for some jobs. Oh, and it takes image inputs and can explain jokes/memes now.
  22. Someone once told me it sounded like a lawnmower and to turn it off.
  23. I started a couple of years ago on chess.com too, was more fun than I remember but I'm still too lazy to learn any real openings, it's a beautiful game imo
  24. Are most people around here confident in who they think it is? Lyff acid is clearly Aphex to me but the rest of the tracks feel more like Squarepusher. If they are truly all made by the same person I am confused. Oh and boxenergy > mangle. Only just though, box is more fun and mangle has a more interesting atmosphere.
  25. I don't think it's better at explaining a topic than a human who is both an expert on that topic and a good teacher, but I think it's already extremely useful. I was copying large chunks of text from machine learning papers into it and asking it to briefly summarize those sections. It did a great job of this, but one time there was a complicated concept I had never heard of before in the summary it gave me, so I asked it to explain it. The first response was correct but not super detailed, so I asked for more detail on a specific part of the explanation; you can do this recursively on exactly the parts you want explained further. Once all my questions were answered, I typed out my understanding of the concept and asked it to tell me if I was getting it right, which it did well too. Next, I asked it to explain how that concept fits into the original summary of the paper, and it did so. This was on a fairly niche and not so simple topic, and it actually helped me understand something quicker than if I had googled it.