Satans Little Helper

Everything posted by Satans Little Helper

  1. Thanks for posting. He's indeed a bit cringy, but he's saved by being able to invite relevant experts and have a good conversation with them. I appreciate the sober and professional take on AI from both experts. I'd hope people like these get to advise governments, because private companies will only take their advice seriously if it's in their interests to do so.
  2. Thanks for the burger. And since we're all adults: https://www.ft.com/content/6eccea8b-6d81-45df-a922-574b3249e0d1
  3. And the risk is about the ability to create and spread misinformation, right? Just to be clear, we're not talking about a Terminator scenario here.
  4. Respectfully disagree. The comparison makes no sense.
  5. In a way, this album impresses me. But at the same time, it's not really my cup of tea. Not sure why. But listening to the songs, it becomes its own satire pretty quickly. Every track is lots of reverb and lots of distortion. Some slow beats entering halfway. Lots of atmosphere. Lots of drama. Over-the-top drama. To the point of being as over the top as a Transformers movie. It sounds like a beautiful soundtrack to a ridiculously boring movie. Perhaps it needs more listens, but currently, I'd skip these tracks like I'd skip a Transformers movie.
  6. With the number of people dying from gun violence each day in the US, you'd almost think the civil war has already begun. Couldn't help but google some stats: https://www.teamenough.org/gun-violence-statistics
  7. OMG... that's... that's painful. Too much truth in one drawing. As a general response to Chomsky, this video is an interesting mental exercise in political philosophy.
  8. Just a quick general response regarding the hype. This is not specifically aimed at you, @auxien, or anyone else. To be honest, this is mostly about my frustrations regarding the situation surrounding AI. Like, for example, that call for a 6-month pause in AI development, because we risk creating something we can't control and we need these 6 months to get a better grip on the risks involved (https://www.dw.com/en/tech-experts-call-for-6-month-pause-on-ai-development/a-65174081). Stuff like this gets my blood boiling, tbh. Who in their right mind would believe these 6 months would matter? Apart from a bunch of people within AI development who calculated they are 6 months behind and hope the sheeple follow their April Fools' joke. (Elon Musk…) Just think about the widespread serious response this generated. That stuff bothers me. I'm still looking for the widespread apologies from people saying they fell for the joke. If you've seen it, give me a call. I'm afraid people still assume it was serious and the singularity is "more close than we realize". Please, if you're seriously worrying about the singularity, you should get off social media for a month. What you should be worried about, however, is your job. That's the only part of the hype that should keep you on your toes. All the other stuff, let's call them the @Alcofribas trigger points, you can ignore freely. To name a few: consciousness, sentience, awareness. Ignore them not just for yourself, but also for the sanity of said watmm regular. And myself.
  9. If you want more, check this one as well. His explanation of how the architecture works is insufferable (when he's trying to explain it using his hands...). Even Lex visibly can't keep up. Besides the explanation - which you got from the previous video - he makes some interesting comments. Key points: - AI has been converging (for the last 5 years) on this transformer architecture, because it can do "anything": you can throw any "problem" at it. For the sake of the argument, you can read "problem" as function. In other words, the transformer is very good at solving specific problems or providing a specific kind of functionality (as opposed to being able to do everything at once, btw - the architecture works best when it is used for a single function, like being a chatbot for chatGPT, which is an essential point for those who are thinking about "general intelligence" - or whatever that means). - The general approach in the current AI field is to use the unchanged (!) transformer architecture but play with all the other options (this tells you how important a good architecture is, btw - if you screw up the architecture, you're nowhere; not even data or countless amounts of parameters will save you). Rough sketch of that "same architecture, different problem" idea below.
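A rough sketch of that point (PyTorch assumed; the class and names here are just mine for illustration, not anything from the video): one unchanged transformer encoder as the backbone, where the only things that change per "problem" are the output head and the data you feed it.

```python
import torch
import torch.nn as nn

class TinyTransformer(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64, n_heads=4, n_layers=2, n_out=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)  # the unchanged part
        self.head = nn.Linear(d_model, n_out)                  # the part you swap per task

    def forward(self, tokens):
        h = self.encoder(self.embed(tokens))   # (batch, seq, d_model)
        return self.head(h.mean(dim=1))        # pool over the sequence, then project

# Same backbone, two different "problems": sentiment (2 classes) vs topic tagging (20 classes).
sentiment_model = TinyTransformer(n_out=2)
topic_model = TinyTransformer(n_out=20)
dummy_batch = torch.randint(0, 1000, (8, 16))  # 8 sequences of 16 token ids
print(sentiment_model(dummy_batch).shape)      # torch.Size([8, 2])
print(topic_model(dummy_batch).shape)          # torch.Size([8, 20])
```

The backbone stays identical; what "problem" it ends up solving comes from the head you bolt on and the data you train it on.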
  10. OK, to be frank, I should stop responding at this point. If you conflate parameters with data and architecture, this argument will simply go nowhere. More parameters (or data) do not solve every problem if you've got the wrong architecture. The transformer architecture is what is behind the success of chatGPT. More info here:
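And just to make the parameters-vs-architecture distinction concrete, a rough sketch of my own (PyTorch assumed; the sizes are only picked so the counts roughly match): an MLP and a transformer encoder can have comparable parameter counts and still be completely different machines.

```python
import torch.nn as nn

def n_params(m):
    """Total number of trainable parameters in a module."""
    return sum(p.numel() for p in m.parameters())

# Plain MLP: ~66k parameters.
mlp = nn.Sequential(nn.Linear(64, 512), nn.ReLU(), nn.Linear(512, 64))

# Single transformer encoder layer, also ~66k parameters.
layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, dim_feedforward=384, batch_first=True)
transformer = nn.TransformerEncoder(layer, num_layers=1)

# Roughly comparable counts, but only one of these has attention over a sequence.
print(n_params(mlp), n_params(transformer))
```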
  11. OK, so in other words, it is actually about functionality. Fair enough. I pretty much agree with this point. The trick is, though, that you have to understand what kind of data you need for different kinds of functionality. If you want to make a chatbot, you can train a model on large amounts of text. If you want to create "a human brain" - which is obviously vague - you have to know what kind of data it needs. And whatever that is, it is not just the internet. Or text. On the architecture side of things, I must say the complementary ML models seem far-fetched to me. Unless a transformer model is similarly a collection of "many different complementary ML models together". There's a certain reason (logic) to the transformer model which makes it work. There needs to be one to make this artificial brain work. And assuming we have such an architecture, I'm afraid you're going to be stuck with a "brain" in a bottle. Without a body it won't be particularly interesting. And yes, I'm fairly certain we're more than just a brain in a body. There's a bit of embodiment which is tied to our human intelligence, imo. That's why I'm not too hyped, btw.
  12. I think you're buying into the hype a bit too much. From the transformer architecture it's clear how important both the architecture and the data are for its success. The claim to build a brain also needs both: an architecture and a lot of data (parameters). We're not there yet. Not by a long shot. You can't just train a different architecture on the same data as chatGPT (the internet) and expect something entirely different to pop out on the other side of learning. This brain "dream" requires a different kind of approach.
  13. This isn't news or a surprise, right? Not sure what the significance is. Neural networks have been around since the 70s. And since roughly the 2010s, I believe, the complexity of those networks has increased and the term "deep learning" was put on top of it. It's not some dark art or anything.
  14. Facebook? (I'm not on FB, so honestly no idea!) I'd like to know as well, btw. There's a bunch of self-released tracks according to Discogs. And I have none of them.
  15. Sounds more like Bach to me. Good one though, well played.
  16. Living the free life in the U S of A. https://www.theguardian.com/us-news/2023/apr/20/north-carolina-shooting-girl-parents-basketball-yard
  17. this thread is now about michael jordan! everybody wants to be like mike!
  18. I cannot help but wonder why vinyl keeps on being a thing, using fossil fuels and all that. The climate footprint of vinyl compared to digital... is that even a fair comparison?