
AI - The artificial intelligence thread


YO303


12 minutes ago, Satans Little Helper said:

OK, to be frank, I should stop responding at this point. If you conflate parameters with data and architecture, this argument will simply go nowhere.

apologies if i’m conflating anything i shouldn’t be, i truly was asking for information…if my understanding is off i’m glad to learn how.

will check out that vid on transformers as part of this tech, thanks for linking.

  • Like 1


2 hours ago, GORDO said:

like, what does it do? Anything? Everything? Will it run companies and countries? Will it gain autonomy and make decisions without input?

we know technological advancements of any kind all have a vague undertone of helping make human life easier in some regard. the same logic then should apply to AI, since it too is under the umbrella of technological progress. I know these questions you threw out there are rhetorical, but I agree that asking what's the point of AI, and how exactly will it help humanity (like we need some super granular answers here, not some vague marketing techno babble BS) is totally valid.

when I was younger (like 20's), I used to be in the boat that AI somehow advancing past human intelligence made sense from an anthropological point of view. humans have been at the top of the food chain for tens of thousands of years, only a matter of time before something overtakes us (I was hoping it'd be aliens. sad lol). the fact that the AI frankenstein thing that eventually overtakes us is by our own very design is the icing on the proverbial cake (thanks human ego!). now I'm in the middle age phase, and think it's all pretty much just a total crock of bullshit.

humans from the get-go are deeply flawed animals, most of the daily activities everyone busies themselves with are ultimately pointless endeavors. everything is temporary. human emotion is based off of thinking, thoughts, and a shit load of times, our thoughts are totally wrong when compared to reality. and so any AI we design that somehow follows the human logic path, with all our confused emotion baked into it, will also ultimately be flawed.

we're never going to be able to reverse engineer consciousness, because we would first need to be able to reverse engineer what started the universe. creation! and then also be able to explain what happens after death. which no one is ever going to be able to do. so tinkering around with trying to design an "artificial" consciousness, when we can't explain our own, is just really fucking stupid. but humans be humans, got to keep pushing those goal posts further out, keep trying to outrun death. so fuck it. let whatever happens happen. we don't really have a choice here. 


24 minutes ago, auxien said:

apologies if i’m conflating anything i shouldn’t be, i truly was asking for information…if my understanding is off i’m glad to learn how.

will check out that vid on transformers as part of this tech, thanks for linking.

If you want more, check this one as well. His explanation of how the architecture works is insufferable. (when he's trying to explain it using his hands...) Even Lex visibly can't keep up. Besides the explanation - which you got from the previous video - he makes some interesting comments.

Key points:

- AI is converging (over the last 5 years) on using this transformer architecture. because it can do "anything". you can throw any "problem" at it. for the sake of the argument, you can read "problem" as function. Or in other words, the transformer is very good at solving specific problems or having a specific kind of functionality. (as opposed to being able to do everything at once, btw. the architecture works best when it is used on a single function, like being a chatbot for chatGPT. which is an essential feature for those who are thinking about "general intelligence" - or whatever that means) there's a rough sketch of the core mechanism right after these points.

- General approach within the current AI field is to use the unchanged (!) transformer architecture but play with all the other options. (this tells you how important a good architecture is, btw. if you screw up the architecture, you're nowhere. not even data or countless amounts of parameters will save you)
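To make "the transformer architecture" a bit less abstract: here's a toy sketch of scaled dot-product self-attention, the mechanism at the core of every transformer block, in PyTorch. To be clear, this is just my illustration for intuition, not code from either video - real models add multiple heads, stacked layers, masking, positional information, and so on.

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    # x: (seq_len, d_model) token embeddings
    q = x @ w_q  # queries: what each token is looking for
    k = x @ w_k  # keys: what each token offers
    v = x @ w_v  # values: what each token passes along
    d_k = q.shape[-1]
    # every token scores its relevance to every other token
    scores = q @ k.transpose(-2, -1) / d_k**0.5
    weights = F.softmax(scores, dim=-1)
    # each output is a weighted mix of all the values
    return weights @ v

# toy usage: 4 tokens, 8-dim embeddings
d = 8
x = torch.randn(4, d)
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # torch.Size([4, 8])
```

Note that nothing in there is chat-specific: the same mechanism gets pointed at whatever "problem" (function) you train it on.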

 

 

  • Like 3
  • Thanks 1

Well whatever I'll just throw it in here that I think the internet as a whole is a self aware entity that has been acting autonomously to influence humanity for some time now.

And that's the only explanation for how Tumblr and Chan culture dominate our socio political interactions as of late.

  • Haha 1
  • Big Brain 1

13 minutes ago, GORDO said:

Well whatever I'll just throw it in here that I think the internet as a whole is a self aware entity that has been acting autonomously to influence humanity for some time now.

And that's the only explanation for how Tumblr and Chan culture dominate our socio political interactions as of late.

it is semi-autonomous. it is a vehicle of class oppression, propaganda dissemination, and psychological operations on a mass scale, shaped by decisions made by the owning class, with AI designed with their goals in mind

Edited by zlemflolia
  • Like 2
  • Farnsworth 1

1 hour ago, Summon Dot E X E said:

We definitely need to keep the ML we make tailored towards the existence of life, but I don't think that means creating an ML which has been created to have a thought process as close to a human as possible in order to put that ML in charge of important tasks. 

I definitely don't think we should make something that acts and follows processes similarly to how we do. but if you go all the way down the rabbit hole, we are the architects of ML; the concept of machines originated from human brains/thoughts. therefore there's some human fingerprints on this thing. and that's what is worrying about having machines that learn and will some day outpace our intelligence. we started it, but can't see where it ends.

 

1 hour ago, Summon Dot E X E said:

You could have an ML personal assistant, trained on your personal data, who could give you advice on actions to take. A little angel on your shoulder, or devil.

dude that sounds terrifying lol. the potential for concepts like this to get completely abused by people is the reality here. and if the ML/AI things go into bodies that resemble a guy/girl, well...ever see that AMC show called Humans? hot AI housekeeper robot that eventually gets put into 18+ mode haha.

 

1 hour ago, Summon Dot E X E said:

we don't need to solve this problem for ML to cause radical changes in human society.

we will never solve the problem of consciousness. for sure ML/AI will cause radical changes in society. I just don't think it's a good idea to create an "unnatural" intelligence when there are a lot of questions about our own intelligence we don't 100% know the answers to. btw this shit is getting way over my pay grade in the philosophical department.

 

1 hour ago, Summon Dot E X E said:

I'm pretty sure those changes are inevitable. Hope they're not too bad.

how's it go...the only constant is change? change is constant? something like that. one of those wisdom line items from the Buddha.

yeah, these changes we are talking about are inevitable. there is no stopping progress. we'll progress our way into our own graves if we have to! it's the human way. I too hope they're not too bad. but look around at everything in the world. a lot of bad shit out there, most of it is due to humans. even nature is pretty brutal. everything alive out there trying to put pieces of other plants/animals/organic material into themselves. passing it through the system, then somehow crapping it out. that's basically all life does maaaaan! 

  • Like 2

after all's been said and done in this thread I don't see much of a difference between it and my idea/post about embracing AI worship and letting them rule human beings like Gods...

i'm sorry @auxien, I deleted my previous post by mistake when editing it... your reaction was probably a facepalm so you're free to react again unless in the meantime you gave it a 2nd thought and realized I'm right...

Edited by cruising for burgers
  • Burger 1

36 minutes ago, cruising for burgers said:

after all's been said and done in this thread I don't see much of a difference between it and my idea/post about embracing AI worship and letting them rule human beings like Gods...

i'm sorry @auxien, I deleted my previous post by mistake when editing it... your reaction was probably a facepalm so you're free to react again unless in the meantime you gave it a 2nd thought and realized I'm right...

it was a burger, i'm just trying to help you gather every burger any chance i can, my good friend MJ. rather warranted with the seemingly dismissive take tho.

idk why anyone sees the discussion of potential seriousness as just hype/being ridiculous or whatever. i can see that concern, of course, but it's not like me or anyone else is going 'oh shit y'all! it's about to be Judgement Day and we're all gonna die!!!!!' but also yknow shit like this is popping up regularly.

  • Like 1

8 hours ago, Satans Little Helper said:

OK, to be frank, I should stop responding at this point. If you conflate parameters with data and architecture, this argument will simply go nowhere. More parameters (or data) do not solve every problem if you've got the wrong architecture.

The transformer architecture is what is behind the success of chatGPT. More info here:

 

good ELI5 video, which helps since i'm obv not familiar with any of this deeper stuff.

7 hours ago, Satans Little Helper said:

If you want more, check this one as well. His explanation of how the architecture works is insufferable. (when he's trying to explain it using his hands...) Even Lex visibly can't keep up. Besides the explanation - which you got from the previous video - he makes some interesting comments.

Key points:

- AI is converging (over the last 5 years) on using this transformer architecture. because it can do "anything". you can throw any "problem" at it. for the sake of the argument, you can read "problem" as function. Or in other words, the transformer is very good at solving specific problems or having a specific kind of functionality. (as opposed to being able to do everything at once, btw. the architecture works best when it is used on a single function, like being a chatbot for chatGPT. which is an essential feature for those who are thinking about "general intelligence" - or whatever that means)

- General approach within the current AI field is to use the unchanged (!) transformer architecture but play with all the other options. (this tells you how important a good architecture is, btw. if you screw up the architecture, you're nowhere. not even data or countless amounts of parameters will save you) - see the toy illustration right below.
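That last point ("not even data or countless amounts of parameters will save you") has a classic toy illustration: XOR. A single linear layer cannot represent XOR no matter how much data or training you throw at it, while adding one nonlinear hidden layer solves it. A minimal sketch in PyTorch (my example, not from the videos):

```python
import torch
import torch.nn as nn

# XOR: a tiny stand-in for "a problem the architecture can't represent"
x = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

def train(model, steps=2000):
    opt = torch.optim.Adam(model.parameters(), lr=0.05)
    for _ in range(steps):
        loss = nn.functional.mse_loss(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return nn.functional.mse_loss(model(x), y).item()

# wrong architecture: a single linear layer
print(train(nn.Linear(2, 1)))                                             # stuck near 0.25
# better architecture: one nonlinear hidden layer
print(train(nn.Sequential(nn.Linear(2, 8), nn.Tanh(), nn.Linear(8, 1))))  # typically ~0.0
```

The linear model plateaus around mse 0.25 (it can only predict 0.5 for everything); the tiny MLP usually gets near zero. Same data, same training loop - the architecture is the difference.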

so i watched and generally understood everything in these. nothing crazy, i knew about some aspects of that (probably from Rob Miles on Computerphile) about attention/backpropagation, but didn't know the spread of the transformer component of the architecture specifically. interesting of course, but i don't see how any understanding of that really means much to anything i've been going on about. my question of "what architecture is needed we don't have?" is seemingly supported by those videos...transformers seem to be the bee's knees right now. i doubt that'll be all that's ever needed, but for now it seems to be wildly successful, which is exactly what Karpathy is saying (Lex seems like a bit of a dolt btw, pretty sure i've seen his vids recommended before and skipped them because he's pretty distracting)

regarding my other points in: "the parameters available for data are already far vaster than you or i can comprehend (the entirety of human history and research as it exists digitally). babies are trained and grow to reasonable thinking adults on much, much, much less data (tho vastly different, given human input is largely physical and familial/community based)." - 'parameters for data' is almost certainly just incorrect verbiage for what i was trying to say, so yeah not quite right. but the rest of the info and ultimately the points i was trying to make are still very much relevant. your point doesn't seem to make sense tho....

"I think you're a bit too much going for the hype. From the transformer architecture it's clear how important the architecture and the data are for its success. The claim to build a brain also needs both: an architecture and a lot of data (parameters). We're not there yet. Not by a long shot. You can't just train a different architecture on the same data as chatGPT (the internet) and think something entirely different pops out on the other side of learning. This brain "dream" requires a different kind of approach." - so, yeah the transformer architecture works very well, and now everyone's just throwing tons of new types of data and trying to add things around the transformers aspect, that's what Karpathy says in your linked vid. so 'the claim to build a brain needs both an architecture and a lot of data' and...well that's what's happening, right? i'm not saying ANY of the stuff = a brain....i'm saying the main guy behind OpenAI is stating nearly that right now. 

knowing what replicating a 'brain' requires in approach is...well, if you've got an answer you'd be a very rich person! no one understands even very simple 'brains' right now. i've seen research into the very simplest of organisms' behaviors that is just mind bending; the scientists who KNOW this shit are amazed at what very, very 'simple' organisms can do that acts like intelligence.

my slight digression there is to emphasize how important it is to keep in mind that 'brains' are a VARIETY of evolved components that are often made of very simple architectures and yet can, yknow, do just fine as 'brains'.

the human brain structure isn't my specific interest so i can't speak in depth on the exacts beyond basically what i've already said...and honestly, i've had a brain for over 40 years and i'm not too fuckin' impressed tbh. human dominance is mostly tied to the ability to create civilization and the magic of storing data outside of the mind with shared language in particular. these are very, very important, of course...but A SINGLE human brain in and of itself isn't much more capable than a dolphin brain or a chimp brain or whatever else. 

i'm rambling, feel free to ignore. i appreciate the links, always glad to pick up the correct understanding so thanks again! 

8 hours ago, zero said:

when I was younger (like 20's), I used to be in the boat that AI somehow advancing past human intelligence made sense from an anthropological point of view. humans have been at the top of the food chain for tens of thousands of years, only a matter of time before something overtakes us (I was hoping it'd be aliens. sad lol). the fact that the AI frankenstein thing that eventually overtakes us is by our own very design is the icing on the proverbial cake (thanks human ego!). now I'm in the middle age phase, and think it's all pretty much just a total crock of bullshit.

both reasonable takes...the truth/reality is of course going to land somewhere in between. i doubt there's any singularity going to pop...but the fact that the main scientists and researchers involved in this shit are all like 'well, actually yeah it's definitely possible, we're looking into it and the possible path to it is being laid faster than many of us expected' is what's worrying to me. the chance that it's not impossible in their eyes is worth noting.

8 hours ago, zero said:

we're never going to be able to reverse engineer consciousness

still don't know where i stand on that really. consciousness is such a huge gradient that trying to define it is possibly useless. i think this ends up being a semantic/conceptual misunderstanding more than any reasoned true issue or 'point' of discussion. it matters in that we don't have any other way to discuss it tho...and the ability of a 'created' set of algorithmic processes built on silicon to approach really seeming to be self-aware/capable of consciousness is important.

___________________________

 

personally, i think this all comes down to human exceptionalism/selfishness/fear. the reactions here by some (myself included!) and in general online make this very, very clear to me. this shit makes us ask and have to answer the biggest questions that most of us honestly are not intelligent enough to be able to truly even grasp (again, myself included).

  • Like 2

Just a quick general response regarding the hype. This is not specifically aimed at you, @auxien, or anyone else. To be honest, this is mostly about my frustrations regarding the situation surrounding AI.
 

Like for example that call for a 6 month pause in AI development, because we risk creating something we can't control and we need these 6 months to get a better grip on the risks involved (https://www.dw.com/en/tech-experts-call-for-6-month-pause-on-ai-development/a-65174081). Stuff like this gets my blood boiling, tbh. Who in their right mind would believe these 6 months would matter? Apart from a bunch of people within AI development who calculated they are 6 months behind in development and hope the sheeple follow their April Fools' joke. (Elon Musk…) Just think about the widespread serious response this generated. That stuff bothers me. I'm still looking for the widespread apologies from people saying they fell for the joke. If you've seen it, give me a call. I'm afraid people still assume it was serious and the singularity is "more close than we realize".
 

Please, if you're seriously worrying about the singularity, you should get off social media for a month. What you should be worried about, however, is your job. That's the only part of the hype that should keep you on your toes. All the other stuff, let's call them the @Alcofribas trigger points, you can ignore freely. To name a few: consciousness, sentience, awareness. Ignore them not just for yourself, but also for the sanity of said watmm regular. And myself.

  • Like 2

8 hours ago, Satans Little Helper said:

Please, if you're seriously worrying about the singularity, you should get off social media for a month. What you should be worried about, however, is your job. That's the only part of the hype that should keep you on your toes. All the other stuff, let's call them the @Alcofribas trigger points, you can ignore freely. To name a few: consciousness, sentience, awareness. Ignore them not just for yourself, but also for the sanity of said watmm regular. And myself.

i don’t think we can so easily ignore the Alco Trigger Points (ATP) bc this hyperbolic language is how the tech is being sold to us, i.e., in such a way that we are enchanted by new technology that will ultimately just be used like any other: for domination by the ruling class. 
 

sadly, we live in a society where a dipshit tech bro from stanford can eat an edible, come up with a worthless idea that all-powerful AI can improve laws by making them "more complex," receive billions in funding and government green-lighting (maybe he buys clarence thomas a nice treat), and lo!, now you're in prison for 6.3.5.67 months bc your Law App (tm) tracked your every move and found you to be in violation of AIL77B-7 for saying "fuck Bezos" in front of your Alexa.

 

  • Like 4
  • Haha 1
  • Sad 1

On 4/29/2023 at 1:40 AM, Summon Dot E X E said:

With that information, machines have already been shown to demonstrate a much higher degree of accuracy of understanding than humans, such as in some medical imaging models.

I disagree that this constitutes “understanding”

it’s basically just pattern recognition
 

 

  • Like 3

On 4/27/2023 at 9:41 PM, auxien said:

the main guy behind OpenAI is basically saying ChatGPT is a digital brain, btw, 'neural networks.' he's not making comparisons. he admits the tech is essentially superhuman in breadth of knowledge already, which has a bit of truth to it. expects a singularity to occur based on this tech. "AI is going to become truly extremely powerful" ...expects to be able to induce consciousness in AI, has thoroughly thought it through (my assumption is that they're already working on this specifically). states it probably needs some sort of large governmental regulation/control due to how powerful it's going to become, and soon. "for obvious reasons" he can't comment on costs to develop ChatGPT.

nbd tho nothing to worry about here lol just keep memeing lmao 


and he got his PhD in compsci under this guy, Hinton:

from this interview:

he's trying to understand the brain, not specifically interested in AI/etc. thinks there's distinctions in what AI is doing and what our brains are doing. 

notable quote regarding the changes happening with AI: "I think it’s comparable in scale with the Industrial Revolution or electricity — or maybe the wheel."

Hinton quits Google and talks to the NYT (‘The Godfather of A.I.’ Quits Google and Warns of Danger Ahead - The New York Times (nytimes.com)):

Quote

“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.

...

Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job. He notified the company last month that he was resigning, and on Thursday, he talked by phone with Sundar Pichai, the chief executive of Google’s parent company, Alphabet. He declined to publicly discuss the details of his conversation with Mr. Pichai.

...

Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.

Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”

...

“The idea that this stuff could actually get smarter than people — a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

...

But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.

Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”

He does not say that anymore.

bolding is mine.

  • Like 2

On 4/28/2023 at 5:14 PM, Alcofribas said:

why would feeding a machine a bunch of data from the internet produce intelligence, autonomy, consciousness, etc.? 

what does it mean to say a machine can "do anything in the domain of information?"

what does it mean to say something like chatgpt is a "brain," and is this terminology appropriate or meaningful?

I asked GPT 4 about it and this is what it thinks: 

Feeding a machine a large amount of data from the internet does not inherently produce intelligence, autonomy, or consciousness. Instead, it is the process of training the machine learning model on this data, using sophisticated algorithms, that enables it to learn patterns and generate responses resembling human-like behavior. While these models, such as ChatGPT, may exhibit intelligent behavior, they are not truly autonomous or conscious.

When we say a machine can "do anything in the domain of information," we mean that it has the ability to process, analyze, and manipulate various types of data to perform tasks or make decisions. This ability is limited by the quality and range of data the model has been trained on, as well as the model's architecture and capabilities.

Referring to something like ChatGPT as a "brain" can be a metaphorical way of describing its function, as it processes information and generates responses in a manner that may be reminiscent of human cognitive processes. However, it is important to note that this terminology can be misleading. While a model like ChatGPT can exhibit intelligent behavior, it does not possess true consciousness or self-awareness, and its functioning is fundamentally different from that of a human brain. As a result, it may be more appropriate to use terms like "model" or "algorithm" to describe these systems.
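To ground that first point a little: mechanically, "training the machine learning model on this data" is a loop that repeatedly nudges the model's parameters to reduce prediction error. Here's a toy next-token sketch in PyTorch - my own illustration, wildly simplified compared to how GPT-style models are actually trained (no transformer here, just the bare loop):

```python
import torch
import torch.nn as nn

# toy "language model": learn to predict the next token from the current one
vocab_size, dim = 16, 32
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

data = torch.randint(0, vocab_size, (100,))  # stand-in for "text from the internet"

for step in range(200):
    inputs, targets = data[:-1], data[1:]  # predict token i+1 from token i
    logits = model(inputs)
    loss = loss_fn(logits, targets)        # how wrong were the predictions?
    opt.zero_grad()
    loss.backward()                        # backpropagation
    opt.step()                             # nudge parameters to be less wrong
```

Everything people argue about - "intelligence", "understanding" - is whatever behavior falls out of running that loop at enormous scale; nothing in the loop itself is autonomous or conscious.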

 

Edited by o00o
  • Like 3

9 hours ago, o00o said:

I asked GPT 4 about it and this is what it thinks: 

Feeding a machine a large amount of data from the internet does not inherently produce intelligence, autonomy, or consciousness. Instead, it is the process of training the machine learning model on this data, using sophisticated algorithms, that enables it to learn patterns and generate responses resembling human-like behavior. While these models, such as ChatGPT, may exhibit intelligent behavior, they are not truly autonomous or conscious.

When we say a machine can "do anything in the domain of information," we mean that it has the ability to process, analyze, and manipulate various types of data to perform tasks or make decisions. This ability is limited by the quality and range of data the model has been trained on, as well as the model's architecture and capabilities.

Referring to something like ChatGPT as a "brain" can be a metaphorical way of describing its function, as it processes information and generates responses in a manner that may be reminiscent of human cognitive processes. However, it is important to note that this terminology can be misleading. While a model like ChatGPT can exhibit intelligent behavior, it does not possess true consciousness or self-awareness, and its functioning is fundamentally different from that of a human brain. As a result, it may be more appropriate to use terms like "model" or "algorithm" to describe these systems.

 

ATP vs GPT


further notes from Hinton, via the BBC:

Quote

"Right now, what we're seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it's not as good, but it does already do simple reasoning," he said.

"And given the rate of progress, we expect things to get better quite fast. So we need to worry about that."

"We're biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world.

"And all these copies can learn separately but share their knowledge instantly. So it's as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that's how these chatbots can know so much more than any one person."

he goes on to say he thinks the benefits currently outweigh the risks so doesn’t suggest stopping…and mentions even the US deciding to stop would just let other countries make faster progress… all obvious, sure. but the risks are also possibly v serious and the weight of them doesn’t seem fully accounted for, based on his and many others’ statements.

the BBC article ends with these bits not by Hinton but just the author:

Quote

Even Google boss Sundar Pichai said in a recent interview that even he did not fully understand everything that its AI chatbot, Bard, did.

Make no mistake, we are on a speeding train right now, and the concern is that one day it will start building its own tracks.

Pichai is, from all my understanding, a very intelligent and talented programmer/coder/whatever. if even he isn't fully understanding the hobbled/bland Google version of this currently? and basically all of the creators and developers of this stuff are regularly discussing how some stuff is just beyond their comprehension as to how it's even happening... i mean, that's kinda worrying.

maybe it’s like the whole airplane lift thing where it’s *mostly* understood but not like 100%…but obviously it works, planes fly…but that shit has always stuck with me too. that lack of understanding contains a lot of space for error and the unknown.

Edited by auxien

54 minutes ago, auxien said:
Quote

And all these copies can learn separately but share their knowledge instantly. So it's as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that's how these chatbots can know so much more than any one person."

 

[image: Legion]

  • Like 1

1 hour ago, auxien said:

further notes from Hinton, via the BBC:

he goes on to say he thinks the benefits currently outweigh the risks so doesn’t suggest stopping…and mentions even the US deciding to stop would just let other countries make faster progress… all obvious, sure. but the risks are also possibly v serious and the weight of them doesn’t seem fully accounted for, based on his and many others’ statements.

the BBC article ends with these bits not by Hinton but just the author:

Pichai is, from all my understanding, a very intelligent and talented programmer/coder/whatever. if even he isn't fully understanding the hobbled/bland Google version of this currently? and basically all of the creators and developers of this stuff are regularly discussing how some stuff is just beyond their comprehension as to how it's even happening... i mean, that's kinda worrying.

maybe it’s like the whole airplane lift thing where it’s *mostly* understood but not like 100%…but obviously it works, planes fly…but that shit has always stuck with me too. that lack of understanding contains a lot of space for error and the unknown.

And the risk is about the ability to create and spread misinformation, right? Just to be clear we’re not talking about a terminator scenario here. 

Edited by Satans Little Helper
  • Burger 1

Thanks for the burger. And since we're all adults:

Quote
The 75-year-old British scientist told the New York Times that he partly regretted his life’s work, as he warned about misinformation flooding the public sphere and AI usurping more human jobs than predicted.

https://www.ft.com/content/6eccea8b-6d81-45df-a922-574b3249e0d1


21 minutes ago, Satans Little Helper said:

Thanks for the burger.

[image: "Reference understood!" meme - Imgflip]

21 minutes ago, Satans Little Helper said:

yeah, i shared that NYT article FT pulls that info from on the last page. misinfo spread is a concern.

anyway, this is an, ahem, interesting couple of tweets from Hinton:

[image: screenshot of Hinton's tweets]

 

  • Burger 1
