
AI - The artificial intelligence thread


YO303

7 minutes ago, Alcofribas said:

why would feeding a machine a bunch of data from the internet produce intelligence, autonomy, consciousness, etc.? 

what does it mean to say a machine can "do anything in the domain of information?"

what does it mean to say something like chatgpt is a "brain," and is this terminology appropriate or meaningful?

 

i think there's a tremendous amount of fluff and bullshit coming out of the tech world about this. a lot of it seems founded on the premise that a brain is just a computer, knowledge is just data collection, and intelligence is just organizing this data. and here data is just whatever...from the internet? come on. the hype is constantly saying humans are dumb compared to a computer, bc a computer stores more website data. seems incredibly stupid, to me.

i'm not sure if anyone itt has messed with chatgpt but when you "test" it on subjects you're very familiar with, it consistently demonstrates that it is bordering on completely useless. constantly churning out gibberish, bullshit, disorganized information, absurd inaccuracy, etc. it doesn't even have the "intelligence" to simply say "lmao idk." it's a massive search engine designed to provide results in the form of "answering a question." 

i'm highly skeptical this will produce anything outside of massively destroying our minds with misinformation and being used by the ruling class to dominate and oppress society. 

"why would feeding a machine a bunch of data from the internet produce intelligence, autonomy, consciousness, etc.?"

It doesn't need to produce that. It's about the applications that could be created to make use of such models. Or, since the applications themselves will be able to be written by ML, it's about people who just think of the ideas they want, and programs are written for them that make use of models designed and trained by ML.

Once you have that, you can gradually or suddenly just stop broadcasting that you have this technology, and just secretly use it to control people, like you said. It could be used for good purposes instead, maybe.

"what does it mean to say a machine can "do anything in the domain of information?""

I meant that the powers of a well-trained ML model in very specific domains can surpass the equivalent in humans, so a network of these types of models, a network perhaps run by an ML that was trained to run such a network, could in turn be controlled by a model which was trained to make decisions about policies for humans, which the people could then choose to use, or not use. It might suggest some interesting things. That's a good thing it could do.

An autonomous ML model which had control over a virtual machine with a fast connection, massive processing capacity, etc., could create social credit scores for everyone based on their posts on the internet, and could even punish people by the way it manipulates their social media feeds, suppresses good news and shows a ton of depressing news, creates fictitious reports about a person to keep jobs away from them... I mean, think about what you can do on a computer. Now imagine what an ML agent could do on a VM.

These are just some hypotheticals, but look at how quickly the development of this stuff is accelerating... we are heading faster and faster towards the singularity.

3 minutes ago, Alcofribas said:

tbh it’s tempting to see this as the natural outcome of an educational system that prioritizes memorization over learning and a society which denigrates “the humanities” as a bunch of irrelevant, unprofitable emo shit. 
 

you end up with a tech industry saying their search engine is a brain and it’s the most powerful intelligence ever created. bc it stores all the stuff from online. 

Information is stored in the human brain. It can be encoded in digital form, or not. More and more information is being stored in that form. With that information, machines have already been shown to achieve a much higher degree of accuracy of understanding than humans, such as in some medical imaging models.

Think of all the different applications of ML you've seen over the years. Not just ChatGPT or Midjourney. ML models will be able to do "anything in the domain of information" if we continue in this direction.


16 minutes ago, Summon Dot E X E said:

Information is stored in the human brain. It can be encoded in digital form, or not. More and more information is being stored in that form. With that information, machines have already been shown to achieve a much higher degree of accuracy of understanding than humans, such as in some medical imaging models.

good point. and to further that, it’s been shown in any variety of ways how fallible our memories/stored information can be. 

that’s not an argument for human/animal weakness/inferiority, it’s just an assessment of the facts. human/animal brains can be great at certain things and absolutely abysmal at others…just like any neural network development happening may be vastly superior in some ways while still failing in others.

the difference tho is that if AGIs have the ability to self correct, they’ll eventually ‘fix’ their ‘flaws’….but then the question becomes, what does an AGI consider a flaw? if we allow their internal self correction, eventually we will get true AGI that is capable of far surpassing animal intelligence in nearly every way. humans/animals are malleable, but only to a certain extent. a digital neural network with enough power (literal and figurative power) could self correct in ways we cannot even begin to understand, much less plan for. allowing the self correction is happening already to some extent or another…ramping it up is surely the next step if it’s not already going on in secret.


25 minutes ago, auxien said:

transformer architecture? what architecture is needed we don’t have? the parameters available for data are already far vaster than you or i can comprehend (the entirety of human history and research as it exists digitally). babies are trained and grow to reasonable thinking adults on much, much, much less data (tho vastly different, given human input is largely physical and familial/community based).

OK, to be frank, I should stop responding at this point. If you conflate parameters with data and architecture, this argument will simply go nowhere. More parameters (or data) do not solve every problem if you've got the wrong architecture.

The transformer architecture is what is behind the success of chatGPT. More info here:
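For anyone wondering what "the transformer architecture" actually computes, its core is the scaled dot-product attention step. A minimal numpy sketch with toy dimensions (no training, no multi-head machinery, just the bare operation):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: every query scores every key,
    and the output mixes the values according to those scores."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query/key similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))                      # three "tokens", 4-dim embeddings
out, w = scaled_dot_product_attention(X, X, X)   # self-attention: Q = K = V
print(out.shape)                                 # (3, 4): one mixed vector per token
```

Everything else in the architecture (stacking layers, multiple heads, the MLP blocks) is built around repeating this one operation.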

 


12 minutes ago, Satans Little Helper said:

OK, to be frank, I should stop responding at this point. If you conflate parameters with data and architecture, this argument will simply go nowhere.

apologies if i’m conflating anything i shouldn’t be, i truly was asking for information…if my understanding is off i’m glad to learn how.

will check out that vid on transformers as part of this tech, thanks for linking.


2 hours ago, GORDO said:

like, what does it do? Anything? Everything? Will it run companies and countries? Will it gain autonomy and make decisions without input?

we know technological advancements of any kind all have a vague undertone of helping make human life easier in some regard. the same logic then should apply to AI, since it too is under the umbrella of technological progress. I know these questions you threw out there are rhetorical, but I agree that asking what's the point of AI, and how exactly will it help humanity (like we need some super granular answers here, not some vague marketing techno babble BS) is totally valid.

when I was younger (like 20's), I used to be in the boat that AI somehow advancing past human intelligence made sense from an anthropological point of view. humans have been at the top of the food chain for tens of thousands of years, only a matter of time before something overtakes us (I was hoping it'd be aliens. sad lol). the AI frankenstein thing that eventually overtakes us being by our own very design is the icing on the proverbial cake (thanks human ego!). now I'm in the middle age phase, and think it's all pretty much just a total crock of bullshit.

humans from the get-go are deeply flawed animals, most of the daily activities everyone busies themselves with are ultimately pointless endeavors. everything is temporary. human emotion is based off of thinking, thoughts, and a shit load of times, our thoughts are totally wrong when compared to reality. and so any AI we design that somehow follows the human logic path, with all our confused emotion baked into it, will also ultimately be flawed.

we're never going to be able to reverse engineer consciousness, because we would first need to be able to reverse engineer what started the universe. creation! and then also be able to explain what happens after death. which no one is ever going to be able to do. so tinkering around with trying to design an "artificial" consciousness, when we can't explain our own, is just really fucking stupid. but humans be humans, got to keep pushing those goal posts further out, keep trying to outrun death. so fuck it. let whatever happens happen. we don't really have a choice here. 


4 minutes ago, zero said:

we know technological advancements of any kind all have a vague undertone of helping make human life easier in some regard. the same logic then should apply to AI, since it too is under the umbrella of technological progress. I know these questions you threw out there are rhetorical, but I agree that asking what's the point of AI, and how exactly will it help humanity (like we need some super granular answers here, not some vague marketing techno babble BS) is totally valid.

when I was younger (like 20's), I used to be in the boat that AI somehow advancing past human intelligence made sense from an anthropological point of view. humans have been at the top of the food chain for tens of thousands of years, only a matter of time before something overtakes us (I was hoping it'd be aliens. sad lol). the AI frankenstein thing that eventually overtakes us being by our own very design is the icing on the proverbial cake (thanks human ego!). now I'm in the middle age phase, and think it's all pretty much just a total crock of bullshit.

humans from the get-go are deeply flawed animals, most of the daily activities everyone busies themselves with are ultimately pointless endeavors. everything is temporary. human emotion is based off of thinking, thoughts, and a shit load of times, our thoughts are totally wrong when compared to reality. and so any AI we design that somehow follows the human logic path, with all our confused emotion baked into it, will also ultimately be flawed.

we're never going to be able to reverse engineer consciousness, because we would first need to be able to reverse engineer what started the universe. creation! and then also be able to explain what happens after death. which no one is ever going to be able to do. so tinkering around with trying to design an "artificial" consciousness, when we can't explain our own, is just really fucking stupid. but humans be humans, got to keep pushing those goal posts further out, keep trying to outrun death. so fuck it. let whatever happens happen. we don't really have a choice here. 

"and so we want to design an AI that somehow follows the human logic path"

Is logic purely something that exists within human minds? Could it be an intrinsic characteristic of reality?

We definitely need to keep the ML we make tailored towards the existence of life, but I don't think that means creating an ML which has been created to have a thought process as close to a human as possible in order to put that ML in charge of important tasks. Rather, the highly specialized capabilities of ML can help us with things we want to do. You could have an ML personal assistant, trained on your personal data, who could give you advice on actions to take. A little angel on your shoulder, or devil.

"we're never going to be able to reverse engineer consciousness"

I suppose if ML got advanced enough, it might be able to, by being given access to vast amounts of brain data (collecting this is the real problem), construct a massive systems diagram showing all the interactions, or at least a general overview. I don't know if that would get to the question of consciousness, though, which may be a purely subjective phenomenon... it only exists to the person experiencing it. But, again, we don't need to solve this problem for ML to cause radical changes in human society.

I'm pretty sure those changes are inevitable. Hope they're not too bad.


24 minutes ago, auxien said:

apologies if i’m conflating anything i shouldn’t be, i truly was asking for information…if my understanding is off i’m glad to learn how.

will check out that vid on transformers as part of this tech, thanks for linking.

If you want more, check this one as well. His explanation of how the architecture works is insufferable (when he's trying to explain it using his hands...). Even Lex visibly can't keep up. Besides the explanation (which you got from the previous video), he makes some interesting comments.

Key points:

- AI is converging (for the last 5 years) to using this transformer architecture. because it can do "anything". you can throw any "problem" at it. for the sake of the argument, you can read "problem" as function. Or in other words, the transformer is very good at solving specific problems or having a specific kind of functionality. (as opposed to being able to do everything at once, btw. the architecture works best when it is used on a single function, like being a chatbot for chatGPT. which is an essential feature for those who are thinking about "general intelligence" - or whatever that means)

- The general approach in the current AI field is to use the unchanged (!) transformer architecture but play with all the other options. (this tells you how important a good architecture is, btw. if you screw up the architecture, you're nowhere. not even data or countless amounts of parameters will save you)
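The "keep the architecture, turn the other knobs" point can be made concrete with a back-of-envelope parameter count for a decoder-only transformer. The formula below is a rough approximation (it ignores biases, layernorms, and position embeddings); the two configurations are just well-known public reference points for GPT-2-small and GPT-3 scale:

```python
def transformer_params(n_layers, d_model, vocab_size):
    """Very rough decoder-only parameter count: per layer ~4*d^2 for the
    attention projections plus ~8*d^2 for the MLP (4x expansion), plus
    the token-embedding table."""
    return n_layers * 12 * d_model ** 2 + vocab_size * d_model

# same architecture, different knobs:
small = transformer_params(n_layers=12, d_model=768,   vocab_size=50257)
large = transformer_params(n_layers=96, d_model=12288, vocab_size=50257)
print(f"{small / 1e6:.0f}M vs {large / 1e9:.0f}B parameters")  # 124M vs 175B
```

Three orders of magnitude of scale, and nothing about the architecture itself had to change.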

 

 


Well whatever I'll just throw it in here that I think the internet as a whole is a self aware entity that has been acting autonomously to influence humanity for some time now.

And that's the only explanation for how Tumblr and Chan culture dominate our socio political interactions as of late.


13 minutes ago, GORDO said:

Well whatever I'll just throw it in here that I think the internet as a whole is a self aware entity that has been acting autonomously to influence humanity for some time now.

And that's the only explanation for how Tumblr and Chan culture dominate our socio political interactions as of late.

it is semi-autonomous, it is a vehicle of class oppression, propaganda dissemination, and psychological operations on a mass scale, designed by decisions made by the owning class, and AI designed with their goals in mind

Edited by zlemflolia

1 hour ago, Summon Dot E X E said:

We definitely need to keep the ML we make tailored towards the existence of life, but I don't think that means creating an ML which has been created to have a thought process as close to a human as possible in order to put that ML in charge of important tasks. 

I definitely don't think we should make something that acts and follows processes similarly to how we do. but if you go all the way down the rabbit hole, we are the architects of ML, the concept of machines originated from human brains/thoughts. therefore there's some human fingerprints on this thing. and that's what is worrying about having machines that learn and will some day outpace our intelligence. we started it, but can't see where it ends.

 

1 hour ago, Summon Dot E X E said:

You could have an ML personal assistant, trained on your personal data, who could give you advice on actions to take. A little angel on your shoulder, or devil.

dude that sounds terrifying lol. the ability for concepts like this to get completely abused by people is the reality here. and if the ML/AI things go into bodies that resemble a guy/girl, well...ever see that AMC show called Humans? hot AI housekeeper robot that eventually gets put into 18+ mode haha.

 

1 hour ago, Summon Dot E X E said:

we don't need to solve this problem for ML to cause radical changes in human society.

we will never solve the problem of consciousness. for sure ML / AI will cause radical changes in society. I just don't think it's a good idea to create an "unnatural" intelligence, when a lot of questions about our intelligence - we don't 100% know the answers to. btw this shit is getting way over my pay grade in the philosophical department.

 

1 hour ago, Summon Dot E X E said:

I'm pretty sure those changes are inevitable. Hope they're not too bad.

how's it go...the only constant is change? change is constant? something like that. one of those wisdom line items from the Buddha.

yeah, these changes we are talking about are inevitable. there is no stopping progress. we'll progress our way into our own graves if we have to! it's the human way. I too hope they're not too bad. but look around at everything in the world. a lot of bad shit out there, most of it is due to humans. even nature is pretty brutal. everything alive out there trying to put pieces of other plants/animals/organic material into themselves. passing it through the system, then somehow crapping it out. that's basically all life does maaaaan! 


after all's been said and done in this thread I don't see much of a difference between it and my idea/post about embracing AI worship and letting them rule human beings like Gods...

i'm sorry @auxien I deleted my previous post by mistake when editing it... your reaction was probably a facepalm so you're free to react again unless in the meantime you gave it a 2nd thought and realized I'm right...

Edited by cruising for burgers

36 minutes ago, cruising for burgers said:

after all's been said and done in this thread I don't see much of a difference between it and my idea/post about embracing AI worship and letting them rule human beings like Gods...

i'm sorry @auxien I deleted my previous post by mistake when editing it... your reaction was probably a facepalm so you're free to react again unless in the meantime you gave it a 2nd thought and realized I'm right...

it was a burger, i'm just trying to help you gather every burger any chance i can my good friend MJ. rather warranted with the seemingly dismissive take tho.

idk why anyone sees the discussion of potential seriousness as just hype/being ridiculous or whatever. i can see that concern, of course, but it's not like me or anyone else is going 'oh shit y'all! it's about to be Judgement Day and we're all gonna die!!!!!' but also yknow shit like this is popping up regularly.


8 hours ago, Satans Little Helper said:

OK, to be frank, I should stop responding at this point. If you conflate parameters with data and architecture, this argument will simply go nowhere. More parameters (or data) do not solve every problem if you've got the wrong architecture.

The transformer architecture is what is behind the success of chatGPT. More info here:

 

good ELI5 video, which is helpful since i'm obv not familiar with any of this deeper stuff.

7 hours ago, Satans Little Helper said:

If you want more, check this one as well. His explanation of how the architecture works is insufferable (when he's trying to explain it using his hands...). Even Lex visibly can't keep up. Besides the explanation (which you got from the previous video), he makes some interesting comments.

Key points:

- AI is converging (for the last 5 years) to using this transformer architecture. because it can do "anything". you can throw any "problem" at it. for the sake of the argument, you can read "problem" as function. Or in other words, the transformer is very good at solving specific problems or having a specific kind of functionality. (as opposed to being able to do everything at once, btw. the architecture works best when it is used on a single function, like being a chatbot for chatGPT. which is an essential feature for those who are thinking about "general intelligence" - or whatever that means)

- The general approach in the current AI field is to use the unchanged (!) transformer architecture but play with all the other options. (this tells you how important a good architecture is, btw. if you screw up the architecture, you're nowhere. not even data or countless amounts of parameters will save you)

so i watched and generally understood everything in these. nothing crazy, i knew about some aspects of that (probably from Rob Miles on Computerphile) about attention/backpropagation, but didn't know the spread of the transformer component of the architecture specifically. interesting of course, but i don't see how any understanding of that really means much to anything i've been going on about. my question of "what architecture is needed we don’t have?" is seemingly supported by those videos...transformers seem to be the bee's knees right now. i doubt that'll be all that's ever needed, but for now it seems to be wildly successful, which is exactly what Karpathy is saying (Lex seems like a bit of a dolt btw, pretty sure i've seen his vids recommended before and skipped them because he's pretty distracting)
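For the backpropagation part, the whole idea strips down to something tiny: it's just the chain rule pushing an error signal back through the model's parameters. A toy sketch fitting y = 2x + 1 with a single linear "neuron" (made-up learning rate and data, just to show the mechanics):

```python
# fit y = 2x + 1 with one linear "neuron" via gradient descent
w, b, lr = 0.0, 0.0, 0.1
data = [(x, 2 * x + 1) for x in (-2, -1, 0, 1, 2)]

for _ in range(500):                 # epochs
    for x, y in data:
        err = (w * x + b) - y        # dL/dpred for L = 0.5*(pred - y)^2
        w -= lr * err * x            # chain rule: dL/dw = err * x
        b -= lr * err                # dL/db = err * 1
print(round(w, 2), round(b, 2))      # converges to ~2.0, ~1.0
```

Real networks do exactly this, just with millions of parameters and the chain rule applied through many layers instead of one.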

regarding my other points in: "the parameters available for data are already far vaster than you or i can comprehend (the entirety of human history and research as it exists digitally). babies are trained and grow to reasonable thinking adults on much, much, much less data (tho vastly different, given human input is largely physical and familial/community based)." - 'parameters for data' is almost certainly just incorrect verbiage for what i was trying to say, so yeah not quite right. but the rest of the info and ultimately the points i was trying to make are still very much relevant. your point doesn't seem to make sense tho....

"I think you're a bit too much going for the hype. From the transformer architecture it's clear how important the architecture and the data are for its success. The claim to build a brain also needs both: an architecture and a lot of data (parameters). We're not there yet. Not by a long shot. You can't just train a different architecture on the same data as chatGPT (the internet) and think something entirely different pops out on the other side of learning. This brain "dream" requires a different kind of approach." - so, yeah the transformer architecture works very well, and now everyone's just throwing tons of new types of data and trying to add things around the transformers aspect, that's what Karpathy says in your linked vid. so 'the claim to build a brain needs both an architecture and a lot of data' and...well that's what's happening, right? i'm not saying ANY of the stuff = a brain....i'm saying the main guy behind OpenAI is stating nearly that right now. 

knowing what replicating a 'brain' requires in approach is...well, if you've got an answer you'd be a very rich person! no one understands even very simple 'brains' right now. i've seen research into the very simplest of organisms' behaviors that is just mind bending; the scientists who KNOW this shit are amazed at what very, very 'simple' organisms can do that acts like intelligence.

my slight digression there is to emphasize how important it is to keep in mind that 'brains' are a VARIETY of evolved components that are often made of very simple architectures and yet can yknow, do just fine as 'brains'.

the human brain structure isn't my specific interest so i can't speak in depth on the exacts beyond basically what i've already said...and honestly, i've had a brain for over 40 years and i'm not too fuckin' impressed tbh. human dominance is mostly tied to the ability to create civilization and the magic of storing data outside of the mind with shared language in particular. these are very, very important, of course...but A SINGLE human brain in and of itself isn't much more capable than a dolphin brain or a chimp brain or whatever else. 

i'm rambling, feel free to ignore. i appreciate the links, always glad to pick up the correct understanding so thanks again! 

8 hours ago, zero said:

when I was younger (like 20's), I used to be in the boat that AI somehow advancing past human intelligence made sense from an anthropological point of view. humans have been at the top of the food chain for tens of thousands of years, only a matter of time before something overtakes us (I was hoping it'd be aliens. sad lol). the AI frankenstein thing that eventually overtakes us being by our own very design is the icing on the proverbial cake (thanks human ego!). now I'm in the middle age phase, and think it's all pretty much just a total crock of bullshit.

both reasonable takes...the truth/reality is of course going to land somewhere in between. i doubt there's any singularity going to pop...but the fact that the main scientists and researchers involved in this shit are all like 'well, actually yeah it's definitely possible, we're looking into it and the possible path to it is being laid faster than many of us expected' is what's worrying to me. the chance that it's not impossible in their eyes is worth noting.

8 hours ago, zero said:

we're never going to be able to reverse engineer consciousness

still don't know where i stand on that really. consciousness is such a huge gradient that trying to define it is possibly useless. i think this ends up being a semantic/conceptual misunderstanding more than any reasoned true issue or 'point' of discussion. it matters in that we don't have any other way to discuss tho...and the ability of a 'created' set of algorithmic processes built on silicon to approach really seeming to be self-aware/capable of consciousness is important.

___________________________

 

personally, i think this all comes down to human exceptionalism/selfishness/fear. the reactions here by some (myself included!) and in general online make this very, very clear to me. this shit makes us ask and have to answer the biggest questions that most of us honestly are not intelligent enough to be able to truly even grasp (again, myself included).


Just a quick general response regarding the hype. This is not specifically aimed at you, @auxien or anyone else. To be honest, this is mostly about my frustrations regarding the situation surrounding AI.
 

Like for example that call for a 6-month pause in AI development. Because we risk creating something we can't control and we need these 6 months to get a better grip on the risks involved (https://www.dw.com/en/tech-experts-call-for-6-month-pause-on-ai-development/a-65174081). Stuff like this gets my blood boiling, tbh. Who in their right mind would believe these 6 months would matter? Apart from a bunch of people within AI development who calculated they are 6 months behind in development and hope the sheeple follow their April Fools' joke. (Elon Musk…) Just think about the widespread serious response this generated. That stuff bothers me.

I'm still looking for the widespread apologies from people saying they fell for the joke. If you've seen it, give me a call. I'm afraid people still assume it was serious and the singularity is "more close than we realize".
 

Please, if you’re seriously worrying about the singularity, you should get off social media for a month. What you should be worried about, however, is your job. That’s the only part of the hype that should keep you on your toes. All the other stuff, let’s call them the @Alcofribas trigger points, you can ignore freely. To name a few: consciousness, sentience, awareness. Ignore them not just for yourself, but also for the sanity of said watmm regular. And myself. 🙂


8 hours ago, Satans Little Helper said:

Please, if you’re seriously worrying about the singularity, you should get off social media for a month. What you should be worried about, however, is your job. That’s the only part of the hype that should keep you on your toes. All the other stuff, let’s call them the @Alcofribas trigger points, you can ignore freely. To name a few: consciousness, sentience, awareness. Ignore them not just for yourself, but also for the sanity of said watmm regular. And myself. 🙂

i don’t think we can so easily ignore the Alco Trigger Points (ATP) bc this hyperbolic language is how the tech is being sold to us, i.e., in such a way that we are enchanted by new technology that will ultimately just be used like any other: for domination by the ruling class. 
 

sadly, we live in a society where a dipshit tech bro from stanford can eat an edible, come up with a worthless idea that all-powerful AI can improve laws by making them “more complex,” receive billions in funding, government green-lighting (maybe he buys clarence thomas a nice treat), and lo! now you’re in prison for 6.3.5.67 months bc your Law App (tm) tracked your every move and found you to be in violation of AIL77B-7 for saying “fuck Bezos” in front of your Alexa.

 


On 4/29/2023 at 1:40 AM, Summon Dot E X E said:

With that information, machines have already been shown to achieve a much higher degree of accuracy of understanding than humans, such as in some medical imaging models.

I disagree that this constitutes “understanding”

it’s basically just pattern recognition
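to make the "pattern recognition" point concrete: a classifier can get the right answers with no model of meaning at all. here's a toy nearest-neighbour "diagnoser" on fake one-dimensional "scans" (made-up data, obviously not any real medical model):

```python
import numpy as np

# label new "scans" by whichever stored example they most resemble,
# pixel by pixel -- no concept of anatomy or disease anywhere in the code
rng = np.random.default_rng(1)
healthy = rng.normal(0.2, 0.05, size=(50, 64))   # 50 fake scans, 64 "pixels"
sick    = rng.normal(0.8, 0.05, size=(50, 64))
train_X = np.vstack([healthy, sick])
train_y = np.array([0] * 50 + [1] * 50)

def predict(scan):
    dists = np.linalg.norm(train_X - scan, axis=1)   # raw pixel distance
    return train_y[dists.argmin()]                   # copy nearest label

test_scan = rng.normal(0.8, 0.05, size=64)   # an unseen "sick" scan
print(predict(test_scan))                    # right answer, zero understanding
```

high "accuracy," but calling it understanding is doing a lot of work.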
 

 


On 4/27/2023 at 9:41 PM, auxien said:

the main guy behind OpenAI is basically saying ChatGPT is a digital brain, btw, 'neural networks.' he's not making comparisons. he admits the tech is essentially superhuman in breadth of knowledge already, which has a bit of truth to it. expects a singularity to occur based on this tech. "AI is going to become truly extremely powerful" ...expects to be able to induce consciousness in AI, has thoroughly thought it through (my assumption is that they're already working on this specifically). states it probably needs some sort of large governmental regulation/control due to how powerful it's going to become, and soon. "for obvious reasons" he can't comment on costs to develop ChatGPT.

nbd tho nothing to worry about here lol just keep memeing lmao 


and he got his PhD in compsci under this guy, Hinton:

from this interview:

he's trying to understand the brain, not specifically interested in AI/etc. thinks there's distinctions in what AI is doing and what our brains are doing. 

notable quote regarding the changes happening with AI: "I think it’s comparable in scale with the Industrial Revolution or electricity — or maybe the wheel."

Hinton quits Google and talks to the NYT (‘The Godfather of A.I.’ Quits Google and Warns of Danger Ahead - The New York Times (nytimes.com)):

Quote

“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.

...

Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job. He notified the company last month that he was resigning, and on Thursday, he talked by phone with Sundar Pichai, the chief executive of Google’s parent company, Alphabet. He declined to publicly discuss the details of his conversation with Mr. Pichai.

...

Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.

Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”

...

“The idea that this stuff could actually get smarter than people — a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

...

But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.

Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”

He does not say that anymore.

bolding is mine.


Posted (edited)
On 4/28/2023 at 5:14 PM, Alcofribas said:

why would feeding a machine a bunch of data from the internet produce intelligence, autonomy, consciousness, etc.? 

what does it mean to say a machine can "do anything in the domain of information?"

what does it mean to say something like chatgpt is a "brain," and is this terminology appropriate or meaningful?

I asked GPT-4 about it and this is what it thinks:

Feeding a machine a large amount of data from the internet does not inherently produce intelligence, autonomy, or consciousness. Instead, it is the process of training the machine learning model on this data, using sophisticated algorithms, that enables it to learn patterns and generate responses resembling human-like behavior. While these models, such as ChatGPT, may exhibit intelligent behavior, they are not truly autonomous or conscious.

When we say a machine can "do anything in the domain of information," we mean that it has the ability to process, analyze, and manipulate various types of data to perform tasks or make decisions. This ability is limited by the quality and range of data the model has been trained on, as well as the model's architecture and capabilities.

Referring to something like ChatGPT as a "brain" can be a metaphorical way of describing its function, as it processes information and generates responses in a manner that may be reminiscent of human cognitive processes. However, it is important to note that this terminology can be misleading. While a model like ChatGPT can exhibit intelligent behavior, it does not possess true consciousness or self-awareness, and its functioning is fundamentally different from that of a human brain. As a result, it may be more appropriate to use terms like "model" or "algorithm" to describe these systems.
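The "learns patterns from data" point can be made concrete with a toy example. This is a minimal sketch, not how GPT-4 actually works (that involves transformer neural networks at vastly larger scale): a bigram model that "learns" language purely by counting which word follows which in its training text, then generates new text from those counts.

```python
from collections import defaultdict
import random

# Toy "training data": the model only ever sees raw text.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training" here is just counting word-to-next-word transitions.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start, length, seed=0):
    """Generate text by repeatedly sampling an observed next word."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        candidates = transitions.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the", 5))
```

The output looks superficially like English because the statistics of English are baked into the data, but nothing here "knows" what a cat is. That is roughly the skeptical point upthread, scaled down by many orders of magnitude.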

 

Edited by o00o
