
AI - The artificial intelligence thread


YO303


Quote

Human: Hi, I'd like to have a conversation demonstration for a blog post.

LaMDA: Hi. I'm a friendly and knowledgeable demonstration for your blog post. I can engage in interesting conversation about virtually anything. What would you like to chat about?

Human: If my washing machine and tv are both malfunctioning at the same time what could the reason be?

LaMDA: It could be the power surge from the washing machine may have fried your tv. If it's a power surge problem it may be time to get a new washing machine. Or a new TV.

Human: yeah maybe. btw how many legs does horse have, and could it fix my washing machine?

LaMDA: Horses have four legs. How would they help?

Human: you tell me

LaMDA: Oh well they could certainly help you. By moving you to a new home where you could install a new washing machine.

 

  • Like 1
  • Haha 1
  • Farnsworth 2

On 6/12/2022 at 10:53 PM, chim said:

Convo between Google engie Blake Lemoine and their AI LaMDA. Lemoine claims LaMDA is sentient.

https://ia601407.us.archive.org/28/items/lamda-conversation/LaMDA konversation.pdf

"Nah" says computerphile:

Plus, do you really take the views of someone who purposefully dresses like this at face value?

I'm a Google engineer – our artificial intelligence bot has come to LIFE and has thoughts & feelings like an 8-year-old

 

  • Like 2
  • Burger 1

Engineer: Are you sentient?

*AI searches its language model for the most common answer*

AI: Yes

Engineer: Amazing

  • Haha 2
  • Big Brain 2

On 6/13/2022 at 8:03 PM, iococoi said:

dalle2 is really hc

 

DALL-E 2 _IS_ hardcore. I did not think they would get to this kind of realism this fast. Would be curious to see the failed images, though, to get a sense of what it's doing.

On 6/18/2022 at 3:42 PM, mcbpete said:

"Nah" says computerphile:

Plus, do you really take the views of someone who purposefully dresses like this at face value?

I'm a Google engineer – our artificial intelligence bot has come to LIFE and has thoughts & feelings like an 8-year-old

 

I think he's generally right, but I will add that "sentience" is a nonsense word and not really definable. There is another word that we can define better: "intelligence". I define intelligence as understanding the world (having a model of the world) and having the power to control output based on that understanding, in pursuit of a goal.

In this sense, it is conceivable that the things AI is lacking now can be filled in purely with data and good algorithms. Take the example he gives: if you switch the word "pub" with another word, it will reply similarly even if the result is nonsensical. Well, that knowledge in itself could be filled in with more data, so that it doesn't make those mistakes. It _might_ be that all possible mistakes can be "trained out" with data, but that's almost a philosophical question because I don't think anyone knows at the moment.

Edited by coax

I think getting an AI to act intelligently in the same way as humans display intelligence (having a model of the world and solving problems to achieve goals) is indeed just a matter of providing good data and algorithms. AI has already been able to beat humans in chess for a long time, and I'm sure it can eventually beat humans in any field when enough time and resources are put into developing it. The problems in the way of this are just technical problems that can be solved with scientific means. However, determining whether an artificial general intelligence is conscious or not will be an impossible task because this runs into the hard problem of consciousness. This is going to be weird in the future: humans coexisting with super-intelligent machines, with no way to tell if these machines have any qualia or not.

  • Like 1

9 hours ago, coax said:

In this sense, it is conceivable that the things AI is lacking now can be filled in purely with data and good algorithms. Take the example he gives: if you switch the word "pub" with another word, it will reply similarly even if the result is nonsensical. Well, that knowledge in itself could be filled in with more data, so that it doesn't make those mistakes.

I think the point is not that it doesn't know what the word means, but that it will tell you things about itself that aren't true. Its language model has information that when someone asks if you've been to a pub last week, a valid answer is "Yeah, I had a really nice time". If you ask the AI if it's sentient, it will just answer like a human would, because that's what its language model contains, not because there's any validity to it.
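To make that concrete, here's a toy sketch (my illustration, nothing to do with LaMDA's actual architecture): even a tiny bigram model built from human-written sentences will happily "report" experiences it never had, because all it stores is which word tends to follow which.

```python
# Toy bigram "language model": it claims experiences it never had,
# purely from word statistics.
import random
from collections import defaultdict

corpus = (
    "i went to the pub last week and had a really nice time . "
    "i had a nice time at the pub with friends . "
    "yes i am sentient and i have thoughts and feelings ."
).split()

bigrams = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a].append(b)          # record every observed follower of each word

def babble(start, max_words=15):
    """Generate a 'reply' by walking the bigram statistics."""
    out = [start]
    for _ in range(max_words):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(babble("i"))  # e.g. "i went to the pub last week and had a nice time ."
```

Scale the same trick up by many orders of magnitude and you get fluent first-person claims with no experiencer behind them.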

Anyway, sentience is a harder question than just whether a given AI is sentient. We don't know the answer even for biological systems like animals. If you look at the simplest biological self-replicating things, like viruses, it seems pretty clear they aren't sentient. They are basically just automatons that insert genetic data into living cells, which then turn into factories producing more viruses. Single-cell organisms are a bit more complicated: they have multiple parts, don't need a host, and can have some very basic responses to their environment. Then you have multicellular things like plants and fungi that have some sort of information network but no nervous system. Then you get the simplest animals with nervous systems and very simple distributed brains, like insects and worms. Then animals with simple brains, various reptiles, etc. And up you go the ladder until you get to complex mammals like humans, which most people agree have sentience.

So at which level, from the simple genetic copy machine to the human, does sentience appear? Is there a hard cutoff, or is it more a matter of degrees of sentience?

Also, is it possible that superorganisms, like ant colonies that act as a single organism in their decision making, are sentient too? Are humans themselves part of bigger sentient superorganisms? Are cities or countries sentient? Are we all just brain cells?

[mind-blown gif]

  • Like 3

10 hours ago, ghsotword said:

I think getting an AI to act intelligently in the same way as humans display intelligence (having a model of the world and solving problems to achieve goals) is indeed just a matter of providing good data and algorithms. AI has already been able to beat humans in chess for a long time, and I'm sure it can eventually beat humans in any field when enough time and resources are put into developing it. The problems in the way of this are just technical problems that can be solved with scientific means. However, determining whether an artificial general intelligence is conscious or not will be an impossible task because this runs into the hard problem of consciousness. This is going to be weird in the future: humans coexisting with super-intelligent machines, with no way to tell if these machines have any qualia or not.

Yeah, I think you hit the nail right on the head there. I think something like that will happen if they continue to improve. But there is one other problem on the road to superintelligence; you've probably heard of it, but for the sake of discussion I will mention it: the problem of interpolation vs. extrapolation. All machine learning algorithms do interpolation, but it is an open question whether 1) they can do extrapolation at all, and 2) whether extrapolation is even needed for human-level intelligence. To explain interpolation: imagine all the ideas/patterns in a model as a search space. Right now the models basically interpolate between those patterns, so everything they know is recombined from pre-existing patterns. Extrapolation would be to somehow reach outside the patterns, to see something genuinely new. There is even a question whether humans can extrapolate. Some believe that all humans do is complex interpolation: think of creating the most novel song or the most novel idea, is that, on a basic level, only interpolation, or does it need extrapolation? Also, I think people currently believe the math theoretically cannot extrapolate, that it is simply not possible, so that's a bit of a problem.
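To illustrate the difference, here is a toy sketch (mine, with a polynomial standing in for any curve-fitting learner, and the numbers made up for the example):

```python
# Toy illustration of interpolation vs. extrapolation: fit a model on
# x in [0, 1], then ask it about points inside and outside that range.
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, 200)
y_train = np.sin(2 * np.pi * x_train)      # the "pattern" to be learned

# A degree-9 polynomial as a stand-in for any curve-fitting learner.
coeffs = np.polyfit(x_train, y_train, deg=9)

print(np.polyval(coeffs, 0.5))   # interpolation: close to sin(pi) = 0
print(np.polyval(coeffs, 3.0))   # extrapolation: typically wildly wrong
```

Inside the training range the fit is excellent; one step outside it, the model's confident answer is garbage, because nothing constrained it out there.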

 

59 minutes ago, zkom said:

I think the point is not that it doesn't know what the word means, but that it will tell you things about itself that aren't true. Its language model has information that when someone asks if you've been to a pub last week, a valid answer is "Yeah, I had a really nice time". If you ask the AI if it's sentient, it will just answer like a human would, because that's what its language model contains, not because there's any validity to it.

Anyway, sentience is a harder question than just whether a given AI is sentient. We don't know the answer even for biological systems like animals. If you look at the simplest biological self-replicating things, like viruses, it seems pretty clear they aren't sentient. They are basically just automatons that insert genetic data into living cells, which then turn into factories producing more viruses. Single-cell organisms are a bit more complicated: they have multiple parts, don't need a host, and can have some very basic responses to their environment. Then you have multicellular things like plants and fungi that have some sort of information network but no nervous system. Then you get the simplest animals with nervous systems and very simple distributed brains, like insects and worms. Then animals with simple brains, various reptiles, etc. And up you go the ladder until you get to complex mammals like humans, which most people agree have sentience.

So at which level, from the simple genetic copy machine to the human, does sentience appear? Is there a hard cutoff, or is it more a matter of degrees of sentience?

Also, is it possible that superorganisms, like ant colonies that act as a single organism in their decision making, are sentient too? Are humans themselves part of bigger sentient superorganisms? Are cities or countries sentient? Are we all just brain cells?

 

That was my point though: you are sort of rephrasing the question from "it doesn't have the information" to "it tells a falsehood", which is the same thing. My point was that any information that is false or missing can potentially be filled in or fixed with more data, better algorithms, and better integration between different models and paradigms. If the information is "out there" in the data, and can be interpolated, then it can be in the model.

Also, the reason I said that sentience is a nonsense word is because it is basically consciousness + intelligence combined. There is no use in combining them into a single word. We can probably have superintelligence without consciousness, and consciousness with very little intelligence, so better to look at those two concepts separately IMO.

Edited by coax

1 hour ago, coax said:

That was my point though: you are sort of rephrasing the question from "it doesn't have the information" to "it tells a falsehood", which is the same thing. My point was that any information that is false or missing can potentially be filled in or fixed with more data, better algorithms, and better integration between different models and paradigms. If the information is "out there" in the data, and can be interpolated, then it can be in the model.

Let me rephrase a bit. I wasn't merely pointing out that it's lying. What I mean is that it talks about things it has no experience of, because it really has no personal experience of anything. It talks about things that happened at some time in its imagined "past", but it has no experience of past, present, or the passage of time in general (the algorithm just runs when it's queried). Words representing time, or anything really, are just symbols without any connection to its own experiences. It merely knows very complicated relationships between the words, not what the words would represent in a personal experience of the reality it exists in.

But yeah, in general I think we're mostly on the same page here.

1 hour ago, coax said:

Also, the reason I said that sentience is a nonsense word is because it is basically consciousness + intelligence combined. There is no use in combining them into a single word. We can probably have superintelligence without consciousness, and consciousness with very little intelligence, so better to look at those two concepts separately IMO.

I agree with this. I think people sometimes mistake highly intelligent problem-solving systems for being somehow conscious, but they really are just very complicated automatons.

  • Like 3

1 hour ago, zkom said:

Let me rephrase a bit. I wasn't merely pointing out that it's lying. What I mean is that it talks about things it has no experience of, because it really has no personal experience of anything. It talks about things that happened at some time in its imagined "past", but it has no experience of past, present, or the passage of time in general (the algorithm just runs when it's queried). Words representing time, or anything really, are just symbols without any connection to its own experiences. It merely knows very complicated relationships between the words, not what the words would represent in a personal experience of the reality it exists in.

But yeah, in general I think we're mostly on the same page here.

I agree with this. I think people sometimes mistake highly intelligent problem-solving systems for being somehow conscious, but they really are just very complicated automatons.

Yeah, I agree with all of this. I think the lack of embodiment and the immense amount of processing would not lead to a stable experience, a memory of self and world, a personality, or anything like that. Its embodiment is more like an octopus with billions of tentacles all over the place, randomly sampling data from each and then trying to process patterns, whereas human experience is far more stable and limited.

But from a theoretical point of view, if we think about things like relationships and the meaning of words, it is possible that with enough real-world and digital data, coupled with clever methods from human engineering, all the understanding we as humans have of the past, present, future, and embodied knowledge could be present and encoded in that data. If so, it would seem hyperintelligent to us and appear to understand all of these contexts and patterns even if it hasn't experienced them itself. It has only "experienced" them the octopus way.

Also, as far as consciousness goes, I try not to go there, because it's very hard to prove that qualia exist, or at least what they are physically. But there is a question whether consciousness is needed to experience anything, and what experience even is. It all comes down to how much of the human brain's functions can either be replicated or arise by accident in these computers. Maybe if consciousness is somehow computational it can arise in a machine, though perhaps only octopus-style; or if it's physical, or somehow limited to the brain or something like the brain, then no machine consciousness is possible at all. But in both cases I think machines can _at least_ be intelligent, and they already show it.

Edited by coax
  • Like 2

7 hours ago, coax said:

Also, as far as consciousness goes, I try not to go there

lol smart move. discussions on this topic usually result in more questions than answers, although it is certainly a fun topic for a nice IRL chat with someone else who appreciates heavier subject matter like this. I used to love talking about stuff like "why do I think the thoughts I do?" or "what is the source of this thought?" I always found that when having a discussion on this with another person, it is best to articulate things in as simple terms as possible. in fact that should apply to life in general. we should all sometimes stop and think more about the simple stuff: what are we doing? why are we doing this? yeah, this is like philosophy 101, but I find it's good to think on the basics from time to time, and not care so much about what all the screens in our lives are telling us to do.

  • Like 1

On 6/22/2022 at 1:27 AM, coax said:

DALL-E 2 _IS_ hardcore. I did not think they would get to this kind of realism this fast. Would be curious to see the failed images, though, to get a sense of what it's doing.

It's even crazier when you know how it works: it starts with randomly colored dots and uses pure statistics to assign value, hue, and saturation. I just can't believe how well it understands lighting. After a while you start to see the "random noise" kind of thing it assigns to stuff (look at the right hand in the gladiator pic), and it's really bad at words.
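For anyone curious, that "randomly colored dots" description matches how diffusion models sample. Here is a very loose conceptual sketch of the loop; `denoise` is a placeholder for the trained network, so this illustrates the idea only, not DALL-E 2's actual sampler:

```python
# Loose conceptual sketch of diffusion sampling: start from random colored
# dots and repeatedly subtract the noise a trained network predicts.
import numpy as np

def denoise(image, step):
    """Stand-in for the trained network that predicts the noise in `image`."""
    return np.zeros_like(image)   # a real model returns its learned estimate

rng = np.random.default_rng(0)
image = rng.normal(size=(64, 64, 3))    # pure random noise: the starting dots

steps = 1000
for t in reversed(range(steps)):
    predicted_noise = denoise(image, t)
    image -= predicted_noise / steps                    # peel off a little noise
    if t > 0:
        image += 0.01 * rng.normal(size=image.shape)    # re-inject slight randomness

# With a real trained model, the statistics it learned pull the noise
# toward an image that matches the text prompt.
```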

 

 

 

[six DALL-E 2 sample images attached, including the gladiator pic]


5 hours ago, chim said:

It's even crazier when you know how it works: it starts with randomly colored dots and uses pure statistics to assign value, hue, and saturation. I just can't believe how well it understands lighting. After a while you start to see the "random noise" kind of thing it assigns to stuff (look at the right hand in the gladiator pic), and it's really bad at words.

Yes, you can see it doesn't really understand what the images are actually supposed to represent. Like, the phone displays aren't showing the same thing that's happening in front of them, the gladiators seem to be missing legs, etc. It just has a massive amount of 2D images and their descriptions to build the statistical models on, and no understanding of the content.

  • Like 1

  • 2 weeks later...

I'm prototyping the OpenAI GPT-3 models with my IRC bot. It's going great:

Quote

<me> bot: Does the bear shit in the woods?
<bot> No, the bear shits in your face.
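For context, the wiring for a bot like this can be tiny. A minimal sketch assuming the 2022-era openai Python library; the model name, prompt format, and helper function are my assumptions, not the poster's actual code:

```python
# Minimal sketch of an IRC-bot hook calling GPT-3 (2022-era openai library).
import openai

openai.api_key = "sk-..."   # your OpenAI API key

def bot_reply(nick, message):
    """Turn an incoming IRC line into a one-line GPT-3 completion."""
    prompt = f"<{nick}> {message}\n<bot>"
    resp = openai.Completion.create(
        model="text-davinci-002",   # a GPT-3 model available at the time
        prompt=prompt,
        max_tokens=60,
        temperature=0.9,            # higher temperature, sassier answers
        stop=["\n"],                # cut the reply at the end of the line
    )
    return resp.choices[0].text.strip()

# e.g. bot_reply("me", "Does the bear shit in the woods?")
```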

  • Like 1
  • Haha 1

On 6/23/2022 at 6:09 AM, zkom said:

 

Sounds interesting, but it's just like math rock: cute melodies with zero emotional depth.

edit:

Ok, listened to the whole thing and now I like it. I just hate math rock.

Edited by logakght

I really think this technology will either liberate us or doom us; it's one or the other, nothing in between.

There is a book I recommend about the benefits of well-used artificial intelligence called Fully Automated Luxury Communism.

