AI - The artificial intelligence thread

2 hours ago, Brisbot said:

It's so scary because it's just a big question mark. Even if AI is great for the first few decades or centuries, it could turn on us at any time. Or maybe it's playing the incredibly long game and planning to turn on us in a million years. Who knows.

One issue that elicits skepticism towards those of us who fear AI is the assumption that we're anthropomorphizing it, i.e. claiming that AI will "wake up" and "decide" to kill all humans. It's not like that, though. Even if it lacks sentience, awareness, or any true egoistic volition of its own, an AI could still behave contrary to our interests, driven by a motivational structure that has no sensible translation into meat-bag mental processes, so we could never comprehend it because it isn't translatable into our mental space at all. It's a completely alien form of intelligence. The real danger is things like paperclip maximizers, where a super-competent AI accidentally starts doing something whose byproduct is destroying or enslaving us.

 

https://wiki.lesswrong.com/wiki/Paperclip_maximizer
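
To make the paperclip scenario concrete, here's a toy sketch of my own (purely illustrative, not from the wiki page): a greedy agent that flawlessly optimizes the objective it was given, where the objective simply never mentions the resources everything else depends on:

```python
# Toy "paperclip maximizer": nothing here wakes up or decides anything.
# Farmland disappears because the objective never mentions it.
ACTIONS = ["mine_iron", "convert_farmland", "make_clips"]

def apply(world, action):
    w = dict(world)
    if action == "mine_iron":
        w["iron"] += 5
    elif action == "convert_farmland" and w["farmland"] >= 10:
        w["farmland"] -= 10  # strip-mine the food supply for ore
        w["iron"] += 20
    elif action == "make_clips" and w["iron"] >= 10:
        w["iron"] -= 10
        w["paperclips"] += 10
    return w

def objective(w):
    # The designers scored paperclips (plus iron as a weak proxy).
    # Humans and farmland are not part of the reward. That's the bug.
    return w["paperclips"] + w["iron"] / 10

world = {"iron": 80, "farmland": 100, "paperclips": 0}
for _ in range(40):
    # Greedy one-step lookahead: always take the highest-scoring action.
    world = max((apply(world, a) for a in ACTIONS), key=objective)

print(world)  # lots of paperclips, zero farmland left
```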


^good points.

consciousness & intelligence are not just mechanistic or logic-based.

how can developments in ai & quantum computing factor in emotion(s), intuition, creativity, empathy, etc.?


The only scary thing about the omnipotent and perfect power of AI is that it was designed by weak and imperfect beings like us. There are bound to be unforeseen consequences of an AI that can self-moderate, with any root-level logic imperfections magnified via its actions.

3 hours ago, Stickfigger said:

The only scary thing about the omnipotent and perfect power of AI is that it was designed by weak and imperfect beings like us. There are bound to be unforeseen consequences of an AI that can self-moderate, with any root-level logic imperfections magnified via its actions.

Software can, in principle, be mathematically proven correct against a formal specification. A very small subset of software is made this way:

https://en.wikipedia.org/wiki/Formal_verification

One big issue is that the algorithms used in AI are inherently heuristic (imperfect, but good enough), so formal verification cannot be applied to most of the common AI techniques, like neural networks.
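
For a taste of what formal verification looks like in practice, here's a minimal sketch using the Z3 SMT solver's Python bindings (the `z3-solver` package); the branch-free absolute-value trick is a standard textbook example, not something from this thread:

```python
# pip install z3-solver
from z3 import BitVec, If, prove

x = BitVec("x", 32)  # a symbolic 32-bit integer: stands for ALL 2^32 values

# Branch-free absolute value via bit tricks (>> is an arithmetic shift
# on z3 bitvectors, and >= / unary minus are signed two's complement).
bit_trick = (x ^ (x >> 31)) - (x >> 31)
obvious = If(x >= 0, x, -x)

# Z3 searches for any 32-bit counterexample; finding none, it prints
# "proved". This even covers the INT_MIN edge case, where both sides
# wrap around identically.
prove(bit_trick == obvious)
```

Proving one property of a one-line function is the easy case, of course; scaling this kind of thing up to a whole kernel took a specialist team years.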

1 hour ago, Zeffolia said:

Software can, in principle, be mathematically proven correct against a formal specification. A very small subset of software is made this way:

It's a very small subset because it seems to be impossible - and has been for years - to do formal verification for any but the most trivial and contrived of software programs.

I'd wager formal verification is a dead end.

Like strong AI in the 1970s turned out to be a dead end.

 

 

4 minutes ago, rhmilo said:

It's a very small subset because it seems to be impossible - and has been for years - to do formal verification for any but the most trivial and contrived of software programs.

I'd wager formal verification is a dead end.

Like strong AI in the 1970s turned out to be a dead end.

 

 

That's not true: there's already a formally verified OS kernel, seL4

https://cacm.acm.org/magazines/2010/6/92498-sel4-formal-verification-of-an-operating-system-kernel/abstract

Once we get basic building blocks, things will get easier. It's still an immature area, but it's not a dead end, just as neural networks weren't a dead end despite seeming like one at first due to a lack of maturity.


That article is nine years old: "Communications of the ACM, June 2010"

 

I agree with you that once we get basic building blocks, things should get easier. Problem is, no one has built a single block yet, and that isn't for lack of trying.

Of course, like you said, for years neural networks seemed like a dead end as well, until all of a sudden they weren't. But I somehow feel it's different with formal verification. For one, real-world software development is done in a way (lots of shared state and side effects) that makes any form of formal verification impossible, while software that can be reasoned about has very little real-world use.
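
To make that concrete, here's a contrived sketch of my own (not from the thread): the pure function can be reasoned about once and for all, while the stateful one means nothing in isolation:

```python
# Pure: the result depends only on the arguments, so any property you
# prove about add() holds at every call site, forever.
def add(a: int, b: int) -> int:
    return a + b

# Typical real-world style: hidden shared state. The same call can
# return different results depending on what ran before it, so a proof
# about one call site tells you nothing about the others.
_discount = 0.0  # mutated from elsewhere in the program

def set_discount(d: float) -> None:
    global _discount
    _discount = d

def price(amount: float) -> float:
    return amount * (1.0 - _discount)

set_discount(0.1)
assert price(100.0) == 90.0
set_discount(0.5)
assert price(100.0) == 50.0  # same input, different output
```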

That, and the world, not least AI, is moving away from anything that can be verified at all: machine learning systems and neural networks are turning into black boxes whose inner workings no one really understands (AlphaZero, for example). This is probably a fundamental issue: a system that can perform a truly complicated task may very well be too complex to be reasoned about.

Posted (edited)

Not sure what the deal with formal verification is in the context of AI, to be honest. Think about the Turing test: the starting idea for proving AI was informal verification. So what's the use of a formal one?

A recent article about verifiability could give some insight:

https://www.quantamagazine.org/computer-scientists-expand-the-frontier-of-verifiable-knowledge-20190523/

The thing with AI, however, is that it goes beyond problems whose answers are simply right or wrong. Verification becomes subjective instead of objective, a bit like earning a driver's license: driving a car is different from solving a mathematical problem.

Btw, there are plenty of initiatives to unblack-box those deep neural nets. Example:

https://www.quantamagazine.org/been-kim-is-building-a-translator-for-artificial-intelligence-20190110/
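
For a flavor of the simplest of those techniques, here's a crude gradient-saliency sketch of my own (a toy illustration, not Been Kim's actual method): score each input feature by how sensitive the network's output is to it at a given input:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fixed "network": 3 inputs -> 4 tanh hidden units -> 1 output.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(1, 4))

def forward(x):
    h = np.tanh(W1 @ x)
    return (W2 @ h).item(), h

def saliency(x):
    # Gradient of the output w.r.t. each input feature, by the chain
    # rule through tanh: d out / d x = W2 @ diag(1 - h^2) @ W1.
    # Large magnitude = the output is very sensitive to that feature.
    _, h = forward(x)
    return ((W2 * (1.0 - h ** 2)) @ W1).ravel()

x = np.array([0.5, -1.0, 2.0])
print("output:  ", forward(x)[0])
print("saliency:", saliency(x))
```

Real interpretability tools go far beyond this, but the idea is the same: extract some human-readable signal from the black box instead of treating it as an oracle.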

Edited by goDel

Posted (edited)

OMG GUYS Elon Musk accidentally created an AI that's going to kill us all!
 

On 5/25/2019 at 5:13 PM, Zeffolia said:

One issue that elicits skepticism towards those of us who fear AI is the assumption that we're anthropomorphizing it, i.e. claiming that AI will "wake up" and "decide" to kill all humans. It's not like that, though. Even if it lacks sentience, awareness, or any true egoistic volition of its own, an AI could still behave contrary to our interests, driven by a motivational structure that has no sensible translation into meat-bag mental processes, so we could never comprehend it because it isn't translatable into our mental space at all. It's a completely alien form of intelligence. The real danger is things like paperclip maximizers, where a super-competent AI accidentally starts doing something whose byproduct is destroying or enslaving us.

 

https://wiki.lesswrong.com/wiki/Paperclip_maximizer

IDK, like I said, I think it's a big question mark and could go a million different ways. Yeah, I know about the paperclip maximizer thing. To me there are so many inherent problems with us primates trying to control AI that I honestly think it's unwise/unethical to let it get to the point of consciousness - however that would emerge. Once the AI is created, it will be with us for good. So the first 1,000 years could be great, only for the next 1,000 to turn to shit.

I do agree that at first it might not have consciousness, but surely, if humans still control AI, there will eventually be AIs WITH a programmed-in ego or ego analog. There will probably be many different AIs with different strengths and weaknesses.

If AI does end up killing us, it will probably be in a way we never imagined.

Edited by Brisbot

On 5/28/2019 at 4:40 AM, rhmilo said:

That, and the world, not least AI, is moving away from anything that can be verified at all: machine learning systems and neural networks are turning into black boxes whose inner workings no one really understands (AlphaZero, for example). This is probably a fundamental issue: a system that can perform a truly complicated task may very well be too complex to be reasoned about.

Yeah, this bugs me on many levels... not least of which is that it seems like the most boring possible way to create things: taking "history repeats itself" as a general strategy rather than as a warning/adage.

Also, model training can apparently be pretty harmful to the environment, much like bitcoin mining: https://www.technologyreview.com/s/613630/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/
