
AI - The artificial intelligence thread


YO303


2 hours ago, Brisbot said:

it's so scary because it's just a big question mark. Even if AI is great for the first few decades or centuries, it could turn on us after any amount of time. Or maybe it's playing the incredibly long game and plans to turn on us in a million years. Who knows.

One issue that elicits skepticism towards those of us who fear AI is anthropomorphization. People assume that those fearing AI are claiming it will "wake up" and "decide" to kill all humans. It's not like that, though. Even if AI lacks sentience, awareness, or any true egoistic volition of its own, it could still start behaving contrary to our interests, with no internal representation of a motivational structure that makes any sense in terms of meat-bag mental processes, so we could never comprehend it because it doesn't translate into our mental space at all. It's a completely alien form of intelligence. The real danger is things like paperclip maximizers, where a super-competent AI simply starts doing something whose byproduct is that it destroys or enslaves us.

 

https://wiki.lesswrong.com/wiki/Paperclip_maximizer
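
To make the paperclip point concrete, here's a toy Python sketch of a mis-specified objective. Everything in it (the resources, the conversion rate) is invented for illustration; no real system works like this:

```python
# Toy "paperclip maximizer": a perfectly competent optimizer with a
# mis-specified objective. All names and numbers here are made up.

world = {"iron": 10, "farmland": 10, "forests": 10}  # resources humans also depend on

def paperclips_from(resource_units: int) -> int:
    return resource_units * 100  # the only thing the objective ever measures

paperclips = 0
for resource, amount in world.items():
    # The objective says "more paperclips is better" and nothing else,
    # so every resource is just feedstock: there is no term for humans.
    paperclips += paperclips_from(amount)
    world[resource] = 0

print(f"paperclips: {paperclips}, world left for humans: {world}")
# paperclips: 3000, world left for humans: {'iron': 0, 'farmland': 0, 'forests': 0}
```

No malice, no waking up, no deciding anything. It just optimized what it was told to optimize.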


^good points.

consciousness & intelligence are not just mechanistic or logic-based.

how can developments in ai & quantum computing factor in emotions, intuition, creativity, empathy, etc.?


3 hours ago, Stickfigger said:

The only scary thing about the omnipotent and perfect power of AI is that it was designed by weak and imperfect beings like us. There are bound to be unforeseen consequences of an AI that can self-moderate, with any root-level logic imperfections magnified out via its actions.

Software can, in principle, be created to be perfect and mathematically proven to be bug-free. A very small subset of software is made this way:

https://en.wikipedia.org/wiki/Formal_verification

One big issue is that the algorithms used in AI are inherently heuristic (meaning imperfect but good enough), so formal verification cannot be applied to most common AI algorithms like neural networks.
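
To give a flavor of what formal verification looks like at a small scale, here's a sketch using the z3-solver Python package (an SMT solver; assumes `pip install z3-solver`). It proves a branch-free absolute-value trick correct for all 2^32 inputs, which no amount of unit testing can do:

```python
from z3 import BitVec, If, prove

x = BitVec("x", 32)                 # a symbolic 32-bit integer: stands for ALL values at once

mask = x >> 31                      # arithmetic shift: all ones if x < 0, all zeros otherwise
branchless_abs = (x ^ mask) - mask  # classic branch-free absolute value

spec = If(x < 0, -x, x)             # the specification (both sides wrap at INT_MIN)

prove(branchless_abs == spec)       # prints "proved": valid for every one of the 2**32 inputs
```

The catch, per the above: this only works because the spec is a few lines of exact math. A trained neural network has no such spec to check against.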


1 hour ago, Zeffolia said:

Software can, in principle, be created to be perfect and mathematically proven to be bug-free. A very small subset of software is made this way:

It's a very small subset because it seems to be impossible - and has been for years - to do formal verification for any but the most trivial and contrived of software programs.

I'd wager formal verification is a dead end.

Like strong AI in the 1970s turned out to be a dead end.

 

 


4 minutes ago, rhmilo said:

It's a very small subset because it seems to be impossible - and has been for years - to do formal verification for any but the most trivial and contrived of software programs.

I'd wager formal verification is a dead end.

Like strong AI in the 1970s turned out to be a dead end.

 

 

That's not true, there's a formally verified OS kernel:

https://cacm.acm.org/magazines/2010/6/92498-sel4-formal-verification-of-an-operating-system-kernel/abstract

Once we get basic building blocks, things will get easier. It's still an immature area, but it's not a dead end, just as neural networks were not a dead end despite seeming like one at first due to a lack of maturity.


That article is nine years old: "Communications of the ACM, June 2010"

 

I agree with you that once we get basic building blocks, things should get easier. Problem is, no one has built a single block yet, and that isn't for lack of trying.

Of course, like you said, for years neural networks seemed like a dead end as well, until all of a sudden they weren't. But I somehow feel formal verification is different. For one, real-world software development is done in a way (lots of shared state and side effects) that makes any form of formal verification impossible, while software that can be reasoned about has very little real-world use (tiny illustration below).

That, and the world, not least AI, is moving away from anything that can be verified at all: machine learning systems and neural networks are turning into black boxes that no one has any idea how they actually work (AlphaZero, for example). This is probably a fundamental issue: a system that can perform a truly complicated task may very well be too complex to be reasoned about.
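
A tiny sketch of that contrast, with invented names: the pure function can be reasoned about in isolation, the stateful one can't.

```python
# Invented example of the shared-state point above.

def add(a: int, b: int) -> int:
    return a + b          # provable fact: add(a, b) == a + b, always, in isolation

inventory = {}            # shared mutable state, as in most real-world code

def reserve(item: str) -> bool:
    # Correctness now depends on every other caller, thread, and moment in
    # time that mutates `inventory`. The property you'd want to verify isn't
    # even expressible as a statement about this function alone.
    count = inventory.get(item, 0)
    if count > 0:
        inventory[item] = count - 1
        return True
    return False
```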


Not sure what the deal with formal verification is in the context of AI, to be honest. Think about the Turing test: the starting idea for proving AI was an informal verification. So what's the use of a formal one?

A recent article about verifiability could give some insight:

https://www.quantamagazine.org/computer-scientists-expand-the-frontier-of-verifiable-knowledge-20190523/

The thing with AI, however, is that it goes beyond problems whose answers are simply good or bad. Verification becomes subjective instead of objective. Like earning a driver's license, in a way: driving a car is different from solving a mathematical problem.

Btw, there are plenty of initiatives to un-black-box those deep neural nets. Example:

https://www.quantamagazine.org/been-kim-is-building-a-translator-for-artificial-intelligence-20190110/

Edited by goDel

OMG GUYS Elon Musk accidentally created an AI that's going to kill us all!
 

On 5/25/2019 at 5:13 PM, Zeffolia said:

One issue that elicits skepticism towards those of us who fear AI is anthropomorphization. People assume that those fearing AI are claiming it will "wake up" and "decide" to kill all humans. It's not like that, though. Even if AI lacks sentience, awareness, or any true egoistic volition of its own, it could still start behaving contrary to our interests, with no internal representation of a motivational structure that makes any sense in terms of meat-bag mental processes, so we could never comprehend it because it doesn't translate into our mental space at all. It's a completely alien form of intelligence. The real danger is things like paperclip maximizers, where a super-competent AI simply starts doing something whose byproduct is that it destroys or enslaves us.

 

https://wiki.lesswrong.com/wiki/Paperclip_maximizer

IDK, like I said, I think it's a big question mark and could go a million different ways. Yeah, I know about the paperclip maximizer thing. To me there are so many inherent problems with us primates trying to control AI that I honestly think it's unwise/unethical to let it get to the point of consciousness, however that would emerge. Once AI is created, it will be with us humans for good. So the first 1,000 years can be great only for the next 1,000 to turn to shit.

I do agree that at first it might not have consciousness, but if humans still control AI, eventually there will surely be AI WITH a programmed-in ego or ego analog. There will probably be many, many different AIs with different strengths and weaknesses.

If AI does end up killing us, it will probably be in a way we never imagined.

Edited by Brisbot

  • 2 weeks later...
On 5/28/2019 at 4:40 AM, rhmilo said:

That, and the world, not least AI, is moving away from anything that can be verified at all: machine learning systems and neural networks are turning into black boxes that no one has any idea how they actually work (AlphaZero, for example). This is probably a fundamental issue: a system that can perform a truly complicated task may very well be too complex to be reasoned about.

Yeah, this bugs me on many levels... not least of which is that it seems like the most boring possible way to create things, taking "history repeats itself" as a general strategy rather than as a warning/adage.

Also, model training can apparently be pretty harmful to the environment, much like bitcoin mining: https://www.technologyreview.com/s/613630/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/


  • 5 months later...

This is transcendentally beautiful:

https://en.wikipedia.org/wiki/Differentiable_neural_computer

 

Quote

DNC networks were introduced as an extension of the Neural Turing Machine (NTM), with the addition of memory attention mechanisms that control where the memory is stored, and temporal attention that records the order of events. This structure allows DNCs to be more robust and abstract than a NTM, and still perform tasks that have longer-term dependencies than some predecessors such as Long Short Term Memory (LSTM). The memory, which is simply a matrix, can be allocated dynamically and accessed indefinitely. The DNC is differentiable end-to-end (each subcomponent of the model is differentiable, therefore so is the whole model). This makes it possible to optimize them efficiently using gradient descent.[3][6][7]

The DNC model is similar to the Von Neumann architecture, and because of the resizability of memory, it is Turing complete.[8]
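
To make "the memory... is simply a matrix" concrete, here's a minimal numpy sketch of a differentiable read. The real DNC uses cosine similarity plus allocation and temporal-link mechanisms; all sizes and values here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

N, W = 8, 4                        # N memory slots, each a vector of width W
memory = rng.normal(size=(N, W))   # the memory matrix
key = rng.normal(size=W)           # what the controller wants to retrieve

scores = memory @ key              # similarity of the key to every slot
weights = np.exp(scores) / np.exp(scores).sum()  # softmax: a differentiable "address"
read_vector = weights @ memory     # blend of every slot, dominated by the best match

# Because every step is smooth, gradients can flow from the read vector back
# into both the key and the memory contents, which is what lets the whole
# system be trained end to end with gradient descent, as the quote says.
print(read_vector.shape)           # (4,)
```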

 

 

Also this is a really cool website that describes many different neural network architectures

https://www.asimovinstitute.org/neural-network-zoo/

[Image: The Neural Network Zoo chart of architectures]


Just finished this... is it posted here yet? It's a must-watch. Amazing shit. Creepy too. The next 20 years are gonna be even more kookoo for Cocoa Puffs.

 

 


Neat procedurally generated Choose Your Own Adventure game. Some fun stories coming out of it.

http://www.aidungeon.io/?m=1

My main criticism is that it lacks memory of previous interactions, which is where it eventually breaks down. You can keep the ball rolling when it throws out something off-kilter by typing "revert" to reroll the interaction.
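
If I had to guess at the mechanics: these text generators only ever see a fixed-size window of recent text, so early details simply fall out of view. A rough Python sketch of that failure mode (the window size and story lines are made up):

```python
MAX_CONTEXT_CHARS = 120  # real models limit tokens, not characters; same idea

story = []

def model_input(history: list[str]) -> str:
    text = "\n".join(history)
    return text[-MAX_CONTEXT_CHARS:]  # the model only sees the most recent slice

story.append("You are a knight. Your quest: recover the silver amulet.")
for turn in range(5):
    story.append(f"Turn {turn}: you wander further into the dungeon...")

print(model_input(story))  # the amulet, the whole point of the quest, is gone
```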


  • 1 month later...

Great video on surveillance capitalism and the extent to which companies are capable of, and planning to, socially engineer outcomes that are profitable for them under capitalism. A couple of details:

Sites basically spy on you to target ads, but we already know this. They're trying to get into your brain even deeper than ad clicks. Pokemon Go was a Google incubator project, speculated to have been funded by the CIA, meant to test out the first application of "footfall prediction", i.e. technology that can sell "predictions" that people will walk to a certain place; in other words, manipulations of a person's mind and behavior to get them to walk there. Imagine, for instance, Pokemon Go putting the rarest special Pokemon that everyone wanted in front of a McDonald's because McDonald's paid for that placement, and when people walk over to catch it, McDonald's pays Pokemon Go's owners 2 cents or some market rate. This is just the beginning. Imagine how much more profane it will get once VR and augmented reality glasses get popular...

 

Edited by Zeffolia

  • 2 weeks later...

They made a Travis Scott bot that writes lyrics and sings like him, posted here with an accompanying deepfake video. I can't tell it from the real thing, tbh.

source: https://www.adweek.com/creativity/this-fake-travis-scott-song-was-created-with-ai-generated-lyrics-and-melodies/

Edited by Gocab

  • 3 months later...

Is this going to be a pillow-talk cast with Lex and Whitney? Are they a thing? Or did he build her? She's got that android kinda look, so it's possible.


27 minutes ago, goDel said:

Is this going to be a pillow-talk cast with Lex and Whitney? Are they a thing? Or did he build her? She's got that android kinda look, so it's possible.

Hehe, I don't want to rip on either of them. Lex is the man, and I know people who like Whitney a lot. I might actually check it out; Lex is killing it with his podcast. I know he has an interest in the question of whether an AI could be created that a human could "genuinely" fall in love with, like in the movie Her.

 

Edited by very honest

Neural networks need sleep:

https://www.discovermagazine.com/technology/why-artificial-brains-need-sleep

 

Well, sort of. Headline's a bit overblown. What it boils down to is that you need to feed random noise (which this article calls "sleep") to machine learning systems to "teach" them that not every signal you feed them contains relevant information. This is not news.

Still, I’d never thought about it in these terms. In some ways the analogy really does hold: systems that process information autonomously really do need to be cailibrated every now and then to prevent them from seeing information where there is none* and in living organisms we call this “sleep”.

 

 

* such as conspiracies 
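
A loose Python sketch of the calibration idea, purely illustrative: mix pure-noise examples with an explicit "nothing here" label into the training data, so the system has somewhere to put inputs that carry no signal. The shapes, labels, and nearest-centroid classifier are all invented here, not the paper's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

signal = rng.normal(loc=3.0, scale=1.0, size=(100, 8))  # inputs carrying a pattern
noise = rng.normal(loc=0.0, scale=1.0, size=(100, 8))   # "sleep": inputs carrying nothing

X = np.vstack([signal, noise])
y = np.array([1] * 100 + [0] * 100)  # 1 = "real signal", 0 = "just noise"

# A deliberately dumb nearest-centroid classifier. The point is only that the
# noise examples give it an explicit "there is no signal" class; without them
# it would be forced to find a pattern in everything, conspiracies included.
centroid_signal = X[y == 1].mean(axis=0)
centroid_noise = X[y == 0].mean(axis=0)

def classify(x: np.ndarray) -> int:
    d_sig = np.linalg.norm(x - centroid_signal)
    d_noise = np.linalg.norm(x - centroid_noise)
    return 1 if d_sig < d_noise else 0

print(classify(rng.normal(0.0, 1.0, 8)))  # fresh noise: almost certainly prints 0
```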


