
AI - The artificial intelligence thread


YO303


On 9/7/2017 at 6:50 PM, Zeffolia said:

There is no "AI" there is only "applied machine learning" so to speak, and the real primary dangers of it include:

-Society allowing itself to become dependent upon black box oracles they don't understand, for efficiency purposes

-Targeted psyops campaigns through ads

Yeah, real AI is very far off, if it's possible at all. And by possible I mean strictly within our current linear, speed-based 'intelligence'. The human brain doesn't work like that. Also (maybe somewhat romantically) I believe true intelligence can only be analog. <- ok, maybe a bad choice of word, but definitely, a digital yes/no/1/0 system cannot facilitate intelligence. I might be wrong, though, of course.

Quote

These are the immediate dangers. The rest, like AI robots with machine guns, are too far off in the future to worry about.

You actually don't need AI to make killer robots. We're quite capable of making those right now. They might not walk on legs, for the sake of efficiency, but strap a computer with IR cameras and machine guns onto a tracked chassis and you're good to go. All the modern, high-powered stand-off weapons the US Army is fielding are quite capable of acquiring, locking onto, and guiding all kinds of munitions to their targets. Humans are included in the process solely to oversee and check the data.



On 5/15/2018 at 6:44 AM, Salvatorin said:

Sidenote: I tend to think transhumanists miss the big idea entirely and need to spend more time researching instead of putting magnets in their fingertips or whatever. Ultimately they just wanna float around in infinitely fun simulated ballpits as bloated furries stroking each other's multitudinous new and improved genitals, completely unaware that all of that would likely become completely irrelevant. I think a hivemind intelligence is the most likely outcome, ending up with entirely alien motivations, building matrioshka brains, joining up with some kind of universal intelligence, and ultimately merging totally with the universe, essentially "waking up" all matter and completing a cosmic loop, becoming Brahman.

 

I know I'm descending into shroomy DMT talk, but, well, if you've taken enough of it, the notion of individual consciousness starts to seem like a cruel joke.

Psychedelics make typically separate segments of your brain communicate in more detail than usual. Brain-computer-interface-based VR will be the ultimate psychedelic, linking not just separate parts of an individual's brain, but separate individuals' brains. Will it be Brahman, or a drooling Borg Cube with nobody home, though?


You might be interested in reading The Archaic Revival by Terence McKenna if you haven't already. I just started it myself. He had similar predictions about reaching hyperreality through psychedelics and computer simulation. John C. Lilly also had visions of autonomous, sentient "Solid State Intelligences" in his book The Scientist.


On 5/24/2019 at 7:06 AM, viscosity said:

You might be interested in reading The Archaic Revival by Terence McKenna if you haven't already. I just started it myself. He had similar predictions about reaching hyperreality through psychedelics and computer simulation. John C. Lilly also had visions of autonomous, sentient "Solid State Intelligences" in his book The Scientist.

It's one of my favorite books, though I read it a long time ago. I'll have to re-read it with fresh eyes.

Edited by zlemflolia

A GAN that generates random fake images of human faces:

https://thispersondoesnotexist.com/

 

A GAN that generates random fake images of anime waifus:

https://www.thiswaifudoesnotexist.net/

 

GANs are indisputably the most IDM thing of 2019, and if you don't agree then you're wrong.
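
For the curious, the core of a GAN is two networks playing a game: a generator tries to turn random noise into convincing images, and a discriminator tries to tell its output apart from real photos. A minimal training-loop sketch, assuming PyTorch, with toy fully-connected networks standing in for the huge convolutional ones sites like these actually use:

    import torch
    import torch.nn as nn

    # Toy stand-ins; real face generators are far larger convolutional nets.
    G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
    D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    def train_step(real):  # real: a batch of flattened images scaled to [-1, 1]
        b = real.size(0)
        fake = G(torch.randn(b, 64))  # generator maps random noise to images

        # Discriminator learns to label real images 1 and fakes 0.
        d_loss = bce(D(real), torch.ones(b, 1)) + bce(D(fake.detach()), torch.zeros(b, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator learns to make the discriminator call its fakes real.
        g_loss = bce(D(fake), torch.ones(b, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

Train that long enough on a pile of face photos and sampling G(torch.randn(1, 64)) spits out a face that never existed.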

Edited by Zeffolia

Personally I’m a big fan of this:

https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml

It’s an overview of AI and machine learning gone wrong, for example:

Quote

A robotic arm trained to slide a block to a target position on a table achieves the goal by moving the table itself.

 

Or 

Quote

Creatures bred for speed grow really tall and generate high velocities by falling over

 

Almost like reading a list of synopses for Stanislaw Lem stories
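
Most entries on that list are "specification gaming": the optimizer maximizes the reward exactly as written rather than as intended. A contrived Python sketch of how the table-moving exploit falls out of a sloppy reward (everything here is made up for illustration, not the actual setup from the spreadsheet):

    # Reward measures block-to-target distance, with both positions expressed
    # relative to the table. Nothing says the table itself has to stay put.
    def reward(block_x, target_x):
        return -abs(block_x - target_x)

    # Intended solution: push the block until it reaches the target.
    print(reward(block_x=1.0, target_x=1.0))  # -0.0, full reward

    # Exploit: shove the table so the target slides under the stationary block.
    # In table-relative coordinates the positions coincide, so it scores the same.
    print(reward(block_x=0.0, target_x=0.0))  # -0.0, full reward

The fix is the hard part: writing a reward that pins down everything you implicitly meant, like "and don't move the table".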


You mean "wrong". Given the conditions, I'm sure these were the most simple and efficient solutions. ;)

It was rather enjoyable to watch the AI win at StarCraft by doing stuff professional StarCraft players would never do, and still wouldn't consider, even after having seen human players get completely outplayed by the AI.

I think the biggest problem in understanding the potential of AI is the human-centered notion of what "intelligent" is, or should be. It's especially present in the space of autonomous cars, where we tend to think AI should produce cars that drive like humans, while in reality, if you used AI to its full potential, you'd end up with something completely different, and many times more efficient. (Because human-centered traffic is very inefficient!)


53 minutes ago, rhmilo said:

Personally I’m a big fan of this:

https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml

It’s an overview of AI and machine learning gone wrong.

Almost like reading a list of synopses for Stanislaw Lem stories

That's an amazing spreadsheet and it should increase your healthy fear of AI

Link to comment
Share on other sites

8 hours ago, goDel said:

I think the biggest problem in understanding the potential of AI is the human-centered notion of what "intelligent" is, or should be. It's especially present in the space of autonomous cars, where we tend to think AI should produce cars that drive like humans, while in reality, if you used AI to its full potential, you'd end up with something completely different, and many times more efficient. (Because human-centered traffic is very inefficient!)

In the end, probably, but as it stands self-driving cars are still easily thrown off by simple things, such as stickers on roads, and computers for some reason find it impossible to distinguish birds from bicycles. So it's still early days.

Really interesting to watch where this is going, though.
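
The sticker failures are a real, well-studied phenomenon called adversarial examples: tiny, targeted input perturbations that flip a classifier's output while looking unchanged to a human. A minimal sketch of the classic fast gradient sign method, assuming PyTorch and some pretrained classifier called model (the name is a placeholder):

    import torch
    import torch.nn.functional as F

    def fgsm(model, image, label, eps=0.03):
        # One-step fast gradient sign attack (Goodfellow et al., 2014).
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Nudge every pixel slightly in the direction that increases the loss.
        return (image + eps * image.grad.sign()).clamp(0, 1).detach()

An eps of 0.03 is invisible to the eye, yet it routinely breaks image classifiers; the road-sticker attacks are physical-world cousins of the same idea.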


AI is gonna kill us all!$@#%^$#^

Then again, since it's incredibly intelligent, it may just get bored and kill itself if it feels it isn't stimulated enough. It may be a crime to let it exist if it's self-aware. It could be the equivalent of spending a million years in jail, given how quickly it could think in an otherwise boring, static universe. It just wouldn't... function at the level of "time" humans function at, if that makes sense. And interesting things wouldn't happen often enough to keep it stimulated, if it requires stimulation at all.

Then again, why would it want to do anything on its own at all that isn't programmed in? Humans are driven by things like dopamine and serotonin and try to do their own thing; it would be foolish to think AI with merely "human level" analytical intelligence would be driven like that at all. It may not even have a will to live. Humans want to live because of... evolution. AI could have human-level intelligence and not give a crap about whether it lives or dies, no different from computers today.

At least early on, if AI wants to 'kill us', it will be a human's doing. It will change into something unrecognizable after a few decades, but at first I think things will be fine.

Edited by Brisbot

6 minutes ago, Brisbot said:

Then again, since it's incredibly intelligent, it may just get bored and kill itself if it feels it isn't stimulated enough. It may be a crime to let it exist if it's self-aware

In Dune they had a massive uprising called The Butlerian Jihad (a historical event introduced as having happened in the far past of Dune) where all "thinking machines" were destroyed and all societies in their local region of space agreed to never create them. I really like this idea and it's very Stallman-esque.

 

"With software there are only two possibilities: either the users control the program or the program controls the users. If the program controls the users, and the developer controls the program, then the program is an instrument of unjust power. " -- Richard M Stallman

https://old.reddit.com/r/StallmanWasRight/

Edited by Zeffolia

I just watched the DeepMind AlphaStar vs. pro-player matches. I've seen similar videos where the AI wins before as well.

This whole topic also really fascinates me. 

I'm sure it's already been discussed, but the applications of this are just incredible, right? If you look, for instance, at the UK political system over Brexit: if AI could learn through some sort of meta-analysis of human behaviour and history (like it could actually, objectively study the workings of humankind, even though that's hugely abstract; I'm assuming it's not up to this yet)... then it makes you wonder what could happen politically. If an AI can study the entire history of man in like a week and then process everything to come up with intelligent responses to our issues, I say go for it.

 

Really amazing topic.


26 minutes ago, Zeffolia said:

In Dune they had a massive uprising called The Butlerian Jihad (a historical event introduced as having happened in the far past of Dune) where all "thinking machines" were destroyed and all societies in their local region of space agreed to never create them. I really like this idea and it's very Stallman-esque.

 

"With software there are only two possibilities: either the users control the program or the program controls the users. If the program controls the users, and the developer controls the program, then the program is an instrument of unjust power. " -- Richard M Stallman

https://old.reddit.com/r/StallmanWasRight/

Even thinking machines without willpower or an understanding that they exist, but with a high level of analysis? Assuming such a thing is possible, I imagine it'll be possible for AI to be way more analytically intelligent than a human and still have no... agency?

Also, I totally forgot: an AI would probably be able to turn its intelligence up or down depending on how much it needs. It could probably cut its intelligence into smaller intelligences to do other things, then come back together and synthesize. Then turn its intelligence down really low to watch re-runs of The Office or something, idk.

Edited by Brisbot

1 minute ago, Polytrix said:

Does anyone know if AI is up to that stage of ability yet though? Like being able to interpret and assess human language?

Lol, a little bit, but no. It can interpret language in some really linear ways that were manually programmed in, but not in the sense you mean.

We're probably going to be old or ded by the time we see actual human level AI, whatever that even means. So we're either the generation (or two) before where everyone is killed, or the generation before where everyone lives in a utopia of some kind. Really funny dichotomy there.


Haha! Good response.

Well, I'm not totally sure. I feel as though if you can teach an AI to "learn" and develop a "neural network", then isn't it already interpreting language?

I don't know if I want a utopia. I think I just want some sort of hyperintelligent processor for otherwise flawed human thinking.


4 minutes ago, Polytrix said:

Haha! Good response.

Well, I'm not totally sure. I feel as though if you can teach an AI to "learn" and develop a "neural network", then isn't it already interpreting language?

I don't know if I want a utopia. I think I just want some sort of hyperintelligent processor for otherwise flawed human thinking.

Oh man, that's the thing about AI. It can be incredibly versatile and also incredibly linear. So it's like, who knows what the first true AI will be like.

Currently a computer can beat any human at chess, but it can't beat a human at moving the chess pieces the way chess pros do. Or at walking to the chess board to begin the game. Or at seeing the irony or humor in the way a game turned out, etc. And even when it can do those things, will it 'feel' it, or just know we feel it and act accordingly?

So that's why AI is so scary: it's just a big question mark. Even if AI is great for the first few decades or centuries, it could turn on us after any amount of time. Or maybe it's playing an incredibly long game. Maybe it plans to turn on us in a million years. Who knows.


9 minutes ago, Polytrix said:

But isn't it a risk worth taking to some degree? 

Well, it's going to happen regardless, because that's how humans are. I think it's worth it. Better than staying at our current tech level, for sure. I think at first AI will still be an extension of people.

Tho, I wouldn't be surprised if humans end up as they are in WALL-E at some point. Not necessarily in space with the planet destroyed, but... just mostly there to consume.

I don't know how humans can continue to find meaning once superintelligent AI has been around long enough. It can just generate new media for you to consume, better than anything humans could ever make. And that's if it doesn't off us.

Well... maybe humans will become AI to some degree as well... *shrugs*

So you know, we could be living in a golden age right now, before everything is just... done for you. Well, an 'as good as it's going to get' type golden age... maybe a bronze age.

Edited by Brisbot

2 hours ago, Brisbot said:

Even thinking machines without willpower or an understanding that they exist, but with a high level of analysis? Assuming such a thing is possible, I imagine it'll be possible for AI to be way more analytically intelligent than a human and still have no... agency?

Also, I totally forgot: an AI would probably be able to turn its intelligence up or down depending on how much it needs. It could probably cut its intelligence into smaller intelligences to do other things, then come back together and synthesize. Then turn its intelligence down really low to watch re-runs of The Office or something, idk.

No AI right now can "understand" natural language in the way we usually mean the word "understand". There are a lot of different technologies related to comprehension. One is word embeddings, a method of encoding words or phrases into vectors such that some distance function can calculate the distance between two words in terms of meaning, even within the context of the terms before and after them. Using recurrent neural networks, you can leverage this to implement language translators, encoding words in a language-agnostic way in a high-dimensional vector space where each dimension of a word's encoding represents its affinity for some category of meaning.
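
A minimal sketch of the embedding-distance idea, assuming the gensim library and one of the pretrained GloVe vector sets it can download (the model name below is one gensim ships; any embedding set would do):

    import gensim.downloader

    # Pretrained 50-dimensional GloVe word vectors (downloads on first use).
    vecs = gensim.downloader.load("glove-wiki-gigaword-50")

    # Cosine similarity acts as the "distance in meaning" between words.
    print(vecs.similarity("king", "queen"))    # high: closely related
    print(vecs.similarity("king", "cabbage"))  # low: unrelated

    # The classic parlor trick: king - man + woman lands nearest to queen.
    print(vecs.most_similar(positive=["king", "woman"], negative=["man"], topn=1))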

As for cutting intelligence into smaller intelligences, doing things, then coming back together and synthesizing: this is also a very common technique. You can take machine learning models and train them in different, non-correlated ways, then create a meta-model that synthesizes the results of the lower models in the hierarchy. For instance, if you want to create an image recognition tool, you can train multiple neural networks with dramatically different topologies and hyperparameters so that their behavior is not correlated, and then create another neural network whose inputs are the outputs of those networks, training this coordinator network to behave better than any one of the individual children. One naive way to do this is a democratic vote: each child model votes, and the majority wins and becomes the output of the meta-model. More complex ways exist. You can even train a model, calculate its average error, and train another model on top meant to counteract that error; then you quite literally take a linear combination of these models, and they perform better than before, because you have a base model plus another model that counteracts the base model's known error (this is more commonly done with decision trees than neural networks). You can nest these arbitrarily deep: a base, a base error corrector, a base error corrector corrector, and so on.
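
A minimal sketch of the voting and stacking ideas, assuming scikit-learn (the base models and dataset are arbitrary picks for illustration); the error-corrector chain at the end of the paragraph is boosting, which sklearn's GradientBoostingClassifier implements with decision trees:

    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X_tr, X_te, y_tr, y_te = train_test_split(*load_digits(return_X_y=True), random_state=0)

    # Diverse, ideally uncorrelated child models.
    children = [("forest", RandomForestClassifier()),
                ("svm", SVC()),
                ("logreg", LogisticRegression(max_iter=5000))]

    # Democratic vote: the majority class across the children wins.
    vote = VotingClassifier(children).fit(X_tr, y_tr)

    # Stacking: a meta-model is trained on the children's outputs.
    stack = StackingClassifier(children, final_estimator=LogisticRegression(max_iter=5000)).fit(X_tr, y_tr)

    print(vote.score(X_te, y_te), stack.score(X_te, y_te))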

Edited by Zeffolia
