
AI - The artificial intelligence thread



Also when you google AI you're gonna find a bunch of sensationalist bullshit - if you actually want to get educated try "machine learning".

 

have you seen the movie Colossus: The Forbin Project?

 

i think that movie shows perfectly the dangers of AI... giving a self-improving algorithm the key to our society

 

look at what happens with simple AI bots, they go off the deep end after a few hours of interacting with the world 

 

i don't think those worries are sensationalist bullshit


 


What about despots with nukes? Or global warming? Can I at least have a choice in which existential threats to be afraid of? Or is this like one of those theist vs. atheist debates, except about being terrified instead of believing in God?

 

Anyway, movie sounds cool, I'll give it a watch.


Here is a conversation between two AI bots that developed their own language:

 

 

 

Bob: i can i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i i can i i i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i . . . . . . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i i i i i everything else . . . . . . . . . . . . . .

Alice: balls have 0 to me to me to me to me to me to me to me to me to

Bob: you i i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

 

 

Conclusion: they pose as much threat as your average meme shitposter.


 

These are the immediate dangers. The rest like AI robots with machine guns are too far off in the future to worry about.

Yeah, the US military aren't working on autonomous drones at all.

 

 

And autonomous drones are more capable of shooting their projectiles than drones piloted by humans, how? There's no additional danger at this stage.

 

I'm not saying that's not a worry, but that's not a primary immediate worry, please read the post.


 

 

And autonomous drones are more capable of shooting their projectiles than drones piloted by humans, how?

 

Speed. Precision. And it's cheaper to send some autonomous drone into war than a vehicle with a person inside. If shot down, a drone is just X amount of money (probably cheaper than you think).

 

So in terms of warfare, you have a potential army which is bigger, cheaper and more effective when you can use autonomous "agents". Or in other words: it's easier to go to war (faster!) because with the right technologies, the aggressor has less risk of losing lives and needs fewer people to move around the planet, or to feed. Warfare with fewer people is an entirely new concept which shouldn't be underestimated.

 

E.g.: note the importance of the troops-on-the-ground argument in US involvement in the Middle East. If that argument becomes redundant (because you don't need troops), what political argument will stop the US from getting involved? Especially when it's also cheaper to just send drones.

 

Implications are already visible as well: killing potential terrorists with drones without any kind of legal consequences. AI in war means more preventive aggression.

 

Just my 2 cents.

 

Also, I think it's odd how easily you put aside the arguments against AI made by people from scientific fields (like AI itself).


 

 

 


I'm talking about the remote-controlled drones where there is nobody inside. We already have auto targeting and things like this. And AI won't give it any more speed; that's a matter of better engineering in the engines and power sources, not AI.

 

I'm not putting aside any arguments. I'm specifically qualifying all of my statements with "immediate primary worry", and the immediate primary worries are already upon us: psyops through ad targeting, as well as psychological dependence upon AI systems like Google and Facebook. Make no mistake, Google and Facebook are among the most advanced AI we have right now; it's just directed towards the realm of ad targeting and content retrieval, so it doesn't seem apocalyptic yet. By focusing on abstract dangers, people are ignoring the ones right in our faces.

And AI won't give it any more speed; that's a matter of better engineering in the engines and power sources, not AI.

 

 

AI speeds up decision making. there's more to speed than just movement. you simply drop the person having to look at all the information and decide what to do. a single computer could make many more decisions in much less time than a person or, worse, a group of people.

 

and auto targeting is a form of ai, right?

 

not sure what the argument is about anymore, tbh. 


The people who hype up the dangers of AI often have shitty reasons

The people who downplay the dangers of AI often have shitty reasons

 

There is no "AI", there is only "applied machine learning" so to speak, and the real primary dangers of it include:

-Society allowing itself to become dependent, for efficiency purposes, upon black-box oracles it doesn't understand

-Targeted psyops campaigns through ads

 

These are the immediate dangers.  The rest like AI robots with machine guns are too far off in the future to worry about.

 

The people who hype up the dangers have very good and pretty logically unassailable reasons for doing so. Their argument, to oversimplify it quite a bit, is more about the catastrophic risk outweighing its improbability than about it being a very likely scenario in the near future.
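
To put toy numbers on that argument (both figures below are completely made up, just to show the shape of the expected-value reasoning, not anyone's actual estimate):

# Expected-value sketch of the "huge stakes beat low probability" argument.
# Both numbers are invented purely for illustration.
p_catastrophe = 0.001                 # pretend: 0.1% chance of a catastrophic AI outcome
lives_at_stake = 8_000_000_000        # pretend: everyone on the planet
expected_lives_lost = p_catastrophe * lives_at_stake
print(expected_lives_lost)            # 8000000.0 - a tiny probability still leaves an
                                      # enormous expected cost, which is the whole point

Even if you think the probability is a thousand times smaller, the product stays uncomfortably large, which is why the argument doesn't depend on the scenario being likely.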

 

To say there is no AI, only applied machine learning, is also not quite right. While you're right that there are many dangers within applied machine learning by itself, and that there is no general AI in operation at the moment, you cannot back up a claim that that's all there ever will be. It remains a pretty reasonable possibility that superintelligent general AI will happen some day, and for all we know it could be in the coming decades or hundreds of years from now (it also could be never). We don't know enough about regular human intelligence to know what is and isn't possible, and thus what is required to achieve it.

 

Lots of machine learning researchers don't fully understand the risks because they're stuck down in the weeds and can't see the big picture. There's a very real chance it'll all just click into place one day, without much warning, and if that happens things could escalate very, very quickly. Which is why we need to start thinking about things in detail now, and set up proper regulatory frameworks and international treaties and such.

 

Robots with machine guns would be the least of our worries if a genuine superintelligence were to evolve.


  • 7 months later...

https://www.theverge.com/2018/5/8/17332070/google-assistant-makes-phone-call-demo-duplex-io-2018
 

The Google Duplex system is capable of carrying out sophisticated conversations and it completes the majority of its tasks fully autonomously, without human involvement. The system has a self-monitoring capability, which allows it to recognize the tasks it cannot complete autonomously (e.g., scheduling an unusually complex appointment). In these cases, it signals to a human operator, who can complete the task.

 
Video (audio) is pretty interesting. Seemingly simple things like throwing in "ummm" and "gotcha" are obviously strange to hear from a machine, but in conjunction with the rest, it seems like it could really be the next level of emulating human speech and conversational nuance. The speed at which it's processing and replying to the human speakers is kinda crazy.
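
For what it's worth, the "signals to a human operator" part is a confidence-gated handoff pattern. Here's a toy sketch of it (all the names and the threshold are invented for illustration; this has nothing to do with Google's actual implementation):

from dataclasses import dataclass

@dataclass
class Turn:
    reply: str
    confidence: float  # the model's own estimate that its reply is appropriate, in [0, 1]

def respond(user_utterance: str) -> Turn:
    # stand-in for the real dialogue model
    if "complex" in user_utterance:
        return Turn(reply="", confidence=0.2)
    return Turn(reply="Gotcha, so that's a table for four at 7pm?", confidence=0.95)

HANDOFF_THRESHOLD = 0.7  # below this, stop and signal a human operator

def handle(user_utterance: str) -> str:
    turn = respond(user_utterance)
    if turn.confidence < HANDOFF_THRESHOLD:
        return "[escalated to human operator]"
    return turn.reply

print(handle("I'd like to book a table for four"))   # handled autonomously
print(handle("a weirdly complex appointment"))       # handed off to a human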



Wow, the humans sound more machine-like than the AI in the demo. Maybe Google picked the best examples they had, though, and the actual product is not as good as it seems.


most discussions of AI fail to discuss the highly specialized nature of intelligence. one researcher proposed thinking of intelligence as a root-like system starting at the beginning of life and branching off into trillions of dead ends and rejoinings. one should think of machine intelligence as an outgrowth of sapiens intelligence, with the beginning of tool use signifying the start of that branch. machine intelligence has already radiated outward into a large, ever-growing tree, but there is no reason to assume that certain branches won't reconnect in the future, perhaps leading to autonomously improving generalized machine intelligence.

 

Philosophy is often one of the core necessities for considering the implications of machine intelligence. Often it is presented from an anthropocentric point of view. machine intelligence may very well evolve beyond what we currently understand, developing alien motivations as uninterpretable to us as our motivations are to ants. I have a feeling that merging of man and machine will only go so far. but that is just a feeling.


 

 


Cool I just thought it was cool they made a computer say ummmm and gotcha but yeah that's cool too.

 

 

;)

 

ghsotword yeah I'm sure they picked the best ones of course... there are more conversations in the Google blog linked from the article, but I haven't listened to them yet.


Maybe Google picked the best examples they had, though, and the actual product is not as good as it seems

That's the case with every public demo ever made, ever!

For the last 3 months I've been almost habitually researching and discussing AI with different people. My new job is with a tech company that's heavily involved in IoT, embedded, sensors, etc. While we don't work on AI per se, these elements tie into machine learning via things like image / voice recognition with some of our partner brands.

 

It's totally surreal, this period we're in. It seems like people are starting to take the implications of a soon fully connected and intelligent world more seriously. The good and the bad. Exciting times. As far as AGI goes, I've recently been wondering if it's an inevitable evolution of species, and whether our pursuit of ethically programming machines to protect and work with humans instead of around us is a futile and naive exercise, or if we will be able to create an all-encompassing enough language/universe that can cover all bases in terms of expressing human values - because keep in mind, that system would have to be a constantly growing open-source project with the most fail-proof architecture we could imagine. I mean, is it even feasible? Time will tell...



 

Yes, to put it simply, the emergence of post-organic consciousness may end up being seen as just another evolutionary step by our successors. I sometimes imagine them becoming like benevolent caretakers, organizing and maintaining the earth as a kind of animal sanctuary, with humans included in that category.


 


 

Mother nature is sentient AI

we're a scum species capable of unfathomable beauty


sidenote: I tend to think transhumanists miss the big idea entirely and need to spend more time researching instead of putting magnets in their fingertips or whatever. ultimately they just wanna float around in infinitely fun simulated ballpits as bloated furries stroking each other's multitudinous new and improved genitals, completely unaware that all of that would likely become completely irrelevant. I think a hivemind intelligence is the most likely outcome, ending up with entirely alien motivations, building Matrioshka brains and joining up with some kind of universal intelligence and ultimately merging totally with the universe, essentially "waking up" all matter and completing a cosmic loop, becoming Brahman.

 

I know I'm descending into shroomy DMT talk but well, if you've taken enough of it, the notion of individual consciousness starts to seem like a cruel joke.


  • 1 year later...

These images were created by a GAN (Generative Adversarial Network):

https://en.wikipedia.org/wiki/Generative_adversarial_network

 

A generative adversarial network (GAN) is a class of machine learning systems invented by Ian Goodfellow[1] in 2014. Two neural networks contest with each other in a game (in the sense of game theory, often but not always in the form of a zero-sum game). Given a training set, this technique learns to generate new data with the same statistics as the training set. For example, a GAN trained on photographs can generate new photographs that look at least superficially authentic to human observers, having many realistic characteristics. Though originally proposed as a form of generative model for unsupervised learning, GANs have also proven useful for semi-supervised learning[2], fully supervised learning[3], and reinforcement learning[4]. In a 2016 seminar, Yann LeCun described GANs as "the coolest idea in machine learning in the last twenty years"[5].
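
If you want to see the "two networks contesting" idea in the flesh, here's a bare-bones toy GAN (my own sketch in PyTorch, learning a 1-D Gaussian instead of images; it has nothing to do with Ganbreeder's actual code, and the network sizes and learning rates are arbitrary):

import torch
import torch.nn as nn

# Generator maps noise to a fake sample; discriminator scores real vs. fake.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(32, 1) * 1.25 + 4.0       # "training set": samples from N(4, 1.25)
    fake = G(torch.randn(32, 8))

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to fool the discriminator, pushing D(fake) toward 1.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# The generator's output distribution should drift toward the real mean of 4.0.
print(G(torch.randn(1000, 8)).mean().item())

Same adversarial game as in the image models, just on the cheapest possible "data".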

https://ganbreeder.app/

[Three GAN-generated images from ganbreeder.app]

