
Proof of Animal Consciousness Thread


may be rude


Yes, but what should we tell them to do?

 

You say "what their training data suggests they SHOULD do"...you need to have an aim/goal/values to have a "should"...and they (aims etc) won't just magically appear...they need to be put there by humans

 

So again, we're back to WHAT goals/aims/values should be picked, and we're back to the need for philosophy and ethics and all that

 

Maximum number of lives saved. What other option is there? Maybe we can be a bit objective, erring on the side of becoming immoral, and apply weights to individual lives as well. Save the President over two murderers.
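
For what it's worth, here is a toy sketch of what "apply weights to individual lives" could look like in code. The categories and weights below are invented purely for illustration and don't come from any real system.

```python
# Toy sketch of the "weighted lives" heuristic described above.
# The categories and weights are invented for illustration only.

WEIGHTS = {
    "head_of_state": 5.0,       # hypothetical: some lives weighted more heavily
    "average_person": 1.0,
    "convicted_murderer": 0.5,
}

def group_value(people):
    """Sum the (made-up) weights of everyone in a group."""
    return sum(WEIGHTS[p] for p in people)

def choose_group_to_save(group_a, group_b):
    """Return whichever group the weighted heuristic says to save."""
    return group_a if group_value(group_a) >= group_value(group_b) else group_b

# "Save the President over two murderers":
print(choose_group_to_save(["head_of_state"], ["convicted_murderer"] * 2))
# -> ['head_of_state']  (5.0 vs 1.0 under these invented weights)
```

Even this trivial version forces the question of who picks the weights and on what grounds.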

 

Realistically though, if every car is self-driving and we outlaw non-self-driving cars, we can make it so no accidents ever happen. Every car can constantly scan the area it's in (above and below, in front and behind, on each side) and share that information over the internet with every other car in the world. They could literally avoid all accidents. Computers solve problems like this all the time with concurrent scheduling algorithms; it's stupidly easy for them
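
As a rough illustration of the coordination being described, here is a minimal slot-reservation sketch: cars request a time slot for a contested stretch of road and a shared scheduler never grants the same slot twice. The class, the car names, and the one-second slot granularity are all invented for this example; real vehicle coordination (sensing, latency, hardware failures) is far harder than the scheduling step itself.

```python
# Minimal sketch of slot-reservation scheduling, the kind of coordination
# the post gestures at. All names and the one-second slot size are invented
# for illustration; this is not how any real traffic system works.

class IntersectionScheduler:
    def __init__(self):
        self.reservations = {}  # time slot (seconds) -> car id

    def request_slot(self, car_id, desired_slot):
        """Grant the first free slot at or after the desired one."""
        slot = desired_slot
        while slot in self.reservations:
            slot += 1  # conflicting car is pushed to the next free slot
        self.reservations[slot] = car_id
        return slot

scheduler = IntersectionScheduler()
for car, want in [("car_a", 10), ("car_b", 10), ("car_c", 11)]:
    granted = scheduler.request_slot(car, want)
    print(f"{car} wanted t={want}, granted t={granted}")
# car_a wanted t=10, granted t=10
# car_b wanted t=10, granted t=11
# car_c wanted t=11, granted t=12
```

The scheduling logic really is the easy part; the sensing, communication, and failure handling are where the difficulty lives.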


The real issue with artificial intelligence is that it is impossible to probe 100% of the range and domain of a sufficiently advanced decision-making system to test its behavior. You can't say "Hey, it's trained this way, so it will do this, and we can now verify it probably won't do that" and trust it, and you can't brute-force it either, because the domain is too large; the universe would die before we finished. You have to add manual overrides for output behavior, which limit the decision-making abilities of the system. That's my understanding; I haven't studied the topic as much as I should have.
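
To make the "manual overrides for output behavior" idea concrete, here is a minimal sketch: whatever an unverified policy proposes, a hand-written guard clamps to fixed limits before it is acted on. The function names and the limits are invented for illustration and don't correspond to any real system.

```python
# Rough sketch of a manual override on output behavior: the learned policy
# can propose anything, but a hand-written guard clamps its output to a
# fixed safe envelope. Names and limits are invented for illustration.

MAX_SPEED = 30.0   # hypothetical hard speed limit, m/s
MAX_BRAKE = 1.0    # hypothetical normalized braking command

def untrusted_policy(observation):
    """Stand-in for a learned model whose full behavior we cannot verify."""
    return {"speed": observation.get("suggested_speed", 0.0), "brake": 0.0}

def safety_override(action):
    """Clamp the policy's proposed action to the manually specified limits."""
    return {
        "speed": min(max(action["speed"], 0.0), MAX_SPEED),
        "brake": min(max(action["brake"], 0.0), MAX_BRAKE),
    }

raw = untrusted_policy({"suggested_speed": 80.0})  # the model asks for 80 m/s
print(safety_override(raw))                        # -> {'speed': 30.0, 'brake': 0.0}
```

On the "can't brute-force it" point: even a toy input of just 100 binary features has 2^100 (roughly 1.3 x 10^30) possible cases, so exhaustive testing is off the table for anything realistic.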


Crows wait until your back is turned before swooping in and claiming that french fry someone dropped on the ground.

They do this because they're ashamed to be seen eating food off of the ground.

 

Need I say more?

The me from a year ago is still on point

 

Also when did AI become animals?


I don't know if you're making a subtle joke with "maximum number of lives saved" or what, but the Utilitarianism vs Deontology debate has been around for a long-ass time...

 

there are some serious problems with "save the most lives" as a moral heuristic...if that were the Highest Good, then we would be morally obliged to (e.g.) kill ONE person and give his organs to FIVE people on a transplant waiting list

 

In short, should we ACTUALLY be pushing one person off a bridge to save the five people down below? Or do we as individuals have a right not to be pushed off bridges for the greater good, or have our organs stolen to save the many?


We only have the right to resist them pushing us, without feeling bad about it.


Utilitarianism is not about the number of lives that can be saved but about the amount of joy and suffering that is felt, I think. A suffering person's life is not as valuable as a happy person's life. A murderer will cause a lot of suffering, while a doctor can cure it. The sum of joy has to be maximized and the sum of suffering minimized. The problem is that joy and suffering cannot be measured. But they can be roughly estimated, and doing that we can easily see that we're not living in a utilitarian society: industrial livestock farming, nationalism, and individualism are not utilitarian.


So the car should err on the side of killing sad people? What about 3 sad people vs 2 happy people? Maybe since it's stressful being black in America--and therefore black people are probably less happy on average than white people--we should err on the side of killing black people...in fact, we should steal organs from black people and give them to white people


I'm joking of course...I don't think the solutions lie in utilitarianism...I think we would rather our cars not kill people based on their social worth or their happiness or whatever, that's just a recipe for a shitty paranoid world

 

Although we should probably have heuristics for hitting a 4-year-old kid vs a 100-year-old person with stage 4 colon cancer (I would love to see the CCTV footage of that)

