
Technology, the future and abuse


coax


A thread after my own heart. The book "Techno-Fix" is a great resource for anyone concerned about the clash of technology, environment, policy and science.

 

Cool thanks. I've heard about it but haven't read it.

 

there have been a few instances where a nuclear war could have been started that would destroy the earth and we sort of passed this test. it seems like things are defusing even further as everything becomes more globalized and western values take over the world, and i believe those values and beliefs prevented people from pushing the nuke buttons in both the ussr and the us. while i do subscribe to the idea that technologies and other artifacts have some sort of agency and power (http://en.wikipedia.org/wiki/Actor%E2%80%93network_theory), it still seems that humans are in control of things to a large extent. it seems a bit far-fetched to think that your doomsday scenario will be overlooked - that AI + nanotechnology + advanced neuroscience will get out of control and fuck us all.

 

I can relate to that view, I totally can. I guess the doomsday scenario is more like the singularity, but it could certainly be 'the walking dead' (in terms of a collapsed economy, not the zombie aspect) or whatever too, but that's not as interesting; I'm more interested in the inherent power and reach of these technologies. I was also at least attempting to describe some of the problems with mental models and our view of ourselves and reality, and how this fits in with the power disparity of the powerful vs. the powerless.

 

As far as I can tell, it's not outrageous to claim that VR, simulations and even neuroscience could be used for torture and for advertising and marketing. It's a bit more of a reach to say that they will be used by western governments and corporations for influencing people, but it's not as far-fetched to think of scenarios of China, Russia, North Korea or whoever else using them. And Middle Eastern terrorists in a different way :p

 

There are all kinds of scenarios, but I'm worried about the new normal, how lines are being pushed, and new normals created over long time periods. We don't really know where we are or how we ended up where we are. Boiling frog effect. The real issue is the vast power of these things, since any technology will by necessity change norms in society and change behavior. You know, caze talks about humans being virtual constructs in a solar computational material. How would anyone know who is in charge, how we got there, what is being planned, etc.? He can say that and be taken seriously, while my worries about the exploitation of tools on earth are not. That is mysterious to me. If I was a simulation inside a computer somewhere, I would by definition not have any power; I would be at the mercy of whoever was 'administrator' of that simulation's hardware. But in any case, that is far beyond my main point, and we could go into all kinds of scenarios regarding it.

 

Just to say as a final point, I'm worried about the general knowledge of me, you and everyone else, compared to the powerful. Technology and society seem to move in a general direction, and there are usually far more people who don't care and accept it than people who are in any kind of opposition. For the most part, the plans of the powerful go through without much resistance, and usually without any scrutiny by the 'general public'. We have no idea what the endgame is, or if they have one. I'm not claiming that I know they do, and I don't want this to sound like a conspiracy, but how am I to debate this without bringing up the obvious gaps in knowledge, influence and power?

 

I had a post a while back about democracy and whatever, and I still believe all of that. I don't believe in reptilians controlling everything :P but there is ample space between a global conspiracy and the exploitation of technology and of levels of knowledge and awareness. And it can only get worse with these kinds of deep and far-reaching technologies that can potentially put humans in a simulation, control/remove/alter human consciousness and memory, alter materials at a low scale, etc. I guess all I'm hoping for is some kind of awareness of, and resistance to, blindly accepting new technology, and making conscious decisions about new policies that come into play. Right now, if I want a new passport in my country, I HAVE to give my fingerprint. It's done at the police station. Ok, I've rambled on long enough here, but a lot can be said about this and it all connects together in many ways.


I'm just going to add a few points, almost all directly from the book I mentioned earlier ("Techno-Fix"), for your consideration.

 

Why "techno-fixes" (technologically driven "solutions" to existing problems) don't work, as Michael Huesmann explains, (in my words for better or worse):

 

1. All technologies have unintended side effects.

2. Technological fixes do not, by their nature, attack the root problem they attempt to solve - rather they fix symptoms, and therefore do not offer long-term solutions.

3. These fixes actually cover up the symptoms; this causes problems to grow over time, perhaps unnoticed by most.

 

Despite our beliefs, technology has not been empirically shown to increase basic human happiness. Indeed, it often robs us of old sources of happiness (e.g. TV replaces social interaction, assembly lines replace craft work).

 

When "progress" became a societal driver - recently, as far as all human history is concerned - possibly partly due to historical religious narratives from the Abrahamic faiths which replaced a cyclical concept of time theretofore assumed - people expected this to happen:

 

As material progress increased, so too would social progress and spiritual (moral) progress. But instead, material progress increased and the other two stagnated - a flatline, if we were to graph it loosely. Technology is not the root cause of the positive social change we have seen in the last century. Increased communication and increased individual autonomy are positive values in our society, but they are not necessary for social change, nor have they been a net gain for humanity from either a social or moral standpoint.

 

The USA is the most "techno-optimistic" country on Earth. Our vested financial interests see technology as profit, and it is sold to us with the promise that more technology = more happiness. Now consider that we utilize technology heavily, but our understanding of it in the US is abysmal: 80% of US citizens do not meet minimum standards of scientific literacy; 50% don't believe in evolution; 28% believe in astrology; 33% in witches; 25% in UFO visitors. There is a fundamental disconnect between the use, and the understanding, of the technologies we use and trust.

 

Huesemann discusses the "Myth of Value Neutrality," which has already come up in this thread - the idea that all technologies are amoral, or that they exist in an entirely "neutral" state, to be used for good or bad. Here's the simple and obvious counterargument: with any advanced technology, the values of the inventor/engineer/corporation are necessarily embedded in the technology itself. Examples might include the genetic engineering of crops - a technology embedded with the value of the commodification of plants & animals. Similarly, the car is an advanced technology embedded with the value of personal independence. The latter seems like a noble value to uphold, but I'll get to some of the consequences in the next bit. In short, advanced technologies are not "amoral" in the sense that they simply exist in a neutral state until they are used.

 

Also discussed is what he calls the "Myth of Technological Autonomy." The question is: why does technology seem to have autonomy? It's so large, it gains a sort of inertia: think of car culture. How would we ever get out of that now? With millions of cars, highways, truckers, even planes and boats, it seems an immovable path has been laid, as if technology is just moving along, going this way anyway... False. This idea diverts us from the need to actively engage in determining the path of our technologies. It is neither necessary nor wise to simply accept whatever path emerges from the decisions of executives and politicians. With respect to cars, there is a host of unintended side effects that had nothing to do with the initial value: increased energy consumption, countless deaths, overuse of material resources, enormous amounts of toxic waste, and of course, the difficulty of changing course.

 

The "Technological Imperative" is the belief that whatever can be built, should be built. Here's the issues with it.

 

1. This idea is used as an excuse to avoid democratic deliberation with respect to the development & implementation of innovative technologies.

2. It makes technological developments seem inevitable, by hiding the fact that the developments are directed by special interests and powerful social classes.

3. This idea promotes a culture-wide passive acceptance of new technologies, no matter how destructive. Consider mountain-top removal.

 

So, how does technology make its way into our society? It can be broken down into two steps: first, peer review by scientists, engineers and academics; second, market assessment for profitability, conducted by business executives. There's an issue here - we're missing a crucial step: review by citizen representatives to determine whether the proposed technology could have negative social or environmental consequences that would be unacceptable. There is a wonderful chapter on the process of

 

 

Those are just some of the points the author makes throughout the book. I apologize for not having any sources - the book has well over 1,000 citations from scientific journals & government statistics - but I've lent the book to a professor of mine and he claims he lost it. :P

 

As a disclaimer, I don't necessarily hold any of those ideas myself, but they're relevant to the thread. I come from a background in both microfabrication & nanotechnology (my degree and lab jobs), and in the world of building reuse, recycling, salvage, and controlled demolition. I've swung between both sides of this argument over the last decade; it's a tricky thing to understand, and it's all too easy to fall back on personal beliefs, either hastily or lazily. I will say at this point I am highly suspicious of the likes of Kurzweil, Kaku, Diamandis and others; they paint a picture too rosy for reality, because it is easy to sell and gives people hope. That's nice, but it's ultimately a terrible thing to do without deep consideration of the consequences, or at least inviting their audiences to consider the pitfalls. Their arguments are entirely one-sided: a suspicious rhetorical trick at best, deliberately misleading at worst.

 

Anyway, I hope some of these points are interesting or provocative to at least a few of you. Here's a link to the book I based most of this post on.

 

 


there have been a few instances where a nuclear war could have been started that would destroy the earth and we sort of passed this test

 

True, we've done OK so far, but this isn't an exam. The test continues so long as there is risk. As for the values of "the west" causing the stop of nuclear war, I call shenanigans. What value that was entirely western can you claim stopped these wars? Is there really some Eastern or primitive value that says "Fuck this gay earth" any more than your average westerner might? Claiming that as a Western Value Triumph seems really arbitrary.


There hasn't been a time where people weren't afraid of where this new scary technology was going to lead us

 

You have noticed the often-applauded "acceleration" of new technologies in the last half a century, yeah? There's more reason for concern now than there ever has been in the past.


One last post, then beer time. I'm interested to know what people see nanotechnology as providing for humanity in the short and long term? And what, if any, concerns do you have about the way it's designed, implemented, or produced?



Very nice, yeah. A couple of points. How neutral do you think companies are to moral questions about the products and services they produce? There seems to be mostly an "if the customer wants it then it's good" attitude. This means that the customer is mostly responsible for what is popular, and also for the bad side effects of technology and such. It leads to an even bigger need for talking together and organizing as a group. When I talk to some people, they feel that people who are smarter or more powerful than them will take care of all these problems, rather than the bottom-up influence of the general public as a whole. When I brought up these problems a few years back, everyone felt we couldn't do anything and shouldn't worry about it, while the economic and political theorists say the exact opposite.

In terms of consumer products, this means that if people buy or otherwise support a product, then that's a market signal to make more and invest more in that product. The market signal of profit is one of the strongest signals, unless there is some conspiracy! ;d

 

The rest of your post pretty much sums it up nicely. The only thing I would note further is the potential qualitative difference in power and reach of nanotechnology, bio and AI compared to earlier technologies. Going with the technology neutrality myth, which values and capabilities are built into nano/bio/AI/predictive analytics? And we can worry especially about police, security and military use. We maybe don't have to worry about middle- and upper-class people in the west, but think about the Palestinians, the citizens of various autocracies, or shadier governments with large amounts of corruption. China, for example, is the only country that still manufactures spiked batons; last I read they are outlawed everywhere else, but China can go on making and using them domestically. We already know there is a billion-dollar war and security industry.


Yes. It's pretty banal, from what I remember of it.

 

I never thought of it as banal, but I can certainly see how it can be seen that way. Being an AI enthusiast though, I can tell you it's not far off from the projected scenarios. The most hypothetical aspects are those of the nanotechnology in the movie, but there is no physical reason to think the plot points about the AI's goals, the mind control and other things couldn't come true. The movie was about AI and grew out of the whole AI safety movement (e.g. www.intelligence.org).

It was NOT some banal, naive sci-fi that Hollywood came up with to make money. Wally Pfister, the director, was really into this and spoke to many experts on the subject. We could debate how entertaining or well executed the movie was, however. I personally felt very engaged by it due to being an AI enthusiast, but I think that's why it failed with the general mainstream, and also it was probably a bit too dry and felt cliche to many people.


 


True, we've done OK so far, but this isn't an exam. The test continues so long as there is risk. As for the values of "the west" causing the stop of nuclear war, I call shenanigans. What value that was entirely western can you claim stopped these wars? Is there really some Eastern or primitive value that says "Fuck this gay earth" any more than your average westerner might? Claiming that as a Western Value Triumph seems really arbitrary.

 

humanism i guess, the idea that human life in general is the ultimate cause.


 

Very nice, yeah. A couple of points. How neutral do you think companies are to moral questions about the products and services they produce? There seems to be mostly an "if the customer wants it then it's good" attitude. This means that the customer is mostly responsible for what is popular, and also for the bad side effects of technology and such. It leads to an even bigger need for talking together and organizing as a group. When I talk to some people, they feel that people who are smarter or more powerful than them will take care of all these problems, rather than the bottom-up influence of the general public as a whole. When I brought up these problems a few years back, everyone felt we couldn't do anything and shouldn't worry about it, while the economic and political theorists say the exact opposite.

 

Unfortunately it's more like "we must keep the costs down as much as possible while still retaining some usefulness and likeability for the users". While there is a large amount of consumer-created trends, large areas of industry already produce their own trends, which in turn influence consumers to regard the brand as a trendsetter, which in turn influences other companies to jump on the profit wagon. On top of that, the PR campaigns (also social-media advertising) that build brands try very hard to suppress the indications of a bad product, keeping the friendly face of the company, establishing an almost personality-like profile of a company that further personalizes the consumer-producer relationship. In this way, even more is added to the illusion of "everything is ok, keep spending, we're humans too after all."


 

 


 

humanism i guess, the idea that human life in general is the ultimate cause.

 

 

what role would you say was played by this western value in the development of nuclear bombs and their actual use against innocent people?



coax, on 20 May 2015 - 5:30 PM, said:
I still think capabilities can be inherently bad. I'm just not sure how we can think we would be ready for nanotechnology or neuroscience and not have it go incredibly haywire. What about all the innocent powerless victims not in the western world who can be abused by their leaders?

 

 

Calling them victims before anything has happened is begging the question. Who's to say new technologies won't prevent their victimisation instead? You're not actually providing much of an argument here. Neither you nor I can predict how any new technology will be used, and with what effects. Whether they are beneficial or detrimental will depend on the moral actions of the people who implement them, not the technology in itself.


Not only is all this stuff being created, but world leaders are actively encouraging global manufacturing, production and adoption of it. It is a free-for-all arena of the worst dimensions.

 

 

So what? This sounds like a good thing to me. Again, you've yet to demonstrate why any of this is the 'worst'.

 



It's not even a thoughtful experimental trial and error adoption but rather a flooding of global capital and research. What could possibly go wrong?

 

 

You don't know, I don't know, and you also don't know what could go right. Also, future generations' opinions on what's right or wrong are not beholden to our moral notions; to think otherwise is the height of arrogance.


All the things we learn in, say, virtual reality can be applied to that virtual constructs idea you had earlier, in terms of neuroscience. It can be applied in terms of biotechnology and changing or propagating genes or GMOs (even GM humans) to further some goal. Or just for evil purposes.

 

 

Who says those goals will be evil? There's absolutely nothing inherently wrong with genetic engineering, even when it comes to humans (though we're rightly wary about how we tread there; initially it will be limited to curing disease, and we're a good while away from germline changes, but even then there's nothing wrong with that in principle as long as it's developed in an ethical manner). Breakthroughs in neuroscience will enable us to treat depression, anxiety disorders, dementia, etc. Not allowing advancement in these areas is actually immoral, not the other way round.

Most of the tech CEOs and leaders talk about capitalism and consumer satisfaction in a very distant, theoretical way. Any issues are sort of brought up as "yeah, those are issues and we need global leaders and tech companies to think about them", and that's the end of the discussion really. We're not talking about making a hammer or even a computer; we're talking about fundamental capabilities that allow deep and vast control and long-term planning of societies and individuals at the deepest levels of biology and material science. That's why I said I don't like the word technology. You're using it in the same generic way as we all do. They are fundamentally different.

 

 

They are not fundamentally different, all large scale technological change has had massive repercussions for the societies and ecosystems in which they developed, whether we're talking about the first crop cultivation or animal husbandry, the invention of the printing press, the first steam engine, cracking open the atomic nucleus, etc. Viewed from the perspective of a person going through each change, and compared to the generation that comes after them, the scale of that change is pretty similar in each case. It's illogical to compare the scale of future change in some absolute manner.


I've talked about this myself too so I don't disagree with that. I think the best solution for it would be to try to fulfill the basic needs of people, based on what we know about biology and psychology, but then not go too far with the other technology.

 

 

You've yet to provide a good reason why though, aside from irrational fear.


Capitalism is based on another premise, namely that desires are infinite, which means we could create any and all technology, and we should do so. It's more exciting since our brains also want novel things all the time, but there is a balance to be had. My fundamental worry about this are the deep and vast capabilities that result from low scale control of the environment, it really is transformative in a NEW way.

 

 

Capitalism is not based on the premise that desires are infinite, not sure where you got that from, or what you even mean by that. Capitalism is merely a system designed to efficiently allocate resources (something which is provably impossible to do by design, at least given a limited set of resources; post-scarcity, things become different). It has a requirement that there is always some external source to spur growth (this is merely the 2nd law of thermodynamics in another form), maybe that's what you were hinting at, but we are nowhere near the limit of exploiting our energy and resource sources.


As far as energy is concerned, I'm sure you've heard of peak oil, all the problems with renewable energy, and also the resource depletion of water, various minerals and other things.

 

 

We already had this discussion here recently, the short version: peak-oil is bullshit, resource depletion is bullshit.

If our ability to empathize and think in more advanced ways about society is a result of these basic resources, then we obviously need those resources, or some sufficient alternative, in the future. If society collapses or contracts significantly, there's every reason to think that over a longer-term time horizon all the culture will be forgotten and the basic needs and instincts of people will set back in.

 

 

The main likelihood of societal collapse (ignoring factors outside our control, like global pandemics, super-volcanoes or asteroid strikes) stems from failure to implement new technologies due to ideological ignorance (i.e. continuing to burn fossil fuels at an accelerating rate, and attempting to solve the problem with unworkable band-aids like renewables instead of going full steam into nuclear).

Because you could envision a world where due to resource depletion, only the rich get access to the resources, and then they use that to control the rest of the population.

 

 

One can envision lots of possibilities, again, you fail to provide any evidence as to why your envisionings are more likely than any other.


That is why I find the solar cells and renewable energy stuff somewhat creepy, since it can potentially allow the rich to separate themselves, and then all the remaining oil and other resources will be used for isolated rich areas and nothing for the rest. As far as I have seen, every study says that resource and consumption patterns will have to be drastically reduced in the next 50 years, for a variety of reasons, but that doesn't mean, with automation and AI and everything else, that the powerful can't sustain some of this for themselves. It all depends on the specific requirements of the capabilities and what exactly they are.

 

 

This assumes lots of things which simply aren't true: that the resource estimates are accurate (they're mostly not), that the resources we currently use are the ones we will always need to use (technological change always makes previous resources and technologies redundant), and most importantly, that the only source of resources is this planet.

i think your point about "thirst for knowledge" is also not proven. What if it is rather a primal instinct to find novel experiences and to seek out new information, no matter how mundane or irrelevant? To get knowledge, you have to focus and work hard, for years and decades, and that is the opposite of simply swarming to a new signal in the environment and then consuming it until it's boring.

 

 

It's proven in the sense that all of human history provides evidence for it, and no evidence to the contrary.

Well, who said anything about dualism? Science doesn't even know what magnetism really is, or a photon, or whatever. Most of the technology we have created, we created without understanding how it works; we just found the right conditions for an effect and then reproduced it. The problem with consciousness is that nobody has come up with a single intuitive signaling scheme for making something conscious vs. not conscious. Tononi has his theory, but it is a theory based on a set of conditions, not an explanation or understanding, like with magnetism. If it is some quantum or at least low-scale effect, then it will be as difficult as creating a quantum computer, because we would need to control the computational material's quantum state. I'm not claiming this is what's happening, I'm just saying it's very curious that there isn't a SINGLE computational theory!

 

 

Dualism, or any metaphysical explanation, would be the only thing that could prevent the development of AI. Without it our conscious existence is enough to prove it's possible. You're right that we don't fully understand what a photon or an electromagnetic field is (at a more fundamental level, we don't know what energy, space, and fields in general are - but science isn't really in the business of explaining what things are anyway, more why things happen and how), but we can still build computers and lasers and so on; we don't need to fully understand something to make use of it. AI will likely come about when we cram enough complicated shit into a box that it just starts working for some reason; maybe it'll be able to figure out the finer details and explain them to us. We won't need a single and fully correct computational theory of consciousness in order to build an AI, just like we didn't need a single fully correct theory of gravity to send a spaceship to the moon.

I doubt very much that quantum effects will have much to do with consciousness, though it's possible, and if they are involved it'll probably be in ways analogous to how birds sense magnetic fields (which isn't fully understood either, but is probably slightly quantumy), and won't require some kind of mass-entangled quantum state or anything (i.e. more quantum/classical than fully quantum).

Quantum computing is developing nicely btw, there are breakthroughs made every year or two, and eventually a tipping point will be reached (probably not too long from now). The use of quantum computing in AI will probably lead to possibilities in consciousness and intelligence we can't even conceive of.

Define technology and progress and we might be able to have a discussion.

 

Is that at me? Go ahead and use the common googleable/lexical definitions (application of science in industry; moving toward a goal), I'm not trying to change the game here... but re: the nanotech question, I'm interested in people's personal perceptions of it, so define that one yerself, thanks :P


Calling them victims before anything has happened is begging the question. Who's to say new technologies won't prevent their victimisation instead? You're not actually providing much of an argument here. Neither you nor I can predict how any new technology will be used, and with what effects. Whether they are beneficial or detrimental will depend on the moral actions of the people who implement them, not the technology in itself.
You're right that that type of technology hasn't been used yet, but it's pretty clear people are tortured, detained, etc. all around the globe, so it's an inference extended to new technologies. Some types of VR have probably already been experimented with; I don't have a source for it, but it exists.
So what? This sounds like a good thing to me. Again, you've yet to demonstrate why any of this is the 'worst'.
The problem as I see it is that it enables dictatorships, terrorist groups and whoever else has the means to use this technology. The diffusion of technology and the empowerment of the individual is something I've heard about in various C-SPAN, World Economic Forum etc. videos for a long time. You know, if you have the money and resources, you can trade resources with a corporation, or in a country that maybe wants to confront the US indirectly and may support your business because of an anti-US/EU agenda, for example. Anyone can make AI, nanotechnology, biotech or whatever, because it can all be found online or be accessed in some way. And you seem to hinge on this idea that humanity will improve and its morality and decisions will improve, but before that has happened, we're still going full-on with exploring the brain, biology, security and war technology, etc.

 

Who says those goals will be evil? There's absolutely nothing inherently wrong with genetic engineering, even when it comes to humans (though we're rightly wary about how we tread there; initially it will be limited to curing disease, and we're a good while away from germline changes, but even then there's nothing wrong with that in principle as long as it's developed in an ethical manner). Breakthroughs in neuroscience will enable us to treat depression, anxiety disorders, dementia, etc. Not allowing advancement in these areas is actually immoral, not the other way round.

 

Well, it's like I said above: why are you so sure they /won't/ be evil? Or couldn't be? The sad thing is, as far as I can tell, none of this would be possible without the support of the general public consumer, and so I'm seriously wondering if it's moral to buy, say, an Oculus Rift right now. And I /really/ want one. But maybe, possibly, this is exactly the kind of decision that really matters in the world: we just buy what we want without thinking about the consequences, and then who knows what happens with the funding we provide, which enables the global industrial base and the research from it. (I'm not just talking about VR, but any upcoming technology that needs critical mass to develop further after the initial launch; for a lot of this it's already too late.)

 

 

This assumes lots of things which simply aren't true: that the resource estimates are accurate (they're mostly not), that the resources we currently use are the ones we will always need to use (technological change always makes previous resources and technologies redundant), and most importantly, that the only source of resources is this planet.

 

I only hope you're right. I know about Planetary Resources, asteroid mining and such things. It's hard to debate resource depletion since there are a lot of studies and opinions one way or the other. We could quote and go back and forth but that's not really what I'm arguing anyway. I concede that there is a lot we don't know and that a collapse of sorts is by no means a sure thing and that human ingenuity can come up with all kinds of crazy solutions and there are also black swan type of developments that are impossible to see in advance.

 

We already had this discussion here recently, the short version: peak-oil is bullshit, resource depletion is bullshit.

 

You did, but you also said continuing to use fossil fuels is a bad thing etc., so I wouldn't go so far as saying it is all bullshit. There are various kinds of solutions in several dimensions, but that doesn't nullify the point that, at minimum, change will occur. Also, you talk about how resources and technologies become obsolete, and yet, as you also state, 80% of global industry runs on fossil fuels. Fossil fuels have been the primary, and really the only actual, energy source to date. There's a difference between primary energy sources and efficient secondary technologies that better utilize that energy. But I agree with the general sentiments; they are not out of the scope of possibility for sure. Asteroid mining for metals, minerals, whatever; nuclear and renewables for primary energy; and nanotechnology, material science and biotechnology for food, possibly some kind of water recycling, reducing CO2 and fixing climate change, and also medicine and all of that. Even though all of those are incredibly reliant on fossil fuels at the moment, it's not out of the scope of possibility. I did read an article a year or so back about the problems with nuclear, but it's not a debate I can have right now, and it could be outdated by now.

 

It's proven in the sense that all of human history provides evidence for it, and no evidence to the contrary.

 

I think there is /some/ evidence. I like Nicholas Carr's work on attention, how we spend our time, and what the internet is doing to our brains. I wonder how many people actually work hard to develop creative new things, whether in science, philosophy or art, as opposed to those who live mostly in the here and now and mostly react to environmental signals - and also what the basis is, in the biology of all human beings, for those who do the hard work. There are people who say only a few individuals created all of society's riches, that a single person can create history, etc. - that there are a few single individuals who do some really incredible work, and then a lot of iterative or otherwise secondary work is done later.

 

 

Dualism, or any metaphysical explanation, would be the only thing that could prevent the development of AI. Without it our conscious existence is enough to prove it's possible.

 

While this is most likely conceptually true, one problem is that as the size and scale of manufacturing go down, the cost goes up almost exponentially. To create a speaker model with 100 atoms costs a shitload more than putting one together with some wood tools. From what I've read, the semiconductor industry also has to spend far more to create today's CPUs, and it's incredibly hard to control material at the scale of atoms, mostly because of quantum effects. The amount of energy and precision needed to control materials at this level is immense. And there's still a question of how low we can go before it's just not viable any more.

 

But look, all of this said, I'm exceeding my wish to play devil's advocate here. I'm not anti-science or anything like that. This started with the power and reach of these new and upcoming possible technologies, which I still feel are way more inherently dangerous than previous technology. As I said, I can't support that with real evidence; it is scenario speculation, and I admit that. But that doesn't mean one can't think about it and talk about it. If I were a person in a position of power or influence, all I'd have to do is write a blog post, a peer-reviewed article or a book, and this would become public discourse in many places. I'm not that person, so that won't happen. But I still find it very interesting to talk and ponder about.


 


I never thought of it as banal, but I can certainly see how it can be seen that way. Being an AI enthusiast though, I can tell you it's not far off from the projected scenarios. The most hypothetical aspects are those of the nanotechnology in the movie, but there is no physical reason to think the plot points about the AI's goals, the mind control and other things couldn't come true. The movie was about AI and grew out of the whole AI safety movement (e.g. www.intelligence.org).

It was NOT some banal, naive sci-fi that Hollywood came up with to make money. Wally Pfister, the director, was really into this and spoke to many experts on the subject. We could debate how entertaining or well executed the movie was, however. I personally felt very engaged by it due to being an AI enthusiast, but I think that's why it failed with the general mainstream, and also it was probably a bit too dry and felt cliche to many people.

 

 

Ultra cliched, i think "The Diamond Age" is a far better representation of AI and its potential (or lack thereof).

And I am a huge fan of AI, robotics, and their application in the real world.


 

Well, who said anything about dualism? Science doesn't even know what magnetism really is, or a photon, or whatever. Most of the technology we have created, we created without understanding how it works; we just found the right conditions for an effect and then reproduced it. The problem with consciousness is that nobody has come up with a single intuitive signaling scheme for making something conscious vs. not conscious. Tononi has his theory, but it is a theory based on a set of conditions, not an explanation or understanding, like with magnetism. If it is some quantum or at least low-scale effect, then it will be as difficult as creating a quantum computer, because we would need to control the computational material's quantum state. I'm not claiming this is what's happening, I'm just saying it's very curious that there isn't a SINGLE computational theory!

 

 

[image: "lol wut" pear reaction macro]


 

 


 

 

Ultra cliched, i think "The Diamond Age" is a far better representation of AI and its potential (or lack thereof).

And I am a huge fan of AI, robotics, and their application in the real world.

 

 

I have not read that book, but it seems very interesting. Yeah, the movie is cliche I guess, but so are the AI safety scenarios to a certain extent. The movie to me was like a walkthrough of strong AI emerging quite suddenly and then starting its own plans of production and so forth. Colossus: The Forbin Project is a similar movie with a different 'era' in terms of the technology, but we come back to the same basic plot points.

 

Regarding the magnetism thing: from what I've read, they know about the spin of electrons and other details, but they don't know what it fundamentally is or why objects are magnetic - how it actually functions. I brought that up in the context of consciousness and finding a computational model for it because, as far as I have been able to tell, consciousness has a similarly mysterious property, except in that case we have not even found the needed conditions in materials for it to arise. Materials at this scale have intrinsic qualities that we have no idea why are there, or what exact mechanisms occur.
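
To make that a bit more concrete: Tononi's Integrated Information Theory is at least an attempt to turn "conditions for consciousness" into a number, by asking how much information a system carries as a whole, over and above what its parts carry on their own. Here's a toy sketch of that whole-versus-parts idea in Python - to be clear, this is not Tononi's actual phi formalism (which is far more involved), and the two-node update rule and every name in it are invented just for illustration:

    import itertools
    import math
    from collections import Counter

    def update(a, b):
        # Hypothetical coupled rule: each node's next state depends on both nodes.
        return b, a ^ b

    def entropy(counts):
        total = sum(counts.values())
        return -sum(c / total * math.log2(c / total) for c in counts.values())

    # Uniform prior over the four past states of the two binary nodes.
    states = list(itertools.product([0, 1], repeat=2))

    # Whole system: the map is deterministic, so the mutual information
    # between past and next state is just the entropy of the next states.
    mi_whole = entropy(Counter(update(a, b) for a, b in states))

    # Parts {A} and {B}: each part sees only its own past and its own next
    # state, with the other node marginalised out (treated as noise).
    mi_parts = 0.0
    for part in (0, 1):
        joint = Counter((s[part], update(*s)[part]) for s in states)
        past = Counter(p for p, _ in joint.elements())
        nxt = Counter(n for _, n in joint.elements())
        mi_parts += entropy(past) + entropy(nxt) - entropy(joint)

    print(f"whole: {mi_whole:.2f} bits, parts: {mi_parts:.2f} bits, "
          f"integration: {mi_whole - mi_parts:.2f} bits")
    # -> whole: 2.00 bits, parts: 0.00 bits, integration: 2.00 bits

For this little rule, neither node's next state is predictable from its own past (0 bits each), yet the whole system is perfectly predictable from its whole past (2 bits), so all of the predictive information lives only at the level of the whole. Whether any measure like that tracks consciousness is exactly what's in dispute - it's a set of conditions, as you say, not an explanation.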

