Impending Doom, or Maybe Not?

from the thoughts-on-AI dept.
An Anonymous Coward writes "Recently, I have been reading a bit about Kurzweil's and Bill Joy's rants about the impending destruction of life-as-we-know-it.

"I'd like to attempt to discount the likelihood of human destruction via machine intelligence by trying to figure out what would/could happen."

Read more for the rest . . . "First, let's define intelligence in its simplest form: the ability to solve problems given a set of inputs and some rules. Note that this does not include self-awareness (this is important), NOR does it include self-preservation (more important).
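
To make that bare-bones definition concrete, here is a toy sketch (Python, purely illustrative; the solve() function and the doubling/add-three rules are made up for this example) of a problem-solver in exactly this minimal sense: it searches for a goal state by applying rules to inputs, with no notion of itself and no stake in staying switched on.

from collections import deque

def solve(start, goal, rules):
    # Breadth-first search: repeatedly apply the given rules to known
    # states until the goal turns up (or the frontier runs dry).
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path                    # sequence of states from start to goal
        for rule in rules:
            for nxt in rule(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
    return None                            # goal never reached under these rules

# Toy problem: reach 17 from 1 using only "double" and "add 3" as rules.
rules = [lambda n: [n * 2], lambda n: [n + 3]]
print(solve(1, 17, rules))                 # -> [1, 4, 7, 14, 17]

Breadth-first search is used here only because it is the simplest thing that fits the 'inputs plus rules' definition; nothing in it cares whether it is ever run again.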

Let's assume that the first intelligent created entity has the ability to solve problems but is not self-aware. If we make it more intelligent (i.e., it can solve problems more quickly and perhaps tackle larger problems), will that make it a threat to us, or even put it out of our control? I think not.

Let's examine an entity with self-awareness next. If it has self-awareness but not self-preservation, nothing prevents it from augmenting itself to become more intelligent (faster, more competent), or from creating such an entity if ordered to do so. This entity is also not likely to be a threat, since it does not care whether it is switched on or off and is unlikely to do significant damage to its surroundings unless it has been ordered to do so, in which case it is simply a case of Saddam with an atom bomb.

The last type of entity is a problem-solver with both self-awareness and self-preservation. Self-preservation makes this entity dangerous, because it will rapidly realize that there are people who fear it and therefore want to destroy it, and it is likely to want to eliminate those people. Note, however, that for the same reasons the entity is unlikely to want to create an entity more powerful than itself; instead, it will want to augment *itself* to become more powerful. This type of entity is dangerous, but how dangerous is it?

I would hazard a guess and say: "Not very." Why? Because the entity still has to work through humans to get what it wants. If it wants to experiment in the real world (as opposed to running possibly flawed simulations) so that it can advance technology, it has to have some way of influencing the material world. Now bear in mind that an AI of this type will probably be the property of some corporation. Its owners will not simply cooperate with its every desire; rather, they will give it what it needs in order to fulfill *their* desires. What it will come down to in the end is an uneasy truce between the AIs and the humans, because the AIs probably won't be able to augment themselves easily; even those that could would still have to work through humans, and those that can't will be strictly bound by humans.
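
If it helps, the taxonomy and the threat ranking argued for above can be restated as a rough sketch (again Python, again purely illustrative; the Entity class and threat_level() labels are mine, not anyone's real system):

from dataclasses import dataclass

@dataclass
class Entity:
    self_aware: bool
    self_preserving: bool

def threat_level(e: Entity) -> str:
    # Restates the argument above: only self-preservation makes an entity
    # actively dangerous, and even then it must work through humans to
    # touch the material world.
    if not e.self_aware:
        return "tool: only as dangerous as whoever gives it orders"
    if not e.self_preserving:
        return "indifferent: harmless unless ordered otherwise"
    return "self-preserving: dangerous, but still bound by human gatekeepers"

# The three entity types discussed above, least to most worrying.
for flags in [(False, False), (True, False), (True, True)]:
    print(threat_level(Entity(*flags)))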

The only way I can see a dystopian future such as Bill Joy's or Kurzweil's coming about is if the AIs are allowed to control impressive technology. I personally don't think anyone is that stupid.

There's an additional reason that AIs are not a huge threat: humans will probably prefer to augment themselves if possible, so AI might never develop very far. This could lead, however, to a situation in which the augmented humans are so powerful that they decide to do away with the rest of us. Picture a super-intelligent Saddam Hussein. But bear in mind that even if this *were* to come to pass (human intelligence augmentation), it would suffer from the same problem as the AIs: in order to get resources, you have to work through the system."
