Robo-ethics paper and Open-Texture Risk

There’s a roboethics paper in the International Journal of Social Robotics, by Yueh-Hsuan Weng of Taiwan’s Conscription Agency, that has gotten a write-up on PhysOrg (h/t to Accelerating Future).

Here’s the abstract:

Technocrats from many developed countries, especially Japan and South Korea, are preparing for the human-robot co-existence society that they believe will emerge by 2030. Regulators are assuming that within the next two decades, robots will be capable of adapting to complex, unstructured environments and interacting with humans to assist with the performance of daily life tasks. Unlike heavily regulated industrial robots that toil in isolated settings, Next Generation Robots will have relative autonomy, which raises a number of safety issues that are the focus of this article. Our purpose is to describe a framework for a legal system focused on Next Generation Robots safety issues, including a Safety Intelligence concept that addresses robot Open-Texture Risk. We express doubt that a model based on Isaac Asimov’s Three Laws of Robotics can ever be a suitable foundation for creating an artificial moral agency ensuring robot safety. Finally, we make predictions about the most significant Next Generation Robots safety issues that will arise as the human-robot co-existence society emerges.

Now frankly, as I mentioned last week, the major thing we have to worry about with future technology for quite a while yet will simply be whether it works as intended. One very important part of making this happen is to make the systems, whatever they are, as simple as possible (but not simpler, as Einstein said).

However, Weng does address one point that I haven’t seen anywhere besides my own book Beyond AI: the “open texture” of the law. Well before AI and robotics folks realized that it was impossible to specify actions in the real world precisely, lawyers did, and the legal notion of open texture is the result. It’s a kind of deontic uncertainty principle, what I called “formalist float” in Beyond AI. Here’s the example I give in the book:

Two men sit down at a lunch counter and order cups of coffee. The first man finishes, gets up, and leaves a dime on the counter where he was sitting. He pays at the register and leaves.

The second man gets up. He places his fingertip on the dime and slides it over to his spot on the counter. Then he, too, pays and leaves.

What, if anything, has been stolen? The dime, intended by the first man for the waitress, still goes to the waitress. The second man never picked it up or possessed it. He was not legally obligated to leave a tip. Yet we are morally certain that he stole something.

What this means for robotics, AI, and indeed any formal system, is that there has to be some common-sense way to fill in the flesh between the bones of the rigid, formal specifications.
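
To make the formalist-float point concrete, here is a minimal sketch of the lunch-counter case. Everything in it, the Act record, its predicates, and the is_theft rule, is a toy assumption of mine for illustration, not anything drawn from Weng’s paper or from actual criminal law:

```python
# Toy illustration of open texture: a rigid formal rule for "theft"
# applied to the lunch-counter dime scenario.

from dataclasses import dataclass

@dataclass
class Act:
    took_possession: bool     # did the actor pick up or carry away the item?
    item_owned_by_other: bool # did the item belong to someone else?
    consent_given: bool       # did the owner consent to the taking?

def is_theft(act: Act) -> bool:
    """A rigid formalization: theft = taking possession of another's
    property without consent. The predicates are illustrative only."""
    return (act.took_possession
            and act.item_owned_by_other
            and not act.consent_given)

# The second man never picked the dime up; he only slid it along the
# counter, and the waitress still receives it. So the formal rule
# confidently answers "no theft" ...
sliding_the_dime = Act(took_possession=False,
                       item_owned_by_other=True,
                       consent_given=False)
assert not is_theft(sliding_the_dime)
# ... even though we are morally certain something was stolen. The
# mischief lives entirely in the gap the predicates fail to cover.
```

However we refine such predicates, some variant of the scenario will slip between them; that residue is the open texture.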

But that’s the hard problem of AI itself, not just roboethics. If we can solve it for housecleaning robots, it may give us a leg up on solving it for the law itself, and all the other mechanisms of our rapidly formalizing world.
