Ethics for machines

to boldly go where no man has gone before!

This final phrase of the classic Star Trek opening spiel had two problems, one as seen by people after the fact, and the other as seen by those who had gone before.

As seen by earlier generations, the phrase “to boldly go” is a split infinitive.  If E.E. Smith had written Star Trek in the ’20s, he would have written “boldly to go.”  Avoidance of split infinitives, like many elements of grammatical style, was a cognitively expensive signalling behavior that advertised, essentially, that the speaker or writer was in the educated classes in an era when being educated meant you knew Latin.  Infinitives in Latin are single words formed by inflection, rather than with a keyword such as “to” in English, so you can’t put an adverb in the middle of one.  Avoidance of split infinitives lasted in “proper” English at least until the mid-20th century, but had begun to fade (to slowly fade 🙂 ) away thereafter.

But there’s no real reason in English not to split infinitives.  Split infinitives are completely understandable, and often less ambiguous than alternative constructions.  Apart from being a cognitively expensive signalling behavior, the rule against them had no value, indeed a cost in cumbersome and ambiguous sentences.  Like many rules which were tacked onto the language by well-meaning grammarians, it was an overly simplistic formalization of a system, English grammar, which was and remains much deeper and more complex than anyone thought it was.

The inability of “hand-coded” grammar to handle real language was most clearly displayed in the early attempts at machine translation, which were an abject failure.  Only after 50 years of trying in natural language understanding, using statistically inferred models not formalized by humans, has serious progress been made (and there’s lots more progress needed before basic competence is achieved).
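To make that brittleness concrete, here is a toy sketch (hypothetical code, not any real system) of the hand-coded approach: a lexicon that assigns each word exactly one part of speech, with no statistics and no context.  It handles the sentence it was written for and falls over on a near-twin.

```python
# Toy illustration of a hand-coded, context-free grammar lexicon.
# Each word gets exactly one part-of-speech tag, fixed in advance.
LEXICON = {
    "time": "NOUN", "fruit": "NOUN", "arrow": "NOUN", "banana": "NOUN",
    "flies": "VERB",   # hard-coded: "flies" is always a verb
    "like": "PREP",    # hard-coded: "like" is always a preposition
    "an": "DET", "a": "DET",
}

def tag(sentence):
    """Tag each word using only the lexicon -- no context considered."""
    return [(word, LEXICON.get(word, "UNK")) for word in sentence.lower().split()]

print(tag("Time flies like an arrow"))
# [('time','NOUN'), ('flies','VERB'), ('like','PREP'), ('an','DET'), ('arrow','NOUN')]

print(tag("Fruit flies like a banana"))
# Same tags come out -- but here "flies" is a noun and "like" is the verb.
# The fixed rule set cannot even represent that reading; a statistical
# model scores both and picks the likelier one in context.
```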

The other, retrospective, problem with the Star Trek blurb was that the phrasing came to be considered sexist; it was corrected in later incarnations to the more politically correct “where no one has gone before.”  This, as it turns out, is another example where the unthinking application of a simple, formalized rule to “fix” something actually makes it worse.

In the ’60s, the use of “man” in such a context was standard and unambiguous.  It meant a human being (in fact, in my Webster’s from that era, “human being” is the primary meaning, specialization to adult males being secondary).  Star Trek, if you remember or have studied these things at all, was in its time one of the most progressive, liberal science fiction shows ever.  It depicted a crew and implied a culture where barriers based on race and sex had been significantly lowered compared to the contemporary norm.  So in the original, “man” meant “human.”

Of course, the Enterprise went all over the galaxy seeking new life and new civilizations.  The citizens of these civilizations had been there before.  The distinction between “man” and these people is a fine one, but one which can reasonably be made consistent with the storyline.

But what happens if you switch from “man,” meaning human, to “one,” meaning, well, anyone?  The term is intentionally more inclusive: it pushes the boundary from that between humans and non-humans to that between someone and something.  But wait: doesn’t that mean that all the denizens of the strange new worlds, who have gone there before, are now not someones, but somethings?  Isn’t the new phrasing, taken out of the context of American academia and applied to Star Trek without thought or understanding, actually worse than before?  In classic Trek, the aliens are non-human people.  In PC Trek, they’re non-persons.

Just as Victorian proper grammar was, politically correct speech patterns are primarily a cognitively expensive signalling behavior.  They carry exactly the same import: the speaker is educated, intelligent, and ambitious enough to pay the cognitive price to consciously modify the vernacular.  But as we have seen, PC speech is often yet another case of simplistic human-formalized rules applied in a context-free way.  Such rules fail on their own terms, implying just the wrong thing, as above, when context shifts.

In other words, simple human-formalized rules applied blindly to something as complex as grammar are brittle, a property they share with bureaucratic rules and AI programs.
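As a toy demonstration (hypothetical code, purely illustrative), here is the “man” → “one” fix implemented the way a blind, context-free rule would implement it.  It works on the one sentence it was designed for and mangles everything else:

```python
# A context-free "fix": replace the substring "man" with "one",
# exactly as a simplistic formalized rule would, with no understanding.
def pc_fix(text):
    return text.replace("man", "one")

print(pc_fix("where no man has gone before"))
# -> "where no one has gone before"   (the intended case works)

print(pc_fix("a manned mission demands many hands"))
# -> "a onened mission deoneds oney hands"
# The rule cannot tell the word "man" from the substring "man",
# just as the PC rewrite cannot tell "inclusive" from "dehumanizing"
# once the context shifts.
```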

Human ethics are similar to human language in their depth and complexity.  They are famously just as difficult to capture in simplistic formalism.  Indeed, given the examples of PC speech, it’s quite arguable that grammar and speech are a proper subset of ethics.  You can’t even reason about whether the Star Trek example is right or wrong without understanding language at a level probably beyond the current state of the art.  And it’s certain that all the subtleties of ontology and epistemology are part of ethics, just as they are of language.

Only in the past decade or so has AI begun to get traction in the natural language field beyond the stage of simple human-written formal rules.  As for ethics, we’ve just barely gotten into the simple human-written formal rules stage.  But if you want a preview of what machine ethics will ultimately look like, study modern natural language processing.
