Investigators looking into the deadly crash of two Metro transit trains focused Tuesday on why a computerized system failed to halt an oncoming train, and why the train failed to stop even though the emergency brake was pressed.
The post is just a news clipping, and offers no interpretive comment, but I think some is perhaps appropriate.
Train crashes have been happening regularly for over a century. They are not something new that has anything at all to do with AI or machine ethics or any similar concern. They are, however, a reminder that there is something very important that is often overlooked in the popular concern about the increase in technological impact on our lives. And that is that technology already has a huge impact on our lives, and has done since the industrial revolution — and the first, most important concern we must have is to make sure that the technology we have works properly, as intended.
Unless I am completely mistaken and deluded, there was and is nobody associated with the DC train system who wanted the crash to happen. It’s not a question of morality at the level of bad intentions, either of people or machines.
It was, in simple terms, a case of incompetence. It may have been a failure of design, or of management, or of implementation, or of maintenance. It may have been software or hardware. Most likely it was some combination. But the bottom line is simple: things didn’t work the way they were supposed to.
The modern world is full of movements that are overly concerned with motivations, and it is passé to worry about whether whatever cause you’re espousing will actually accomplish the grand goals that are claimed for it. Bluntly put, people are too concerned with other people’s wishes, which are none of their business, and not enough concerned with other people’s competence, which is very much a legitimate concern.
You’ll find that for things that really matter — like not having train wrecks — people pretty much all want the right thing already.