Last week I posted an essay in which I claimed that the Singularity could be said to be halfway here already, because we have already set up a huge program that more or less runs the world (and that program is fast becoming a computer program).
What are the great concerns of the Singularitarians? That we will create a great machine that will take over because it is more intelligent than humans. How is a mere mind going to take over? By fooling us.
I claim that if you look carefully enough, we have in fact already done this. What is a corporation or government but a program, a huge piece of software? The fact that the primitives in the programming language are human acts allows it to be written at a level of abstraction as yet unavailable to programs for electronic computers, but not for long. The essence of a program is a set of instructions telling some agent what to do.
Look at your Form 1040 sometime. It could have been written in COBOL instead of English with virtually no semantic change. What’s the processor it runs on? You are. You’re just a substrate. So is Obama.
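To make the point concrete, here is a minimal sketch of what a few Form 1040-style instructions look like when transcribed directly into code. The line numbers, amounts, and rules below are invented stand-ins, not the actual form; the point is only that "add line 1 and line 2; if zero or less, enter -0-" is already straight-line code, with you as the processor.

```python
# Hypothetical Form 1040 fragment, transcribed as code. Line numbers and
# the deduction amount are illustrative placeholders, not real tax rules.

def form_1040_fragment(wages, interest, standard_deduction):
    line_1 = wages                      # "Enter wages from your W-2"
    line_2 = interest                   # "Enter taxable interest"
    line_9 = line_1 + line_2            # "Add lines 1 and 2; this is your total income"
    line_12 = standard_deduction        # "Enter your standard deduction"
    # "Subtract line 12 from line 9; if zero or less, enter -0-"
    line_15 = max(line_9 - line_12, 0)
    return line_15                      # taxable income

print(form_1040_fragment(50_000, 500, 12_000))  # prints 38500
```

Note that the form's "if zero or less, enter -0-" is just a clamp, `max(..., 0)`. The semantics survive the translation untouched; only the substrate changes.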
Oddly enough, the well-regarded IT commentator Bruce Webster has just posted an essay pointing out the similarity between law and software. His point is mostly that the law is shot through with the most horrible examples of programming you could imagine, spaghetti and self-modifying code among them:
On the occasions where I have reviewed the actual text of major legislation, I have been struck by the parallels between legislation and software, particularly in terms of the pitfalls and issues with architecture, design, implementation, testing, and deployment. Some of the tradeoffs are even the same, such as trading off the risk of “analysis paralysis” (never moving beyond the research and analysis phase) and the risks of unintended consequences from rushing ill-formed software into production. Yet another similarity is that both software and legislation tend to leverage off of, interact with, call upon, extend, and/or replace existing software and legislation. Finally, the more complex a given system or piece of legislation is, the less likely that it will achieve the original intent.
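Webster's observation that legislation extends, calls upon, and replaces other legislation can be sketched as code as well. The fragment below models statutes as entries in a shared store, with an amending act that patches another act's text, which is the legal analogue of self-modifying code. The act names, texts, and the resolution scheme are all invented for illustration:

```python
# Hypothetical sketch: statutes as a shared store of text, with an
# amending act that rewrites another act in place, much as self-modifying
# code rewrites its own instructions. Names and wording are made up.

statutes = {
    "Act A, s.1": "The speed limit is 55.",
    "Act B, s.3": "Apply Act A, s.1, substituting 65 for 55.",  # a 'patch'
}

def effective_text(section, store):
    """Resolve a section's text, applying any act that amends it."""
    text = store[section]
    for body in store.values():
        marker = f"Apply {section}, substituting "
        if body.startswith(marker):
            # Parse "substituting NEW for OLD" and apply the substitution.
            new, _, old = body[len(marker):].rstrip(".").partition(" for ")
            text = text.replace(old, new)
    return text

print(effective_text("Act A, s.1", statutes))  # prints: The speed limit is 65.
```

The unsettling part, and Webster's point, is that the "effective" law is nowhere written down; it only exists after you execute the whole patch chain in your head.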
As a historical note, I'll point out that I first ran across the idea in a well-organized form in a talk by Robert Kowalski, one of the fathers of logic programming, over 20 years ago.
And finally, Michael A. left a comment on the original post:
Interesting post. Isn’t it a bit old-fashioned to think that we will require human figureheads forever?
Depends on who you mean by “we”. In the long run, people will come to realize that the figureheads are figureheads and unnecessary. At that point, being a figurehead will no longer satisfy the “beneffectiveness” levels of the Maslow hierarchy and, assuming the system still has a goal of serving humans, something else will have to be found.
The really deep question is whether there will be a role where people are really useful as opposed to merely being made to feel that way, as most of us are in the current system.