Once upon a time, the story goes, there was a programmer. He was an amazingly productive programmer, producing thousands of working, debugged lines of code every day.
Then he learned about DO-loops.
One of the foundational concepts behind the idea of the Singularity is the notion of self-improving AI. And one of the key notions behind that is that, as a creature composed of software, an AI could improve itself by rewriting its own software, as opposed to the arduous way we humans do it: by studying and learning things.
Michael A., over at Accelerating Future, examines what is probably the iconic exegesis of the “superior self-improving AI” theory: Eliezer Yudkowsky’s notion of the “codic cortex”, a module of the proposed AI that is specialized to work on computer code the way our visual cortex is specialized to interpret images from the eyes. I propose to examine the codic cortex concept, but first a side comment about a remark of Michael’s:
With fast serial thinking speed and immunity to boredom, an AI could apply its “best thinking” in an automatic and rigorous way to all levels of its own design.
My quibble here is about boredom. Humans get bored; insects and current-day, idiotic computer programs do not. The apocryphal programmer in the joke above is stupid. Any architecture of an intelligent mind, human or machine, will almost certainly have the ability to get bored. If something it needs to do involves uninteresting repetition, it will build a machine to do it, just as we do. We build physical machines, we build software, and we build software machines inside our own heads. We call them habits.
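The DO-loop in the opening joke is exactly such a machine: a few lines that take over the repetition so nobody has to type it out. Here is a minimal sketch of the idea, in Haskell rather than the Fortran the joke implies, with a made-up "processing record" task standing in for whatever our hero was writing out by hand:

```haskell
import Control.Monad (forM_)

-- What the apocryphal programmer was producing by hand, line after line:
-- the same statement repeated thousands of times with only a number changed.
-- One loop is the "machine" that does the boring repetition instead.
main :: IO ()
main = forM_ [1 .. 10000 :: Int] $ \i ->
  putStrLn ("processing record " ++ show i)
```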
Now on to the codic cortex. Eliezer writes:
Even at our best, humans are not very good programmers; programming is not a task commonly encountered in the ancestral environment. A human programmer is metaphorically a blind painter – not just a blind painter, but a painter entirely lacking a visual cortex. We create our programs like an artist drawing one pixel at a time, and our programs are fragile as a consequence. If the AI’s human programmers can master the essential design pattern of sensory modalities, they can gift the AI with a sensory modality for codelike structures. Such a modality might perceptually interpret: a simplified interpreted language used to tutor basic concepts; any internal procedural languages used by cognitive processes; the programming language in which the AI’s code level is written; and finally the native machine code of the AI’s hardware. An AI that takes advantage of a codic modality may not need to wait for human-equivalent general intelligence to beat a human in the specific domain competency of programming. Informally, an AI is native to the world of programming, and a human is not.
This is an intuitively appealing analysis, but I don’t think it’s true. Let’s start by seeing if we can say how good humans actually are at programming.
First off, humans definitely do improve the rate at which we program. The first 10,000 lines of code I wrote were punched onto paper cards. Then programs were written that let the computer do much of the housekeeping we originally had to do by hand with physical objects. Programming languages appeared that kept increasing in abstraction and sophistication, so that I can now write a few lines of code to do what would have taken a whole box of cards in the '70s. And we improve our programming ability by inventing new ideas. It's a lot harder to measure software performance than hardware speed, since for most software most of the growth has been in functionality. But in areas where the task is well-defined enough for comparisons, such as scientific codes, the speed-up due to algorithmic innovations has been substantial.
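To make that contrast concrete, here is a hedged sketch of the sort of program that is a few lines today but would have meant a card deck's worth of hand-rolled I/O and an explicitly coded sorting routine in the early '70s (Haskell is my choice of illustration here, not a claim about any particular historical system):

```haskell
import Data.List (sort)

-- Read standard input, sort its lines, and write them back out.
-- Three meaningful lines today; in the early '70s this meant a deck of
-- cards managing its own I/O housekeeping plus a hand-written sort.
main :: IO ()
main = interact (unlines . sort . lines)
```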
So humans are already exponentially self-improving at programming. We do this by augmenting our minds at higher levels, e.g. learning new ideas, rather than by augmenting them at lower levels, i.e. rearranging the substrate. However, I’ve not seen any argument why this should be less efficacious.
What’s more, if we ever get to the point of having an AI that can program, humans will have written it, and that means we’ll have invented the ideas necessary to understand how programming works. Which, at the moment, we do not. If you Google for “automatic programming” you’ll find this:
Wouldn’t the ultimate programming language be English? No, of course not, you say, it’s too imprecise and ambiguous. (It’s not even statically typed :^). From a programmer’s standpoint, true — but the programmer gets specs from somebody somewhere and is concerned with turning them into machine code.
So I’m not talking about cocktail-party English. I mean technical, jargon-laden English complete with embedded equations and diagrams such as the specs for a development project are written in anyway. And included is the assumption that it’s not just one way, but there is a dialogue between the customer and the developer(s).
A bit of history: up until the early 80s, there was a strong subfield of AI called automatic programming. The idea was exactly the above: hand the system the same specs you’d give a human developer, and it did the rest.
Over the course of the 80s, AI collapsed. There was a boom in expert systems, there were Lisp machines, there was the Connection Machine. None lived up to its promise, and there was the concomitant bust. Automatic programming as a separate subfield of AI disappeared completely.
Even so, a lot of the stuff in languages now came out of 70s AI. Object-oriented languages look an awful lot like the “frame-based” AI languages at the semantic level. Any language with type inference is using a theorem prover to construct programs, just like the AP systems did.
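To give a concrete flavor of that last claim: a compiler with Hindley-Milner type inference works out the most general type of a definition by unification, a small piece of automated theorem proving, with no annotations from the programmer. A minimal Haskell sketch, with the inferred type shown in a comment:

```haskell
-- No type annotation is given; GHC infers the most general type,
--   compose :: (b -> c) -> (a -> b) -> a -> c
-- by unification, essentially the kind of machinery the old
-- automatic-programming systems used to reason about programs.
compose f g x = f (g x)

main :: IO ()
main = putStrLn (compose show (+ 1) (41 :: Int))  -- prints 42
```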
One of my mentors at Rutgers did his PhD on the last AP project (Psi at Stanford). He claims that the ultimate downfall of AP was that you couldn’t build the natural-language front end without cracking the general AI problem. (And no one has…)
The point is, programming is “AI-complete.” Programming (and machine design, which amounts to the same thing in other regimes) is pretty much the hardest thing that humans can do. (In fact, I would claim that the average human can hardly program at all.) A human-level AI would only have to know as much about the way an AI works as an average human does, which is nothing. Which means that the notion of a specialized organ that is significantly more efficient at programming than the whole mind is at everything else is not very probable.
I think I can understand why Eliezer might think it is. Much of the low-level work in programming is detail-oriented bookkeeping of the kind that can be done efficiently by simple programs. Just like adding columns of numbers, computers can do this kind of thing much faster than people can. But it doesn't follow at all that that advantage extends up into the conceptually hard part of programming: designing new concepts and new representations and, crucially, understanding the phenomena the program has to manipulate, simulate, or analyze. An AI that could improve its own code significantly wouldn't just be a coder who could knock together yet another instant-messaging client. It would have to do better than a whole sector of the world's smartest humans, each of whom has made a career of specializing in one of the thousands of techniques that go into making an AI.
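To illustrate the split between bookkeeping and concept design: the kind of mechanical check sketched below (a hypothetical bracket-balance checker, in Haskell) is trivially automated today, yet nothing of this shape chooses representations or understands the phenomena the code is about:

```haskell
-- Purely mechanical bookkeeping: check that (), [] and {} nest properly.
-- A simple program does this instantly and tirelessly; it says nothing
-- about whether the code's concepts and representations are any good.
balanced :: String -> Bool
balanced = go []
  where
    go stack (c : cs)
      | c `elem` "([{" = go (c : stack) cs
      | c `elem` ")]}" = case stack of
          (o : rest) | o `matches` c -> go rest cs
          _                          -> False
      | otherwise      = go stack cs
    go stack [] = null stack

    matches '(' ')' = True
    matches '[' ']' = True
    matches '{' '}' = True
    matches _   _   = False

main :: IO ()
main = interact (show . balanced)
```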
(to be continued)