Civilization, B.S.O.D.

The other day I got a worried call from my mother-in-law.  My wife usually calls her during her commute, but that day she neither called nor answered her phone.

Turns out my wife’s iPhone had crashed — the software had wedged and there was no way to reboot.  The amusing fact, if you can call it that, was that her (employer-required) Windows PC had done the same thing the same day.  The PC also ignored any attempt to reboot, so we took the battery out and put it back in, forcing a cold start.

We are rapidly turning our civilization into software. As we do that, and as we build smarter and smarter AIs to do more and more important tasks, it will be essential to be sure the software we write simply works right and is reliable.

A key design feature that makes natural control systems reliable, as well as adaptive and robust and resilient, is the inclusion of lots of feedback.  In the human brain, there is generally more feedback than forward signal along the major data pathways.  By contrast, the standard model of sequential programming has no feedback at all; in the simplest and most common coding styles, all the operations are done by dead reckoning. Consider an HTML renderer such as the one showing you this page. There are plenty of sites whose pages come up with overprinted regions in some browsers — because the renderer doesn’t look where it’s writing.

It’s possible to do better, of course. In control systems, where feedback is essential, you have tight control loops that check and set, check and set, hundreds or thousands of times a second. Porting some knowledge of feedback in control systems back into systems software (and the rest of our software) would make it more reliable, as well as adaptive and robust and resilient. And as we turn our civilization into software, that’ll be a very good thing.
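The check-and-set pattern from control systems can be sketched in a few lines. This is a minimal illustration, not code from any real control library; all names here (`run_loop`, the plant model, the gain) are hypothetical:

```python
# Minimal sketch of a check-and-set feedback loop: a proportional
# controller nudging a simulated plant output toward a setpoint.
# Each iteration measures the error (check) and applies a
# correction (set), instead of dead-reckoning the answer once.

def run_loop(setpoint, steps=200, gain=0.5):
    value = 0.0                       # current plant output
    for _ in range(steps):
        error = setpoint - value      # CHECK: measure the deviation
        value += gain * error         # SET: apply a corrective action
    return value

final = run_loop(setpoint=10.0)
print(round(final, 3))   # converges to 10.0 despite starting at zero
```

The point is not the arithmetic but the structure: the loop never assumes its last write succeeded; it measures again before acting again.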

January 6th, 2010 | Machine Intelligence, Nanodot, New Institutions, Open Source | 16 Comments

  1. DMan January 7, 2010 at 6:09 am - Reply

    Sounds like a good idea, but there are a couple of things that niggle.

    Wouldn’t a constant feedback loop cause a constant hit on a CPU or hard drive? If it’s a very complex piece of software with many parameters to self-check, wouldn’t that make it very memory hungry?

  2. J. Storrs Hall January 7, 2010 at 3:27 pm - Reply

    DMan: I can make software really incredibly efficient if it doesn’t have to work right. If extra cycles and memory are necessary for computers, planes, and economies not to crash, I’d say they’re worth it.

  3. biobob January 8, 2010 at 4:13 pm - Reply

    You make a very good point re feedback stability in software but I would suggest that very few managers would actually be happy to pay for such required work.

    It has been my experience that you cannot possibly put in enough feedback code to avoid ALL possible errors even if you COULD predict all possible points of failure (the latter being unlikely in the extreme).

    Certainly something to be aspired to but unlikely to be achieved!

    I would tend to agree that the vast majority of such feedbacks would consume only trivial CPU load.

  4. TheRadicalModerate January 9, 2010 at 2:34 pm - Reply

    For some reason, I feel compelled to repeat this old chestnut: If carpenters built buildings the way that programmers write programs, the first woodpecker to come along would destroy civilization.

    I don’t know how you’d do what you’re proposing. I’d be very surprised if computation of aggregate feedback for a complex system wasn’t an NP-complete problem. In the unlikely event that such a computation is deterministic, I still don’t think you do this with system-level constructs. Instead, you probably have to build something into your computer language.

    Better still: Wait for the advent of trillion-synapse neural nets with a cycle time under 100 ms. I suspect that the software platforms for doing intelligent process control will emerge pretty quickly once the hardware is there.

  5. […] THE PERILS OF turning civilization into software. […]

  6. fishbane January 10, 2010 at 1:26 pm - Reply

    The problem is deeper than you make it out to be. Take your HTML renderer example. You don’t, in fact, want a naive “look where you’re printing” rule. Lots of typographical effects depend on overprint behavior in CSS. Ditto, various games with DHTML to get interactive features.

    The brief version is that a naive “no running with scissors” rule is far worse than no artificial constraints – we’d have far less innovation with them. What needs to happen is for software engineering to cross-pollinate with physical engineering more. A lot of the meta-problems of solid engineering have been solved – what hasn’t happened is for software engineers to learn where they make sense to apply, and physical engineers to learn where they don’t.

  7. JMHawkins January 10, 2010 at 1:47 pm - Reply

    As someone who writes software (probably even the software that renders the page you’re viewing) I can say that the “problem” with writing robust software is that it is a couple of orders of magnitude more expensive than writing software that works “most of the time.” It’s all well and good to say “well, then that’s how we should do it” but unless you’re paying for it, that doesn’t really help. If I decided to create a browser, or a phone, or the software for any other typical commercial product, the way suggested here, I’d go broke because nobody would buy it. Everyone would buy my competitor’s product that had 10x the features at 1/10th the price and crashed every couple of weeks.

    The problem (and hence the solution) is with the folks writing the checks, not the folks writing the code. The comment about woodpeckers wiping out houses built like software is pretty funny, but the homeowner is the guy who wouldn’t pay more than $49.00 for the entire house (and there are a whole bunch of wannabe homeowners complaining that even $49 is too expensive – houses ought to be built for free, as a hobby I guess).

    After a couple of generations of this, the bulk of software engineers don’t have the mindset to create really robust software (why should they? Nobody rewards it). Plus, the development of techniques to ensure robustness has lagged behind where it could be.

    When your wife bought her iPhone, did she do it because iPhones have a reputation as being more reliable than a standard old dumb cell phone that maybe can do texting? Or did she buy it because it has that touch screen where you can slide stuff around with your finger and download all kinds of cool and useful apps?

    It has to be a consumer revolution to fix this problem, but I really don’t see the consumers revolting any time soon.

  8. Micha Elyi January 10, 2010 at 2:22 pm - Reply

    Hey, TheRadicalModerate, it’s not just any old chestnut, that’s a chestnut of Gerald Weinberg’s invention published in Computer Information Systems: An Introduction to Data Processing by Gerald M. Weinberg and Dennis P. Geller, Little, Brown computer systems series, 1985, p. 113.

  9. Dave January 10, 2010 at 2:30 pm - Reply

    You need to look into the Erlang programming language, developed by Ericsson for programming telephone switches. Software components developed in Erlang are designed to crash cleanly in the presence of errors and be restarted automatically and quickly by other software components (called supervisors). That way you don’t have to try to detect and respond rationally to every possible failure case, but instead “just fail” and count on another piece of software to clean up and restart whatever portions of the system failed, transparently. Everything in an Erlang system has the sort of multi-layered feedback you are talking about. It does take a fair amount of CPU cycles, and a somewhat unusual programming style, but the results are impressive. Using this sort of error-resilient programming, Ericsson achieved uptime percentages of up to “eight nines” (99.999999% availability, or a maximum of 1/3 of a second of downtime per year).
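    The “let it crash” / supervisor idea described above can be sketched outside of Erlang too. The following is a toy Python analogue, with made-up names (`flaky_worker`, `supervise`), not Erlang/OTP itself: the worker raises freely instead of defending against bad input, and a separate supervisor catches the crash and carries on, escalating only after too many failures.

    ```python
    # Toy analogue of the Erlang supervisor pattern: workers "just
    # fail" on errors; a supervisor absorbs crashes and keeps the
    # system running, escalating only past a restart limit.

    def flaky_worker(job):
        if job < 0:
            raise ValueError("bad input")   # just fail; no defensive code
        return job * 2

    def supervise(worker, jobs, max_restarts=5):
        results, restarts = [], 0
        for job in jobs:
            try:
                results.append(worker(job))
            except Exception:
                restarts += 1               # "restart": move on with fresh state
                if restarts > max_restarts:
                    raise                   # escalate, as OTP supervisors do
        return results, restarts

    out, crashes = supervise(flaky_worker, [1, -1, 3])
    print(out, crashes)   # [2, 6] 1  -- one crash absorbed, work continues
    ```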

  10. Felix Kasza January 10, 2010 at 3:04 pm - Reply

    So assume the renderer gets feedback that there already is something where it wishes to paint the next bit of text. All that means is that the renderer will assume that the page creator _wants_ the overprinting effect — unless you have three separate renderers, written by three separate teams, with Chinese walls in between, and sharing not a single line of code (and running on separate OSes, too). In that case, you can let them vote, by majority, which colour a pixel should be.

    We do things like that (in space vehicles, for example) — but even there, we do not do it completely. Apollo modules, for instance, had five computers, but they were the same model running the same code. _And_ the software they ran was as simple and minimal as possible. For something as complex and incomplete as the HTML spec, you can forget voting.
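    The majority-voting scheme described here is simple to state in code. A hedged sketch (the three “replicas” are stand-in values, not real independently developed renderers):

    ```python
    # Sketch of majority voting over redundant, independently
    # produced outputs, as in redundant flight computers: accept a
    # value only if a strict majority of replicas agree on it.
    from collections import Counter

    def vote(*outputs):
        winner, count = Counter(outputs).most_common(1)[0]
        if count <= len(outputs) // 2:
            raise RuntimeError("no majority")   # disagreement must be escalated
        return winner

    # Two of three replicas agree, so the faulty one is outvoted.
    print(vote("blue", "blue", "red"))   # blue
    ```

    Note the failure mode the comment points out: voting only helps if the replicas were built independently enough that they don’t all share the same bug.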


  11. ErikZ January 10, 2010 at 3:19 pm - Reply

    “I say your civilization because as soon as we started thinking for you it really became our civilization…”
    ~Agent Smith

  12. M. Report January 10, 2010 at 7:29 pm - Reply

    I have told you once, I have told you twice, what I tell you three times is true:
    Software _and_
    hardware redundancy;
    Different designs for each of the six bits.

    EMP is a greater
    threat, and harder
    to protect against;
    Good business
    opportunity there.

  13. C++ Developer January 10, 2010 at 8:04 pm - Reply

    I think restaurants and patrons are more apt than carpenters and woodpeckers: If you saw many restaurants’ kitchens, you’d never eat there. But you don’t demand to see them, or even think about it (often). You presume it’s passed some minimal inspection and is fine. It, you know, seems fine, doesn’t it?

    Of course, with software such minimal inspections often fall apart as the inspector can’t just measure bacteria. (And I’ve seen some pretty horrendous software that’s passed layers of independent review.)

    So where does that leave us? Perhaps pretty much where we are. Perhaps companies should publish the tests they’ve run (though that gives the competition a lot of free work). And how meaningful would that be?

    To bring forward and generalize your “moral railroad” thoughts: we (people and software) pretty much all want the right thing already: for the trains not to crash, the restaurant patrons not to get sick, the software not to crash. Further, we can measure how much we “want” each by how much we’ve spent (training or testing for people or software), though with diminishing returns. So we agonize (hopefully) over whether we’ve done enough. And here we are.

  14. Walter Sobchak January 10, 2010 at 8:23 pm - Reply

    Good take, but ESR is ahead of you and has the plan: Complex Adaptive Systems

  15. Brian January 10, 2010 at 9:05 pm - Reply

    Programming has “Design by contract”.

    This starts to get at the feedback loops you describe. These contracts are typically enabled during testing cycles to expose issues and turned off when running in production to save resources.

    Another developer tool is automated unit tests.

    These tests are used to gather feedback during the development process. They are extremely useful for pushing different scenarios through software subsystems and components to find potential issues.

    Agreed, building robust, runtime based feedback systems is where we need to get, but these two solid software engineering tools help for now.
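    The two tools named above fit in a few lines of Python. A minimal sketch with hypothetical names (`allocate`, `test_allocate`): the pre- and postconditions are plain `assert` statements, which Python disables under `python -O`, matching the “on in testing, off in production” practice the comment describes.

    ```python
    # Design by contract, sketched with asserts: a precondition
    # guards the inputs, a postcondition checks the result. Running
    # with "python -O" strips the asserts, as in production builds.

    def allocate(width, total):
        assert 0 < width <= total, "precondition: width must fit in total"
        columns = total // width
        assert columns * width <= total, "postcondition: no overflow"
        return columns

    # An automated unit test exercising the contract during development:
    def test_allocate():
        assert allocate(10, 35) == 3
        assert allocate(5, 35) == 7

    test_allocate()
    print(allocate(10, 35))   # 3
    ```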

  16. TheRadicalModerate January 13, 2010 at 11:12 pm - Reply


    Actually, it’s the Space Shuttle that had the five identical computers that voted on each other’s answers–except when the primary flight system code decided to occasionally put an entire minor cycle’s worth of I/O in the wrong cycle. (Gotta love those IBM guys from Owego–nothing like a scheduled I/O architecture that you decide to implement on an asynchronous, interrupt-driven serial bus. For you aficionados of space program software bugs, this is the problem that caused the last-minute scrub of the first Shuttle launch.)

    And it was really only four computers that voted on each other’s output–the fifth ran the Backup Flight System, which was independently coded so that if there was a software bug that took out the four primaries simultaneously, the crew could switch over to the fifth computer and still fly the vehicle.

    Ah, it brings back fond memories: those carefree days of programming in HAL/S, a language with all the bizarreness of APL and the gracelessness of Fortran, to say nothing of the fun of hand-compiling AP-101 hex patches. (NASA would only let us recompile the whole system every few months. I think that this was their version of source code control.)

Leave A Comment