
        A cautionary note

        One of the constraints laid down by DARPA at the recent Physical
        Intelligence proposers workshop was that the model of intelligence
        that was to be proposed had to have a physical implementation. It
        seemed odd to some of the attendees that this should be a hard
        constraint, since many models of intelligence have a perfectly
        reasonable implementation as software.

        I have long held something of a nuanced view on this point. On the
        one hand, I never agreed with the philosophers and others who claimed
        that embodiment was necessary for true intelligence, meaning, the
        aboutness of symbols, or any of the rest. The people in the
        Matrix were really intelligent thinking creatures even though their
        bodies had nothing to do with the world they thought they were
        experiencing.

        On the other hand, the need to interact with a robot body and cope
        with the real world has had a very salubrious effect in terms of
        keeping AI researchers “honest” in the sense of not making simplifying
        assumptions about the task to be accomplished (for the researchers who
        did in fact use robots, that is).

        Even a simulated world can turn out to have assumptions built in which
        are either unknown to the writers or operate quite differently from
        how they are believed to. An interesting example comes from my
        experience doing molecular dynamics simulations at Nanorex.

        The point of molecular dynamics is to simulate the atoms as if they
        were point masses (which the nuclei very much are), with a set of
        made-up forces between them to stand for the interactions of the
        electrons. These forces can be thought of as springs; it’s of great
        concern to find a formulation that matches the real forces but that’s
        beside the point of this story.
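
        For concreteness, here is a toy version of what such a “spring”
        bond force looks like in code (a minimal sketch in Python; the
        constants k and r0 are made up purely for illustration, not a real
        force field, and this is not the code we actually ran):

            import numpy as np

            def spring_forces(positions, bonds, k=450.0, r0=0.15):
                # Toy harmonic "spring" bond force, U = 1/2 k (r - r0)^2 per
                # bonded pair.  k and r0 are purely illustrative constants.
                forces = np.zeros_like(positions)
                for i, j in bonds:
                    d = positions[j] - positions[i]   # bond vector
                    r = np.linalg.norm(d)
                    f = -k * (r - r0) * d / r         # force on atom j, restoring toward r0
                    forces[j] += f
                    forces[i] -= f
                return forces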

        The point of the story is that when you take a group of atoms with no
        external source of energy and no sink, i.e. perfectly insulated, it
        doesn’t get hotter and it doesn’t get colder. Energy is conserved in
        a closed system.

        The problem is, of course, that the numerical simulation doesn’t quite
        conserve energy; there are various forms of “leakage” ranging from
        round-off error to discreteness of timesteps. So molecular dynamics
        simulations have a “thermostat” — a piece of code that sums up the
        energy in the model and damps down the motion if the energy is too
        high, or vice versa. For ordinary chemistry, this works fine.
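
        As a rough sketch of the idea (not any particular package’s actual
        scheme), a thermostat can be as crude as measuring the kinetic
        energy and rescaling every velocity toward a target value:

            import numpy as np

            def rescale_thermostat(velocities, masses, target_kinetic):
                # Crude velocity-rescaling thermostat: if the system has
                # drifted hot, scale every velocity down; if it has drifted
                # cold, scale them up.  Real codes use gentler variants.
                kinetic = 0.5 * float(np.sum(masses[:, None] * velocities**2))
                if kinetic > 0.0:
                    velocities = velocities * np.sqrt(target_kinetic / kinetic)
                return velocities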

        We were trying to simulate molecular machines, however, and one
        typical “experiment” would be to have a bearing, turn a shaft in it,
        and see how much heat it would generate. So we couldn’t use the
        thermostat to clamp the heat, since we were trying to simulate a
        situation where the heat would vary.

        So I tried to write a simulator which conserved energy at the very
        lowest level, so energy conservation would be a property of the model
        and we wouldn’t need a thermostat. It didn’t work — it’s quite
        difficult to formulate “spring-like” forces that are efficient to
        compute and yet exactly conservative. On the other
        hand, I thought I could get away with another microscopic property of
        physics, namely being reversible. In real physics, the trajectories of
        atoms are described just as accurately by the equations running
        backwards in time as forwards. So it seemed that I should have a
        system that matched physics to that extent, and on the average, as
        much energy would be gained as lost, since there were exactly as many
        energy-gaining trajectories as energy-losing ones — they were the
        same ones in reverse!
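
        (The textbook way to get that kind of time symmetry is a symmetric
        integrator such as velocity Verlet. The sketch below is an
        illustration of the property rather than a transcript of the actual
        simulator: negate the velocities and keep stepping forward, and in
        exact arithmetic the trajectory retraces itself.)

            import numpy as np

            def velocity_verlet_step(x, v, masses, force_fn, dt):
                # One velocity-Verlet step: half-kick, drift, half-kick.
                # The update is symmetric in time, which is what makes the
                # velocity-reversal trick described below possible.
                m = masses[:, None]
                v_half = v + 0.5 * dt * force_fn(x) / m
                x_new = x + dt * v_half
                v_new = v_half + 0.5 * dt * force_fn(x_new) / m
                return x_new, v_new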

        So what happened? It turned out that you could take any random
        assortment of atoms at all, let the simulation run, and it would get
        hotter. No energy sources, no apparent way for this to happen, but
        hotter it would get. Never colder, even though there were, as noted,
        exactly as many possible cooling trajectories as warming ones.
        Totally unlike physics, of course, since in the real world, energy is
        conserved.

        But I had an ace up my sleeve: since I had made the system reversible,
        I could let my atoms get hot and then reverse all their velocities.
        And lo and behold: the system cooled off. It found one of those
        energy-losing trajectories because it was exactly the reverse of the
        energy-gaining one it was on before. But put the system in any
        random state, and it would always warm up, never cool down.
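
        In terms of the illustrative sketches above, the trick is nothing
        more exotic than this (x0, v0, bonds, masses, n_steps, and dt are
        assumed to have been set up already):

            force_fn = lambda pos: spring_forces(pos, bonds)

            # Run forward: the simulated system drifts hotter.
            x, v = x0.copy(), v0.copy()
            for _ in range(n_steps):
                x, v = velocity_verlet_step(x, v, masses, force_fn, dt)

            # Reverse every velocity and keep integrating forward: the system
            # retraces its trajectory and cools back down toward its start.
            v = -v
            for _ in range(n_steps):
                x, v = velocity_verlet_step(x, v, masses, force_fn, dt)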

        I finally figured out what was going on, but I’ll let readers chew
        over it as a meaty puzzle and tell you my conclusion next week.

