Last week I posted a story about some strange behavior in a simulation of molecular machines.
One commenter asked if this was due to something unusual in the starting configuration of the atoms. That was the first thing we investigated, and it didn’t seem to be the case. There was a small amount of strain energy in the assembly, which promptly thermalized, but that was a minor, one-time, very brief warm-up, whereas the puzzling heating was much slower but accelerated over time (it played out over a nanosecond, which is a very long time in the world of molecular mechanics).
What finally seemed to be going on was this: I had built a model that was physics-like as far as entropy was concerned (it conserved information and was reversible), but not as far as energy was concerned, so there were pathways from low-energy to high-energy states and vice versa. Now, a real physical system will seek states of higher entropy, but because it can’t just take energy from nowhere, the only higher-entropy states available to it are ones characterized by more disorder, not more heat. In my system, though, there were pathways to hotter states, which have much higher entropy than colder ones. So the system evolved into the hotter ones by the second law of thermodynamics, blithely ignoring the first law.
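To see how easy it is to build in that kind of pathway without meaning to, here is a toy sketch in Python (deliberately crude, and nothing like the actual molecular model): the same harmonic oscillator stepped with velocity Verlet, which respects the energy surface, and with forward Euler, which quietly adds a little energy on every step. The Euler trajectory heats up steadily even though there is no heat bath anywhere in the problem.

```python
# A toy illustration, not the molecular model from the post: one harmonic
# oscillator (unit mass, unit stiffness) stepped two ways.  Velocity Verlet
# is symplectic and keeps the energy bounded; forward Euler multiplies the
# energy by (1 + dt^2) on every step, so the toy system "heats up" with no
# energy source anywhere in sight.

def energy(x, v):
    # Total energy E = v^2/2 + x^2/2 for unit mass and stiffness.
    return 0.5 * v * v + 0.5 * x * x

def euler_step(x, v, dt):
    # Forward Euler: each step scales the energy by (1 + dt^2).
    return x + v * dt, v - x * dt

def verlet_step(x, v, dt):
    # Velocity Verlet: energy errors stay bounded for all time.
    v_half = v - 0.5 * x * dt
    x_new = x + v_half * dt
    return x_new, v_half - 0.5 * x_new * dt

dt, steps = 0.05, 2000
xe, ve = 1.0, 0.0   # Euler trajectory
xv, vv = 1.0, 0.0   # Verlet trajectory
for i in range(1, steps + 1):
    xe, ve = euler_step(xe, ve, dt)
    xv, vv = verlet_step(xv, vv, dt)
    if i % 500 == 0:
        print(f"step {i:5d}   Euler E = {energy(xe, ve):8.3f}"
              f"   Verlet E = {energy(xv, vv):6.3f}")
```

Real molecular-mechanics codes use symplectic integrators precisely to close off that particular door; my model left a door open in a different way, but the second law walked through it just the same.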
So what’s the moral of the story? Is it that you can’t trust computer models? No; some computer models are trustworthy, but there are others out there that are trusted and shouldn’t be (pre-crash financial risk models spring to mind). The point is that a model is like a scientific theory: it has to be tested by controlled experiments, or else it’s just a conjecture. Even when you think you understand the microscopic dynamics in the standard reductionist way, things you expect to average out often don’t, and you end up with a system whose macroscopic behavior is radically different from the real world’s.