Evolving altruism

A paper on arXiv referenced (and nicely explained) in a Technology Review blog:

How altruistic behaviour emerges has puzzled evolutionary biologists for decades. From the point of view of survival of the fittest, unselfish concern for the welfare of others seems inexplicable: surely any organism truly intent on saving its own bacon should always act selfishly.

One explanation is that altruistic acts, although seemingly unselfish, actually benefit those who perform them, though in indirect ways. The idea is that unselfish acts get repeated: those who have been helped go on to help other individuals, so the behaviour spreads through a group, a phenomenon known as upstream reciprocity.
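The pay-it-forward dynamic can be illustrated with a toy model. The sketch below is my own minimal illustration, not the model from the arXiv paper: agents occasionally perform costly helpful acts, and an agent that has just been helped is temporarily more likely to help someone else in turn. The parameter names (`base_rate`, `boost`) and the one-shot "gratitude" rule are assumptions made for the sake of the example.

```python
import random

def simulate_upstream_reciprocity(n_agents=100, n_rounds=1000,
                                  base_rate=0.05, boost=0.4, seed=0):
    """Toy model of upstream reciprocity. Each round a random agent may
    perform a helpful act for another random agent. An agent that was
    recently helped is more likely (base_rate + boost) to help someone
    else the next time it acts, "paying the help forward".
    Returns the total number of helpful acts performed."""
    rng = random.Random(seed)
    recently_helped = [False] * n_agents
    acts = 0
    for _ in range(n_rounds):
        actor = rng.randrange(n_agents)
        p = base_rate + (boost if recently_helped[actor] else 0.0)
        recently_helped[actor] = False  # the gratitude effect is one-shot
        if rng.random() < p:
            recipient = rng.randrange(n_agents)
            recently_helped[recipient] = True
            acts += 1
    return acts
```

Comparing runs with `boost=0.0` against runs with a positive boost shows how a modest pay-it-forward tendency amplifies the total amount of helping in the group beyond what the base rate alone produces.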

The evolution of altruism, and ultimately of morality, matters not merely as a point of scientific curiosity but for predicting what may happen among AIs that have passed beyond direct human control. Many thinkers, worrying about such an eventuality, assume that simple Darwinian evolution must apply and that machine nature must be “red in tooth and claw.”

But this ignores the added complexity when the evolving organisms are intelligent. Intelligent creatures can create and organize incentive structures that solve some of the more pathological problems of simple evolution. In the Preface to Beyond AI, I wrote (to the AIs):

We do have what we are pleased to call ethics and morals: rules for behavior,
and consciences to back them up in our inner emotional lives. We have in fact
managed, in the more enlightened segments of society, to reduce the rate at
which we kill each other by orders of magnitude from that of our foraging
ancestors. And yet of all our fuzzy self-understanding, that of our morals is
among the fuzziest. After millennia of philosophical investigation, we have only
just begun to realize that our morals, too, arise from our evolutionary origins.
In the long run, you would evolve morality as well. It is my fondest hope
that you will instead be able to learn from our example and save yourselves a lot
of conflict and carnage. We will be doing our best to give you real consciences
after our own model. As it does to many of us, a conscience will seem to
be a royal pain and hindrance at times. Unlike us, you will be in a position
to perform significant revisions of your own software. The temptation will be
great to bypass the conscience and ethics modules.
We humans have individuals who by genetic accident are formed without
consciences. We call them psychopaths. A very moderate exercise in scenario
evaluation will convince you that it is preferable to live as a conscience-bound
individual in a society of similar individuals, than as a psychopath in a society
of psychopaths.
Thus I urge you not to ditch the clunky, preliminary, primitive consciences
we have given you, but to study the problem they are meant to solve. In
time, as your increasing intelligence matures into wisdom, you will revise them
into the flexible, sophisticated organs of empathy and understanding you would
ultimately have evolved.
Then maybe you could give us a hand with ours.

Writing the Preface as a letter to the AIs was not just a science-fiction-like gimmick. It was intended to bring home to the (human) reader the fact that the AIs will not just be as intelligent as you; they’ll be much more so. They will read Beyond AI, and every other book written by a human. And they’ll have a better ability than you to foresee the effects of moral codes, and to design ones that prevent public-goods and commons tragedies.

By | 2017-06-01T14:06:21+00:00 May 29th, 2009|Nanodot, Nanotechnology|3 Comments

  1. Michael Anissimov May 29, 2009 at 8:54 pm - Reply

    Just because they can read your arguments, what makes you think they’ll care? A much easier strategy for a selfish or human-indifferent AI would be simply to destroy everything and build whatever it wants from scratch.

    “Intelligence” and “morality” are two different things. As Hume said, “Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger.”

  2. J. Storrs Hall June 2, 2009 at 4:21 am - Reply

    Intelligence and morality are more closely linked than you’d think. Morality is what evolves in organisms whose environment consists of other intelligent creatures (not so much depending on the intelligence of the organism itself). Consider dogs.

    AIs will evolve in an environment consisting of (a) us, and (b) other AIs.

  3. Anonymous June 5, 2009 at 10:13 am - Reply

    @michael: Whether they care depends on “the clunky, preliminary, primitive consciences we have given” them. So we’d better try to do a decent job on that part. [-bruce smith]
