Yesterday I wrote that we don’t have a clue how learning works. If that were as categorically true as I made it sound, the prospects for AGI would be pretty much sunk. AGI requires getting up to the universal level of a learning machine: one that can in theory learn anything any other learning machine can learn. (Like universality in Turing machines, this concept completely ignores the factor of speed, and so isn’t of any practical value when it comes to evaluating real machines in the real world. On the other hand, it does say a lot about how much of the huge, knowledgeable super-AI we actually have to build ourselves, and how much it can build for itself once we get it going.)
In my AI@50 paper I compared a universal learning machine to the scientific community. Compared with individual humans, the scientific community has the following advantages as a learning machine:
- Unlimited life. An individual human is actually a limited learning machine because we learn just so much and then die. The scientific community could in theory go on forever.
- Increasing processing power. Although this is a speed question, and thus strictly speaking not part of universality, it makes a huge practical difference. The scientific community grows, exponentially if need be, and thus has the processing power to handle an exponential increase in knowledge.
- Diversity of learning biases. Computational learning theory shows a strong inverse relationship between learning speed and the range of things you can learn (see the sketch after this list). Each human seems to have some arbitrary heuristic setting for this tradeoff. The community as a whole can cover the field, with the “marketplace of ideas” bringing the most appropriate biases to the fore in any given situation.
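To make that speed-versus-range tradeoff concrete, here is a minimal sketch of the standard PAC-learning sample bound for a finite hypothesis class, m ≥ (1/ε)(ln|H| + ln(1/δ)): the broader the class of hypotheses a learner entertains (i.e., the weaker its bias), the more examples it needs before it can be confident. The helper name `pac_sample_bound` and the two example class sizes are my own illustration, not anything from the Science paper.

```python
import math

def pac_sample_bound(hypothesis_count: int, epsilon: float, delta: float) -> int:
    """Classic PAC bound for a finite hypothesis class (realizable case):
    a consistent learner needs m >= (1/epsilon) * (ln|H| + ln(1/delta))
    examples to reach error <= epsilon with probability >= 1 - delta."""
    return math.ceil((math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon)

# A strongly biased learner (small hypothesis space) vs. a weakly biased
# one (huge hypothesis space), both asked for 90% accuracy, 95% confidence.
for label, size in [("narrow bias, |H| = 10^3", 10**3),
                    ("broad bias,  |H| = 10^9", 10**9)]:
    print(f"{label}: needs >= {pac_sample_bound(size, epsilon=0.1, delta=0.05)} examples")
```

The dependence on |H| is only logarithmic, but the direction is the point: weakening the bias always costs examples, which is why a community of differently-biased fast learners can cover ground that no single slow generalist could.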
Given that, I was pleased to come across a study published in Science about recent advances in learning theory. It highlights how much human learning is a communal process rather than an individual effort.
“We are not left alone to understand the world like Robinson Crusoe was on his island,” said Andrew Meltzoff, lead author of the paper and co-director of the University of Washington’s Institute for Learning and Brain Sciences. “These principles support learning across the life span and are particularly important in explaining children’s rapid learning in two unique domains of human intelligence, language and social understanding.
“Social interaction is more important than we previously thought and underpins early learning. Research has shown that humans learn best from other humans, and a large part of this is timing, sensitive timing between a parent or a tutor and the child,” said Meltzoff, who is a developmental psychologist.
…
“Apparently babies need other people to learn. They take in more information by looking at another person face to face than by looking at that person on a big plasma TV screen,” she said. “We are now trying to understand why the brain works this way, and what it means about us and our evolution.”
Meltzoff said an important component of human intelligence is that humans are built so they don’t have to figure out everything by themselves.
“A major role we play as parents is teaching children where the important things are for them to learn,” he said. “One way we do this is through joint visual attention or eye-gaze. This is a social mechanism and children can find what’s important – we call them informational ‘hot spots’ – by following the gaze of another person. By being connected to others we also learn by example and imitation.”
What does this growing understanding of learning tell us about the prospects for AGI, in particular about supersmart rogue AIs that self-improve to weakly-godlike status and take over the world? Mostly that the rogue-AI scenario is a myth based on a not-very-good understanding of how learning really works. Real, near-future learning machines are very likely to be limited individuals with carefully tuned biases that let them learn specific kinds of things fast and reliably. The reason is simple: that’s what people need and will pay for. The ultimate, general learning machines will wind up being organizations of these, grafted onto, and ultimately replacing, human organizations ranging from the market to government (with scientific bodies somewhere in between).