Foresight — with Peripheral Vision: Nanotech & AI forecast from Josh Hall

Josh Hall, author of Nanofuture: What’s Next for Nanotechnology, sends this message to Nanodot readers:

Dear Foresight members & friends,

It’s the time of year when many of you are renewing your Foresight memberships, and helping us meet our $30,000 goal for our Challenge Grant by December 31:
https://foresight.org/challenge

I believe that the next decade or two will be the period when nanotechnology and AI, along with some of the other technologies of the kind Foresight was founded to watch, really begin to have major effects on the world outside the labs. Here are some thoughts on the subject (this essay is also posted on Nanodot). We hope to expand and deepen the analysis here over the coming year, and we hope you can be a part of it.

Foresight — with Peripheral Vision

Back in the 60s, Marvin Minsky, John McCarthy, and others presided over
a burgeoning field of study, Artificial Intelligence. Using machines that were
pitifully small and underpowered by today’s standards, they made remarkable
strides toward a visionary goal: creating a machine that could think and converse
like a human being.

Then an unfortunate thing happened. In the 70s, the amount of money
going to AI research began to attract political attention, as money will do.
The people not getting the money used political skills to have it redistributed.
The result was that funding shifted from the core of AI to applications — in
fact, the infamous Mansfield Amendment barred ARPA from funding any research
without a direct military application. The decade of the 80s was seen as the decade of the
expert system, where techniques developed in AI were used to tackle real-world
problems — but within the field, it was known as “the AI winter.”

What had happened was that in the shift to applications, work shifted to
concerns that were peripheral to the key elements of the original vision. A
machine that plays a good game of chess is not necessarily intelligent. We call
a good human chess player intelligent because the human learned the game by
watching, imitating, modeling, and in general building up a skill. The machine
got the skill by having human programmers figure it out and build it in. It’s the
ability to learn and build skills that constitutes intelligence, not simply having
them. And AI had shifted to be largely a field which built skills directly, instead
of one which studied how to build a machine that could learn them.

Does this sound familiar? It’s essentially the same thing that happened to
nanotechnology two decades later. The core of the generative vision — productive
machinery built to atomic precision — fell out of favor, and was even attacked
by people who thought, or at least claimed, that they could produce the same
results by short-circuiting the process.

But of course in both cases the result is something evolutionary instead of
revolutionary. And in both cases, perhaps surprisingly, the missing element is
the same: autogeny, the capacity of a system to build and improve systems of
its own kind.

Walk into any consumer electronics store and you can buy a GPS unit that
would have flabbergasted any AI researcher in the 60s. It knows the map of the
whole continent. It can plan routes, estimate times, and plot your course while
you drive, about as well as a good human navigator — the errors are different
but comparable. It speaks to you in English, and you can speak to it and, for a
limited set of commands, it understands. The GPS seems a tour de force of the
kind of capabilities AIers were trying to build, and it is a damned useful gadget.
But having a GPS won’t help you one single bit when you try to build the next
“AI” device. Neither will a chess-playing program or house-cleaning robot.

Similarly, the products of current-day “nanotechnology” are beginning to
approach, in some respects, some of the possibilities pointed out by Feynman and
Drexler. The density of memory and circuitry is rapidly approaching molecular
scale. An iPod can hold the text of ten tons of books. New materials are being
fabricated whose properties will likely enable single-stage-to-orbit spacecraft.
But while these are damned useful advances, none of them is going to help you
build a cell repair machine.

But evolutionary advance, along with general scientific and technological
progress, ultimately lays the groundwork of new capabilities from which
autogenous systems can be built. One can't be certain, and wishcasting is always a
pitfall to guard against, but there seems to be some movement back to the
center. In AI, Marvin Minsky could say “AI has been brain-dead since the 70s” and
then be invited to keynote the leading AI conference. Ben Goertzel, originator
of the term “artificial general intelligence” and leading proponent of a return to
AI’s roots, reports that he is no longer laughed off the stage at mainstream AI
meetings. There is now an AAAI-sanctioned AGI conference series entering
its second year.

In nanotechnology, the cracks in the glass ceiling are appearing in the form
of the Battelle/Foresight Roadmap for Productive Nanosystems and some grant
funding for mechanosynthesis work.

I’ll go out on a limb and say I expect the loop to close in AI sometime in the
next decade and in nanotech in the decade after that. The world will become
an interesting place. To foresee with even a cloudy lens, we need to look at
a variety of technologies that could support an autogenous feedback loop and
thus have a revolutionary impact. Here are some candidates:

• Biotech is already built on an autogenous base, the reproductive capacity
of life.

• Software likewise is a substrate capable of supporting autogeny at a
number of levels short of true AI. The intersection of software, datacomm, and
human-based memetics over the past couple of decades has been explosive.

• Robotics: replacing human workers in physical factories, particularly in
factories that make humanoid worker robots, would take humans out of the productive
loop, enabling a takeoff.

• Desktop fabricators: the same idea in a smaller package. I expect the
2010s to be a decade of experimentation with them the way the 80s were
for PCs. There seems to be a fairly straightforward path from fabs to
nanofactories, with increased value added at each stage.

The convergence of these, along with AI and nanotech and who knows how
many others I haven’t thought of, will form the core of technological capability
in the twenty-first century. I suspect that a study of the properties of general
autogenous systems will be invaluable in understanding it.

Once this takes hold, virtually everything will begin changing at Moore’s
Law rates. Let’s hope we have enough foresight that the changes will be
improvements.

————————————————————————

Comments are welcome — you can email me at [email protected], or go to our blog Nanodot (https://foresight.org/nanodot) and respond in the comments field for “Foresight — with Peripheral Vision.”

Please, if you can, chip in on the Challenge Grant at https://foresight.org/challenge. Every dollar you donate will be automatically doubled, so Foresight can do twice as much to influence our future in the positive direction that we all hope for.

Josh Hall
[email protected]

Foresight Institute
1455 Adams Drive, Suite 2160
Menlo Park, CA 94025 USA
Tel +1.650.289.0860
Fax +1.650.289.0863
