Foresight Background 1, Rev. 1

Abrupt Change, Nonsense, Nobels, and Other Topics

© Copyright 1987-1991, The Foresight Institute.


Will Change Be Abrupt?

Originally published in 1987

A lively topic of debate has been the rate of change we should expect to see as nanotechnology emerges. Some argue that nanotechnology will arrive as a breakthrough, bringing deep, abrupt changes. Others, pointing to the slow, smooth changes in many fields throughout history, argue that nanotechnology will bring gradual changes, however great they may become over time.

The question of abruptness is related to the question of timing, but not identical. A thirty-year transition from primitive to advanced nanotechnology would obviously place advanced nanotechnology at least thirty years off, but an abrupt transition need not imply that it will arrive soon—primitive nanotechnology might itself be slow to arrive, or slow to reach the threshold for an abrupt advance.

Why consider the chances of an abrupt transition? Because if one occurs, we will have to cope with the consequences, and because a measure of foresight may help us do so more effectively.

The "Abrupt" Paleolithic/Nanotechnic Transition

On a grand historical scale, the transition from the age of stone to the age of nanotechnology will almost surely be abrupt: the paleolithic lasted millions of years, and the neolithic a few thousand. The transitions from stone to bronze, iron, steel, modern materials, and nanostuff will pass in an eyeblink, compared to the duration of the paleolithic. If progress in basic material technology then plateaus out—as the limits of natural law indicate it eventually must—then the long-term history of technical progress will look something like Figure 1: two plateaus, linked by an abrupt jump.

[Figure 1: material technology versus time]

In ordinary terms, though, no one would call a several-thousand-year transition abrupt. Our concern is with changes that arrive in a day, or great revolutions that turn the world upside down in a decade or less. These would challenge social reaction times, disrupting behaviors and institutions geared to incremental changes spread over the multi-year cycles of politics and engineering development.

The case for gradualism is obvious—consider how most things happen in the world. What is the case for real abruptness?

Past Abrupt Changes

History provides numerous examples of abrupt changes in the realm of technology. Several stand out.

  • For millennia the speed of written human communication was limited to the speed of a fast horse (with occasional, laborious exceptions involving towers, signal flags, and the like). The telegraph raised that speed many million-fold, to near the speed of light.
  • On March 7, 1961, Robert M. White set a world speed record of 2,905 miles per hour in an X-15 aircraft. By April 12, Yuri A. Gagarin had raised it to 17,560 miles per hour in a Vostok spacecraft.
  • Before 1986, the maximum temperature for superconduction had crept up to about 23 degrees above absolute zero. In a matter of months, superconductors based on new principles roughly quadrupled that figure.
  • Before 1945, the amount of energy humans could release from a chunk of matter—such as an explosive—was limited to chemical energies, thousands of joules per gram. In 1945, that energy jumped over a million-fold to nuclear levels, billions of joules per gram.

Thus, abrupt changes in technical abilities are not unknown in human history. We may see them again.

Smoothers of Abruptness

But is this so important? The worlds of economics, politics, and daily life were not transformed abruptly, just because achievable speeds and energies were. High-temperature superconductivity has yet to send the electronics industry into mad gyrations. Telegraphy took many years to become practical, and yet more years to spread. Even if nanotechnology were to bring abrupt transitions in human abilities, this abruptness might be of little importance to human life. History, one might argue, shows that a variety of factors smooth the impact of new technologies. Effects have been smoother than causes.

The chief smoother of effects has been the time and cost required to go from a prototype to a manufacturable model, then from a manufacturable model to a working production line, and then from a working production line to a world full of product. Telegraph networks took decades to spread, and remained too expensive for the home (instant home communications awaited low-labor, low-skill telephones). High-temperature, high-current superconductors remain hard to fabricate. Nuclear bombs accumulated in arsenals slowly, and are still beyond reach of most countries. We still lack commercial spaceliners, chiefly for reasons of production cost.

Nanotechnology is in a different class. Once it is well underway, it will be, not a novel product of conventional manufacturing techniques, but a novel way to produce almost anything, including more of itself. Self-replicating machines promise changes of a sort we haven't seen before.

Leaps in Quantity

For the sake of concreteness, imagine the following scenario (one of many worth keeping in mind): Progress in nanotechnology is smooth and gradual. Smoothly and gradually, it leads to primitive assemblers, then more advanced assemblers; assemblers are exploited gradually, and more and more can be done with them. At some point, assemblers can be used to make a variety of fairly conventional materials—good conductors, semiconductors, insulators, structural materials, and so forth, as well as interfaces among these materials. Eventually, these assemblers can be programmed to form large arrays and work together in a tank of fluid (made from cheap industrial chemicals) providing fuel, cooling, and raw materials.

These assembler capabilities can be exploited by means of a "hardware compiler" (apologies to the computer science community for reusing both "assembler" and "compiler" in describing one system). This software would take a computer-aided design (CAD) file defining a set of three-dimensional parts and treat it as a 3-D paint-by-the-numbers layout, generating instructions for a team of assemblers to form an array and "paint" the volume of each part with the appropriate materials. With abundant assemblers (produced by self-replication) and a hardware compiler, a CAD file for any design based on manageable materials could be translated into hardware overnight—and not just one unit, but millions. The compiler would generate the instructions; assemblers using industrial chemicals would do the rest.
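The paint-by-the-numbers idea can be made concrete with a toy sketch. Everything here is a hypothetical illustration, not a real CAD format or assembler instruction set: a part is given as a function mapping grid coordinates to a material, and the "compiler" emits one deposit instruction per voxel.

```python
# Toy sketch of the "hardware compiler" idea: treat a part definition as a
# 3-D paint-by-the-numbers layout and emit per-voxel "deposit" instructions
# for an (imaginary) assembler array. All names are hypothetical.

def compile_part(shape_fn, materials, size):
    """shape_fn(x, y, z) -> material name, or None for empty space."""
    instructions = []
    for x in range(size):
        for y in range(size):
            for z in range(size):
                material = shape_fn(x, y, z)
                if material is not None:
                    # One instruction per voxel: position plus material code.
                    instructions.append((x, y, z, materials[material]))
    return instructions

# Example part: a 4x4x4 block with a conductive core in an insulating shell.
MATERIALS = {"insulator": 0, "conductor": 1}

def core_and_shell(x, y, z):
    in_core = 1 <= x <= 2 and 1 <= y <= 2 and 1 <= z <= 2
    return "conductor" if in_core else "insulator"

program = compile_part(core_and_shell, MATERIALS, 4)
print(len(program))                            # 64 voxels painted in total
print(sum(1 for *_, m in program if m == 1))   # 8 conductor voxels in the core
```

The point of the sketch is only that, given cheap assemblers, translating a design into fabrication instructions is straightforward software work; the hard part lies in the assemblers, not the compiler.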

In this scenario, assemblers and hardware compilers do not arrive abruptly. But once they emerge, future products do arrive abruptly. Instead of a new generation of car, computer, or missile taking years to put into production and spread across the countryside, such products could henceforth move from a solid, working prototype to million-unit production in a matter of days. The economic and military consequences would be substantial. This suggests one way in which smooth qualitative advances in nanotechnology could carry us into an era of abrupt changes in the quantity of different products. This transformation of production technology would remove much of the insulation between society and new technologies. Discovery of a room-temperature superconductor on Monday might lead to distribution of a new generation of computer by mail on Friday; design of a water-launched passenger spacecraft on Wednesday might make possible unlimited passenger service to the Moon that weekend. In short, although other delaying factors might intervene to slow the introduction of new technologies, the brute barriers of cost and production would have fallen. Abrupt changes in technical ability would be more directly translated into abrupt consequences for society.

Leaps in Quality

Nanotechnology may bring abrupt changes in a qualitative sense as well. Since we already understand much about the behavior of molecules, materials, and objects, a wide range of systems could be designed and debugged before they can be built—we may do substantial design-ahead, anticipating assemblers. With a body of pent-up know-how, waiting only for tools to translate designs into reality, the arrival of assemblers could bring abrupt changes in what we can build. A wide range of products might appear in a matter of months.

In a final scenario, we may someday develop software systems intelligent enough to do general engineering work, and to improve themselves. In our present state of ignorance about such possibilities, we cannot exclude the possibility of a rapid cycle of self-improvement—and it seems possible, on physical grounds, to build devices that think at least a million times faster than human brains. Such systems could lead to abrupt changes in more ways than we can imagine.

The Smooth-Exponential Fallacy

In considering the advance of technology, people sometimes seem to engage in a form of backwards reasoning that may be termed the Smooth-Exponential Fallacy. This line of thought assumes that advances will progress smoothly, in an exponential fashion (strictly speaking this would imply infinite advance—a major difficulty).

Faced with a series of technologies—say, present technology, mature protein engineering technology, and assembler-based nanotechnology (represented as A, B, and C in Figure 2)—this fallacy leads to reasoning like that illustrated in Figure 3. Since C is so much more powerful than B, the Smooth-Exponential Fallacy assigns it a date decades after B. Our thinker then wanders off in search of steps between B and C that could consume these decades.

[Figure 2: A bar is short, B is slightly longer, C is much longer]

[Figure 3: a smooth curve with B and C far apart]

But what if B leads to C fairly directly, because, given B, C just isn't very difficult? Then our thinker, while embracing a comfortable conclusion, will be dangerously wrong; reality will look more like Figure 4.

[Figure 4: an abrupt change with B and C very close]

There were no systems for sending letters that worked at 10, then 100, then 1000, then 10,000, then 100,000, then 1,000,000, then 10,000,000 meters per second. Technology jumped from the several meters per second of a horse to near the 300,000,000 meters per second of light in one bound. Likewise, there were no semi-nuclear bombs. In the case of assemblers and nanotechnology, intermediate stages are at least possible, but there is no guarantee that people will spend much time on them.


We cannot predict the course of advance with any confidence, today. Assembler technology may emerge by any of several paths, in any of several nations, in the service of any of several goals. It may emerge after substantial design-ahead, or not. It may be regulated and limited, or subsidized and pushed forward at maximum speed.

Changes might be gradual, not just at first, but through each of the many transitions in technological capability that lie ahead of us. But then again, they might not. Since the abrupt scenarios are more challenging, and since they seem possible, they deserve our serious attention.


The Problem of Nonsense in Nanotechnology

bogosity (bo gos'i ty) n. 1. A false idea or concept; misconception. 2. Inaccuracy; opposite of veracity. [colloquial usage in artificial intelligence community; from bogus.]

flake, n. -ky, -kiness. One who habitually generates, spreads, or believes flagrant bogosities.

Nanotechnology—a field embracing mechanical and electronic systems built to atomic specifications—seems certain to suffer from an impressive infestation of nonsense. There is nothing novel about a technological field suffering from nonsense, but a variety of factors suggest that nanotechnology will be hit hard.

The health of a field depends on the quality of judgments made within it, both of technical concepts and of individual competence. If concepts are sound and credibility follows competence, the field will be healthy; if bogus concepts prosper and credibility and competence come unhitched, the field will suffer. Maintaining the health of a field requires concern with the quality of these judgments.

Trends in academic interest and media coverage suggest that nanotechnology will receive growing attention. This field overlaps with several others, including much of molecular electronics and advanced biotechnology. Flakiness in this broad field will tend to reduce support and to reduce the number and quality of researchers. Similar (but lesser) effects seem likely to spill over into all fields that appear similar in the eyes of reporters, managers, and politicians.

If bogosities thrive, they will also tend to obscure facts, hampering foresight—and foresight in this field may be of extraordinary importance.

Our Problem: bogosity equals...

Experience already suggests the problems we will face in the quality of the technical literature, of media coverage, and of word-of-mouth. In estimating the future magnitude of this problem, a simple model may be of use: In this model, the bogosity in a field equals the bogosity imported from related areas, plus the bogosity generated internally, minus the bogosity expelled or otherwise disposed of.
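The balance model above can be run as a toy simulation. The numbers are arbitrary illustrative units, not measurements; the only point is that the expulsion term dominates the long-run outcome.

```python
# Toy reading of the balance model: bogosity next period equals current
# bogosity plus what is imported and generated, minus what is expelled.
# All quantities are in arbitrary illustrative units.

def bogosity_step(current, imported, generated, expelled):
    return max(0.0, current + imported + generated - expelled)

# A field with weak quality control (little expulsion) accumulates bogosity...
b = 0.0
for _ in range(10):
    b = bogosity_step(b, imported=2.0, generated=1.0, expelled=0.5)
print(b)   # 25.0

# ...while strong quality control holds it at a low steady state.
b2 = 0.0
for _ in range(10):
    b2 = bogosity_step(b2, imported=2.0, generated=1.0, expelled=3.0)
print(b2)  # 0.0
```

The sections that follow consider each term in turn: what will be imported, what will be generated, and whether it will be expelled.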

...bogosity imported...

Nanotechnology is related to several other areas. For example, the scale of nanotechnology makes quantum effects important—sometimes. But quantum mechanics is a peculiar and often misunderstood subject; popularizations of it shade off into brands of mysticism distant from anything a physicist would recognize. The quantum domain thus holds ample bogosities waiting to be imported. Further, misunderstandings of quantum uncertainty can be used to make molecular machines seem either mysterious or unworkable.

Nanomachines may be developed through protein engineering, and some nanomachines will resemble biological mechanisms. Thus, nanotechnology borders on biology, a field rich in emotional issues and misconceptions, some shading off into mystical views far from anything a biologist would recognize. Genetic engineering (an enabling technology for nanotechnology) has been the center of a remarkably confused debate. Misconceptions about evolution have already led a New York Times writer (in a review of Engines of Creation, 10 August 1986) to suggest that developing molecular circuits and the like may take billions of years—on grounds implicitly suggesting that human designers will be no more intelligent than cosmic rays.

Some applications of nanotechnology border on brain science and artificial intelligence—and quite aside from these applications, many people think of brains when they hear of molecular computation, and some people (for some reason) think that molecular computers will lead automatically to machine intelligence. Nanotechnology seems ripe for invasion by ideas linked to bogus "explanations" of consciousness, rooted in bizarre physical phenomena rather than in complex information processing.

Finally, nanotechnology has many dramatic uses that border on science fiction: the ability to build things atom by atom leads naturally to strong materials, to self-replicating machines, and to a wide variety of systems with impressive performance, including spacecraft. The vast literature of science fiction holds a wealth of appealing, plausible ideas that are often inconsistent with physics and sense. It, too, will provide ready-made bogosities to import.

...bogosity generated...

Nanotechnology will offer fertile ground for the generation of new bogosities. It includes ideas that sound wild, and these will suggest ideas that genuinely are wild. The wild-sounding ideas will attract flaky thinkers, drawn by whatever seems dramatic or unconventional.

Further, imported bogosities will interbreed, yielding novel hybrids. Inspirations and nonsense imported from quantum mechanics, biology, brain science, and science fiction may lead to suggestions for creating quantum biomolecular consciousness for space robots, or bioevolutionary nanomachines for giant brains. We can expect to hear of a host of vague devices and implausible concepts.

In the policy domain, misunderstandings of opportunities and dangers will be translated into misconceived policy prescriptions. Researchers can expect to face both irresponsible advocacy and irresponsible opposition, both eroding support for the field.

...minus bogosity expelled.

All this would be little problem if normal mechanisms would maintain the quality of ideas. But will they? Consider some of the problems:

People distinguish fact from fiction best when the subjects are visible and familiar—but this domain deals with unfamiliar, invisible entities. Few know enough quantum mechanics, chemistry, or molecular biology to reject bogosities in these fields. Even those with knowledge in one field may fall victim to nonsense in another.

People think more clearly when they have no emotional stake in the subject—but nanotechnology raises issues of life-and-death consequence, issues that will likely become clouded by emotion.

People reject bogosities more rapidly when these can be subjected to practical tests—but in nanotechnology, many ideas can only be tested with tools that won't be developed for years.

Refereed journals operating in an established field can help communities maintain the quality of information—but this field is new and interdisciplinary; it lacks both a refereed journal and an established critical community.

In short, nanotechnology is a fertile field for nonsense, and is presently short of effective quality-control mechanisms.

What Can be Done?

What can we do to reduce damage caused by nonsense?

When asked to judge the eventual feasibility of a technological proposal that lies well beyond the state of the art, one may be forced to say "I don't know." This does little good, but does no harm.

But to declare that "No one can know" would often be false—this position may deny the distinction between what is unachievable using present tools for design and fabrication and what is impossible under known physical law. Likewise, to declare that all wild-sounding ideas are false would itself be false, if history is any guide. These blanket declarations of ignorance or rejection would do actual harm: By being false, they would add to the bogosity problem. By failing to distinguish among ideas, they would blur the very distinctions that need to be made.

These distinctions often can be made, even in an interdisciplinary context. In judging people and bodies of work, one can use stylistic consistency as a rule of thumb, and start by checking the statements in one's field. The mere presence of correct material means little: it proves only that the author can read and paraphrase standard works. In contrast, a pattern of clear-cut, major errors is important evidence: it shows a sloppy thinking style which may well flow through the author's work in many fields, from physics, to biology, to computation, to policy. A body of surprising but sound results may mean something, but in a new field lacking standard journals, it could merely represent plagiarism. More generally, one can watch for signs of intellectual care, such as the qualification of conclusions, the noting of open questions, the clear demarcation of speculation, and the presence of prior review. In judging wild-sounding theoretical work, standards should be strict, not loose: to develop a discipline, we need discipline.

Over time, these problems will lessen. Community judgment will play a growing role as the community itself grows and matures. Eventually, the field of nanotechnology will be like any other, full of controversy and disputes, but built on a broad base of shared judgments.


Nobel Paths to Nanotechnology

Originally published in 1987

What path will be followed to the first assemblers?

Several paths lead to nanotechnology, and work contributing to one or more of those paths has won several recent Nobel prizes. Even without the motive of building assemblers, practical and academic motives have moved technology in directions that bring assemblers closer.

Chemistry, 1987

The 1987 Nobel prize for chemistry went to Charles J. Pedersen, Donald J. Cram, and Jean-Marie Lehn for developing relatively simple molecules that perform functions like those of natural proteins. Pedersen synthesized what are known as "crown ethers," a family of molecules that selectively bind specific metal ions in solution, holding them in properly-sized internal hollows. Cram and Lehn have extended this work, using chemical techniques to synthesize a wide range of molecules that specifically bind other molecules. This sort of selective binding is a common protein function.

The molecular machinery of cells self-assembles through the selective binding of one protein to another. Other molecules that bind selectively to one another might likewise be used as a basis for molecular machinery, providing an alternative to proteins for building first-generation assemblers. The ongoing work of Cram, Lehn, and their coworkers may be of great importance to the development of nanotechnology.

If protein design remains too difficult, building initial molecular machines from non-protein molecules may prove an easier path. Myron L. Bender and Ronald Breslow have already made non-protein molecules that function as enzymes.

Physics, 1986

The 1986 Nobel prize for physics went to Gerd Binnig and Heinrich Rohrer for development of the scanning tunneling microscope (STM). This device, reported in 1982, uses vibration isolation, piezoelectric positioning elements, and electronic feedback to position a sharp needle near a conducting surface with atomic precision.

Assemblers, of course, will work by positioning reactive molecules to atomic precision to direct chemical reactions. Several persons familiar with Eric Drexler's work on assemblers (including Drexler, Conrad Schneiker, Steve Witham, and no doubt others) independently observed that, as Engines of Creation notes, mechanisms of the sort used in scanning tunneling microscopy "may be able to replace molecular machinery in positioning molecular tools," perhaps helping to build a first-generation assembler.

Suitable molecular tools remain to be developed. In a series of experiments, R. S. Becker, J. A. Golovchenko, and B. S. Swartzentruber have produced modifications on a germanium surface, measured as 0.8 nanometer wide. These features are thought to represent single atoms of germanium, electrically evaporated from a bare STM tip, with the large size of the features resulting from problems with STM resolution. At last report, they were unable to call their shot (that is, to put the atom in a pre-selected location), and the process did not work for the related element, silicon. The evaporation process requires that the STM tip be retracted from the surface.

Scanning tunneling microscopy also promises to be of use in characterizing molecules, since it can give atomically-detailed pictures of various surfaces. This could speed molecular engineering, helping designers to "see" what they are doing with greater ease. Little has been demonstrated as yet, however. It has been used neither to sequence DNA, nor to characterize unknown chemical structures. George Castro of IBM's Almaden Research Center reports that experimenters have thus far had difficulty detecting molecules on surfaces, to say nothing of determining their structures. Nonetheless, STM and related technologies for microscopy and micro-positioning are well worth watching as possible aids to the development of nanotechnology.

Chemistry, 1984

The 1984 Nobel prize for chemistry went to Bruce Merrifield for developing the technique used for synthesizing the most complex, specific chemical structures now made. This technique, known as solid phase synthesis (or simply the Merrifield method), uses a cyclic set of reactions to extend a polymer chain anchored to a solid substrate. Each cycle adds a specific kind of monomer, building polymers with a specific sequence.

The Merrifield method is at the heart of the machines now used to manufacture specific proteins and gene fragments by chemical methods. It is thus central to protein and genetic engineering (either one of which could, in principle, proceed without the other). The Merrifield method could be used to make other polymers, perhaps including non-protein molecules with protein-like functions, such as specific binding and self-assembly. By providing multiple paths to complex molecular systems, the Merrifield method provides multiple paths to nanotechnology.

What path will be followed to the first assemblers, and hence to nanotechnology? It is hard to guess, today. Protein engineering will clearly suffice, because proteins already serve as the components of complex molecular machines. Micropositioning technologies may help, though development of suitable molecular tools seems likely to prove the hard part of the task. Molecular systems like those explored by Cram and Lehn, together with synthetic techniques based on the Merrifield method, provide a wealth of alternatives having many of the advantages of protein engineering, but fewer constraints. With that lack of constraints, however, comes a lack of knowledge, a lack of examples from nature. A reasonable guess is that several paths will be followed, and will contribute in a synergistic fashion. First-generation nanotechnology need not be based on any single class of molecule or device.

In considering this confusing wealth of possibilities, two points are important to keep in mind. The first is that multiple approaches multiply possibilities for success, bringing it closer: assemblers will arrive by whichever is, in practice, the fastest (a simple tautology!), hence difficulties with any single approach need not mean overall delays. The second is that how the first assemblers are built will make little long-term difference: crude assemblers will be used to build better assemblers, and the nature of nanotechnology will soon become independent of the nature of the initial tools. In short, this kind of uncertainty about the path ahead—stemming from a wealth of promising possibilities—gives confidence in the emergence of assemblers, without obscuring the nature of the subsequent nanotechnology.

Editor's note: Among more recent Nobel prize winners, perhaps the one most conspicuously associated with nanotechnology is Prof. Richard E. Smalley, Director of the Center for Nanoscale Science and Technology at Rice University, who shared the 1996 Nobel prize in chemistry with his collaborators Robert F. Curl and Harold W. Kroto for their 1985 discovery of fullerenes, a hitherto unknown crystalline form of carbon. See also the lead story in Update 27.


Value, Merit, and the Miller Curve

From time to time, an important idea begins to spread in society, an idea like environmentalism or nanotechnology. When these ideas are young, first impressions form and society's pattern of response begins to gel. It is then that the quality and credibility of ideas is most important, and it is then that flakes (as defined in The Problem of Nonsense in Nanotechnology) are most attracted.

Often, participation by flakes is accepted early on, when recruiting is most important, yet this is when they are most damaging. This may be, in part, because people tend to confuse merit with value. Roughly speaking, merit in a field measures the respect someone deserves for their effort and achievement; value in a field measures the net worth of what someone has contributed. To illustrate the difference, consider a new field like nanotechnology and an imaginary set of people, all equally interested and motivated, but of differing degrees of flakiness.

Where ideas are concerned, there is surely merit in recognizing a new, important field so early. Curve #1 in Figure 1 shows, in a rough way, how merit varies with flakiness. An individual miraculously free of flakiness—a consistently sound thinker—would be at the upper left end of curve #1. Persons of greater flakiness have less merit, because their recognition and understanding of the field is flawed. They fall at points to the right. Complete flakes earn little merit even for their early interest in the field. They are equally excited by flying saucer sightings and supermarket tabloid reports of cancer cures.

Where introducing ideas to society is concerned, there is surely value in spreading word of a new, important field. Our imaginary sound thinker is of solid, positive value. A complete flake is of negative value, but makes little difference—the flake babbles nonsense, but no one listens.

[Figure 1: curve #1 falls off sharply as flakiness increases but remains positive; curve #2 is most negative in the middle of the flakiness scale]

The semi-flake, being more perceptive and knowledgeable on the subject than, say, 99.9% of the population, may be considered to have real and substantial merit. But that same semi-flake may be an active source of misconceptions for the rest of society, interfering with understanding by spreading plausible-sounding nonsense. By being listened to, the semi-flake has a real value—but by spreading disinformation, the semi-flake makes that value negative.

Thus the value curve—the second curve of Figure 1—swings negative where the merit curve stays positive. It is known as the Miller Curve, drawn by Mark S. Miller during a discussion of the spread of ideas in society. The gap between value and merit over the pit of the Miller Curve is a perennial source of discomfort in organizational activities. Persons in this region are ahead of most of society, yet they manage to pull understanding backward. They seem so close to being helpful, yet their involvement would do real harm.
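A toy parameterization can make the shape of the two curves concrete. The functional forms below are invented purely for illustration (the article gives no formulas): merit falls with flakiness but stays positive, while value is the audience reached times the correctness of what is spread, which goes negative for the semi-flake.

```python
# Invented toy curves for Figure 1; flakiness runs from 0.0 to 1.0.

def merit(flakiness):
    # Falls with flakiness, but even a complete flake earns a little
    # merit for early interest in the field.
    return 1.0 - 0.9 * flakiness

def value(flakiness):
    audience = 1.0 - flakiness       # complete flakes aren't listened to
    correctness = 0.5 - flakiness    # negative past the semi-flake point
    return audience * correctness

print(value(0.0))    # 0.5   sound thinker: solidly positive
print(value(0.75))   # semi-flake: negative (heard, but spreading nonsense)
print(value(1.0))    # 0.0   complete flake: no audience, so little effect
```

Whatever the exact shapes, the qualitative claim is just that the value curve dips below zero in the middle of the flakiness scale while the merit curve never does.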


Hacking Molecules

Originally published in 1987

Nanotechnology may seem remote. Molecules are invisibly small, and they differ from the familiar objects of daily life. Manipulating them with assemblers is essential to nanotechnology, but assemblers will take years to develop.

Computers, though, can bring nanotechnology closer, letting us design molecular systems using computer models, years before we have assemblers able to build them in real life. This design-ahead process seems sure to occur, but when will it begin? Roger Gregory of the Xanadu hypertext project argues that the answer to this is simple: Almost immediately.

If design-ahead were to require expensive facilities and major funding, it would need to wait for broad acceptance of the importance of nanotechnology, or even for a sense of its imminence. This might take years. But Gregory observes that the early stages of design-ahead need neither funding nor new facilities: personal computers and motivated hackers are enough. ("Hackers" is used here, not in the media's sense of computerish juvenile delinquents, but in the original sense of inventive technologists making computers jump through hoops.) The growth of amateur molecule-hacking may have major consequences for the emergence of nanotechnology.

David Nelson, chief technical officer at Apollo Computer, has plotted trends in computer price and performance. They follow a classic smooth exponential, with performance at a given price growing ten-fold every seven years or so. At this rate, good personal computers today have roughly the power of a seven-year-old minicomputer or a fourteen-year-old mainframe. There is every reason to expect this trend to continue for years to come. (Nanotechnology will eventually put many billions of today's mainframes into an air-cooled desktop package, but that is another story.)
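The quoted trend compounds quickly, and a one-line function reproduces the article's own comparisons (the figures below are implications of the stated trend, not independent measurements):

```python
# Performance-per-price multiplier after a given number of years, assuming
# the stated trend: a ten-fold gain every seven years.

def performance_multiplier(years, period_years=7.0, factor=10.0):
    return factor ** (years / period_years)

print(performance_multiplier(7))          # 10.0  -> a seven-year-old minicomputer
print(performance_multiplier(14))         # 100.0 -> a fourteen-year-old mainframe
print(round(performance_multiplier(21)))  # 1000  -> three periods out
```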

Molecular modeling software—able to describe molecules and the forces that shape them—has advanced over the years while migrating into less and less expensive machines. After long residence on machines such as Digital Equipment Corporation's VAX minicomputers, it has now arrived on personal computers, such as the Macintosh. Prices are still high and offerings sparse, but the barrier to amateur molecular design work is being breached.

Can these computer models give accurate results? This depends on one's standard of accuracy, which in turn depends on one's goals.

In engineering, one need only have enough accuracy to distinguish between designs that do and don't work. In nanoengineering, as in ordinary engineering, designers will generally (though not always) aim to maximize such things as stiffness and strength, while minimizing such things as size, mass, and friction. A designer can often compensate for an inexact model by aiming for a large, favorable margin in the uncertain parameters. Software based on modern molecular mechanics models is fairly accurate even by scientific standards; it should be good enough to design a wide range of molecular machines, with substantial confidence in the results.

Molecular modeling software falls into various classes. At the low end are programs that just provide a three-dimensional software sketch pad for patterns of atoms in space. At the high end are systems of programs that do the sort of molecular mechanics mentioned above—that derive molecular shapes and energies from information about the interactions among atoms. (An example of the latter is MicroChem, a program for the Macintosh.)

Present systems are expensive and may need some adaptations to make them more useful for the design of molecular machinery. Once suitable software is available at a reasonable price, however, we can expect to see the emergence of a community of molecule hackers. Interest in nanotechnology is high in the computer community, and electronic mail and bulletin board systems will make it easy for designers to swap ideas, designs, and criticisms. Once the process gets rolling, designs for molecular widgets—such as gears, bearings, shafts, levers, and logic gates—should accumulate at a good pace, spawning a lively informal competition to design the best. As computers and software improve, the complexity of feasible designs will grow.

The spread of amateur nanomachine design will build an understanding of molecular machines and nanotechnology, and will spread the idea of design-ahead by demonstrating it in action. Within the nanotechnology community, it will provide a channel for creative activity with concrete results, ranging from pictures suitable for video animation to studies suitable for journal publication. It will give people a chance to pioneer future technologies today, while gaining the knowledge, skills, and experience needed to enter the field professionally when serious research funding begins to grow. By helping people to visualize nanotechnology, it will aid foresight and preparation.

How fast will home molecule hacking get off the ground? It is hard to say, but the activity seems fun, valuable, and worth promoting. The hardware is here, and the software is within reach.

The Foresight Update plans to review software tools useful for molecular design. If you come across reviews or advertisements, or are yourself familiar with such tools, please send us information.

Editor's note: There has been substantial progress in molecular modeling during the 12 years since this essay was written. For more recent material, see the links included in the version of this essay published in Update 2. Another page on molecular modeling tools is available on the Web site of the Institute for Molecular Manufacturing. Web sites with information on molecular modeling are covered in Web Watch in Update 29.


Hypertext Publishing

Originally published in 1987

Interest in hypertext is exploding, for the time being. Dozens of systems are in use; the University of North Carolina, the ACM, and the IEEE have sponsored a conference; and Apple Computer has massively promoted a hypertext product for the Macintosh, HyperCard. There have been hopes of a hypertext revolution with an impact on the scale of the Gutenberg revolution. It seems to have arrived.

Or has it? Words can mean many things. A "programming language" can be anything from a system of detailed instructions for pushing bits around inside a computer, to a system of general rules for describing logical reasoning. A "hypertext system" can be anything from a hypernotepad to a hyper-library-of-Congress. Present systems are closer to the notepad class. We shouldn't expect them to give library-class performance.

Different hypertext systems have been built to serve different goals, though some aim to serve several. One goal is to improve personal filing systems by helping people connect information in ways that reflect how they think about it. Another is to improve educational publications by helping authors connect information in rich, explorable networks. Many recent hypertext systems are actually hypermedia systems in which authors can link descriptions to pictures, video, and sound.

Filing systems on a single machine can serve a single user or a small group. Teaching documents written on one machine can be copied and distributed to other machines around the world. Both these goals can be served by stand-alone systems on single machines, such as HyperCard on the Macintosh. But both these goals, though valuable, are peripheral to the goal of evolving knowledge more rapidly and dependably, to improve our foresight and preparedness.

An improved medium for evolving knowledge would aid the variation and selection of ideas. To aid variation essentially means to help people express themselves more rapidly, accurately, and easily. To aid selection essentially means to help people criticize and evaluate ideas more rapidly, effectively, and easily. Several characteristics of a hypertext system are important to these goals.

To help critical discussion work effectively, a hypertext system must have full links, followable in both directions, rather than just references followable in a single direction. That is, the system must support full hypertext, not just semihypertext. In a semihypertext system, a reader cannot see what has been linked to a document, and hence cannot see other readers' annotations and criticisms. Many existing hypertext systems lack full links.

To help express criticism, a hypertext system should be fine-grained. In a fine-grained system, anything—not just a document, but a paragraph, sentence, word, or link—can be the target of a link. In a fine-grained hypertext system, if you wanted to disagree with this article, you could express yourself by linking to the objectionable part (perhaps the definition of fine-grained in the previous sentence). In a coarse-grained system, you might have to link to the article as a whole. Many existing hypertext systems are coarse-grained.
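The combination of full and fine-grained links can be sketched as a small data structure: every link is recorded in both directions, and any item, whether a document, a paragraph, or even another link, can be an endpoint. The names and structure below are illustrative, not taken from any actual hypertext system:

```python
from collections import defaultdict

class LinkStore:
    """Full, fine-grained links: followable both ways, any item a target."""

    def __init__(self):
        self.outgoing = defaultdict(set)  # item -> items it links to
        self.incoming = defaultdict(set)  # item -> items linking to it (back-links)

    def link(self, source, target):
        self.outgoing[source].add(target)
        self.incoming[target].add(source)  # the back-link a reader can follow

store = LinkStore()
store.link("critique-1", "article-7/paragraph-3")  # fine-grained: targets a paragraph
store.link("rebuttal-2", "critique-1")             # a link targeting another comment
print(store.incoming["article-7/paragraph-3"])     # readers of the paragraph see its critiques
```

A semihypertext system, by contrast, keeps only the `outgoing` table, so a document's readers never learn what has been said about it.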

To make the system a useful medium of debate, it must be public. This in turn requires suitable software, access policies, and pricing policies (such as fee-for-service, rather than free-to-an-elite). No hypertext system yet functions as a genuine public medium; many cannot do so.

To work, a public hypertext system must support filtering software. If readers automatically see all the links to a document, the equivalent of a presidential speech or an Origin of Species will become incredibly cluttered. Software mechanisms can provide a flexible way to cut through the clutter, enabling readers to be more choosy, seeing only (say) links that other readers (editors, colleagues, etc.) have recommended. There are subtleties to making filtering work well, but promising approaches are known; readers would be free to use whichever filters they think best at the moment, so filters would be free to evolve.
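The recommendation-based filter described above can be sketched in a few lines. This is one possible mechanism under our own assumed data shapes, not a description of any existing system:

```python
def filter_links(links, endorsements, trusted):
    """Keep only links endorsed by at least one source the reader trusts.

    links        : list of link identifiers attached to a document
    endorsements : dict mapping link id -> set of readers who recommended it
    trusted      : set of readers this particular reader trusts right now
    """
    return [l for l in links if endorsements.get(l, set()) & trusted]

links = ["praise-1", "critique-2", "spam-3"]
endorsements = {"praise-1": {"editor-a"}, "critique-2": {"colleague-b"}}

# Each reader chooses a trust set; swapping it swaps the filter.
print(filter_links(links, endorsements, trusted={"colleague-b"}))  # ['critique-2']
```

Because the trust set is chosen per reader and per moment, competing filters can coexist and evolve, as the text suggests.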

No existing hypertext system is full, fine-grained, filtered, and public, yet all of these characteristics (with the possible exception of "fine-grained") seem essential in a system that can make a qualitative difference in the evolution of knowledge. They are needed if we are to have a genuine hypertext publishing system.

It is this sort of system—not "a hypertext system" but a hypertext publishing system—that can make a real difference to society's overall intellectual efficiency, and overall grasp of complex issues. How great a difference? Even a small improvement in something so fundamental to our civilization would save billions of dollars, lengthen millions of lives, and give us a better chance of surviving and prospering through the coming technological revolutions. And there is reason to think the improvement might not be small.

Editor's note: Foresight's Web Enhancement project to improve social software for the evolution of knowledge has produced the CritSuite of critical discussion tools for the web.


Nanogears, Nanobearings

Originally published in 1987

a carbon skeleton for a bearing shaft

Enzymes show that a nanomachine needn't have gears and bearings, but macroengineering shows how useful these parts can be. Conventional gears and bearings are built to tight tolerances—bumps a thousandth of an inch high on a one-inch part would often be too large. Since an atomically smooth surface is bumpy on a tenth-nanometer scale, it might seem that gears and bearings couldn't be reduced below 100 nm or so. A complex nanomachine using gears and bearings would then be huge—entire microns across.

A paper on "Nanomachinery: Atomically Precise Gears and Bearings" (by K. Eric Drexler, in the proceedings of the November, 1987, IEEE Micro Robots and Teleoperators Workshop) examines how to build these devices much smaller. The essential insight is that an atom's surface is a soft, elastic thing, helping to smooth interactions. Conventional gears need precisely machined teeth if their hard surfaces are to mesh smoothly, but nanogears can use round, single-atom teeth, relying on atomic softness to aid smooth meshing.

This principle can also be applied to bearings. In one approach, two surfaces can slide on roller bearings. The bearings can roll smoothly, despite atomic bumpiness, by having a pattern of surface bumps that meshes smoothly, gear-fashion, with a similar pattern of bumps on the bearing race.

Mathematical analysis shows that two surfaces (of a shaft in a sleeve, for example) can slide smoothly over one another if their bumps are spaced to systematically avoid meshing. In effect, the bumps cancel out—while one is pushing back, another is pushing forward. With a ring of six atoms sliding within a ring of 22, for example, the friction force can be less than one billionth of the force holding two atoms together in a molecule.
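The cancellation claim can be checked numerically under a simplifying assumption of our own: model the outer ring as a sinusoidal potential with 22-fold symmetry, and sum the tangential forces on 6 evenly spaced inner atoms. The 6-in-22 geometry comes from the text; the sinusoidal force law is an assumed idealization.

```python
import math

def net_tangential_force(n_inner=6, n_outer=22, phi=0.123):
    """Sum the force ~sin(n_outer * angle) felt by each of n_inner
    evenly spaced atoms sliding in an n_outer-fold periodic potential."""
    return sum(math.sin(n_outer * (phi + 2 * math.pi * i / n_inner))
               for i in range(n_inner))

# Because 22 and 6 share no common period over the ring, the per-atom
# forces cancel at every rotation angle phi: while one bump pushes back,
# another pushes forward.
print(abs(net_tangential_force()) < 1e-9)  # True
```

In this idealized model the cancellation is exact; in a real bearing, higher harmonics of the interatomic potential leave the tiny residual friction the text describes.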

Yet another class of bearing avoids atomic bumpiness by using a single atom or bond as a bearing. A fraction of a nanometer across, these bearings are as small as the moving parts in a nanomachine can possibly be.

Editor's note: More in-depth treatments of gears and bearings are provided in Dr. Drexler's 1992 nanotechnology textbook Nanosystems: Molecular Machinery, Manufacturing, and Computation. See also the paper by Dr. Merkle, "A Proof About Molecular Bearings".


Britain Spearheads "Nanotechnology"

or What is Nanotechnology?

one cubic nanometer of diamond, containing 176 atoms. A cube 100 nm on a side would contain 176 million atoms

Under the headline "Funds for Nanotechnology," Britain's IEE News (October 1987) reported that "Funds are now available through the National Physical Laboratory...for the support of projects which will lead to the commercial exploitation of nanotechnology techniques. Nanotechnology covers the manufacture and measurement of devices and products where dimensions or tolerances are in the range 0.1 to 100 nm..." This sounds exciting until one realizes that this definition of "nanotechnology" covers everything from memory chips to electron microscopy.

The use of the term "nanotechnology" for everything smaller than 100 nanometers (0.1 micron) is apt to lead to confusion. As used in recent years in the US (and in this publication), "nanotechnology" implies a general ability to build structures to complex, atomic specifications; it refers to the technique used rather than to the size of the product.

We can see a parallel in the term "microtechnology": the broad ability to shape bulk materials into surface patterns having complex specifications on a scale of microns or less. This term does not apply to all processes having micron-scale products. Consider the case of a forest fire simulation experiment for which micron-sized particles of smoke are needed—the fire we set to produce these particles is not an example of microtechnology. Like nanotechnology, the term refers to a family of techniques and abilities, not size and scale. In the case of nanotechnology, this means structuring matter atom-by-atom with precise control. Some products of nanotechnology, such as fracture-tough diamond composite spacecraft, will not be small.

Nanotechnology is qualitatively different from microtechnology, being based on molecular operations rather than the miniaturization of bulk processes. It will enable a cube 0.1 micron on a side to hold, not just a single device, but the equivalent of an entire microprocessor. It will lead to far more than just denser circuits, more precise machines, and so forth—it will lead to self-replicating machines, cell repair systems, and a broad, deep revolution in technology, economics, and medicine.

The advance of microtechnology into the submicron regime no more calls for a change of prefix than did the similar advance of microscopy—we do not speak of "electron nanoscopes." If "nanotechnology" becomes a trendy term for submicron technology, we are in for some confusing times and a lot of wasted words in describing assembler-based technology. The IEE News article holds no hint of real nanotechnology. Readers are encouraged to state their opinions on this matter to editors of publications which misuse the term.


Interview: Eric Drexler (Part II)

Originally published in 1987

Continued from Background 0

FI: Who's doing the work in this field today?

Drexler: Well, the question of who's doing work in the field very much depends on what one means by "the field." If you look at the full range of fields that are contributing to the emergence of nanotechnology—protein design, synthetic chemistry, scanning tunneling microscope technology, molecular modeling on computers—there are dozens or hundreds of research groups, in industry and academia, in the US, Europe, Japan and the Soviet Union, and so the numbers are large and what's going on is very diverse. At the other end of the spectrum, looking at nanotechnology itself—at what can be done with real assemblers—that's a development that's far enough off in the future that it doesn't make sense for industry, for example, to be working on it. Therefore people are just beginning to think about it, and at this point hardly anyone is doing work on assemblers and what can be built with them.

Editor's note: Zyvex was founded in 1997 as the first molecular nanotechnology development company—creating technology for atomically precise manufacturing.

FI: What technical work are you doing now in the field?

Drexler: I'm currently working on a series of papers to fill in more detail on the design of things such as molecular machines, molecular mechanical computers, assemblers, and ultimately cell repair machines. The first paper, which discusses the details of molecular structure, motion, and thermal noise for the logic elements of a mechanical nanocomputer, will appear in the Proceedings of the Third International Symposium on Molecular Electronic Devices, scheduled for publication in 1989.

Editor's note: More in-depth treatments of these topics can be found in Dr. Drexler's 1992 nanotechnology textbook Nanosystems: Molecular Machinery, Manufacturing, and Computation. Dr. Drexler's more recent work at the Institute for Molecular Manufacturing includes designs for molecular machine parts and a paper on "Building Molecular Machine Systems".

FI: When will we develop genuine nanotechnology?

Drexler: This is very hard to say, again because we don't fully understand the ground to cross between here and there, or just how soon the "exploration parties" will set out, how lucky they will be, and so forth. A friend of mine, Roger Gregory of the Xanadu hypertext project, likes to say that his optimistic estimate is thirty years, and his pessimistic estimate is ten. Though I should hasten to add that Roger not only worries about the dangers of this technology but also expects us to benefit greatly from its positive uses. But in terms of that range of dates, I don't know of a good argument against it.

FI: What are your greatest hopes and fears for the future?

Drexler: In the long run my greatest hope is that we will handle the coming revolutions in nanotechnology and the comparable or even greater ones in artificial intelligence so that we can benefit from their enormous potential. The fear of course is that we'll wipe ourselves out or paint ourselves into some very ugly corner.

In the shorter term, though, my greatest fear is that as this technology moves forward the debate will polarize between groups that blindly support the technology—seeing its benefits for everything from economic well-being, to medicine, to improving the lot of people in the Third World—and those who blindly oppose it, seeing its potential for abuse in the wrong hands and perhaps imagining dangers that aren't even there. I'm afraid that what we'll see is another round of fruitless public mudslinging, with opposed sides not really addressing each other's cases and simply trying to stir up as much emotion as they can for their side of the argument. You can imagine a "debate" polarized between the followers of someone like Lyndon LaRouche, pushing technology with inflamed rhetoric, and the followers of someone like Jeremy Rifkin trying to block it completely. If that happens, it's unlikely that we'll be well-prepared for these developments when they emerge, and it's even possible that this polarization will paralyze the democracies as these technologies arrive.

My hope is that we'll see a diverse, quarrelsome, but basically united center—one that embraces people who fear these technologies and urge caution, but understand that they're inevitable and can have great benefits, and people who look toward these technologies with great hope and optimism, but understand that there are some dangers that need to be watched.

FI: What do you see as important goals for the next few years?

Drexler: In the next few years, we need to reach a wide range of opinion leaders, particularly in the scientific and technical disciplines and some of the longer-range thinkers in political and economic policy. We need to have these opinion leaders exposed to the ideas of nanotechnology, assemblers, and the rest in such a way that they come away seeing them as credible concerns and understanding their basic implications. And we need to develop a family of organizations that bring people together who are concerned with these matters, so that they can exchange ideas and work together effectively to influence the course of events—to influence the way this technology emerges and how it's used. Our goal is to help that process along and to provide a way for people to get together and do these things.

FI: What can readers of this newsletter do to help this goal?

Drexler: Readers can think about how they might be able to help this effort, and can let us know what role they might be able to play. And they can inform their friends and colleagues and try to get them involved as well.



Books of Note

Compiled in 1987

The Media Lab: Inventing the Future at MIT, Stewart Brand, Viking, 1987.
Vividly describes the Lab's goals and projects in which computers, broadcasting and publishing are merging to give us personalized technologies. A fun read, accessible to laymen.

The Evolution of Cooperation, Robert Axelrod, Basic Books, 1984.
Describes an elegant computer game that showed how cooperation can evolve among self-interested, competing entities. Shows what conditions promote cooperation and the importance of being "nice, retaliatory, and forgiving."

Molecular Electronic Devices II, Forrest Carter (ed.), Marcel Dekker, 1987.
Proceedings of the Second International Workshop on Molecular Electronic Devices held in 1983 at the Naval Research Lab. Oriented toward chemistry, it also includes one paper on nanotechnology. If the price daunts you, have your favorite technical library buy it.

Technologies of Freedom, Ithiel de Sola Pool, Belknap/Harvard, 1983.
A classic work on freedom of speech and of the press in electronic media, combining history, law, and technology. Of interest to all who look forward to hypertext publishing as a new free press. Accessible to laymen, also available in paperback.

Molecular Mechanics, ACS Monograph 177, U. Burkert and N. Allinger, Amer. Chem. Soc., 1982.
Standard reference on molecular mechanics, useful for molecule hackers.

Induction: Processes of Inference, Learning and Discovery, John Holland et al., MIT Press, 1986.
Inductive reasoning and learning in both organisms and machines are given a new theoretical interpretation by two psychologists, a computer scientist, and a philosopher. Yale's Sternberg calls it "the most important book on induction, and probably on reasoning in general, that has ever been written."



