A publication of the Foresight Institute
A Seattle-based firm has published a set of six HyperCard™ stacks related to nanotechnology, entitled PATH, for Prototype Advanced Technology Hypertext. The group writes in its introduction that "The PATH collection of HyperCard stacks is the first effort by The Nanotechnology Group Inc. to provide basic information on advanced technology in a form that we hope will be highly accessible to people of diverse backgrounds."
HyperCard is a form of hypertext software which runs on the Apple Macintosh computer. For the past few years, Macintoshes have been sold with a version of HyperCard already installed.
The six stacks are of different types and different degrees of completion: (1) an interactive periodic table entitled ChemElements, (2) an interactive table of nuclides entitled Isotopes, (3) a HyperCard version of the Proceedings of the Seattle Nanotechnology Study Group's 1989 regional conference on nanotechnology, (4) an organizational stack that points to the other stacks in the collection, and (5 and 6) "Nanotechnology" and "Glossary", both in preliminary form. Not all of the stacks have many hypertext links in this first version.
PATH can be obtained on two 800K disks from NTG for $30. It is also distributed as shareware: you can download a copy from a bulletin board or copy it from some other user's disks and then register for $20 with The Nanotechnology Group, Inc., PO Box 40176, Bellevue, WA 98015.
[Editor's note: PATH was no longer available after 1995, but the Proceedings of the Seattle Nanotechnology Study Group's 1989 regional conference are available on the Web.]
First there were buckytubes: cylinders a few nanometers in diameter consisting entirely of carbon atoms interconnected like chicken wire. Now chemists are developing methods for making molecular-scale tubules from other materials.
Let me sound a deflationary note here. None of the nanotubes discussed above represent a great leap into nanotechnology since their production is based on fortuitous encounters in solution rather than controlled molecular assembly. Furthermore, in structure they are basically repetitive patterns of atoms and thus not a lot more sophisticated than crystals. Finally, they are not quite atomically accurate, since they vary widely in length (and in some cases in circumference, as well).
Nevertheless, it's easy to see hints of nanotechnology here. In the cyclodextrin work, for example, the most interesting feature is the templates (i.e., the PEG "threads"), not the tubules. With two relatively simple modifications, the threads could be transformed into crude molecular graspers. First, remove them from the liquid phase by attaching them at one end to a stationary support (such as a plastic matrix). Then make the stop an integral part of the device by equipping the free end of each thread with a structure that flexes or straightens in response to a chemical signal (or to pulses of light). Since the cyclodextrin molecules and the linking agent must still find their way to the templates by diffusion through the liquid phase, this modified system would still not qualify as nanotechnology. Even so, the templates would be recognizable antecedents of components we might expect to see in molecular assemblers.
The work with amino acid rings, on the other hand, can be considered a preliminary effort to develop modular self-assembling nanodevices. Obvious next steps include: creating a variety of modules (not necessarily ring-shaped) that assemble in a predetermined order; modules that assemble in two and three dimensions; modules that respond to externally sent signals; and modules that carry additional functionality.
Progress in imaging at the molecular level continues at a breathtaking pace. The consensus used to be that the limits of optical microscopy were reached long ago, once microscopes were able to resolve details as small as a wavelength of light. This view was overturned in the 1980s with the development of near-field optics, which circumvents the diffraction limit of light microscopy by restricting the portion of a sample which is viewed at one time. The trick is to use a light source of subwavelength size, positioned very close to the sample. [See Science 262:1382-1384, 26Nov93 for a fascinating discussion of near-field optics, present and future.] So rapid is innovation in this field that single molecules are now being observed using visible light.
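To see why this is remarkable, consider the classical diffraction limit that far-field microscopes run up against. The figures below are standard textbook numbers (the Abbe criterion), not taken from the articles cited here:

```python
# Illustrative: the Abbe diffraction limit that conventional (far-field)
# optical microscopy runs into, and which near-field optics sidesteps.
# d = wavelength / (2 * NA), where NA is the objective's numerical aperture.

def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Smallest resolvable feature size for a far-field microscope, in nm."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Green light (~550 nm) through a good oil-immersion objective (NA ~ 1.4):
d = abbe_limit_nm(550, 1.4)
print(f"Far-field resolution limit: {d:.0f} nm")  # about 196 nm
```

Individual molecules are orders of magnitude smaller than this ~200 nm limit, which is why a subwavelength light source held close to the sample is such a striking workaround.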
Eric Betzig and Robert J. Chichester at AT&T Bell Labs recently published images of dye molecules lying on a plastic film. Varying the polarization of the illuminating light caused changes in the shapes and intensities of the individual molecular images, an effect attributed to the differing orientations of the dye molecules. Thus, not only can the position of a molecule be observed, but two out of three components of its orientation as well. [Science 262:1422-1425, 26Nov93]
A very different method for imaging molecules makes use of the fact that the tip of a scanning tunneling microscope (STM) emits light when the electron tunneling energy is sufficiently high. The emission spot has been shown to be about 0.4 nm in diameter, only a little bigger than an atom, and far smaller than the light sources used in conventional near-field microscopes. As the STM tip scans a sample it carries the light source with it, and photons released at the tip excite fluorescence in the sample. The secondary photons emitted by the sample can then be detected by a photodetector (which need not be small, nor particularly close to the sample). One of the ingenious aspects of this arrangement is that the light coming from the sample is not imaged; its intensity need only be measured and correlated with the position of the STM tip as determined by the STM's ordinary feedback system. In fact, the STM can compile two images concurrently: a conventional STM topographic map of the sample as supplied by electron tunneling, and a photon image supplied by fluorescence.
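The dual-image bookkeeping just described amounts to recording two values at every point of one raster scan. Here is a toy sketch of that idea (not the actual instrument software); `read_height` and `count_photons` are hypothetical stand-ins for instrument I/O:

```python
# Toy sketch of compiling two images from a single STM raster scan.
# At each tip position we record (a) the tip height maintained by the
# tunneling feedback loop (topography) and (b) the photon count from a
# nearby detector during the dwell time. The detector output is never
# imaged, only correlated with tip position.

def scan(nx, ny, read_height, count_photons):
    """Raster-scan an nx-by-ny grid, returning (topographic, photon) maps."""
    topo = [[0.0] * nx for _ in range(ny)]
    photons = [[0] * nx for _ in range(ny)]
    for j in range(ny):
        for i in range(nx):
            topo[j][i] = read_height(i, j)       # from tunneling feedback
            photons[j][i] = count_photons(i, j)  # detector can be large, distant
    return topo, photons
```

The design point worth noticing is in the comments: because the photon signal is merely counted and tagged with the tip position, no optics capable of resolving the sample are needed on the detection side.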
Using a custom-built STM of this type, and a sample consisting of an array of buckyballs (C60), a group at the University of Lausanne and at IBM in Zurich showed that a well-resolved array of blips in the photon image corresponded to the pattern of C60 molecules as seen in the topographic image. A sensitive optical spectrophotometer will soon be incorporated into the equipment with the expectation that spectral analysis can be performed on individual molecules. [Science 262:1425-1427, 26Nov93]
Near-field microscopy has opened up the intriguing possibility of optically examining molecules inside of living cells. Some of the light sources already in use are small enough to enter and leave single cells without damaging their membranes, and even smaller sources are in development.
Of the various possible approaches to nanotechnology, the one with the best track record is protein chemistry, inasmuch as the molecular machinery of life (on Earth, at least) consists mainly of proteins. By all indications, a protein-based nanotechnology could take care of most of our nanotechnological needs, even if not in the most elegant or efficient manner.
The biological world provides thousands of examples of nanomachines: enzymes, molecular motors, gene regulators, cell scaffolding, to name a few. Many of these biodevices can be put to use as tools and production equipment for developing protein-based nanotechnology; we already use them as probes, manipulators, copiers, modifiers, and factories for making more biodevices. It's almost as if nanotechnology has been handed to us on a silver platter. As always, though, there's a snag. The design of protein nanomachines turns out to be fraught with difficulties. Even if we understood the exact mechanical actions we wanted a new protein to carry out, we still would be unable to specify a sequence of amino acids which, if synthesized as a polypeptide chain, would fold up into a nanomachine and carry out those actions.
The problem of determining the behavior of a polypeptide chain, given knowledge of its sequence, has been a preoccupation for growing numbers of researchers and their computers. Will a given polypeptide just flop around randomly in solution, or will it fold into a stable shape? If it folds, what shape will it take? In conventional protein design, the designer specifies an amino acid sequence and then tests its behavior either experimentally or by computer simulation. The test results then guide the next round of design. Even moderate-sized proteins are impractical to test in this manner because of the amount of lab effort or computer time required.
Recently a group at Princeton University described a far simpler strategy for protein design. Rather than trying to specify a definite sequence of amino acids, the designer would merely specify a sequence of amino acid polarities. (The polarity can be thought of as the degree to which an amino acid interacts electrically with water molecules in the surrounding solution.) Ignoring the geometrical details of the designed protein's amino acids, the designer could concentrate on just one basic feature. The method was further simplified by treating polarity as a binary feature, thereby reducing amino acids to mere ones and zeroes: polar or nonpolar subunits.
The researchers tested their theory by using it to design a protein. They decided upon a four-helix bundle because this is a common and well-studied motif in biological proteins; there are simple experimental tests for helicity and stability, and the results can be compared with native proteins containing four-helix bundles. The chosen design was 74 subunits long, essentially a string of 74 ones and zeroes, each representing a choice from among the 20 standard amino acids. Taking into account the various constraints on the choice for each subunit, 4.7 × 10^41 different amino acid sequences were consistent with the design.
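The counting behind that astronomical figure is simple multiplication. The sketch below is illustrative only: the allowed-residue counts are assumptions for the example, not the Princeton group's actual constraints, so the total it produces will not match the paper's 4.7 × 10^41.

```python
# Illustrative combinatorics; POLAR_CHOICES and NONPOLAR_CHOICES are
# assumed values, not the constraints used in the actual study.
# A binary polarity pattern fixes each position as polar (1) or
# nonpolar (0); any amino acid of the right class may fill it, so the
# number of sequences consistent with one pattern multiplies up fast.

POLAR_CHOICES = 6      # hypothetical count of residues allowed at polar sites
NONPOLAR_CHOICES = 5   # hypothetical count allowed at nonpolar sites

def consistent_sequences(pattern):
    """Count amino acid sequences matching a polar/nonpolar pattern."""
    total = 1
    for bit in pattern:
        total *= POLAR_CHOICES if bit else NONPOLAR_CHOICES
    return total

pattern = [1, 0] * 37  # an arbitrary 74-position pattern, for illustration
print(f"{consistent_sequences(pattern):.2e}")  # about 4.50e+54 with these counts
```

Even with modest per-position choices, a 74-position design specifies a family of sequences far too large to enumerate, which is why the researchers could only sample from it.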
But would real polypeptides fitting this design actually fold into four-helix bundles? To find out, the researchers chose a small random sample from the huge collection of proposed sequences, and synthesized DNA that coded for the corresponding proteins. This DNA was transferred into bacteria by T7 bacteriophages. Out of the resulting bacterial clones, 108 were selected for further study. Analysis of the DNA of these clones revealed that 48 of the 108 sequences had been accurately synthesized and incorporated into the bacteria; the others contained mistakes and were therefore abandoned.
The 48 strains of bacteria were tested for the expression of novel proteins. 29 of them expressed protein that was both soluble and resistant to intracellular degradation, an indication that the proteins had folded into stable globular structures rather than remaining unfolded polypeptide chains. Three of the proteins were subjected to tests for helical structure and stability; they compared favorably with natural proteins containing four-helix bundles. [Science 262:1680-1685, 10Dec93]
It remains to be seen whether this streamlined approach to protein design applies only to the design of helical regions of proteins or is a dramatic breakthrough for protein engineering in general.
Lastly we have this report on an idea by Seth Lloyd of Los Alamos National Laboratory, which seems to offer a drastic shortcut to nanotechnology as well as a way to build quantum computers: computers in which data is represented by the quantum states of individual atoms or groups of atoms.
The proposed computers are made of weakly coupled quantum systems arranged in one-, two-, or three-dimensional arrays. Computations are performed in response to sequences of light pulses imposed upon the entire array simultaneously. Lloyd shows that by an appropriate choice of materials and architecture, and appropriate choices of pulses, each subunit in the array will change to a new quantum state in a manner that depends both on its previous state and upon the pulse sequence. He presents a simple proof that such an array could carry out the full range of logic operations required of any computer.
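A classical caricature may help make the scheme concrete. The sketch below ignores quantum superposition entirely and simply shows how globally applied pulses can still drive local, state-dependent logic in an array of two-state units; the alternating-species layout and the flip condition are assumptions for the example, not details from Lloyd's papers.

```python
# Classical toy model (no quantum effects): a 1D chain of two-state units
# of alternating species A, B, C. A "pulse" addresses every unit of one
# species simultaneously, flipping a unit only when a condition on its
# neighbors' states holds. Thus a global pulse sequence performs local,
# state-dependent logic, echoing (crudely) Lloyd's pulsed-array scheme.

def apply_pulse(cells, species, condition):
    """Flip every unit of the given species whose neighbors satisfy condition."""
    new = cells[:]
    for i in range(len(cells)):
        if i % 3 == species:  # species A=0, B=1, C=2 repeat along the chain
            left = cells[i - 1] if i > 0 else 0
            right = cells[i + 1] if i < len(cells) - 1 else 0
            if condition(left, right):
                new[i] ^= 1  # conditional flip = one logic step
    return new

cells = [1, 0, 0, 1, 0, 0]
# Pulse targeting species B (index 1 mod 3): flip if the left neighbor is 1.
cells = apply_pulse(cells, 1, lambda l, r: l == 1)
print(cells)  # [1, 1, 0, 1, 1, 0]
```

Note that the pulse carries no addressing information about any individual unit; which units respond is determined entirely by their own and their neighbors' states, which is the feature that makes a uniformly illuminated array computationally useful.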
But it gets better. Lloyd points out that for any given pair of quantum states in the set of allowed states of such a device, there are sequences of pulses which, if applied at the proper times, will switch the device from one state to the other. Since the physical conformation of an object is determined by its quantum state, the conclusion is inescapable that an appropriately designed quantum "computer" can be used as a quantum-mechanical micromanipulator. [Science 261:1569-1571, 17Sep93; and 263:695, 4Feb94]
Of course, not just any quantum computer would make a useful micromanipulator. The quantum states of an effective manipulator would need to correspond to useful configurations of some kind of molecular arm. Manipulators would be designed specifically for that purpose rather than for carrying out general computations. Both types of devices would, however, be based on the same general principles and would be controlled by sequences of light pulses.
Dr. Russell Mills is research director at a company in California.
From Foresight Update 18, originally published 15 April 1994.