Agents, Assemblers, and ANTS
Scheduling Assembly with Market and Biological Software Mechanisms

Tihamer T. Toth-Fejel*

*Environmental Research Institute of Michigan, Center for Electronic Commerce,
Mailing address: P.O. Box 134001, Ann Arbor, MI 48113-4001
Street address: 3600 Green Court, Suite 550, Ann Arbor, MI 48105

(734) 623-2544

This is a draft paper for the
Seventh Foresight Conference on Molecular Nanotechnology.
The final version has been submitted
for publication in the special Conference issue of


Nanoscale assemblers will need robust, scalable, flexible, and well-understood mechanisms such as software agents to control them. This paper discusses the relationship between assemblers and agents, and examines ANTS, an agent-based scheduling application under development.


An original vision of very large numbers of nanoscale assemblers being controlled by agoric1 software (in which software competes for tasks and resources) may be better served by a design that includes software agents. Recent advances in agent software have tended to confirm early predictions that market mechanisms can achieve robust decentralized control over distributed, parallel, and heterogeneous systems, but agents are a better-understood mechanism for achieving these goals, while the real-world market still shies away from self-modifying code.

In order to be useful, nanoscale assemblers will need to be controlled by robust, scalable, flexible, and well-understood software. Unfortunately, object-oriented software is not powerful enough to meet those objectives. Because it can survive in very chaotic environments, agoric software may, in a sense, be a bit too powerful, like self-modifying code and genetic algorithms. Software agents provide a stable middle ground that may fit assembler requirements best. This paper will first define assemblers, agents, and one possible numeric relationship between them. Then it will focus on the case of that relationship that views the assembler as a factory, with agents controlling assembler subcomponents. This case is valuable because it exposes some important issues in the process of assembly, one of which is scheduling. Assemblers need to get the right atom to the right place at the right time. Fortunately, many of these issues are obvious in today's factories, with much work being done to address them. This paper will describe one such effort.

Software Agents and Drexlerian Assemblers

Eric Drexler first envisioned the nanoscale assembler as analogous to a computer-driven machine shop: a molecular machine that could be programmed to build virtually any molecular structure or device from simpler chemical building blocks. He named these nanosystems "assemblers" because their essential function is to assemble individual atoms and molecules into larger assemblages.2

Software agents, on the other hand, have a more diverse origin, and hence a less clear definition. They arose out of the need to coordinate different computer systems with reliability, modularity, cooperation, and flexibility. Agents are essentially autonomous software objects that can say "No": physically or virtually embodied entities that perceive their environment and act on it in pursuit of their own goals.3

A number of market-based software mechanisms have been developed specifically to control large numbers of robots.4, 5, 6

The relationship between software agents and Drexlerian assemblers is complex. Table 1 represents just one possible breakdown.

Table 1. Four possible agent-to-assembler relationships

                      1 assembler                          n assemblers
  1 agent    Each agent controls one assembler    One agent controls many assemblers
             (traditional robotic entity view)    (hierarchical view)
  n agents   Many agents control one assembler    Many agents control many assemblers
             (factory view)                       (industry view)

Case 1:1 The traditional view of a Drexlerian assembler seems to be modeled on von Neumann's models of kinetic and cellular automata, and in fact a number of designs aim to combine these two models.7, 8 But all of them seem to assume, simplistically, a foundation of identical single entities, each running a single process. At the same time, most of them acknowledge the necessity of some sort of distributed control for coordinating large numbers of them. It turns out that each of the quadrants in the table above is a necessary view of a system that can perform convergent assembly.

Case n:1 Starting with a programmable mechanosynthesis tool like a Stewart platform9 with some attached peripherals, it quickly becomes obvious that the added complexity of tool setup, in terms of timing and scheduling as well as materials, will probably be considerable. By virtue of its input/output material flows and multiple operations, an assembler resembles a factory more than it does a single robot. This is fortunate, because the factory control techniques using software agents that are being developed now will most likely be applicable to molecular assembly. Not only self-replication, but any nanoscale manufacturing will involve many subcomponents and many steps. The estimate of 30 convergent assembly10 steps needed to build something the size of a breadbox out of atomic parts is somewhat oversimplified, especially when building a wide variety of parts. Getting the right molecule in the right orientation at the right time is not only an issue in mechanosynthesis; it also raises issues of inventory, process control, materials handling, and scheduling, issues being addressed on a macro scale in manufacturing plants today.

Some present-day factories have achieved periodic "lights-out" manufacturing. For example, Fanuc Robotics claims a throughput of 320 parts per hour with a robot/lathe system,11 while suppliers to the integrated-circuit fabrication industry are more cautious in their claims.12 The problem with billions of molecular assemblers is that one must build and initialize subcomponents and coordinate complex assembly processes entirely without manual control or supervision.

Case 1:n One good property of agents is that, being highly modularized software objects, they can contain other agents; otherwise, this case makes no sense. One agent may control many assemblers only if the assemblers themselves are flexible, modular, cooperative, and reliable because they are controlled by agents themselves. The 1:n case is the high-level hierarchical view taken when attempting massively parallel distributed control. One primitive example of such control would be Utility Fog, a simple nanosystem that cannot make or break atomic bonds: it merely consists of tiny robots that link or unlink their arms to form a solid mass in any desired shape. In addition, if each robot could control its color and reflectivity, then a thin film of Utility Fog could act as a video screen.13

Case n:n Finally, there is the case of many agents controlling many assemblers in some sort of conflicted blob. Unless it can be reduced to the previous case (with one subsuming agent), this case will be difficult to design, implement, or debug.


Figure 1. An idealized representation14

At this point, the most productive avenues of research seem to be the 1:n and n:1 cases. Both are necessary in convergent assembly, which builds macroscopic objects from molecular-sized parts made with bulk chemistry. Convergent assembly is based on the idea that smaller parts can be assembled into larger parts, larger parts can be assembled into still larger parts, and so forth. This process can be repeated systematically in a hierarchical fashion, creating an architecture able to span the size range from the molecular to the macroscopic.15 Figure 1 shows how simple the process of convergent assembly seems in the abstract.

Unfortunately, the real world is a bit more complex, even with the primitive technology we currently use. Figure 2 illustrates the most complex object we can currently assemble with automated techniques: only 30 subcomponents. Current industrial practice uses robots for welding and painting much more often than for assembly. In those cases where robots do assemble a dozen or so parts, the part feeders, fixtures, and "hard" automation account for around 80% of the installation.16 In contrast, theoretical designs for a molecular assembler call for around a billion atoms.


Figure 2. A thirty-part real-world example

The Scheduling Application

When viewing the assembler as a factory, with agents controlling assembler subcomponents, it is easy to see that the process of assembly raises some important issues. Fortunately, many of these issues are visible in today's factories, with much work being done to address them, though with mixed results. Early MRP (Manufacturing Resource Planning) software products were too limited and brittle in the face of the dynamic factory floor. MRP II and ERP (Enterprise Resource Planning) added functionality as software vendors struggled to meet the problems of managing a complex operation. Now the software is so complex that implementations require a multi-million-dollar, multi-year effort, with no guarantee of full success. In addition, scaling these complex software systems is unthinkable. Fortunately, small-grained, agent-based systems promise an alternative to the monolithic and centralized systems in use today. A small-grained agent is one that responds to its environment with simple rules, interacting with other agents through predetermined protocols.17 One particular issue that agents may be able to solve is scheduling, which is important because assemblers need to get the right atom to the right place at the right time.

In the context of agents, planning is the process of selecting and sequencing activities such that the goals are satisfied within the constraints of the system. Scheduling is the process of selecting among different plans and assigning resources and time to the one selected so as to maximize some metric. Since the real world suffers under the capricious rule of Murphy's law, scheduling is a difficult task, especially in an open and dynamic environment. To complicate matters, the system may be asked to do things not originally anticipated, or it may be allowed to omit tasks. Resources may disappear, or additional resources may be added. Tasks may begin early or late, or may take more or less time than anticipated. Because of the combinatorial nature of the scheduling problem, many techniques have been applied to it, including heuristics, constraint propagation, simulated annealing, genetic algorithms, and neural networks. Currently, over 30 agent-based systems are being applied to the problem.18

There are basically two types of scheduling systems. In the first, agents schedule orders by performing local incremental searches, usually considering multiple resources; the global schedule is obtained by merging the local schedules, making this quasi-parallel process very similar to centralized scheduling, with similar drawbacks. In the second type, each agent represents a single resource and negotiates with other agents to carry out scheduling.19 One particular version of this approach is known as a contract net, which uses a protocol that draws up contracts in public markets, channelled through requests for bids and evaluations of proposals. More specifically, a four-step algorithm is followed:

  1. A manager sends out a request for bids (RFB) to all the potential bidders;
  2. The bidders submit proposals to the manager;
  3. The manager evaluates the proposals and awards the contract;
  4. The winning bidder becomes the contractor and commits to carrying out the required task.

Sometimes, between steps 2 and 3, the winning bidder may have been awarded another contract, in which case it has to renege on its bid, and the manager must back up to step 3 for the second-best bidder.20
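As a concrete illustration, the four steps above (including the renege fallback) can be sketched in a few lines of Python. The class names, the random cost model, and the `busy` flag are invented for illustration; they are not part of any system described in this paper.

```python
import random

class Bidder:
    def __init__(self, name):
        self.name = name
        self.busy = False  # set if another contract was awarded meanwhile

    def bid(self, task):
        # Step 2: submit a proposal, here just a random cost estimate.
        return random.uniform(1.0, 10.0)

    def award(self, task):
        # Step 4: the winner commits -- unless it must renege.
        if self.busy:
            return False  # renege; the manager falls back to the next bidder
        self.busy = True
        return True

class Manager:
    def announce(self, task, bidders):
        # Step 1: send a request for bids (RFB) to all potential bidders.
        proposals = [(b.bid(task), b) for b in bidders]
        # Step 3: evaluate proposals, cheapest first, falling back on renege.
        for cost, bidder in sorted(proposals, key=lambda p: p[0]):
            if bidder.award(task):
                return bidder
        return None  # no bidder could commit to the task

bidders = [Bidder("B1"), Bidder("B2"), Bidder("B3")]
winner = Manager().announce("fetch feedstock", bidders)
print(winner.name if winner else "no contractor")
```

Note that the fallback in step 3 is just the next iteration of the loop over sorted proposals, so a renege costs only one more evaluation rather than a whole new bidding cycle.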


The Environmental Research Institute of Michigan and Deneb Robotics, Inc. are jointly developing a distributed agent-based system that can assist or supplant human decision-making by allowing efficient material flow and task scheduling to emerge in a manufacturing assembly environment. The Agent Network for Task Scheduling (ANTS) uses techniques inspired by both free-market economics and insect colonies,21 specifically a contract net with a new mechanism called least commitment scheduling, which defers decisions on process sequences until the last possible moment.

Instead of the simplified manager/bidder model described above for a contract net, ANTS allows each agent to act as a broker, both a manager and a bidder. Each agent represents one of the elements in and around a factory.

The Density-based Emergent Scheduling Kernel (DESK) uses probabilistic committed capacity profiles of resources over time, along with realistic costs, to provide an abstract search space over which the agents can wander to quickly find optimal solutions. DESK is patented as Density-Based Emergent Scheduling System, US patent no. 5,953,229.22

Figure 3. One Engagement, showing kernel, working window, and commitment window

DESK applies the concept of least commitment scheduling to the agent environment. Using this approach, a resource does not initially schedule a job for a fixed time period. Instead, it makes a looser commitment on the start and stop times of a job while still maintaining a full commitment to complete the job, retaining greater flexibility in managing change. Current schedulers define a fixed window in which the job will execute: a lathe might be committed to start a three-hour job at 1:00 PM and end it at 4:00 PM. With DESK, the resource might instead commit to starting the job as soon as the materials become available, say at noon, and completing it sometime before closing time at 5:00 PM. Since the lathe has five hours of capacity during the window and is committing only three of them to this job, it need only commit 60% of its capacity to the job.
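The lathe arithmetic reduces to a one-line calculation. This is only an illustration of the idea, not DESK's implementation; the hour values repeat the example above (13 = 1:00 PM, and so on).

```python
def commitment_density(job_hours, window_start, window_end):
    """Fraction of capacity committed when a job floats within a window."""
    return job_hours / (window_end - window_start)

# Fixed window: a three-hour job locked to 1:00-4:00 PM commits 100%.
print(commitment_density(3, 13, 16))  # 1.0
# Least commitment: the same job floating in a noon-5:00 PM window
# commits only 60% of the lathe's capacity during those hours.
print(commitment_density(3, 12, 17))  # 0.6
```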

In DESK, each resource builds a commitment density graph that reflects the fraction of its total capacity it has committed at any given time. This is based on the sum of all its commitments, which may include many overlapping jobs in the same time window. Figure 4 shows an example in which the resource has over-committed its capacity: because the engagements "Woytla2" and "Gyatso14" came in after the "Taktser1935" and "Krakow1920" engagements, they overcommit the resource. The area above the 100% line is called the region of violation.

Figure 4. Densities detect over-commitment on a resource.
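A density graph of the kind shown in Figure 4 can be approximated by summing each engagement's density over the hours of its working window and flagging the hours where the total exceeds 100%. The engagement names echo Figure 4, but the windows and job lengths below are invented for illustration.

```python
# Each engagement: (name, window start hour, window end hour, job hours).
# Its density is job_hours / window length, spread across its window.
engagements = [
    ("Taktser1935", 8, 12, 2),
    ("Krakow1920",  9, 14, 3),
    ("Woytla2",    10, 13, 2),
    ("Gyatso14",   11, 15, 2),
]

def density_graph(engagements, day_start=8, day_end=15):
    """Total committed density for each hour of the working day."""
    graph = {}
    for hour in range(day_start, day_end):
        total = 0.0
        for name, start, end, job_hours in engagements:
            if start <= hour < end:
                total += job_hours / (end - start)
        graph[hour] = total
    return graph

graph = density_graph(engagements)
# The region of violation: hours where committed capacity exceeds 100%.
violation = [hour for hour, density in graph.items() if density > 1.0]
print(violation)
```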

By adjusting the working windows on individual jobs (using techniques developed for DESK), the agent removes the conflict, as shown in Figure 5. The agent does not have to change any commitments it made previously; it only tightens the constraints on individual jobs. Without that flexibility, the agents would be forced to negotiate new commitments, which involves another series of negotiations similar to the original bidding cycle.23

Figure 5. Resource adjusts commitment windows to resolve overcommitment.
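The window-tightening move can be sketched as follows; the function is an illustrative sketch rather than DESK's actual mechanism, and the numbers reuse the earlier lathe example.

```python
def tighten(window_start, window_end, job_hours, new_start, new_end):
    """Narrow a job's working window without breaking its commitment."""
    # The job must still fit, and the new window must lie inside the old:
    # that is why no renegotiation with the manager is ever required.
    assert new_end - new_start >= job_hours, "window too small for the job"
    assert window_start <= new_start and new_end <= window_end
    return new_start, new_end

# The three-hour job floating in the noon-5:00 PM window (density 60%)
# is tightened to 2:00-5:00 PM. It now occupies that window fully, but
# frees noon-2:00 PM entirely for other engagements.
start, end = tighten(12, 17, 3, 14, 17)
print(start, end, 3 / (end - start))
```

Because the tightened window is always contained in the originally committed one, the move can only reduce the resource's flexibility for this job, never break a promise already made to another agent.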


Thanks to Jonathan Schneider and Jorge Goic for reviewing this work, and for their numerous suggestions.


  1. Mark S. Miller and K. Eric Drexler, "Markets and Computation: Agoric Open Systems," in The Ecology of Computation, B. A. Huberman, ed., North-Holland, 1988.
  2. K. Eric Drexler, Engines of Creation, Anchor Books, 1986.
  3. Jacques Ferber, Multi-Agent Systems, Addison-Wesley, 1995.
  4. Ünsal, C. and J. S. Bay, "Spatial Self-Organization in Large Populations of Mobile Robots," 1994 IEEE International Symposium on Intelligent Control, Columbus, Ohio, August 1994.
  5. Oliver Guenther, Tad Hogg, and Bernardo Huberman, "Market Organizations for Controlling Smart Matter"
  6. Eric Bonabeau, Marco Dorigo, and Guy Theraulaz, Swarm Intelligence: From Natural to Artificial Systems
  7. Tihamer T. Toth-Fejel, LEGOs (TM) to the Stars: Active MesoStructures, Kinetic Cellular Automata, and Parallel Nanomachines for Space Applications, presented at the 1996 International Space Development Conference, The Assembler, V4, N3, Third Quarter, 1996.
  8. Mark Yim, "Polypod : A Unit Modular Reconfigurable Robot", Master's Thesis,
  9. Ralph Merkle, A New Family of Six Degree Of Freedom Positional Devices, Nanotechnology Vol. 8, No. 2, June 1997, pages 47-52
  10. Ralph Merkle, Convergent Assembly, Nanotechnology 8, No. 1, March 1997, pages 18-22,
  11. Fanuc Robotics, "New System Provides "Lights Out" Part Processing",
  12. Brian McDonough, "Considerations For Manufacturing Execution System Implementation", Semiconductor FabTech95 - May,
  13. J.S. Hall, "Utility Fog: A Universal Physical Substance", Vision-21, Westlake, OH; NASA Conference Publication 10129, pp. 115-126 , and "Utility Fog: The Stuff that Dreams are Made Of"
  14. Adapted from Ralph Merkle, "Convergent Assembly," op.cit.
  15. Ralph C. Merkle, ibid.
  16. Steve Clark, personal email
  17. John A. Sauter and H. Van Dyke Parunak, "Ants in the Supply Chain", Workshop on Agent-based Decision Support for Managing the Internet-Enabled Supply Chain, Agents 99, Seattle, Washington, May 1, 1999.
  18. Shen, W. and Norrie, D.H. Agent-Based Systems for Intelligent Manufacturing: A State-of-the-Art Survey. Knowledge and Information Systems, an International Journal, 1(2), 129-156, 1999
  19. Shen, et al. op.cit.
  20. Ferber, op.cit., pp 359-360.
  21. Note that this is different from the DARPA ANT (Autonomous Negotiating Teams) project, which utilizes highly decentralized and autonomous negotiation to solve distributed resource allocation problems, i.e. the logistics of getting materials to task centers and weapons to targets. Besides having an extra "S" in our name, our animated graphics are cooler. Note also that ANTS is to be distinguished from Dorigo and Gambardella's Ant System, Ant-Q, and other Ant Colony Optimization (ACO) systems. However, we don't solve the traveling salesman problem, for we have sales brokers.
  22. For further technical information, contact Dr. H. Van Dyke Parunak, , (734) 769-4049. For further business information, contact Mr. Steve Harris, , (734) 623-2525.
  23. John A. Sauter and H. Van Dyke Parunak, op.cit.