|Agents \ Assemblers|1|n|
|---|---|---|
|1|Each agent controls one assembler (traditional robotic entity view)|One agent controls many assemblers (hierarchical view)|
|n|Many agents control one assembler (factory view)|Many agents control many assemblers (industry view)|
Case 1:1 The traditional view of a Drexlerian assembler seems to be modeled on von Neumann's kinematic and cellular automata, and a number of designs in fact aim to combine the two models.7, 8 All of them, however, seem to assume, simplistically, a foundation of identical single entities, each running a single process, even while most acknowledge the need for some sort of distributed control to coordinate large numbers of them. It turns out that each quadrant in the table above is a necessary view of a system that can perform convergent assembly.
Case n:1 Starting with a programmable mechanosynthesis tool such as a Stewart platform9 with attached peripherals, it quickly becomes obvious that the added complexity of tool setup -- not only in materials, but also in timing and scheduling -- will probably be considerable. By virtue of its input/output material flows and multiple operations, an assembler resembles a factory more than it does a single robot. This is fortunate, because the factory-control techniques using software agents that are being developed now will most likely apply to molecular assembly. Not just self-replication but any nanoscale manufacturing will involve many subcomponents and many steps. The figure of 30 convergent assembly10 steps needed to build something the size of a breadbox from atomic parts is somewhat oversimplified, especially when a wide variety of parts must be built. Getting the right molecule in the right orientation at the right time is not only a mechanosynthesis problem; it also involves inventory, process control, materials handling, and scheduling -- issues being addressed on a macro scale in manufacturing plants today.
Some present-day factories have achieved periodic "lights-out" manufacturing. Fanuc Robotics, for example, claims a throughput of 320 parts per hour with a robot/lathe system,11 while suppliers to the integrated-circuit fabrication industry are a bit more cautious in their claims.12 The problem with billions of molecular assemblers is that one must build and initialize subcomponents and coordinate complex assembly processes entirely without manual control or supervision.
Case 1:n One good property of agents is that, as highly modular software objects, they can contain other agents; otherwise this case makes no sense. One agent may control many assemblers only if the assemblers themselves are flexible, modular, cooperative, and reliable -- because they are controlled by agents themselves. The 1:n case is the high-level hierarchical view taken when attempting massively parallel distributed control. One primitive example of such control is Utility Fog, a simple nanosystem that cannot make or break atomic bonds: it consists only of tiny robots that link and unlink their arms to form a solid mass in any desired shape. If, in addition, each robot could control its color and reflectivity, a thin film of Utility Fog could act as a video screen.13
Case n:n Finally, there is the case of many agents controlling many assemblers in some sort of conflicted blob. Unless it can be reduced to the previous case (with one subsuming agent), this case will be difficult to design, implement, or debug.
Figure 1. An idealized representation14
At this point, the most productive avenues of research seem to be the 1:n and n:1 cases. Both are necessary in convergent assembly, which builds macroscopic objects from molecular-sized parts made with bulk chemistry. Convergent assembly is based on the idea that smaller parts can be assembled into larger parts, larger parts into still larger parts, and so forth. Repeated systematically in hierarchical fashion, this process yields an architecture able to span the size range from the molecular to the macroscopic.15 Figure 1 shows how simple convergent assembly seems in the abstract.
Unfortunately, the real world is a bit more complex, even with the primitive technology currently in use. Figure 2 illustrates the most complex object we can currently assemble with automated techniques -- only 30 subcomponents. Current industrial practice uses robots for welding and painting far more often than for assembly, and in those cases where robots do assemble a dozen or so parts, the part feeders, fixtures, and "hard" automation account for around 80% of the installation.16 In contrast, theoretical designs for a molecular assembler call for around a billion atoms.
Figure 2. A thirty-part real-world example
When the assembler is viewed as a factory, with agents controlling assembler subcomponents, it becomes clear that the assembly process raises some important issues. Fortunately, many of these issues are already evident in today's factories, and much work is being done to address them, though with mixed results. Early MRP (Manufacturing Resource Planning) software products were too limited and brittle in the face of the dynamic factory floor. MRP II and ERP (Enterprise Resource Planning) added functionality as software vendors scrambled to meet the problems of managing a complex operation. The software is now so complex that implementations require a multi-million-dollar, multi-year effort, with no guarantee of full success, and scaling these systems further is unthinkable. Fortunately, small-grained, agent-based systems promise an alternative to today's monolithic, centralized systems. A small-grained agent is one that responds to its environment with simple rules, interacting with other agents through predetermined protocols.17 One particular issue that agents may be able to solve is scheduling, which matters because assemblers need to get the right atom to the right place at the right time.
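To make the definition concrete, here is a minimal sketch of such an agent in Python. The message bus, message types, and rule names are all hypothetical, intended only to show the "simple rules plus fixed protocol" pattern, not any particular agent framework.

```python
# A minimal "small-grained" agent: a handful of condition/action rules
# plus a predetermined message protocol. All names are illustrative.

class Bus:
    """Trivial message bus standing in for the agents' shared protocol."""
    def __init__(self):
        self.agents = {}

    def register(self, agent):
        self.agents[agent.name] = agent

    def send(self, msg):
        self.agents[msg["to"]].inbox.append(msg)

class SmallGrainedAgent:
    def __init__(self, name, bus):
        self.name, self.bus, self.inbox = name, bus, []
        bus.register(self)
        # The "simple rules": message type -> reaction.
        self.rules = {"part_arrived": self.on_part, "ack": self.on_ack}

    def step(self):
        # Respond to the environment one message at a time.
        while self.inbox:
            msg = self.inbox.pop(0)
            if msg["type"] in self.rules:
                self.rules[msg["type"]](msg)

    def on_part(self, msg):
        # Acknowledge receipt to the sender, per the fixed protocol.
        self.bus.send({"type": "ack", "to": msg["from"], "from": self.name})

    def on_ack(self, msg):
        print(f"{self.name}: handshake with {msg['from']} complete")

bus = Bus()
feeder, mill = SmallGrainedAgent("feeder", bus), SmallGrainedAgent("mill", bus)
bus.send({"type": "part_arrived", "to": "mill", "from": "feeder"})
mill.step(); feeder.step()   # mill acknowledges; feeder notes the handshake
```

Nothing in the agent is global: it sees only its inbox and its rule table, which is what makes such agents cheap to replicate in the large numbers an assembler-as-factory would require.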
In the context of agents, planning is the process of selecting and sequencing activities so that the system's goals are satisfied within its constraints. Scheduling is the process of selecting among different plans and assigning resources and time to the chosen one so as to maximize some metric. Since the real world suffers under the capricious rule of Murphy's law, scheduling is a difficult task, especially in an open and dynamic environment. To complicate matters, the system may be asked to do things not originally anticipated, or may be allowed to omit tasks; resources may disappear or be added; and tasks may begin early or late, or take more or less time than anticipated. Because of the combinatorial nature of the scheduling problem, many techniques have been applied to it, including heuristics, constraint propagation, simulated annealing, genetic algorithms, and neural networks. Currently, over 30 agent-based systems are being applied to the problem.18
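A toy example makes the combinatorial aspect visible. The scheduler below simply enumerates every sequencing of three jobs on one resource and keeps the plan that maximizes an on-time metric; the jobs and the metric are invented, and the factorial blow-up in the search is exactly what pushes real systems toward the heuristic techniques listed above.

```python
from itertools import permutations

# Toy illustration of scheduling as combinatorial choice: jobs given as
# (name, duration, deadline); metric = number of jobs finished on time.
jobs = [("polish", 3, 4), ("drill", 2, 6), ("weld", 1, 3)]

def on_time(plan):
    t, score = 0, 0
    for name, duration, deadline in plan:
        t += duration                 # the resource handles one job at a time
        score += (t <= deadline)      # reward meeting the deadline
    return score

# Exhaustive search: n! candidate plans -- tractable only for tiny n,
# which is why real schedulers fall back on heuristics, simulated
# annealing, genetic algorithms, and the like.
best = max(permutations(jobs), key=on_time)
print([name for name, _, _ in best], "meets", on_time(best), "deadlines")
```

Here the winning plan ("weld", "polish", "drill") meets all three deadlines; with even a few dozen jobs, enumerating the plans this way becomes hopeless.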
There are basically two types of scheduling systems. The first is an incremental search process: agents schedule orders by performing local incremental searches for their orders, usually considering multiple resources, and the global schedule is obtained by merging the local schedules, making this quasi-parallel process very similar to centralized scheduling, with similar drawbacks. In the second type, each agent represents a single resource and negotiates with other agents to carry out scheduling.19 One particular version of this approach is known as a contract net, which uses a protocol that draws up contracts in public markets through requests for bids and evaluations of proposals. More specifically, a four-step algorithm is followed:

1. A manager agent announces a task to the network.
2. Interested agents evaluate the announcement and submit bids.
3. The manager evaluates the bids and awards the contract to the most suitable bidder.
4. The winning contractor executes the task and reports the results back to the manager.
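As a concrete illustration, here is a minimal, self-contained sketch of one contract-net round. The task structure, cost model, and agent names are invented for the example, not drawn from any of the cited systems.

```python
# Minimal contract-net round: announce, bid, award, execute.
# Agents, tasks, and cost models are all invented for illustration.

class Bidder:
    def __init__(self, name, speed):
        self.name, self.speed = name, speed

    def bid(self, task):
        # Step 2: each interested agent estimates its own cost.
        return task["work"] / self.speed

    def execute(self, task):
        # Step 4: the winning contractor performs the task and reports.
        return f"{self.name} finished {task['name']}"

def contract_net(task, bidders):
    # Step 1: the manager announces the task to all agents.
    bids = {b: b.bid(task) for b in bidders}      # Step 2: bids come in
    winner = min(bids, key=bids.get)              # Step 3: award to lowest cost
    return winner.execute(task)                   # Step 4: execute and report

bidders = [Bidder("lathe-1", speed=2.0), Bidder("lathe-2", speed=3.0)]
print(contract_net({"name": "turn-shaft", "work": 6.0}, bidders))
```

The appeal of the protocol is that no participant needs a global view: the manager knows only the bids it receives, and each bidder knows only its own cost structure.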
The Environmental Research Institute of Michigan and Deneb Robotics, Inc. are jointly developing a distributed agent-based system to assist and supplant human decision making by allowing efficient material flow and task scheduling to emerge in a manufacturing assembly environment. The Agent Network for Task Scheduling (ANTS) uses techniques inspired by both free-market economics and insect colonies21 -- specifically, a contract net with a new mechanism called least commitment scheduling, which defers decisions on process sequences until the last possible moment.
Instead of the simplified manager/bidder model described above for a contract net, ANTS allows each agent to be a broker -- both a manager and a bidder -- representing one of the elements in and around a factory.
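One way to picture the broker role is an agent that accepts what work it can, like a bidder, and re-announces any surplus to its peers, like a manager. The sketch below is conjectural -- a plausible reading of the broker idea, not the actual ANTS design.

```python
# Sketch of a broker agent: it bids on incoming work like a contractor,
# but re-announces work it cannot absorb, acting as a manager in turn.
# Conjectural illustration only -- not the ANTS implementation.

class Broker:
    def __init__(self, name, capacity, peers=None):
        self.name, self.capacity = name, capacity
        self.peers = peers or []

    def handle(self, work):
        taken = min(work, self.capacity)     # bidder role: accept what fits
        self.capacity -= taken
        surplus = work - taken
        if surplus and self.peers:           # manager role: subcontract the rest
            bids = {p: p.quote(surplus) for p in self.peers}
            winner = min(bids, key=bids.get)
            winner.handle(surplus)
        return taken

    def quote(self, work):
        # A crude bid: the less spare capacity, the higher the quoted cost.
        return work / max(self.capacity, 1e-9)

mill = Broker("mill", capacity=4)
lathe = Broker("lathe", capacity=10)
cell = Broker("cell", capacity=5, peers=[mill, lathe])
cell.handle(12)                        # cell takes 5 units, brokers 7 to the lathe
print(mill.capacity, lathe.capacity)   # -> 4 3
```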
The Density-based Emergent Scheduling Kernel (DESK) uses probabilistic committed-capacity profiles of resources over time, along with realistic costs, to provide an abstract search space in which agents can quickly find optimal solutions. DESK is patented as the Density-Based Emergent Scheduling System, US patent no. 5,953,229.22
Figure 3. One Engagement, showing kernel, working window, and commitment window
DESK applies the concept of least commitment scheduling to the agent environment. Under this approach, a resource does not initially schedule a job for a fixed time period. Instead, it makes a looser commitment on the start and stop times of the job while maintaining a full commitment to complete it, retaining greater flexibility in managing change. Current schedulers define a fixed window in which the job will execute: a lathe might be committed to start a three-hour job at 1:00 PM and end it at 4:00 PM. With DESK, the resource might instead commit to starting the job as soon as the materials become available, say at noon, and completing it sometime before the 5:00 PM closing time. Since the lathe has five hours of capacity during that window and is committing only three of them to this job, it need commit only 60% of its capacity.
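The arithmetic generalizes directly. Assuming the obvious representation of a job as a duration inside a commitment window (an invented representation, for illustration), the committed fraction is simply duration divided by window length:

```python
# Committed fraction of capacity under least-commitment scheduling:
# a job promises `duration` hours somewhere inside [start, end] rather
# than a fixed slot. Representation invented for illustration.

def committed_fraction(duration, window_start, window_end):
    window = window_end - window_start
    assert duration <= window, "job cannot fit in its commitment window"
    return duration / window

# The lathe example from the text: a 3-hour job in a noon-to-5PM window.
print(committed_fraction(3, 12, 17))   # -> 0.6, i.e. 60% of capacity
```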
In DESK, each resource builds a commitment density graph that reflects the fraction of its total capacity that it has committed at any given time. This is based on the sum of all its commitments, which may include many overlapping jobs in the same time window. Figure 4 shows an example where the resource has over-committed capacity. The Engagements "Woytla2" and "Gyatso14", since they came in after the "Taktser1935" and "Krakow1920" Engagements, overcommit the Resource. The area above the 100% line is called the region of violation.
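One plausible way to approximate such a density graph -- a guess at the idea, not the patented formulation -- is to spread each Engagement's committed hours uniformly over its window and sum hour by hour; any hour whose total exceeds 100% lies in the region of violation. The names below follow Figure 4, but the durations and windows are invented:

```python
# Build a commitment density graph by summing overlapping Engagements'
# committed fractions hour by hour; density > 1.0 marks the region of
# violation. Durations/windows invented; names follow Figure 4.

engagements = {                       # (duration, window_start, window_end)
    "Taktser1935": (3, 8, 12),
    "Krakow1920":  (2, 10, 14),
    "Woytla2":     (3, 9, 13),
    "Gyatso14":    (2, 11, 14),
}

def density(engagements, horizon=24):
    graph = [0.0] * horizon
    for duration, start, end in engagements.values():
        fraction = duration / (end - start)   # spread commitment over window
        for hour in range(start, end):
            graph[hour] += fraction
    return graph

graph = density(engagements)
violations = [h for h, d in enumerate(graph) if d > 1.0]
print("region of violation (hours):", violations)
```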
Figure 4. Densities detect over-commitment on a resource.
By adjusting the working windows on individual jobs (using techniques developed for DESK), the agent removes the conflict, as shown in Figure 5. The agent does not have to change any commitment it made previously; it only tightens the constraints on individual jobs. Without that flexibility, the agents would be forced to negotiate new commitments, entailing another series of negotiations similar to the original bidding cycle.23
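The sketch below illustrates the idea with invented data and a deliberately crude heuristic: it collapses each window to the earliest conflict-free slot it contains, which removes the violation without touching any job's duration or completion promise. DESK's actual, patented method is more subtle, tightening windows only as far as the density profile requires.

```python
# Resolve over-commitment by tightening working windows on individual
# jobs -- no previously made commitment is renegotiated. Data and the
# greedy heuristic are invented; DESK's real method is the patented one.

def density(engagements, horizon=24):
    graph = [0.0] * horizon
    for duration, start, end in engagements.values():
        fraction = duration / (end - start)
        for hour in range(start, end):
            graph[hour] += fraction
    return graph

def overcommitted(engagements):
    return any(d > 1.0 + 1e-9 for d in density(engagements))

# Two Engagements whose loose windows overlap enough to violate capacity.
engagements = {"Taktser1935": (3, 8, 12), "Woytla2": (3, 8, 14)}
print("before:", overcommitted(engagements))   # -> True

def tighten(engagements):
    """Crude resolution: give each job the earliest conflict-free slot
    inside its own window, ordered by window end."""
    taken = []
    for name in sorted(engagements, key=lambda n: engagements[n][2]):
        duration, lo, hi = engagements[name]
        start = lo
        for s, e in sorted(taken):     # slide right past reserved slots
            if start + duration <= s:
                break
            start = max(start, e)
        assert start + duration <= hi, "window too tight to resolve"
        taken.append((start, start + duration))
        engagements[name] = (duration, start, start + duration)

tighten(engagements)
print("after:", overcommitted(engagements), engagements)   # -> False, disjoint
```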
Figure 5. Resource adjusts commitment windows to resolve overcommitment.
Thanks to Jonathan Schneider and Jorge Goic for reviewing this work, and for their numerous suggestions.