Moore's Law and Robotics

One thing I was at some pains to find out during my recent visit to Willow Garage was the likely impact of Moore’s Law on the course of robotics development in the next few years. This is of great interest to a futurist because if computation is a bottleneck, it will be loosened in a well-understood way over the next decade or so, and we will have robots of rapidly-improving capabilities to look forward to over that period.

After all, the skill, ingenuity, and technological base were available in 1900 to build steam-powered robots of human size, range of motion, and other physical characteristics (think of watchmakers and the rapidly-burgeoning capability of industrial machinery of the day). What was lacking was sensing and control.

I got mixed signals at WG. On the one hand, it was clear that the real bottleneck today is software: “We don’t sit down to have discussions about whether we should hire another person or put a bigger computer on the robot,” one person told me. The value added clearly comes from more talent at analysis and programming, not from more hardware.

On the other hand, I also heard a description of a vision/ranging module where implementations on (the current) standard processor versus a GPGPU version were compared: 13 seconds per frame versus 60 frames per second. The fast version wasn’t better in the sense of getting more detail or recognizing more objects — but it was faster, and in a regime that bumped it from much worse than, to somewhat better than, human real-time performance.
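For scale, the quoted figures are worth working out: going from 13 seconds per frame to 60 frames per second is roughly a 780-fold throughput improvement. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check on the quoted vision-module numbers:
# 13 seconds per frame (CPU) versus 60 frames per second (GPGPU).
cpu_seconds_per_frame = 13.0
gpu_frames_per_second = 60.0

cpu_fps = 1.0 / cpu_seconds_per_frame                       # ~0.077 fps
speedup = gpu_frames_per_second * cpu_seconds_per_frame     # 60 * 13 = 780x
```

That factor of ~780 is what carries the module across the human real-time threshold, rather than any improvement in recognition quality.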

My personal take on this is that in many fields, the advances due to algorithms have matched those due to raw processing power, and that robotics is in a position to take advantage of both. WG’s open-source strategy is great for leveraging their resources in this area while benefitting the field as a whole.

Over the course of the 2010s, as robots get better able to cope with domestic environments and are more widely used there, the pressure for them to be more robust and adaptive, to learn and exhibit common sense, will increase. The big breakthrough in robust 2-D navigation came from Hans Moravec’s Bayesian grids, which changed the whole style of navigation from efficient symbolic — but brittle — algorithms to robust but computationally brute force ones. My intuition is that a similar revolution awaits in virtually everything the robot does and thinks about, and Moore’s Law will make it feasible.
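The flavor of the Bayesian-grid approach can be seen in a minimal single-cell sketch: each grid cell independently accumulates evidence from noisy sensor readings, typically in log-odds form. The sensor-model numbers below are illustrative assumptions, not taken from any particular implementation:

```python
import math

# Assumed inverse sensor model (illustrative numbers only):
P_OCC = 0.7                              # confidence when a cell is sensed occupied
L_OCC = math.log(P_OCC / (1 - P_OCC))    # log-odds increment for a "hit"
L_FREE = -L_OCC                          # symmetric decrement for a "miss"

def update_cell(log_odds, sensed_occupied):
    """One Bayesian log-odds update for a single grid cell."""
    return log_odds + (L_OCC if sensed_occupied else L_FREE)

def probability(log_odds):
    """Convert a cell's log-odds back to an occupancy probability."""
    return 1.0 / (1.0 + math.exp(-log_odds))

# Repeated noisy "occupied" readings drive the cell toward certainty,
# even though any single reading is only 70% reliable.
cell = 0.0  # log-odds 0 == probability 0.5 (unknown)
for _ in range(5):
    cell = update_cell(cell, sensed_occupied=True)
```

The brute-force character is plain: nothing here is clever, but run over every cell of a large grid at sensor frame rates, it soaks up exactly the kind of computation Moore’s Law keeps making cheaper, and it degrades gracefully where symbolic methods break.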

By | June 26th, 2009 | Nanodot, Robotics | 2 Comments

  1. JamesG June 26, 2009 at 1:47 pm - Reply

    Agreed, Moore’s law is the key. Even if roboticists don’t know or understand it, the massive increases in computation that will become available as time goes on will cure all problems with AI and robotics.

  2. Adam June 29, 2009 at 10:26 pm - Reply

    Moore’s Law is a key, I agree. But, as you wisely point out, processing power isn’t everything. You still need human programmers and sophisticated algorithms to approximate intelligence. After all, my pocket calculator has the same raw processing power as the Apollo spacecraft and yet it doesn’t routinely go to the moon and back. To say that everything will be solved with more and faster processors is a relatively naive view; there are many other factors.

    Also, just wanted to say thanks for directing people to my site. I know you’ve had the link up for a while now but I haven’t had a chance to come over and say thanks!
