Moore's Law and Robotics

One thing I was at some pains to find out during my recent visit to Willow Garage was the likely impact of Moore’s Law on the course of robotics development in the next few years. This is of great interest to a futurist because if computation is a bottleneck, it will be loosened in a well-understood way over the next decade or so, and we will have robots of rapidly-improving capabilities to look forward to over the period.

After all, the skill, ingenuity, and technological base were available in 1900 to build steam-powered robots of human size, range of motion, and other physical characteristics (think of watchmakers and the rapidly-burgeoning capability of industrial machinery of the day). What was lacking was sensing and control.

I got mixed signals at WG. On the one hand, it was clear that the real bottleneck today is software: “We don’t sit down to have discussions about whether we should hire another person or put a bigger computer on the robot,” one person told me. The value added clearly comes from more talent for analysis, programming, and so forth.

On the other hand, I also heard a description of a vision/ranging module where implementations on (the current) standard processor versus a GPGPU version were compared: 13 seconds per frame versus 60 frames per second. The fast version wasn’t better in the sense of getting more detail or recognizing more objects — but it was faster, and in a regime that bumped it from much worse than, to somewhat better than, human real-time performance.
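The arithmetic behind that jump is worth spelling out: 13 seconds per frame is far below human real-time perception, while 60 frames per second is somewhat above it. A quick back-of-the-envelope sketch (using the numbers from the anecdote; the real-time threshold is my assumption):

```python
# Back-of-the-envelope comparison for the vision/ranging anecdote:
# 13 s/frame on the standard processor vs 60 frames/s on the GPGPU.
cpu_seconds_per_frame = 13.0
gpu_frames_per_second = 60.0

cpu_fps = 1.0 / cpu_seconds_per_frame           # ~0.077 frames/s
speedup = gpu_frames_per_second / cpu_fps       # ~780x

# Taking ~24-30 fps as a rough "human real-time" band (an assumption,
# not a figure from the post), the CPU version is two orders of
# magnitude below it and the GPGPU version comfortably above it.
print(f"CPU: {cpu_fps:.3f} fps, GPGPU: {gpu_frames_per_second:.0f} fps, "
      f"speedup: {speedup:.0f}x")
```

So the same algorithm crosses the real-time threshold purely on hardware, which is exactly the kind of regime change Moore's Law keeps delivering.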

My personal take on this is that in many fields, the advances due to algorithms have matched those due to raw processing power, and that robotics is in a position to take advantage of both. WG’s open-source strategy is great for leveraging their resources in this area while benefiting the field as a whole.

Over the course of the 2010s, as robots get better able to cope with domestic environments and are more widely used there, the pressure for them to be more robust and adaptive, to learn and exhibit common sense, will increase. The big breakthrough in robust 2-D navigation came from Hans Moravec’s Bayesian grids, which changed the whole style of navigation from efficient symbolic — but brittle — algorithms to robust but computationally brute force ones. My intuition is that a similar revolution awaits in virtually everything the robot does and thinks about, and Moore’s Law will make it feasible.
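To make the "robust but brute force" style concrete, here is a minimal sketch of a Bayesian occupancy grid in the Moravec/Elfes spirit: every cell carries the log-odds that it is occupied, and each sensor reading simply adds evidence to the cells it touches. The sensor-model probabilities and grid size are illustrative assumptions, not values from any particular robot:

```python
import math

# Illustrative sensor model (assumed values): how strongly a single
# reading shifts our belief that a cell is occupied.
P_HIT = 0.7    # belief in "occupied" after a hit reading
P_MISS = 0.3   # belief in "occupied" after a miss (free) reading

def log_odds(p):
    """Convert a probability to log-odds form."""
    return math.log(p / (1.0 - p))

class OccupancyGrid:
    def __init__(self, width, height):
        # Log-odds of 0.0 corresponds to the uninformative prior p = 0.5.
        self.logit = [[0.0] * width for _ in range(height)]

    def update(self, x, y, hit):
        # Bayes' rule in log-odds form: updating on independent
        # evidence is just addition, cell by cell.
        self.logit[y][x] += log_odds(P_HIT if hit else P_MISS)

    def probability(self, x, y):
        # Back from log-odds to a probability for inspection.
        return 1.0 - 1.0 / (1.0 + math.exp(self.logit[y][x]))

grid = OccupancyGrid(10, 10)
for _ in range(3):             # three "occupied" readings on one cell
    grid.update(4, 4, hit=True)
grid.update(2, 2, hit=False)   # one "free" reading on another

print(round(grid.probability(4, 4), 3))  # well above 0.5
print(round(grid.probability(2, 2), 3))  # below 0.5
```

There is no symbolic map of walls and doorways here at all; noisy, even contradictory readings just nudge cell probabilities, and the map degrades gracefully instead of breaking. That per-cell bookkeeping is computationally brutish, which is precisely why Moore's Law is what made it practical.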
