Nanodot: the original nanotechnology weblog

Ominous Parallels

Posted by J. Storrs Hall on January 11th, 2010

Actually, the ominous part is all over, so relax.

A week before the very first Foresight Conference, there was an earthquake — the famous 1989 Loma Prieta earthquake.  The conference was moved from the Stanford campus to the Garden Court in downtown Palo Alto as a result.

Now, just a week before our 2010 Conference, celebrating the 20th anniversary of the first one, we have another California earthquake. Luckily, this one didn’t do nearly as much damage.

The first Conference was a tremendous success, enough to be a high point in the memories of many attendees.  Let’s hope the parallels continue and the anniversary conference is a worthy echo of the original.

Towers and orbits

Posted by J. Storrs Hall on January 8th, 2010

Just for fun, imagine you could build a tower up to geosynchronous orbital height. If you stepped off the top floor, you’d just hang there, in orbit.

If the tower you build is shorter, you’d fall, since (a) the tower top isn’t moving as fast as it would at GEO height, and (b) the required orbital speed is higher the lower you are. However, you’ll be in some orbit. There should be some height, short of GEO, where the ellipse you follow doesn’t quite hit the earth. We can use the vis-viva equation and some algebra to see what that height is:

Vertical scale is kilometers per second, horizontal is kilometers height above the earth’s surface.  The green line is for a circular orbit, blue for an ellipse that just misses the surface. The red line is the speed of the top of a tower of that height (at the equator).

For the blue line (the elliptical orbit), you’ve saved a lot of tower height, but it’s still pretty darned tall.
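
Just to make the numbers concrete, here is a minimal numerical sketch of that calculation.  The vis-viva relation v^2 = mu * (2/r - 1/a) gives the orbit’s semi-major axis from the tower-top speed; the constants are standard values, and the bisection bracket is my own choice:

```python
# Find the tower height where stepping off the top puts you in an
# elliptical orbit whose perigee just grazes the Earth's surface.
import math

MU = 3.986004418e14              # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6                # mean Earth radius, m
OMEGA = 2 * math.pi / 86164.09   # Earth's sidereal rotation rate, rad/s

def perigee_margin(r):
    """Positive if stepping off a co-rotating tower top at radius r
    yields an ellipse whose perigee is at or above the surface."""
    v = OMEGA * r                    # horizontal speed of the tower top
    a = 1 / (2 / r - v * v / MU)     # vis-viva solved for semi-major axis
    r_perigee = 2 * a - r            # the tower top is at apogee (v < v_circ)
    return r_perigee - R_EARTH

# Bisect between the surface and GEO radius for the grazing case.
lo, hi = R_EARTH, 4.2164e7
for _ in range(60):
    mid = (lo + hi) / 2
    if perigee_margin(mid) < 0:
        lo = mid
    else:
        hi = mid

print(f"critical height above surface: {(hi - R_EARTH) / 1000:,.0f} km")
# roughly 23,000 km -- a big saving over GEO's ~35,800 km,
# but still a very tall tower.
```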

Suppose, though, that you use a tower to launch rockets from.  It’s well understood that you save a lot by not having to punch through the atmosphere, but you can also save a lot from the altitude and rotational speed that a tower of intermediate height gives you.  The real savings show up because of the exponential nature of the rocket equation, as we can see from this chart of the mass ratio needed by a rocket launched from the tower-tops:

(For a typical chemical fuel.)  The red line represents the payload alone, and the other lines tell you the total vehicle mass as a multiple of the payload mass.  The difference is the fuel you need.  They cross, of course, at the same heights where stepping off works and no fuel is necessary.  The knee of the curve (blue is still the elliptical case) is only about one Earth radius (~6400 km) high, a tower we could actually build with nanotech materials, and from there you only need about 3 payload masses of fuel to get to orbit instead of nearly 20.
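
For the curious, here is a back-of-envelope version of the endpoints of that chart.  The exhaust velocity and the ground-launch delta-v (which folds in gravity and drag losses) are assumed round numbers, structural mass is ignored, and the tower case simply circularizes at the tower-top altitude:

```python
# Rocket-equation comparison: launch to orbit from the ground vs. from
# the top of a tower one Earth radius tall.
import math

MU = 3.986004418e14           # m^3/s^2
R_EARTH = 6.371e6             # m
OMEGA = 2 * math.pi / 86164   # rad/s
V_EXHAUST = 3500.0            # m/s, typical chemical propellant (assumed)

def fuel_per_payload(delta_v):
    """Propellant mass per unit payload mass, from the rocket equation."""
    return math.exp(delta_v / V_EXHAUST) - 1

# From the ground: ~10.5 km/s including losses (assumed round number).
print(f"ground launch: {fuel_per_payload(10_500):.1f} payload masses of fuel")

# From a tower one Earth radius tall: circularize at that altitude.
r = 2 * R_EARTH
dv_tower = math.sqrt(MU / r) - OMEGA * r   # circular speed minus tower-top speed
print(f"tower launch:  {fuel_per_payload(dv_tower):.1f} payload masses of fuel")
# roughly 19 vs. 3 -- the exponential in the rocket equation is why a
# tower of intermediate height buys so much.
```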

Autogenous or autopoietic?

Posted by J. Storrs Hall on January 7th, 2010

Back in April, I wrote:

Nanotechnology, the revolutionary technology, was always about the power of self-replication and never only about the very small.

The ability of a machine system to make more of itself, or more generally to make its own parts and assemble or replace them as needed, is called autogeny.  There’s a closely related concept in wider use called autopoiesis, which is essentially a description of certain biological or ecological systems:

An autopoietic machine is a machine organized (defined as a unity) as a network of processes of production (transformation and destruction) of components which: (i) through their interactions and transformations continuously regenerate and realize the network of processes (relations) that produced them; and (ii) constitute it (the machine) as a concrete unity in space in which they (the components) exist by specifying the topological domain of its realization as such a network.

There’s a key difference.  I defined autogeny (it’s a real word, or at least “autogenous” is, and I merely specialized it to this technical meaning) as a subset of what autopoiesis means.  An autopoietic system is a process, not an object.  It can not only make its own parts but does so constantly, replacing the ones it’s made of (“continuously regenerate”).  It has an identity that is more like that of the Ship of Theseus than that of a simple object.  You and I are autopoietic on a number of different levels: our cells constantly rebuild and replace themselves on the molecular level; our minds constantly learn and re-integrate the ideas they’re made of — memories not regularly used and re-remembered tend to fade.

Autogeny takes half of that idea in a more mechanistic sense and can be used to describe a permanent physical object.  My nano-manufacturing system, for example, is autopoietic in its growing phase, where inefficient early-generation machines are replaced by late-generation ones, but merely autogenous thereafter.  Utility Fog in use would constitute an autopoietic system — individual foglets would be constantly failing and being replaced. But a simple nanofactory is merely autogenous. An autogenous system makes its own parts but doesn’t (necessarily) constantly replace them.  Like most machines, it needs to be fixed from the outside if it breaks.

But in early engineering stages, autogeny is enough.  It’s a good simplification as a halfway point in engineering toward advanced, autopoietic, nanosystems with the kind of complexity and robustness that life has.

Civilization, B.S.O.D.

Posted by J. Storrs Hall on January 6th, 2010

The other day I got a worried call from my mother-in-law.  My wife usually calls her during her commute, but that day she neither called nor answered her phone.

Turns out my wife’s iPhone had crashed — the software had wedged and there was no way to reboot.  The amusing, if you can call it that, fact was that her (employer-required) Windows PC had done the same thing the same day.  For the PC, which also ignored any attempt to reboot, we took the battery out and back in, forcing a cold start.

We are rapidly turning our civilization into software. As we do that, and as we build smarter and smarter AIs to do more and more important tasks, it will be increasingly important to be sure the software we write simply works right and is reliable.

A key design feature that makes natural control systems reliable, as well as adaptive and robust and resilient, is the inclusion of lots of feedback.  In the human brain, there is generally more feedback than forward signal along the major data pathways.  By contrast, the standard model of sequential programming has no feedback at all; in the simplest and most common coding styles, all the operations are done by dead reckoning. Consider an HTML renderer such as the one showing you this page. There are plenty of sites whose pages come up with overprinted regions in some browsers — because the renderer doesn’t look where it’s writing.

It’s possible to do better, of course. In control systems, where feedback is essential, you have tight control loops that check and set, check and set, hundreds or thousands of times a second. Porting some knowledge of feedback in control systems back into systems software (and the rest of our software) would make it more reliable, as well as adaptive and robust and resilient. And as we turn our civilization into software, that’ll be a very good thing.
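
To make the contrast concrete, here is a toy sketch of dead reckoning versus a check-and-set loop.  The “plant” (a leaky room with a degraded heater) and every number in it are invented purely for illustration:

```python
# Toy contrast between dead reckoning and a "check and set" feedback loop.

def step(power, temp, effectiveness):
    """One time step of a leaky room heated with the given power."""
    return temp + power * effectiveness - 0.1 * temp   # heating minus losses

TARGET = 20.0
NOMINAL_EFF = 1.0   # what the open-loop designer believed
ACTUAL_EFF = 0.6    # what the hardware actually delivers

# Open loop (dead reckoning): compute the "right" power once, assuming
# nominal effectiveness, and never look at the result again.
power = TARGET * 0.1 / NOMINAL_EFF
temp = 0.0
for _ in range(200):
    temp = step(power, temp, ACTUAL_EFF)
print(f"dead reckoning settles at {temp:4.1f}")   # ~12.0: forty percent low

# Closed loop: a thermostat that checks and sets on every step.
temp = 0.0
for _ in range(200):
    power = 4.0 if temp < TARGET else 0.0          # check, then set
    temp = step(power, temp, ACTUAL_EFF)
print(f"feedback hovers near {temp:4.1f}")        # cycles in a band around 20
```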

Auto-ATC for flying cars edges closer

Posted by J. Storrs Hall on January 5th, 2010

Roboplane tech can deal with air-traffic control directly • The Register.

Flying cars – or personal aircraft anyway – have moved a step nearer, as ongoing trials using robot aeroplanes and next-gen air traffic equipment in America are said to offer the option of “reduced crews” on commercial cargo flights.
US aerospace firm GE Aviation has been participating in joint trials with the Federal Aviation Administration (FAA) aimed at letting unmanned aircraft fly safely in civil controlled airspace, Flight International reports. An early option offered by the technology is the prospect of reduction from two pilots to one on commercial cargo flights.

The tests involved passing of traffic-control instructions to a Shadow roboplane, a type normally used by the US Army in warzones where civil rules and traffic aren’t an issue. Generally, air-traffic controllers give instructions to pilots by voice: nowadays, rather than translating these instructions into action via joysticks, throttles etc the pilot will simply key commands into an automated flight management system (FMS).
The next logical step is to remove the needless waste of bandwidth inherent in voice comms and the error potential and delay that comes with an on-board human pilot and his fingers. Orders can be passed directly to the FMS – in this case, part of the Shadow’s ground control station rather than on board, but with the same effect on the craft’s manoeuvring.

End of the World

Posted by J. Storrs Hall on January 5th, 2010

Aunt Polly: Tom, it’s time for your bath.  And make sure to wash behind your ears.

Tom: But gosh, Aunt Polly, I couldn’t do that.  It might cause the end of the world.

Aunt Polly: Land sakes alive, child, what on earth are you talking about?

Tom: Well, pouring water into a tub releases several foot-pounds of energy as extra motion in the water.  Because thermal velocities are Gaussian-distributed, there will be some molecules with sufficient energy to be ionized, both producing free protons and causing the generation of photons as the molecules recombine.  Now if those ionized molecules and photons just happen to be lined up exactly right, they could cause laser acceleration of one of those protons. And the foot-pounds of energy I’d be adding to the water is millions of times the energy given the particles in the Large Hadron Collider. So I might create a micro black hole and destroy the earth!

Aunt Polly:  Well, I never!  Just to be safe, maybe you’d better not have a bath after all. Let’s have a cup of tea instead. Go put a kettle on, there’s a good boy.

Software responsibility as model for nanotech?

Posted by Christine Peterson on January 4th, 2010

Foresight ally Jeff Ubois has a new book out, published by Fondazione Giannino Bassetti, Conversations on Innovation, Power, and Responsibility.  Yours truly is quoted.  An excerpt:

Peterson suggests that a closer look at the software developers might provide some clues about responsible cultures of innovation. “If you really want to know how to create a sense of responsibility, look at the software development community,” she says. “They see their work as political. They see it as ethics-based. They think of the ethical consequences of their decisions. They’re very politicized and very aware. So, why is that? Why is that true in software and not so much true in other areas?”

Of course this isn’t true of all software developers.  Just many of them.  —Chris Peterson

Towers

Posted by J. Storrs Hall on January 4th, 2010

Burj Dubai

The Burj Dubai opens today.  It’s the world’s tallest building at about half a mile high.

Except for being only half as high, it resembles Frank Lloyd Wright’s mile-high tower in overall shape — but of course the Burj is real.  From what I can tell, it could not only house but form the complete social and economic infrastructure for at least 5000 people.  In luxury.  Scale it up to a mile and you’re talking 40,000.

Given its elegant shape, the Burj has an incredibly tiny footprint (the foundation slab, not the surrounding plaza) of less than two acres. It seems reasonable to imagine that one could build a mile-high tower with a 5-acre footprint.  Put only one of these per square mile — it takes up less than 1% of that area, so the land is left pretty much untouched — and you can put the current population of the Earth in about the area of Montana.
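
A quick back-of-envelope check on that claim, using round 2010-era figures for world population and Montana’s area:

```python
# Back-of-envelope check on the "population of Earth in Montana" claim.
people_per_tower = 40_000              # mile-high tower, scaled from the Burj
towers_needed = 6.8e9 / people_per_tower   # ~2010 world population (rounded)
area_sq_miles = towers_needed * 1.0        # one tower per square mile
print(f"{towers_needed:,.0f} towers covering {area_sq_miles:,.0f} sq mi")
# ~170,000 square miles; Montana is about 147,000, so "about the area
# of Montana" is the right order of magnitude.
```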

Give people flying cars and/or underground high-speed trains to get from one tower to another, and you can really turn the whole Earth into a park.

But you have to be able to do a lot of high tech building, and the people have to be pretty wealthy. Building a world full of mile-high towers would strain the world’s supply of steel and concrete significantly. What could nanotech do to bring this closer to reality?

Some years back I suggested that a good X-prize for nanotech would be to build a tower ten miles high.  The reason was that you’d have to come up with a working manufacturing method to make the material, nanotubes and diamond probably, cheaply.  You could build a 10-mile tower with current composites and/or aircraft alloys but it’d be way too expensive to be worth it.

The Burj Dubai used a recent advance, the ability to make concrete at higher strengths than before.  Use of polycrystalline diamond in that role would enable much higher towers. When can we expect mile-high, or ten-mile-high, towers?
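
One way to see why the material matters so much: a crude scaling law says an untapered column crushes under its own weight at height h = strength / (density × g).  Here is a sketch with rough, idealized strength figures (no safety factors, no wind loads), just to show the scaling:

```python
# How tall can an untapered column be before it crushes under its own
# weight?  h = strength / (density * g).  Strength figures are rough,
# idealized values chosen only to illustrate the scaling.
G = 9.81  # m/s^2

materials = {                # (compressive strength Pa, density kg/m^3)
    "high-strength concrete": (100e6, 2400),
    "structural steel":       (250e6, 7850),
    "diamond (theoretical)":  (50e9,  3500),
}

for name, (strength, density) in materials.items():
    h = strength / (density * G)
    print(f"{name:24s} {h / 1000:8.1f} km")
# Concrete and steel top out at a few kilometers; tapering the column
# (like the Eiffel Tower or the Burj) stretches that, but a ten-mile
# (16 km) tower is far easier with diamondoid materials.
```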

tall towers

The surge in building heights coincided with the industrial revolution and the use of steel in building, as exemplified by the Eiffel Tower. Here are the tallest buildings on a semi-log scale:

tallest buildings

The blue line is tallest building (height in feet), the red is an eyeball-fitted trendline.  This puts the tallest building at a mile in about 2065.  However, all the structures in this trend are steel-and-concrete, and so, even though they follow an exponential curve, a shift into nanomanufacturing and materials could easily kick the curve into a different mode.  We could even see a major jump, like the Eiffel in 1889, if someone took the new capabilities and set out specifically to build a structure just to be impressive.
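
For the curious, here is a naive version of that trendline.  The handful of data points (well-known heights, in feet) and the least-squares fit are my own choices, and the answer is very sensitive to them, which is part of why the red line above is an eyeball fit:

```python
# Fit an exponential (a line on a semi-log plot) to a few landmark
# building heights and extrapolate to a mile (5280 ft).
import math

data = [(1889, 984),    # Eiffel Tower
        (1931, 1250),   # Empire State (roof)
        (1973, 1450),   # Sears Tower
        (2004, 1671),   # Taipei 101
        (2010, 2717)]   # Burj Dubai

n = len(data)
xs = [year for year, _ in data]
ys = [math.log(height) for _, height in data]
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)

mile_year = xbar + (math.log(5280) - ybar) / slope
print(f"growth rate {slope * 100:.2f}%/yr; mile-high crossing ~{mile_year:.0f}")
# This particular selection lands well past 2065; drop the early points
# or weight the Burj more heavily and the date moves up.  Exponential
# extrapolation is that sensitive -- and a shift to nanomanufactured
# materials could change the curve entirely.
```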

Y2K + 10

Posted by J. Storrs Hall on December 31st, 2009

Tonight is the tenth anniversary of the end of the world, according to some people.

May all your future angst be as groundless … and Happy New Year!

Learning from science

Posted by J. Storrs Hall on December 31st, 2009

There’s a really nice article at Wired about Kevin Dunbar’s research into how science is really done and how often scientists get data they didn’t expect.

Dunbar knew that scientists often don’t think the way the textbooks say they are supposed to. He suspected that all those philosophers of science — from Aristotle to Karl Popper — had missed something important about what goes on in the lab. (As Richard Feynman famously quipped, “Philosophy of science is about as useful to scientists as ornithology is to birds.”) …

Dunbar brought tape recorders into meeting rooms and loitered in the hallway; he read grant proposals and the rough drafts of papers; he peeked at notebooks, attended lab meetings, and videotaped interview after interview. …

Dunbar came away from his in vivo studies with an unsettling insight: Science is a deeply frustrating pursuit. Although the researchers were mostly using established techniques, more than 50 percent of their data was unexpected. (In some labs, the figure exceeded 75 percent.) “The scientists had these elaborate theories about what was supposed to happen,” Dunbar says. “But the results kept contradicting their theories. It wasn’t uncommon for someone to spend a month on a project and then just discard all their data because the data didn’t make sense.”

The real world, it turns out, is a messy place, even in a completely controlled laboratory.  The job of a scientist, after all, is to abstract clean understandable rules and regularities away from the messiness.

But consider: information theory tells us that the amount of information a signal contains depends on how unexpected it is.  A signal consisting of a string of 0s that we know is going to be a string of 0s tells us nothing at all, and conveys no information.  So we should hope that the data from an experiment will be unexpected.
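
In Shannon’s terms, the information carried by an observation is its surprisal, -log2 of its probability; the less likely you thought the outcome was, the more bits it carries.  A minimal sketch:

```python
# Shannon's measure of the information in an observation: the less
# likely you thought it was, the more bits it carries.
import math

def surprisal_bits(p):
    """Information content, in bits, of an event with probability p."""
    return -math.log2(p)

print(surprisal_bits(1.0))     # 0.0  -- the expected string of 0s: no news
print(surprisal_bits(0.5))     # 1.0  -- a fair coin flip: one bit
print(surprisal_bits(0.001))   # ~10  -- a real anomaly: lots of information
```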

The import of the article, and of Dunbar’s research, lies in how the new information is used.

“The scientists were trying to explain away what they didn’t understand,” Dunbar says. “It’s as if they didn’t want to believe it.”
The experiment would then be carefully repeated. Sometimes, the weird blip would disappear, in which case the problem was solved. But the weirdness usually remained, an anomaly that wouldn’t go away.
This is when things get interesting. According to Dunbar, even after scientists had generated their “error” multiple times — it was a consistent inconsistency — they might fail to follow it up. “Given the amount of unexpected data in science, it’s just not feasible to pursue everything,” Dunbar says. “People have to pick and choose what’s interesting and what’s not, but they often choose badly.” And so the result was tossed aside, filed in a quickly forgotten notebook. The scientists had discovered a new fact, but they called it a failure.

Now of course most of the time the new fact is something like “the particular bleach processing used to make this particular filter paper produces surface irregularities on the fibers that have an unexpected interaction with this particular protein when prepared this particular way”, a fact that will never be of use to anyone who’s not trying that particular experiment.  So most of the time it is the right thing for scientists to do to ignore the anomalous results and redo the experiment with different but “equivalent” equipment (or whatever).

75% anomalous results represents a huge information stream in information-theoretic terms.  But it’s mostly noise.  So scientists have filters in their minds to deal with it — as do we all (read the rest of the article for a little neuroscience about that).  The filters explain why “normal science” can proceed so long in the face of anomalies before a Kuhnian paradigm shift occurs.  It’s a perfectly reasonable bias to assume that your existing theory, that has worked in the past, is right and the contradictory evidence is noise.  It usually is.

But if the bias of your filters somehow gets set by something else — a political belief, for example — the fact that the filters control so much of what you see can steer you wrong a lot faster than you think.

In a perfect world, whenever someone did an experiment, all the data would be put online, accessible to anyone who cared to look, instead of filed away in a “quickly forgotten notebook”.  In the 20th century that would have been a utopian dream; but today, it’s possible, and tomorrow, it should be relatively easy.

Imagine a world where, say, just 1% of today’s MMORPG players spent their time and efforts crawling through lab records, analysis programs, and satellite feeds — gleaning not virtual gold, but scientific truth.

Futurisms – Critiquing the project to reengineer humanity: Happy Birthday, Nanotechnology?

Posted by J. Storrs Hall on December 29th, 2009

Futurisms – Critiquing the project to reengineer humanity: Happy Birthday, Nanotechnology?

Adam Keiper over at the New Atlantis reminds us it’s the 50th anniversary of Feynman’s Plenty of Room at the Bottom talk.

J Storrs Hall on FastForward Radio tonight

Posted by J. Storrs Hall on December 29th, 2009

Tonight on Fast Forward Radio

J Storrs Hall, president of Foresight, joins FFR to continue their special series leading up to Foresight 2010. The conference, January 16-17 in Palo Alto, California, provides a unique opportunity to explore the convergence of nanotechnology and artificial intelligence and to celebrate the 20th anniversary of the founding of the Foresight Institute.

10:00 Eastern/9:00 Central/8:00 Mountain/7:00 Pacific.

Blog talk radio Call-in Number: (347) 215-8972

Life extension: taking those first steps

Posted by Christine Peterson on December 28th, 2009

Longtime readers know that we at Foresight would prefer that our members, and Nanodot readers in general, actually live long enough to experience the benefits of molecular nanotechnology personally.  In that vein, we bring to your attention America’s Wellness Challenge, which I am assisting as a member of their Social Media Advisory Board.

If you are new to the idea of extending healthy lifespan, try taking their quiz called My Wellness Challenge.  If you are more experienced, take the quiz and post your suggestions for improvement as comments here, so I can pass them along to the organizers.

In any case, have a healthy and happy 2010!  And we hope to see you Jan. 16-17 at the Foresight Conference and Senior Associate Reception in Palo Alto.  —Chris Peterson

Avatar

Posted by J. Storrs Hall on December 27th, 2009

The first time I met Eric Drexler, I complained to him, “You’ve ruined science fiction for me.”  (He replied, “If it’s any consolation, I ruined it for myself.”)

The reason, of course, is that understanding nanotech means that all the classic SF projections become so piddling and simplistic in comparison that any story set after, say, 2050, looks ridiculously anachronistic, as if it had been written by Jules Verne or H. G. Wells.

The more technologically advanced the presentation of SF gets, as in the technical tour-de-force of CGI that is Avatar, the weirder this double-exposure sense of “what universe was this written in” gets.  I won’t go into the very hackneyed plot — Dvorsky has a nice review here — or how a presumably star-faring civilization (the humans) happens to still be using Vietnam-era military technology (why aren’t the fighting machines at least teleoperated, or more likely, AIs?), or even why they aren’t mining the floating mountains for the antigravity mineral.  And who bred the Smurfs with the Gentle Tasaday?  (My guess is that Cameron is angling for an Oscar and wrote the story to appeal to the Gaian sensibilities of the Hollywood elite.)

Avatar isn’t anywhere near real SF — it’s fantasy.  Let’s take it on those terms.

But the thing about the movie as a whole that struck me was that that beautiful, gorgeous, magical world … was entirely artificial.  Synthetic.  Made up. Every single bit and pixel. Produced by a corporation using lots of expensive machines. We are standing at the dawn of the era where the worlds we can produce are better than the natural one we happen to have evolved in.  Storytellers always did that in the imagination;  now we can do it in photorealistic detail.  With nanotech, we’ll be able to do it with atoms instead of bits.  This century.  If you like it you can live there.  But only if we build it.

Is the brain a reasonable AGI design?

Posted by J. Storrs Hall on December 25th, 2009

Shane Legg seems to think so:  Tick, tock, tick, tock… BING.

Having dealt with computation, now we get to the algorithm side of things. One of the big things influencing me this year has been learning about how much we understand about how the brain works, in particular, how much we know that should be of interest to AGI designers. I won’t get into it all here, but suffice to say that just a brief outline of all this information would be a 20 page journal paper (there is currently a suggestion that I write such a paper next year with some Gatsby Unit neuroscientists, but for the time being I’ve got too many other things to attend to). At a high level what we are seeing in the brain is a fairly sensible looking AGI design. You’ve got hierarchical temporal abstraction formed for perception and action combined with more precise timing motor control, with an underlying system for reinforcement learning. The reinforcement learning system is essentially a type of temporal difference learning though unfortunately at the moment there is evidence in favour of actor-critic, Q-learning and also Sarsa type mechanisms — this picture should clear up in the next year or so. The system contains a long list of features that you might expect to see in a sophisticated reinforcement learner such as pseudo rewards for informative queues, inverse reward computations, uncertainty and environmental change modelling, dual model based and model free modes of operation, things to monitor context, it even seems to have mechanisms that reward the development of conceptual knowledge. When I ask leading experts in the field whether we will understand reinforcement learning in the human brain within ten years, the answer I get back is “yes, in fact we already have a pretty good idea how it works and our knowledge is developing rapidly.”

(emphasis added.) Shane is one of the leading AGI researchers out there. I tend, in general, to agree with his analysis and predictions.
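
For readers who haven’t met the temporal-difference family the quote refers to, here is tabular TD(0), its simplest member, learning state values on a toy five-state chain; the environment and all the parameters are my own invention, purely for illustration:

```python
# Tabular TD(0): learn how valuable each state of a random walk is,
# with a reward of 1 for reaching the right end of a five-state chain.
import random

N_STATES, ALPHA, GAMMA = 5, 0.1, 0.9
V = [0.0] * (N_STATES + 1)             # value estimates; state 5 is terminal

for _ in range(2000):                  # episodes of a random walk from state 0
    s = 0
    while s < N_STATES:
        s_next = max(s + (1 if random.random() < 0.5 else -1), 0)
        reward = 1.0 if s_next == N_STATES else 0.0
        # The TD error: how wrong was the current estimate?
        td_error = reward + GAMMA * V[s_next] - V[s]
        V[s] += ALPHA * td_error       # nudge the estimate toward the target
        s = s_next

print([f"{v:.2f}" for v in V[:N_STATES]])
# values rise toward the rewarded end of the chain
```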

Tiptoe or dash to the future?

Posted by J. Storrs Hall on December 24th, 2009

Over at Overcoming Bias, Robin Hanson wonders whether we should go fast or slow with tech development as we move toward a level of development (a solar-system-wide or interstellar civilization) where we are reasonably unlikely to be wiped out in a single incident.

He bases his analysis on how likely we are to stumble (or be otherwise wiped out) along the way.

I’d personally reject that as a valid concern.  We don’t have a clue what, if anything, is actually going to wipe us out.  If you really wonder what we think now is going to look like 1000 years from now, consider what the medieval philosophers were worrying about 1000 years ago.  Yep, we’re that clueless.

A better way to look at the problem is to compare what it was like to live in brave (fast-advancing) vs cowardly (slow-advancing) times.  The brave times (e.g. just a century ago) were optimistic times, when people were full of promise and possibilities.  The cowardly times were despondent and depressed.

Shakespeare put it like this:

Cowards die many times before their deaths; the valiant never taste of death but once.

None but the brave deserve nanotechnology.

Scientists Create World’s First Molecular Transistor

Posted by J. Storrs Hall on December 24th, 2009

Scientists Create World’s First Molecular Transistor.

Very nice writeup of the research over at Next Big Future.

To my mind what’s new here isn’t the transistor per se — semiconducting and conductive states have been known in CNTs for over a decade, and FET and diode-like arrangements of them have been around for about as long.  What’s new is the ability to synthesize and characterize them in a circuit (as opposed to pushing bits around with an AFM until a transistor happened).

Note this is Mark Reed’s group at Yale (and collaborators).

A Visit from Saint Assembler

Posted by J. Storrs Hall on December 24th, 2009

Historical note: back when I ran sci.nanotech, it was my tradition to post this poem every Christmas, in a spirit of light-hearted fun.

We here at Foresight wish all our readers the merriest of season’s greetings, and hope that you all are safe, warm, and enjoying your holidays with family and friends!

 

A Visit from Saint Assembler

(With Apologies to Clement Moore)
by J. Storrs Hall

‘Twas the night before Breakthrough; when all through the house,
Not a creature was stirring, not even a mouse.

The smocks were hung up in the lab for the night,
In hopes that a rest would bring some new insight.

The children were nestled all snug in their beds,
While visions of molecules danced through their heads.

Ma in her kerchief, and I in my cap,
Had just settled our brains for a long winter’s nap–

When logical inference struck me so hard
I let down my everyday common-sense guard.

The mind, on the crest of this new point of view
Took wild flights of fancy and made them seem true.

My wondering eyes, as I stood there agape,
Saw a miniature robot complete with a tape;

Of such a micronic molecular mass,
I knew in a moment it must be Saint … well, it must be a molecular assembler.

More rapidly than I could figure it out,
He built more of himself from stuff lying about.

He built Dasher and Dancer; they, Prancer and Vixen;
And then Comet and Cupid and Donder and Blitzen.

Now faster than I could match each with his name,
they doubled and doubled–and they all were the same.

As dry leaves that before the wild hurricane fly,
(or more, rather, like smoke) they took off to the sky.

And I could imagine I heard on the roof
the prancing and pawing of each tiny hoof.

Down the chimney they came, eating all of the soot,
As carelessly diamonds were dropped on my foot.

Another small cloud of atomic erectors
Were turning the roof into solar collectors.

I looked at one closely: a jolly old speck,
He had plenty of arms, and a bivalent neck.

His tape told him what he was programmed to do;
He was fast and efficient–self-referent too.

He looked like a gang of maniacal boys
Had been put in a room full of wee tinkertoys,

And making a mechanical jest of their teacher,
Allowed it to mutate into an odd creature.

Benzene rings on his fingers, propellors for toes,
Bucky ball for a belly, and lithium nose.

His arms moved like twinkling magical wands,
and his ears were connected by hydrogen bonds.

A wink of his eye, and a twist of his head,
Soon gave me to know I had nothing to dread;

Though New Jersey, the previous hour or two,
Had melted to form a sweet, sticky, gray goo.

He said not a word, but went straight to his work,
Built three more just like him, and turned with a jerk.

It was hard to see whether he gestured or beckoned,
For he did it a million or more times a second.

Not a bit of the household escaped from his hustle,
Even the doors received eyes, ears, and muscle.

I’d just gotten used to a toaster with brains;
I now must contend with intelligent drains.

Then most of them left through the skin of my hands,
to do a refurbishing job on my glands–

But I heard them exclaim, ere they dove out of sight,
“Happy Future to all, and to all a good night!”

Robin Hanson and Brian Wang Tonight on Fast Forward Radio

Posted by J. Storrs Hall on December 23rd, 2009

(h/t Next Big Future)
Tonight on Fast Forward Radio

Economist Robin Hanson and futurist Brian Wang join us as we continue our special series leading up to Foresight 2010. The conference, January 16-17 in Palo Alto, California, provides a unique opportunity to explore the convergence of nanotechnology and artificial intelligence and to celebrate the 20th anniversary of the founding of the Foresight Institute.

10:00 Eastern/9:00 Central/8:00 Mountain/7:00 Pacific.

Robin Hanson Considers Transhumanism

Truth be told, folks who analyze the future but don’t frame their predictions or advice in terms of standard ideological categories are largely ignored, because few folks actually care much about the future except as a place to tell morality tales about who today is naughty vs. nice. It would be great if those who really cared more directly about the future could find each other and work together, but alas too many others want to pretend to be these folks to make this task anything but very hard…

Blog talk radio Call-in Number: (347) 215-8972

Note that Robin will debate his online critic Mencius Moldbug at the Senior Associates reception at the conference, as well as giving his formal talk on the economics of nanotech and AI during the conference proper.

Martian Graffiti

Posted by J. Storrs Hall on December 22nd, 2009

One more comment on the post by Mike Treder that I addressed last time.  Recall that he wrote:

Techno-rapturists among our reading audience might be quick to respond with glib answers about miraculous nanotechnology solutions that are just around the corner …

To understand Foresight’s actual point of view on this issue (which is actually a lot closer to that of CRN, which Mike co-founded, than is implied in the quote), it is necessary to understand what the power of a mature nanotechnology is really like.  I fear that this often gets lost in the detailed discussions of the proto-nanotech that we see today in the labs.

Here’s one simple way to say it: the accidental impact on climate of current technology is at least three orders of magnitude smaller than the intentional impact of a mature nanotechnology.  If you are Really Worried about a roughly one watt per square meter influence on the Earth, I urge you to consider the Weather Machine, which could interdict and/or redirect the full ~1 kilowatt per square meter that’s available. Or, to judge by the diurnal warming and cooling rates, change the global surface temperature by 10°C per day.

Now frankly, this worries me a lot more than natural climate change.  My worry goes as follows: people are so worried about climate that they actually build a Weather Machine.  Their models of how climate works are not as good as they think; let’s say they have roughly the same ratio of actual to perceived knowledge that the Federal Reserve had about macroeconomics in 1929 (or 2008, for that matter).  They turn it on and there’s, well, a depression. Or multiple governments build them and have wars.  Nuclear weapons are trivial by comparison.

Let me demonstrate.  In the original Weather Machine post I claimed that a Weather Machine Mark II could “shoot down the moons of Mars.”  I took a guess at the energy available, resolving power, etc., and figured I was well on the safe side.

Just for fun I did the math and impressed myself a little bit.  At closest approach, with an active spot of 10,000 km diameter (remember, the WM is a cloud of balloons in the stratosphere with optical-wavelength antennas that are synchronized to form a coherent optical phased array), using violet light for the beam, you could focus a petawatt beam on a 2.7 mm spot on Phobos.  A petawatt is about a quarter of a megaton of TNT per second; 2.7 mm is about a tenth of an inch.  I.e., you could blow Phobos up, write your name on it at about Sharpie handwriting size, or ablate the surface in a controlled way, creating reaction jets and sending it scooting around in curlicues like a bumper car.
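
For the curious, the spot size follows from the standard diffraction limit for an aperture that big, and the power conversion is straightforward; the Earth-Mars closest-approach distance below is an assumed round number:

```python
# Sanity check on the Phobos numbers.  The diffraction-limited (Airy)
# spot radius is 1.22 * wavelength * range / aperture.
WAVELENGTH = 400e-9   # violet light, m
APERTURE = 1.0e7      # 10,000 km active phased-array spot, m
RANGE = 5.6e10        # Earth-Mars at closest approach, m (assumed)

spot_radius = 1.22 * WAVELENGTH * RANGE / APERTURE
print(f"spot radius: {spot_radius * 1000:.1f} mm")          # ~2.7 mm

# One petawatt expressed in megatons of TNT per second:
PETAWATT, MEGATON_TNT = 1e15, 4.184e15                      # J/s and J
print(f"{PETAWATT / MEGATON_TNT:.2f} megatons per second")  # ~0.24
```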