Gilmore on nanotech & copy-protection

from the what-do-you-mean-replication-is-illegal? dept.
Senior Associate John Gilmore of EFF has written this item on the problem of copy-protection, including the connection to nanotech. He prefaced it with: "My latest missive about copy-protection. I tie in the big nanotech angle toward the end. I have to sneak them up to it because they think I'm crazy if I lead with it. Feel free to reproduce this. If you publish it far and wide, let me know so I can feed you corrections as they come in from the critics…" Sounds like John would appreciate feedback, so add your comments below.

Hawking predicts design of improved human race

from the lack-of-advanced-aliens dept.
PatrickUnderwood writes "Stephen Hawking speaks out on genetic engineering, space colonization, and overpopulation: 'Hawking said a more advanced race on other planets was unlikely. "If that's so, then why hasn't it spread through the galaxies and visited us? Or could it be they are watching us and letting us stew in our own primitive juices?" he joked, adding, "I doubt they would be so considerate to a lower life form." ' http://www.cnn.com/2001/ASIANOW/south/01/14/india.stephenhawking.ap/index.html"

Arthur C. Clarke on nanotech and AI

from the megabrains-via-nanotech dept.
Scientific American reports Arthur C. Clarke's views on machine intelligence via nanotechnology: "Quoting himself (Clarke's third law), Clarke remarks that 'any sufficiently advanced technology is indistinguishable from magic; as technology advances it creates magic, and [AI is] going to be one of them.' Areas of research that target the ultimate in miniaturization, he adds, may be the key to making good minds. 'When nanotechnology is fully developed, they're going to churn [artificial brains] out as fast as they like.' "

CIA's "Global Trends 2015" on nanotechnology

from the spooks-are-watching dept.
Included in the CIA report Global Trends 2015: A Dialogue About the Future With Nongovernment Experts: "Discoveries in nanotechnology will lead to unprecedented understanding and control over the fundamental building blocks of all physical things. Developments in this emerging field are likely to change the way almost everything — from vaccines to computers to automobile tires to objects not yet imagined — is designed and made. Self-assembled nanomaterials, such as semiconductor 'quantum dots,' could by 2015 revolutionize chemical labeling and enable rapid processing for drug discovery, blood content analysis, genetic analysis, and other biological applications." The International Herald Tribune comments: "The CIA's analytical work is sometimes breathtakingly mediocre, but this survey actually is worth reading. Compiled with help from prominent experts outside government, the study is blunt, provocative and full of surprising observations."

The Economist vs. life extension

from the non-visionary dept.
The Economist, a normally sensible publication, has come out against life extension in an article titled "Who wants to live for ever?" Excerpts: "Average life expectancy has risen greatly. The span of individual life has not. Would it be a good thing if it did? No…If people were to live a lot longer, and everything else stayed the same, old people would soon end up a huge majority. Ugh…Who wants it anyway? A world of seen-it-all-before, weary crumblies would be a depressing place to live in."

Difficulty of enforcing ethical standards

from the tough-question dept.
Sharad Bailur writes "…I don't see the development, in the foreseeable future, of a system valid the world over in which ethical standards that everybody agrees should be followed are enforced. For instance, how does one explain the millions of computer viruses floating around in the internet ether? If ethical standards cannot be enforced by some sort of international agreement, they will be followed as the exception rather than the rule. Besides, how does one enforce international agreements in the face of competing national sovereignties? By today's means we can at best isolate and blockade certain countries like Libya, Iraq and North Korea. In the face of a nanotech future these measures are surely hopelessly inadequate. How do we deal with this?" Read More for the full post.

On ethics: engineers compared to programmers

from the is-there-a-programmers-code-of-ethics dept.
An engineer points out that the engineering professions have codes of ethics that guide them in designing for safety and in communicating with the public, and that these are not being followed by some programmers: "It seems to me that as a nanomechanical engineer (or rather a mechanical engineer with an interest in nanomechanics), I was permanently 'conditioned' to always think of the ethics behind a design by 'upping the safety factor' on my design. If you are dealing daily with deadly machines — or rather machines that can cause harm to the public — then as an engineer you are obligated to design something that has a higher safety factor than someone who designs air conditioning systems. This fact, or rather this standard rule of engineering practice, has been overlooked by Joy and his counterparts…My point is this: as a general rule, engineers create highly reliable systems that must include a certain safety factor whenever there is the slightest chance of harm to the public. I, or rather we, as engineers (civil, mechanical, (high-power) electrical) are always worried about the safety factors we have set on a design…I may be oversimplifying the issue, but ultimately I have a hard time understanding why so many programmers a) claim to have a solid understanding of safety factors when their jobs involve only a few instances where the life or death of the public is at stake, and b) if they follow the same code of ethics set forth by the state governmental systems, why they are not lumped together with the rest of the classically trained engineers. One other thing: if they are, then didn't they see that part in the ethics section about slandering, or speaking out to the public on something in which they have no educated knowledge?…I would love to hear what the software 'engineers' have to say about this particular essay, so that I can learn more about their trade." Read More for the full post.

Engineers seen as unable to make moral decisions

from the who-else-is-even-paying-attention? dept.
From a Newsweek article on MSNBC on the coming age of cyborgs: "Who, then, can speak on moral issues? Certainly not the engineers. Ellen Ullman, a former computer programmer and the author of the 1997 book Close to the Machine: Technology and Its Discontents, says that 'the problem is not the technology, which in any event can't be stopped. The problem is that engineers are making decisions for the rest of us.' Programmers are hired guns, she says, and rarely understand in a nuanced way their clients' actual work. They are, she says, the last people 'to understand what is an acceptable risk.' " CP: In Foresight's experience, programmers and engineers are far more attentive to ethical issues in technology than members of other professions.

Fictional polymaths debate destructive nanotech

from the that's-a-long-time-from-now dept.
Found by GoTo.com: The essay The World in 2050 by Yale lecturer Nick Bostrom features an imaginary dialogue, set in 2050 and broadcast in virtual reality, in which three polymaths debate various issues, especially the risk of destructive nanotechnology. The discussion closes with: "We need greater-than-human intelligence to build defenses against nano-attacks. We would not reduce the danger by slowing down; on the contrary, that would make the risks even bigger. The best we can do is to press onward with all possible speed, using as much foresight as we can muster, and hope that there is another side that we can get to."

Rate of progress: slowing or speeding up?

from the US-News-vs-Reason dept.
Tony (asnapier) writes "Here are two articles that are very much worth reading: #1 The Slowing Rate of Progress and #2 [mentioned on nanodot earlier] Nanotechnology and the Law of Accelerating Returns. Which one is correct? The initial reaction is to say #2. I have come to the conclusion that geometric growth will happen only if strong AI is realized — which, of course, is the very definition of Vinge's singularity. What if strong AI does not emerge? Then the argument for slowing growth appears valid — human minds and mundane information systems would be the bottleneck to rapid advancement. (You have to read the articles for the proper context — of course science and technology advance every day, but a new DVD player does not count as a fundamental improvement of the human condition.)" CP: The US News piece ends: "Perhaps another Thomas Edison is hard at work, using nanotechnology or bioengineering to invent new machines that are truly revolutionary and transforming. But he or she has not succeeded yet."