
        CIA's "Global Trends 2015" on nanotechnology

        from the spooks-are-watching dept.
        Included in the CIA report Global Trends 2015: A Dialogue About the Future With Nongovernment Experts: "Discoveries in nanotechnology will lead to unprecedented understanding and control over the fundamental building blocks of all physical things. Developments in this emerging field are likely to change the way almost everything, from vaccines to computers to automobile tires to objects not yet imagined, is designed and made. Self-assembled nanomaterials, such as semiconductor 'quantum dots,' could by 2015 revolutionize chemical labeling and enable rapid processing for drug discovery, blood content analysis, genetic analysis, and other biological applications." The International Herald Tribune comments: "The CIA's analytical work is sometimes breathtakingly mediocre, but this survey actually is worth reading. Compiled with help from prominent experts outside government, the study is blunt, provocative and full of surprising observations."

        The Economist vs. life extension

        from the non-visionary dept.
        A normally sensible publication, The Economist has come out against life extension in an article titled "Who wants to live for ever?" Excerpts: "Average life expectancy has risen greatly. The span of individual life has not. Would it be a good thing if it did? No…If people were to live a lot longer, and everything else stayed the same, old people would soon end up a huge majority. Ugh…Who wants it anyway? A world of seen-it-all-before, weary crumblies would be a depressing place to live in."
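        The Economist's demographic point is simple arithmetic, and a deliberately crude steady-state sketch makes it concrete. The assumptions here (constant birth cohort, everyone living to the same fixed age, the conventional 65-year cutoff) are mine for illustration, not the article's.

```python
# Back-of-the-envelope check of the demographic claim above: with a constant
# birth cohort and everyone living to the same fixed age, the fraction of the
# population past the conventional 65-year mark is (lifespan - 65) / lifespan.

def fraction_over_65(lifespan_years: int) -> float:
    return max(lifespan_years - 65, 0) / lifespan_years

for lifespan in (80, 120, 200):
    print(f"lifespan {lifespan:>3}: {fraction_over_65(lifespan):.0%} over 65")
```

        Under these toy assumptions the over-65 share climbs from roughly a fifth today toward two-thirds at a 200-year lifespan, which is the "huge majority" the article worries about, though of course "everything else staying the same" is exactly what life-extension advocates dispute.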

        Difficulty of enforcing ethical standards

        from the tough-question dept.
        Sharad Bailur writes "…I don't see the development, in the foreseeable future, of a system which is valid the world over, in which ethical standards that everybody agrees should be followed are enforced. For instance, how does one explain the millions of computer viruses floating around in the internet ether? If ethical standards cannot be enforced by some sort of international agreement, they will be followed as an exception rather than as a rule. Besides, how does one enforce international agreements in the face of competing national sovereignties? We can at best, by today's means, isolate and blockade certain countries like Libya, Iraq and North Korea. In the face of a nanotech future these measures are surely hopelessly inadequate. How do we deal with this?" Read More for the full post.

        On ethics: engineers compared to programmers

        from the is-there-a-programmers-code-of-ethics dept.
        An engineer points out that the engineering professions have codes of ethics that guide them in designing for safety and in communicating with the public, and that these are not being followed by some programmers: "It seems to me that as a nanomechanical engineer (or rather a mechanical engineer with an interest in nanomechanics), I was permanently 'conditioned' to always think of the ethics behind a design by 'upping the safety factor' on my design. If you are dealing daily with deadly machines, or rather machines that can cause harm to the public, then as an engineer you are obligated to design something that has a higher safety factor than someone who designs air conditioning systems. This fact, or rather this standard rule of engineering practice, has been overlooked by Joy and his counterparts…My point is this: as a general rule, engineers create highly reliable systems that must have a certain safety factor included whenever there is the slightest chance of harm to the public. I, or rather we, as engineers (civil, mechanical, high-power electrical), are always worried about the safety factors we have set on a design…I think I might be oversimplifying the issue, but ultimately I have a hard time understanding a) why so many programmers claim to have a solid understanding of safety factors when their work involves only a few instances of life or death for the public, and b) if they follow the same code of ethics set forth by the state governmental systems, why they are not lumped together with the rest of the classically trained engineers. One other thing: if they are, then didn't they see that part in the ethics section about slandering, or speaking out to the public on something in which they have no educated knowledge?… I would love to hear what the software 'engineers' have to say about this particular essay, in order for me to learn more about their trade." Read More for the full post.
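        The safety-factor rule the writer keeps returning to is easy to make concrete. The sketch below is only an illustration, not anything from the post: the function names and numbers are made up, and the point is simply that required capacity must exceed the expected load by a margin that grows with the potential for public harm.

```python
# Hypothetical illustration of the "safety factor" rule described above.
# All names and numbers are invented for this sketch.

def required_capacity(expected_load: float, safety_factor: float) -> float:
    """Minimum capacity a design must provide for a given safety factor."""
    return expected_load * safety_factor

def design_passes(capacity: float, expected_load: float, safety_factor: float) -> bool:
    """True if the design meets or exceeds the required margin."""
    return capacity >= required_capacity(expected_load, safety_factor)

load_kN = 100.0  # illustrative expected load
print(design_passes(130.0, load_kN, safety_factor=1.25))  # True: enough margin for a low-risk system
print(design_passes(130.0, load_kN, safety_factor=2.0))   # False: a design that can harm the public needs "upping"
```

        The same component is acceptable or unacceptable depending only on how much harm a failure could cause, which is the engineer's point about air conditioning versus deadly machines.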

        Engineers seen as unable to make moral decisions

        from the who-else-is-even-paying-attention? dept.
        From a Newsweek article on MSNBC on the coming age of cyborgs: "Who, then, can speak on moral issues? Certainly not the engineers. Ellen Ullman, a former computer programmer and the author of the 1997 book Close to the Machine: Technology and Its Discontents, says that 'the problem is not the technology, which in any event can't be stopped. The problem is that engineers are making decisions for the rest of us.' Programmers are hired guns, she says, and rarely understand in a nuanced way their clients' actual work. They are, she says, the last people 'to understand what is an acceptable risk.' " CP: In Foresight's experience, programmers and engineers are far more attentive to ethical issues in technology than members of other professions.

        Fictional polymaths debate destructive nanotech

        from the that's-a-long-time-from-now dept.
        Found by GoTo.com: The essay The World in 2050 by Yale lecturer Nick Bostrom features an imaginary dialogue, set in 2050 and broadcast in virtual reality, in which three polymaths debate various issues, especially the risk of destructive nanotechnology. The discussion closes with: "We need greater-than-human intelligence to build defenses against nano-attacks. We would not reduce the danger by slowing down; on the contrary, that would make the risks even bigger. The best we can do is to press onward with all possible speed, using as much foresight as we can muster, and hope that there is another side that we can get to."

        Rate of progress: slowing or speeding up?

        from the US-News-vs-Reason dept.
        Tony (asnapier) writes "Here are two articles that are very much worth reading: #1, The Slowing Rate of Progress, and #2 [mentioned on nanodot earlier], More More More Nanotechnology and the Law of Accelerating Returns. Which one is correct? The initial reaction is to say #2. I have come to the conclusion that geometric growth will only happen if strong AI is realized, which of course is the very definition of Vinge's singularity. What if strong AI does not emerge? Then the argument for slowing growth appears valid: human minds and mundane information systems would be the bottleneck to rapid advancement (you have to read the articles for the proper context; of course science and technology advance every day, but a new DVD player does not count as a fundamental improvement of the human condition)." CP: The US News piece ends: "Perhaps another Thomas Edison is hard at work, using nanotechnology or bioengineering to invent new machines that are truly revolutionary and transforming. But he or she has not succeeded yet."
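        The disagreement between the two articles boils down to linear versus geometric (exponential) growth, and a toy comparison with made-up numbers shows how quickly the two curves diverge over the timescales being argued about.

```python
# Toy comparison of linear progress against geometric ("accelerating returns")
# growth. The starting value, increment, and growth rate are all illustrative.

def linear(start: float, increment: float, years: int) -> float:
    return start + increment * years

def geometric(start: float, annual_rate: float, years: int) -> float:
    return start * (1.0 + annual_rate) ** years

for years in (10, 25, 50):
    print(f"{years:>2} yr: linear={linear(1.0, 0.5, years):7.1f}  "
          f"geometric={geometric(1.0, 0.5, years):14.1f}")
```

        Over a decade the two look broadly similar; over fifty years the geometric curve is astronomically larger, which is why the question of whether progress compounds (with or without strong AI) matters so much to both camps.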

        Turing code for nanomachines?

        from the please-not-in-Java dept.
        vik writes "I was attracted by a Slashdot article on 8-bit Java VMs implemented using a Turing Machine backend. Since Turing Machines are conceptually simple, the design put forward by Bernard Hodson has relevance to nanotechnology: we'll want to get the simplest possible hardware running the smallest possible software. Probably not in Java, but the principles still hold. If construction command sequences can be compressed in a similar way, assembler control machinery could be greatly simplified."
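        To show just how little machinery "conceptually simple" means here, below is a minimal single-tape Turing machine sketch. It is a generic illustration with a made-up unary-increment rule table, not Hodson's design (which is not described in the post).

```python
# A minimal single-tape Turing machine. The rule table below merely appends a
# '1' to a unary number: a stand-in program, not Bernard Hodson's design.

def run(tape, rules, state="start", blank="_", halt="halt"):
    cells = dict(enumerate(tape))        # sparse tape: position -> symbol
    pos = 0
    while state != halt:
        symbol = cells.get(pos, blank)
        write, move, state = rules[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# (state, symbol read) -> (symbol to write, head move, next state)
rules = {
    ("start", "1"): ("1", "R", "start"),  # scan right across the unary digits
    ("start", "_"): ("1", "R", "halt"),   # write one more '1', then halt
}

print(run("111", rules))  # -> 1111
```

        The whole control structure is one small lookup table, which is the appeal for nanomachines: if construction commands could be driven by something this spare, the hardware needed to interpret them could be correspondingly simple.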

        Kurzweil vs. Dertouzos debate future technology

        from the who-won? dept.
        Joseph Sterlynne writes "MIT's Technology Review has printed an exchange between Ray Kurzweil and Michael Dertouzos regarding the latter's recent article on reasonable expectations of technological progress." Kurzweil: "As for nanotechnology-based self-replication, that's further out, but the consensus in that community is this will be feasible in the 2020s, if not sooner." Dertouzos: "We have no basis today to assert that machine intelligence will or will not be achieved…Attention-seizing, outlandish ideas are easy and fun to concoct."

        More nanotech skepticism

        from the nanotech-not-interesting-enough dept.
        Sharad Bailur writes "I read David Coutts's review of Matt Ridley's opinion on Nanotech with interest. Coincidentally I also am reading the book [Genome] and have just gone thru the chapter he mentions. I think Ridley's scepticism is shared by many other established scientists. Dr M. Vidyasagar, the former head of the Centre for Artificial Intelligence and Robotics of the Defence Research and Development Organisation of the Ministry of Defence, here in India, said that while nanotech is feasible it will have to prove itself over time and that he found the concept of reverse logic operations more interesting. There have been similar reactions from others about nanotech, Michio Kaku's being the most famous one which was posted here some days ago. I think a healthy scepticism and an open mind are necessary. Nanotech is not a religion. Nor does it need convinced acolytes." CP: However, a large engineering project does need those who are committed to making feasibility into reality, and it is they who will win the race.
