

You are viewing
Foresight Archives



Challenges addressed


    Nanotechnology, Manufacturing and Environment    
    Nanotechnology, Computing and Defense    
    Nanotechnology and Space    
    Nanotechnology, Biotechnology and Medicine    
    Nanotechnology, Surveillance and Openness    
    Building Complex Systems that Work    
    Nanotechnology and Machine Intelligence    
    Nanotechnology and Design Ahead    
    Technology and Ethics    
    Unregulated Cyberspace Markets    
    "Intellectual Property"    
    Cooperation-Enabling Software    

Workgroup Instructions

  1. Choose a person to deliver the workgroup report.
  2. Distribute the challenges.
  3. Read and discuss.
  4. If the challenges provided don't appeal to the group, choose another interesting aspect of the topic to explore.
  5. Once the discussion is concluded, one member of the group should fill out the results form.
  6. Take a break and then return for the workgroup reports.
  7. Above all, have fun with this.

Nanotechnology, Manufacturing and Environment

Challenge #1: Overcoming polarization over environment and technology concerns

Environmental organizations often tend to be anti-technology. Pro-technology organizations are often perceived as "anti-environmental", and some actually are insensitive to legitimate environmental concerns. This polarization is extremely counterproductive. Outline a realistic agenda for Foresight and/or IMM to address this problem.

Engineer some "sticky memes" that could be propagated to address this issue. What vector could be used to carry these memes effectively?

What organizations could we partner with to break through the simplistic notions that we can have technology or environmental quality, but not both?

What environmentally oriented product could early nanotechnology companies produce?

How should technologists address the concern that nanotechnology itself could become an environmental problem, given humanity's current record of managing powerful new technologies wisely?

Challenge #2: Pathways to molecular manufacturing systems engineering

Most of the R&D in nanotechnology is not directed towards the bottom-up approach to molecular manufacturing. Of the fraction of research that is relevant to the bottom-up approach, most of it is directed towards the design of specific elements or components. Very little manufacturing systems engineering has been attempted at the molecular level.

What approach do you recommend to increase the quantity of R&D on bottom-up molecular manufacturing?

What approach do you recommend to increase the quality and quantity of molecular systems engineering?

Challenge #3: Defending the feasibility and desirability of Molecular Manufacturing

There appears to be a persistent disconnect, even in the technical community, concerning the engineering feasibility and viability of molecular manufacturing. Members of the Foresight and IMM community have addressed these issues directly in Science, Scientific American, books, and lectures. It appears that nothing less than an existence proof will do, but in order to provide an existence proof, we need more resources and R&D.

Is the underlying problem technological illiteracy, or narrow-mindedness?

Is there an intermediate step towards an existence proof that would make a difference?

What are we missing?

What other media or approaches might be cost-effective?

What else should Foresight or IMM be doing?

Challenge #4: Improve the Foresight Guidelines on Nanotechnology Development

The Foresight Guidelines have gone through many iterations and improvement cycles, but they are still in their infancy. There are several missing elements:

The Guidelines do not address the serious problem of potential deliberate abuse or terrorist acts. What would you recommend to deal with this possibility?

The Guidelines do not take a position on the development of offensive nanoweapons. Should they, or would that be a mistake?

The Guidelines suggest possible enforcement mechanisms, but they actually envision voluntary enforcement. Is this naïve? What enforcement mechanisms might actually work and make economic and pragmatic sense?

The Guidelines suggest several technological mechanisms that would reduce the likelihood of accidental or even deliberate abuse. However, none of these mechanisms has been worked out in operational molecular detail. How can we deliver the goods in this area? What incentives or alternatives exist for getting these safety-oriented mechanisms addressed in detail — sooner rather than later?

Nanotechnology, Computing and Defense

Challenge #5: Which nation would be the most responsible developer of nanotechnology?

Scenario: On New Year's Day, 2027, China announced to the world that it had overcome the final obstacles to deploying molecular nanotechnology. The announcement came with a sonic boom: the abrupt disassembly of national monuments around the world. As the world screamed for explanations, the Chinese government claimed credit and calmly informed the outraged masses that the monuments would be restored with honor. A day later, the monuments were reassembled, growing seemingly from nothing before billions of witnesses.

We saw something similar when Pakistan detonated a nuclear device in the face of an outright ban and international criticism. Pakistan wanted to leave no room for doubt about its capacity to build bombs. Is it a stretch to think that such proof may be required of nanotechnological nations? What form might that proof take?

What would it mean for the U.S. defense strategy if China were first to achieve nanotechnology? What shape would the technology take if China were to be in charge? Would this lead to an arms race; would China try to suppress research abroad?

How would this scenario look different if the first to reach nanotech were instead Israel, Japan, the U.S., or another nation you find more plausible?

Based on your debate on the above, can you suggest which nations seem safest, or least safe, as leaders in nanotechnology?

Challenge #6: Will "hardware overhang" be a problem?

Hardware has been growing in sophistication along an exponential curve. Software to run on that hardware has been evolving at a much slower rate.
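The size of this gap can be sketched with back-of-the-envelope arithmetic. The doubling times below (18 months for hardware capacity, 10 years for software productivity) are illustrative assumptions, not measured figures:

```python
# Rough sketch of the "hardware overhang" claim, under assumed doubling times.
# The doubling times are illustrative assumptions, not measurements.
years = 20
hardware_doubling = 1.5    # years per doubling (Moore's-law-style assumption)
software_doubling = 10.0   # years per doubling (assumed, much slower)

hardware_growth = 2 ** (years / hardware_doubling)
software_growth = 2 ** (years / software_doubling)
overhang = hardware_growth / software_growth  # capacity outpacing our ability to use it

print(f"Over {years} years: hardware x{hardware_growth:,.0f}, "
      f"software x{software_growth:,.0f}, overhang factor x{overhang:,.0f}")
```

Under these assumed rates, two decades leave a thousands-fold gap between raw capacity and the software written to exploit it; that unused capacity is the "overhang" at issue.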

MIT professor Ed Fredkin has pointed out that this excess of hardware capacity could be a big problem if and when someone comes along with the ability to suddenly make use of it.

This insight is especially valid given the poor level of computer security — not only could someone suddenly make much better use of his/her own hardware, but of everyone else's as well, whether they consent or not. The prospect is of a very rapid concentration of computational power in the hands of one entity.

Do you agree that hardware has advanced much faster than software, and that therefore this hardware overhang does exist? Is computer security really so bad that a large fraction of the world's computing power could be captured by a sufficiently advanced software attack? (Computer viruses provide an analogy here.)

Overall: Do you find this concern plausible?

Challenge #7: Which organizational type would develop nanotechnology most responsibly?

As time passes and tools improve, the development of molecular nanotechnology gets easier and cheaper, requiring smaller teams and shorter timeframes. What would once have required a major national or international project slowly but inevitably comes within reach of a garage startup or science fair project.

Which type of organization should be preferred for developing nanotechnology?

  • University consortium
  • Corporation
  • Private company consortium
  • NATO-sponsored project
  • National defense lab (e.g. in the US: Naval Research Lab, various national labs such as Livermore and Sandia)
  • Public/private consortium
  • International multi-government effort among democracies
  • Multi-agency program by one government (e.g. the current NASA/National Cancer Institute project)
  • Private lab funded by foundation or tycoon
  • Other

Which would be your most-preferred and least-preferred organizational type to be first in reaching molecular nanotechnology, from the standpoint of a stable outcome, successfully avoiding aggressive military use?

Within your most-preferred category, can you suggest specific examples of organizations that you prefer?

Should Foresight actively encourage development of nanotechnology by the entities you select?

Challenge #7 1/2: Is nanotechnology compatible with stable defenses?

Picture an arms race based on nanotechnology.

Assume that the "good guys" — the defensive side — succeed in designing and deploying a suite of defensive technologies. The bad guys are stymied.

Clearly, the good guys could be mistaken; their defenses may be permeable. This could happen repeatedly: seeming stability followed by instability.

But project this forward — can we imagine an end to this race, a time when there is no longer a need to frantically come up with new defenses, a time of true stability? After all, there are only so many arrangements of atoms that can be made.

What might such a world look like? How compartmentalized, how closely monitored, how free?

Does the group have any suggestions on how to go about describing such a state, preferably in a reassuring manner?

Nanotechnology and Space

Challenge #8: How should ownership of space resources be determined?

In the early 1980s, a United Nations treaty was proposed that declared the Moon to be the "common heritage of all mankind". Sounds good, but it set up an ownership regime that would make private ownership essentially infeasible — the treaty drafters were not envisioning a time when people would actually live and work in space and need to own their homes and workplaces. Some of today's Foresight members were instrumental in keeping the US from fully endorsing this flawed treaty.

But how should space resource ownership be determined? Various options come to mind:

  1. You touch, you own. The first human to land on an object in space gets to own it.
  2. You touch, your sponsor owns. Whoever pays for the first human visit to a space object gets to own it.
  3. Your machine touches, you own. The first organization to land an automated probe on a space object gets to own it.
  4. You homestead it, you own.
  5. The Reverse Polish Moon Treaty: The former communist countries had to divide up resources among their people. Some did it more successfully than others. This option involves seeing who did it best, and instituting a similar distribution among all humans who exist on a given "heritage delivery" day.

All the options requiring human visitation seem antiquated — these space objects may not merit visits from humans, much less homesteading.

The option requiring machine visitation has a flaw given the arrival of nanotechnology — whoever starts cranking out these automated probes first could gain virtually everything within reach.

Foresight has favored option 5. Do you find this option appealing? What are the practical difficulties, and could these be overcome?

Can you suggest a better method?

Should Foresight actively support the method you select?

Challenge #9: How do we attract environmentalists to space expansion and to nanotechnology?

It's 2025, and your group is organizing an off-planet settlement expedition. This is a for-profit venture, and your largest investors have stated clearly that the main reason they have invested so heavily in your business plan is that they wish to relieve some of the stresses on the Earth's environment. They feel that a strong campaign will attract other environmentalists, and your group has been assigned responsibility for creating the ad campaign for this market segment.

What will attract environmentalists to space settlement ventures? What shape will this ad campaign take? How will colonizing space improve the condition of the Earth? What stresses will be relieved?

Extend this logic to nanotechnology. It too should help Earth's environment, yet many in the environmental community have been disillusioned with "technological fixes" and now focus on more difficult pathways, such as reducing population or encouraging everyone to return to simpler lifestyles involving less consumption.

How can we convince the more skeptical members of the environmental community that nanotechnology — with appropriate safety standards — is more a solution than a problem with respect to our shared goals for improving the environment?

Challenge #10: What happens if there is a "deep biosphere" on Mars?

Picture this scenario:

There is life on Mars, but it is quite different from most of the life we've studied on our own planet. Martian life operates at a much smaller scale than we do — bacteria living deep underground.

This scenario may turn out to be true. In fact, it's plausible that life on Earth originally evolved on Mars and was brought here by rocky debris ejected from that planet.

If we find this is the case, should Mars become a wildlife preserve, or should it be considered fair-game for Earthers to colonize? What would factor into this type of decision? If such a decision is to apply to Mars, should it also extend to other primitive (and perhaps non-primitive) biospheres we find as we expand into space?

Challenge #11: How far do you have to go to escape Earth's problems?

Access to space is desirable for many reasons. We've seen that nations are likely to support expanding into space slowly, given the risks and expense currently involved in developing space-faring technology. As our presence in space expands, the ability of nations to control people in space will decline, once the complete dependence on Earth's resources is lifted. To date, this has not been an issue, since all manufacturing and food production occurs on terra firma. But as our space-faring capacity grows, manufacturing to support space exploration will expand into space as well.

The hope is that the space stations and colonies will become independent in time, but what engenders hope for some causes fear in others. Some wish to escape Earth's problems by moving to space, but given nanotechnological abilities back home, those still on Earth will be able to reach those attempting to leave.

How far do you have to go to escape Earth's problems? To be more accurate, how fast do you have to go? Is it plausible that migrating to space could be a successful escape from Earth's problems, if one isn't willing to travel at nearly lightspeed for the indefinite future?

Nanotechnology, Biotechnology and Medicine

Challenge #12: How can nanotechnology avoid the bad PR of genetic engineering?

The public is increasingly nervous about genetic engineering, especially genetically-engineered food and the transfer of genes from one species to another.

Perhaps most controversial is human germline engineering — passing new traits down to later generations.

Without some serious work, the public is very likely to extend all these fears and more to nanotechnology as it approaches.

How can nanotechnology avoid being tainted with the problems of genetic engineering?

Various strategies have been suggested. These include introducing the following ideas to the public:

  1. Nanotechnology can clean up problems created by biotech.
  2. Nanotechnology will enable desired changes to be made without affecting the germline, thus making germline genetic engineering obsolete.
  3. Germline changes that prove to be undesirable can be reversed using nanotechnology.

The basic strategy being suggested above — which clearly needs more work — is to portray nanotechnology not as an extension of genetic engineering, but as something quite different, that eliminates the incentive to play about with genes.

Do you find this strategy worth pursuing, or at least exploring? If so, how could the public be educated that nanotech is very different from genetic engineering?

The public learns through the media, especially soundbites, slogans, and (for longer material) TV shows and movies. Can you suggest how to use one or more of these to further this strategy?

If this strategy is unappealing, can you suggest another way for nanotech to avoid being grouped with genetic engineering and suffering from its PR problems?

Challenge #13: Would renaming "life extension" help it gain widespread acceptance?

As we venture into new territory, sometimes we apply old words in a new way. Sometimes the need for new terminology is thrust upon us.

We have seen this kind of thing before, like when the Pro-Life movement waged its campaign in the days before its opposition found a cohesive voice. Naming their movement was easy. Supporters of a woman's right to abortion needed to oppose the Pro-Life movement without giving in to the easy (and incorrect) stigma that it was death they supported. When they positioned themselves as Pro-Choice, they reframed the debate. And they did it in a way that enabled others to get right to the heart of the underlying issues. They were able to present their cause in a way that partially side-stepped direct opposition, and this has worked to their advantage.

The "free software" movement found its work being much more acceptable to commercial use once it was renamed "open source". Foresight played a role in that successful renaming, so let's try again:

Today when people hear that nanotechnology will greatly extend their lives, they picture extended old age — not a very appealing picture. Somehow the term "life extension" is not succeeding at communicating what will be happening.

How could we reframe the life extension debate so that we can emphasize the "youth extension" aspects? Youth extension is not a great term either, implying that the benefits of maturity are not obtained. Can you think of a better term?

Are there any other concepts that we're trying to promote that might benefit from similar re-positioning through re-naming, and can you suggest some ideas for these new names?

Challenge #14: To freeze or not to freeze?

It's a few years from today, and you are in the hospital. You have contracted pulmonary fibrosis, an illness in which the alveoli in the lungs are gradually replaced with scar tissue. The scarring in your lungs is making every breath a challenge, and no cure yet exists for this disease.

A friend has approached you about cryonics as an alternative to certain death, and you have been considering it. When mentioning it to your family and colleagues, you have received reactions ranging from mild interest to outright shock. You are trying to understand why cryonics is still unpopular, despite the growing understanding that nanotechnology should enable the repair of those in cryonic suspension. What major objections do people have? How might public perception of cryonics be changed?

Foresight may be in a position to help the cryonics movement today, and it is within our mission statement to prepare for emerging technologies. The question is whether this is an issue that we should try to help.

Should Foresight become more actively involved in the propagation of cryonics as an alternative to conventional medical inadequacies — or is this a distraction from our core concerns? If Foresight doesn't step forward to help, who should take on this task of assisting those who otherwise will be "the last generation to die"?

Challenge #15: Can we learn from biotech's mistakes?

President Bush has weighed in on cloning, and he's against it. According to White House Press Secretary Ari Fleischer, the president believes that no research to create a human being by cloning should take place in the United States. He opposes it on moral grounds, and is prepared to sign legislation banning human cloning research as soon as Congress lays it on his desk.

Whether or not the U.S. Government has the authority to ban scientific research of any kind is a matter for a different debate. That the biotechnology industry has generated a wide range of enemies — people in positions of power who choose to actively thwart scientific progress — is a cause for concern in the nanotechnology community. What moral objections exist to biotech research, and will these objections also apply to the development of nanotechnology?

Genetically engineering food is another example, this time of safety concerns rather than moral ones. Here, the biotech industry clearly underestimated the amount of objection that would be raised, especially in Europe.

What mistakes did the biotech industry make as its control of new processes grew? Were they mistakes of public education and marketing, of research design, of ignorance? How can we avoid making similar mistakes as we develop nanotechnology? What precautions can we take to reduce the likelihood that nanotechnology encounters similar obstacles?

Or is the problem much more basic — a fundamental lack of understanding of science by the public, including the political leadership? If this is the case, is there anything to be done to mitigate the problems we can expect for nanotechnology?

Challenge #16: What forms of physical augmentation are desirable?

Nanotechnology will make possible extensive modifications to the human body, without the need for genetic changes. Presumably, adults will be able to make such changes legally, just as they do now with cosmetic surgery.

What modifications seem useful, for what purpose, in what environments?

Are there any additional senses that you'd like to have available for upgrades? Obvious candidates include better vision, since some animals already can see at different wavelengths than we do.

How fast could such modifications be made, or removed?

Given the current estimates for when nanotechnology will be developed, these options seem likely to be available to many of us in our lifetimes. This will be a fairly bizarre world, for those of us used to traditional human limitations. How well do you expect 20th century-born people to adjust to the new options? In contrast, how well will young people adjust — those born into a world already offering these options?

Nanotechnology, Surveillance and Openness

Challenge #17: Can we avoid The Transparent Society?

It may have begun in Germany. In the mid-1980s, cameras began appearing at intersections. They photographed people who sped through yellow lights. (The laws in Germany are slightly more restrictive on this point than those in America.) Tickets were mailed to the owners of the offending vehicles, and only if the owners themselves were not driving was there any recourse for avoiding the fines. Accident rates dropped and income for the city rose.

In Britain, several dozen towns followed the example first set by King's Lynn, near Norwich, where 60 remote-controlled video cameras were installed to scan known "trouble spots," reporting directly to police headquarters. The resulting reduction in street crime exceeded all predictions, dropping to one-seventieth of the former amount in or near zones covered by surveillance. The savings in patrol costs alone paid for the equipment within a few months. Today, more than 250,000 cameras are in place throughout the United Kingdom, transmitting round-the-clock images to 100 constabularies, all of them reporting decreases in public misconduct. Polls reported that the cameras became extremely popular with citizens, though British civil libertarian John Wadham and others bemoaned this proliferation of snoop technology. "It could be used for any other purpose," he said, "And of course it could be abused."

In the U.S., after initial experiments garnered widespread public approval, the city of Baltimore installed police cameras at 106 downtown intersections by the end of 1996.

With the success of trial projects around the globe, there appear to be few reasons for the government to inhibit the spread of surveillance technology. Clearly there are downsides to such widespread surveillance. But before taking a stand for or against, let's consider whether stopping these changes is even an option. Keep in mind that this technology is already rapidly decreasing in cost, long before nanotech, which will push this trend much further.

Is there any way to stop the approaching Transparent Society? If yes, how — and if not, how can we start getting used to this new way of living?

Challenge #18: Can we adjust to transparency?

With the success of trial projects around the globe, there appear to be few reasons for the government to inhibit the spread of surveillance technology. And given decreases in cost, not just governments and companies, but also private individuals will have and use this equipment.

What aspects of the Transparent Society will require adjustment — individual behavior, the legal system? What methods can we use to ease into transparency?

Challenge #19: Will it be possible to limit transparency, or make it more palatable?

The recent Chinese fighter plane vs. US spy plane incident arguably began because China does not like the US using sophisticated equipment to peer inside its borders.

Transparency in the physical world is growing, and with it the demand for increasingly advanced surveillance equipment. The farther we can see and hear, and the more clearly we can remember, the more we will come to question the boundaries of privacy.

When does surveillance go too far? Who should have access to surveillance data, and what are they allowed to do with it? Is automation of data review — where bots, not beings, review the tapes — a partial solution to preserving some privacy and reducing abuse of the information? What will transparency mean for the definition and enforcement of boundaries?

It has been suggested that the US, to reduce complaints from those we spy upon, freely release surveillance information about the US itself. Do you think this would help? Is it a good idea? If so, should Foresight advocate it as part of our position in favor of openness?

Challenge #20: What happens when we have internal recording devices?

It is some time in the future, and we have developed rudimentary nanotechnology. Medical technology has grown in sophistication and scope. Already chips are being implanted into humans to give sight to the blind. Designs exist for the creation of organic memory chips that are theoretically capable of interfacing with hearing and sight. A broadcast architecture is being built to allow for external download of the data stored on the chip. Experimentation is poised to go from laboratory to clinical studies within months.

An undertow is forming, as groups unite to halt the clinical studies of these chips. People are wondering what will happen when every person passing by might be recording even random interactions on the street. When a person with a chip implant, which gives perfect memory for sound, hears and remembers music, is that making a copy or is it fair use? Can they then distribute the images and sound files they collect? What if a person is reading copyrighted material (copyright still being enforced despite strenuous attempts to remove this aspect of the legal code in early 2005)? Do people have the right to do more than store information for personal access? What about backups? Will these files be treated like every other file, or will there be restrictions?

Under this scenario, every person's physical movements will be observed. The DNA in the skin cells they're shedding is being analyzed. Their sneezes and breath are sensed and the chemical components analyzed. All this is done by inexpensive equipment owned by individuals — and perhaps located within their bodies — not just by large companies and governments. Given this expected situation, who owns the data about your life? It's being freely observed and detected by humans using their (new) senses and memory recording devices, and being shared in the form of free speech.

Is prohibiting such detection, recording, and information sharing even an option, without intolerable infringements on traditional freedoms? If not, is it time to take a new attitude to physical privacy — to "get over it", as some are increasingly suggesting?

Building Complex Systems that Work

Challenge #21: Is open development really faster and safer?

Foresight has been advocating open development of technology in general, and of nanotechnology in particular, because we assume that:

  1. Open projects move faster, bringing the benefits of the technology faster,
  2. Open projects are less risky, since many more eyes are watching, and
  3. Open projects tend to spread benefits more widely, since the intellectual property generated is more widely distributed — perhaps even public domain.

But are these assumptions really warranted? For example, open source software development may result in higher-quality software, but is it faster? Does it depend on whether what is being done is a bug fix (uncontroversial correction) or new feature (more controversial)? Perhaps open projects are faster once started, but take longer to get going — if so, are they really faster overall?

Do you find persuasive the claim that open development is less risky? Are the results of open projects usually hijacked by closed efforts, so that the closed ones are always at least as far along as the open ones? Or does the "Not invented here" syndrome prevent this?

Do open projects really result in more widely distributed benefits, and if so, is this due to intellectual property arrangements or something else? Today's academic R&D is increasingly tied up with corporate agreements — will this limit the future spread of benefits from what has been a dynamic generator of knowledge, or does corporate-owned knowledge work just as well at spreading benefits widely?

Would you recommend that Foresight continue to advocate open development of nanotechnology? If not, what's better?

Challenge #22: How could we build an open project to develop nanotechnology?

Foresight has been advocating open development of nanotechnology, on the assumption that it would be safer, faster, and would spread the benefits more widely. For now, let's assume that these assumptions are correct, and that an open development project is an important goal.

What would such a project look like? Would it be like an open source software project, where all work is posted openly on the Internet as soon as it's ready?

Who would participate and — most important — why would they help?

The project would need to overcome significant objections from those we'd like to involve:

  1. Academics lose the opportunity to publish their results in journals. They also lose the ability to attract corporate funding of their work in exchange for early access to or ownership of results.
  2. Companies lose confidentiality and exclusive intellectual property rights in their work.
  3. Governments lose the exclusive national advantage of getting the technology in advance of others. Defense interests in particular may have issues with this.

What benefits could the project offer to these entities to get their cooperation? Or do open projects appeal only to independent developers — individuals — who personally own the tools they need to work, such as software programmers? In that case, how could an open nanotechnology development project get beyond the design stage, into the laboratory?

How much might it cost to carry out such a project, and where could these funds come from?

Should Foresight try to make this happen?

Challenge #23: Should an "open project" to develop nanotechnology include, or exclude, entities known to be risky?

Foresight has been advocating open development of nanotechnology, on the assumption that it would be safer, faster, and would spread the benefits more widely. For now, let's assume that these assumptions are correct, and that an open development project is an important goal.

Let's assume further that we get to design how the project operates.

The most popular model for such a project is open source software, in which (in principle) any individual is welcome to contribute, and results are published openly on the Internet for all to see and use.

Is this practical for a technology which has abusive applications? Would we really welcome contributors from non-democratic, authoritarian, and totalitarian countries, where the resulting knowledge might immediately be put to use for destructive, aggressive purposes?

If the results are openly published on the Internet, does it matter whether contributors are permitted from these problematic countries — would they be able to use the results immediately even without direct participation in the project?

Are the risks of open development so large that it must be abandoned as a strategy, despite its attractions? If so, what's a better alternative — and who should develop the technology first?

What would you advise that Foresight advocate — an open project without restrictions, a quasi-open project with restrictions, or a closed project carried out by the most trustworthy entity we can identify?

Nanotechnology and Machine Intelligence

Challenge #24: Can defense systems using AI be made stable?

To live, humans need a world with some stability. For example, our physical environment must be safe from attack. We need secure defenses.

As technology advances, those defenses need to advance in parallel with (and preferably in advance of) the offensive systems they are defending against. If the offensive side has a powerful technology — such as machine intelligence — then the defensive side had better have it too, and use it as well as the other side, or better.

But the idea of using machine intelligence to create stability is counterintuitive, to say the least. How can an intelligent system be counted on not to change over time? Whether or not you call that change evolution doesn't matter — either way we'd be trying to design with a component that seems likely to change in unexpected ways.

Does it seem plausible that machine intelligence can be designed not to evolve, or to evolve safely somehow?

If so, what mechanisms can you suggest to enforce these limitations?

If not, can you suggest other approaches to this problem?

Challenge #25: Can we coexist peacefully with machine intelligence?

Assume "real" artificial intelligence is possible — sometimes called "strong" AI.

Such entities would be superhuman in cyberspace-based activities, and these strengths could convert into power in the physical world (today, "meatspace") as well.

Some have seen such a development as being equivalent to a new species — one which could, in principle, out-compete Homo sapiens.

(This view is rather depressing, considering the fate of many species at the hands of humanity in the past — or perhaps it's cheering, since more recently we are increasingly focused on preserving other species.)

Do you find it plausible that such entities would have such a large competitive advantage in "fitness" that their existence would inevitably lead to the end of the human species?

Or, do you see the two as occupying separate niches, requiring different resources, and able to coexist rather than compete to the extinction of one or the other?

More difficult — can we find grounds for trusting them, such that we can have confidence that our niche will continue, no matter how smart AI's get?

Challenge #26: Do uploaded entities maintain their "human" rights?

Advocates of uploading propose that it will be possible to recreate human functionality in a machine environment — i.e. to take a specific individual human brain and rebuild it inside a computer.

What are the minimum requirements for a transfer system? Do you find it plausible that uploading will be possible, and if so, when? Will this transfer require a stage where the original structure of a person must be destroyed; or will the recreation of a human in uploaded form allow for the original to remain intact and functional?

Assume that at some point, uploading will be successful. Some human will choose to create a life based on silicon technology — or whatever building blocks are state of the art at the time. What advantages will uploaded humans possess? What disadvantages will they have?

Once we have uploaded humans, should they have the same rights as those humans who choose not to upload? How will the rights of the uploaded be protected and enforced, and by whom?

Challenge #27: What barriers exist to the creation of thinking machines?

When Foresight was founded in the mid-1980's, we felt that it was far too early to discuss artificial intelligence. Almost no one was willing to discuss it seriously.

Even now, few seem able to grapple with these ideas. The closest we've seen lately are the discussions of "intelligent robots" initiated by Ray Kurzweil and followed up by Bill Joy. But note the need to include the word robot — people seem to need to think of AI's as being embodied in mobile hardware for them to seem real.

Let's take a sceptical view on AI. What technical barriers exist between our current state of technology and the implementation of "real" AI, also known as "strong" AI?

Note that this need not include "consciousness", the definition of which is not agreed on anyway. Also, for our purposes here, assume these real AI's are great at thinking, but not at pretending to be human.

Do the barriers between here and AI consist of physical laws — pretty convincing barriers! — or are they matters of engineering challenge? Or do we not yet have the needed hardware — if so, try using Moore's Law to figure out when we will have that hardware.
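The Moore's Law exercise suggested above can be sketched numerically. This is a back-of-envelope illustration only: the doubling time of 18 months, the ~10^9 operations/second figure for a desktop machine circa 2000, and the ~10^16 operations/second estimate sometimes cited for brain-equivalent hardware are all assumptions chosen for the example, not established facts.

```python
import math

def years_until(target_ops, current_ops, doubling_months=18):
    """Years until compute reaches target, assuming steady
    Moore's-Law doubling. All inputs are illustrative assumptions."""
    doublings = math.log2(target_ops / current_ops)
    return doublings * doubling_months / 12.0

# Hypothetical figures: ~1e9 ops/s desktop vs ~1e16 ops/s
# brain-equivalent estimate -- about 23 doublings, or ~35 years.
print(round(years_until(1e16, 1e9), 1))
```

Workgroups can of course plug in their own estimates for the target and the doubling time; the conclusion is quite sensitive to both.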

Is the best case against AI that its building involves too much complexity? Are there measures of complexity we can use to project how hard the task really is? A recent book argued that the latest computer chips (CPUs) are already the most complex things on the planet. If that book is right, we already make things more complex than people — do you find this plausible?

Given all of the above, what convincing technical arguments are there — if any — for assuming AI is impossible or extremely far off?

Challenge #28: What non-uploaded AI's deserve rights?

If a human mind could be "uploaded" into a computer, presumably it should keep its rights. What about other kinds of AI's?

Machines are becoming increasingly intelligent, and all indications are that this will continue. Eventually, a machine will possess rudimentary intelligence and be capable of making independent decisions. At what point will a machine intelligence deserve legal status? Will such machines be treated within the same legal structure as humans, or will a new legal code be required? What non-uploaded AI's deserve rights?

Assuming that there will be barriers to creating legal rights for a machine-based species, what form might these barriers take? What individuals or organizations might object to endowing rights upon this new species?

Today, the creators of a new intelligent entity (colloquially known as "parents") are legally required to assist their creations in attaining financial independence. Should this be required of those who create AI's? If so, how could it be enforced — would we even know these AI's exist inside the creator's hardware?

What qualifications will intelligent machines need to meet before legal status is granted to these new entities? The Turing Test is often mentioned, but seems non-optimal. What would you suggest?

Nanotechnology and Design Ahead

Challenge #29: Are there convincing arguments that the deployment of nanotechnology, once developed, will not be very rapid?

One area of controversy within the nanotech-aware community has been speed of deployment once the technology is developed.

Some point out that major technologies take decades to be fully deployed, and point to the personal computer as an example. Certainly it is true that slow deployment has been the norm for many excellent technologies.

Can the group think of any major technologies that didn't take decades to be deployed? What factors enabled them to move so fast?

What about nanotechnology — is it more like the personal computer, or more like the fast-deploying technologies?

Specifically, what barriers can you think of to extremely rapid deployment of nanotechnology, once achieved? (Keep in mind that scenarios involving such rapid deployment tend to be unsettling — the group may have to work fairly hard to focus on them seriously.)

Based on your analysis, would you recommend that Foresight keep, or abandon, its current position that deployment could be extremely rapid?

This acceptance of rapid deployment has not been prominent in Foresight's educational efforts, but is used internally as part of policy development. Do you agree that this is a prudent way to handle this issue?

Challenge #30: Proplets - Building Blocks for Lawful Matter?

History shows us that the laws of physics are not designed to be laws of peace or freedom. Disputes over matter have led to great abuse. Nick Szabo proposes a new architecture for organizing matter, the objective of which is to make the raw materials and finished works both function according to the laws and agreements of free people, in addition to being constrained by the laws of physics.

Nick suggests the key is building a new code — in both the legal and the software sense — that allows widely distributed people to cooperate within known, mutually agreeable, and strongly enforced constraints (as defined by law). All involuntary interaction between matter or "proplets" of disparate ownership would be governed by tort laws.

Proplet system architecture starts by making every molecule an owned molecule and every region of space an owned region of space. All matter becomes the right and the responsibility of a free person. Proplets control matter, and protect it from non-owners.

A proplet is a nanotech device with the following abilities: it knows who owns it; knows where and when it is; can communicate securely with nearby proplets and with its owner; can securely recognize nearby proplets with the same owner; and can control nearby inanimate matter, within the boundaries of law. No proplet can be read or controlled through physical tampering, as it will shut down, erase itself, or self-destruct.

Proplets would control electronics directly from ownership or guest modules, and would control machinery via entanglement. As proposed, entanglement would take two forms: firing sequences, without which the machinery cannot function, or direct nanomechanical linkages. Entanglement designs would theoretically make it too expensive for an attacker to steal the electronics or machinery by severing it from the controlling proplet.

Given that this is a limited introduction to proplets, are you in favor of coding law into our technology? How do you feel about dividing up all matter among individuals? Can you add anything to make this proposal more robust?

Challenge #31: How can we turn nanotech's massive production of conventional goods into a benefit instead of a problem?

We normally picture the design-ahead process being used to come up with entirely new products, never possible before nanotechnology.

But design-ahead can be used more easily to come up with ways to quickly and cheaply make massive amounts of existing products.

The downside of this expected ability is the possibility that it will be used to make large amounts of conventional offensive weaponry, such as huge amounts of precision-guided munitions.

Rarely examined is the upside — how could this ability be used for positive purposes?

For example, many expect nanotech to cause economic disruption. Are there goods that could be produced that would help people get past this difficult time of transition between bulk technology (with its traditional jobs) and nanotech?

A harder question: are there conventional products that could be made that would counteract the offensive weapons problem mentioned above? Or will this require new product designs (e.g. "weapons" with a strong defensive bias)?

Challenge #32: Open Arms

Perhaps the most troubling aspect of the early days of nanotechnology is the prospect that it may be easier to design offensive nanotech weapons than to design the defenses against them — possibly producing a time gap in which only the offensive weapons exist, with no defenses available. Although it has not been proved, this seems likely to be the case...without proactive work on our part.

This potential time gap is not due to a difficulty in manufacturing the defenses — it would result from the longer time needed to design the more complex defensive systems, especially since the assumption is that this work cannot be started until the offensive systems are already built and available to study.

Senior Associate Mark Miller has proposed a possible solution for this problem, termed "Open Arms." This would be an open source-style project to "design-ahead" both the offensive and defensive weapons in parallel, thereby eliminating the dangerous time gap between when the two can actually be built.

The risky part of this proposal is that the offensive weapons designs would be public knowledge — at least among those within the project.

Do you find the Open Arms proposal a plausible one, either as is or with modifications you suggest? If so, should Foresight actively advocate it — even sponsor it? If not, can the group come up with another idea on how to close the postulated dangerous time gap between offensive and defensive nanotech weapons availability?

Technology and Ethics

Challenge #33: How can the Hippocratic Oath be updated to guide medical nanotechnology?

"I SWEAR by Apollo the physician, and Aesculapius, and Health, and All-heal, and all the gods and goddesses, that, according to my ability and judgment, I will keep this Oath and this stipulation to reckon him who taught me this Art equally dear to me as my parents, to share my substance with him, and relieve his necessities if required; to look upon his offspring in the same footing as my own brothers, and to teach them this art, if they shall wish to learn it, without fee or stipulation; and that by precept, lecture, and every other mode of instruction, I will impart a knowledge of the Art to my own sons, and those of my teachers, and to disciples bound by a stipulation and oath according to the law of medicine, but to none others. I will follow that system of regimen which, according to my ability and judgment, I consider for the benefit of my patients, and abstain from whatever is deleterious and mischievous. I will give no deadly medicine to any one if asked, nor suggest any such counsel; and in like manner I will not give to a woman a pessary to produce abortion. With purity and with holiness I will pass my life and practice my Art. I will not cut persons laboring under the stone, but will leave this to be done by men who are practitioners of this work. Into whatever houses I enter, I will go into them for the benefit of the sick, and will abstain from every voluntary act of mischief and corruption; and, further from the seduction of females or males, of freemen and slaves. Whatever, in connection with my professional practice or not, in connection with it, I see or hear, in the life of men, which ought not to be spoken of abroad, I will not divulge, as reckoning that all such should be kept secret. While I continue to keep this Oath unviolated, may it be granted to me to enjoy life and the practice of the art, respected by all men, in all times! But should I trespass and violate this Oath, may the reverse be my lot!"

If you got bored while you were reading that, or perhaps laughed to yourself, then it is clearly time to update this classic. Consider yourself a modern-day Hippocrates. Much as he did, you see the need for a code of conduct that addresses the ethical questions of the medical community. The potential abuses are far more numerous than in Hippocrates' time, and the Oath you design should take many of them into account. What behaviors should be covered in the updated version?

Challenge #34: Case Study: Professional Society Code of Ethics

It's been suggested that one partial answer to heading off nanotechnology abuse is to encourage nanotechnologists to have a strong code of ethics.

Enclosed are copies of the Australian Computer Society's Code of Ethics and of the IEEE Code of Ethics.

How well would these translate into a code of ethics for nanotechnologists? Which document would you prefer to use as a base from which to evolve something new? What would you delete, and what would you add?

Do you believe that established codes of ethics make a difference? Would you agree with Foresight's current position that, while they are only a small part of an effort to prevent nanotech abuse, they may provide at least some help toward that goal and are therefore worth working on?

Foresight and IMM have already prepared guidelines for nanotechnology safety. Would it be useful to do another version that translates these into "Code of Ethics" format?

Challenge #35: How can we educate society's traditional ethicists — religious leaders?

Nanotechnology and other powerful technologies on the horizon raise some new and tough ethical issues. These kinds of issues trigger strong emotions, which can lead a population into wrong choices on how to handle technologies.

The majority of people in the US, and in most other countries as well, regard their religious leaders as the experts on ethical issues, regardless of their level of technological understanding.

These religious leaders will inevitably be making recommendations on nanotechnology, as they are increasingly doing about genetic engineering.

How can we help religious leaders make useful recommendations on nanotechnology issues? Is there a way to reach out to them early on, before they commit themselves in public to a particular view, and educate them on what the real choices are? What specific facts do they need to learn?

Should Foresight try to do this, and if so, how should we go about it?

Challenge #36: What widely-recognized responsibilities would we be shirking if we DON'T develop nanotechnology?

When discussions of nanotechnology and ethics come up, everyone involved seems to focus on why ethical considerations might lead society to attempt to slow or avoid developing nanotechnology.

But this considers only half the question — it ignores the potential downsides of NOT developing nanotechnology.

For those of us who've been assuming nanotechnology will be along soon to help out, picturing an Earth that doesn't reach nanotech soon is a useful thought experiment.

Which current problems are we counting on nanotechnology to help solve? Are they plausibly solvable another way, or is nanotech the only serious way to address them?

Is this case compelling enough that Foresight should use this argument in our educational outreach efforts?

Unregulated Cyberspace Markets

Challenge #37: Can we discourage extortion and terrorism in cyberspace?

Assume private encrypted digital currencies come into widespread use. Suddenly the traditional point at which criminals are caught while committing kidnapping and extortion — the delivery of the payoff — is made invisible. This strongly tilts the balance in favor of kidnappers and extortionists.

One of the strongest forces against terrorism is moral repugnance on the part of those with the knowledge to perform it properly — this is why terrorist acts often include stupid errors, such as the attempt to recover the rental deposit after a van was used to blow up the World Trade Center. If technical experts can be paid off anonymously, never needing to meet in person those who hire them for technical advice, will this moral repugnance factor be seriously undermined, enabling terrorists to get the excellent technical advice they now seem to lack?

This is the downside of private, encrypted digital currencies — financial transactions involved in serious crimes are facilitated and made invisible.

Is this problem likely to be so severe that government will try to ban these new currencies, or will that just be their excuse for carrying out a ban they actually desire for other purposes — namely, keeping an eye on all our financial transactions for tax and other purposes?

Is there anything to be done to either head off this problem, or at least reduce it? If so, should Foresight actively advocate these solutions? For example, if we can tolerate physical transparency (i.e. widespread private surveillance and sensing), will this reduce kidnapping, extortion, and terrorism sufficiently that the easier payoffs don't matter?

Challenge #38: Will the government try to ban private, digital exchange?

Assume that encryption is commonplace, and becoming more sophisticated with each passing month. People are encrypting even the most innocuous of documents, and any attempt to find specific information within the volumes of private data changing hands is becoming increasingly futile. Along with letters to the family and pictures of the kids, people are encrypting financial transactions and contract data.

Even money itself has evolved into privately-guaranteed, encrypted digital currencies. (Not surprising, since even Alan Greenspan liked the idea of private currencies — and frequent flyer miles were an early example of untaxed private currency.) This makes it easier to hide income from the tax authorities.

Income taxes sent to the government are declining, and government officials are becoming concerned. Projections indicate that in less than ten years, income tax receipts will drop to 50% of current levels.

Will the government try to ban private, digital currencies? What tactics might they use, and what would be the result? Will we see a Napsterization of money, in which a large segment of the population — perhaps younger people — routinely break the law to carry out transactions they feel should be legal?

Should Foresight try to head off what are likely to be attempts to squash digital exchange? What might be the ramifications of engaging in a campaign to defend private digital currencies?

Challenge #38 1/2: If income tax becomes obsolete, how should government be funded?

Assume that private encrypted currencies come into being and become a popular way to make payments to individuals. Knowledge workers who produce primarily information — which can be delivered encrypted — may find it easy to hide income. Revenues from income tax could decrease dramatically.

In this case, where would governments look for funding? Traditional methods include real estate taxes, other property taxes (requiring tedious and intrusive inventories), sales taxes (regarded as regressive) and customs fees (which penalize international trade).

One suggestion has been that government could be funded through an endowment, rather than an ongoing tax.

Which of these does the group find least offensive? Most practical? For example, it is sometimes said that sales taxes are less intrusive on privacy grounds than are income taxes — do you agree?

If none of the above appeal, can you come up with another method of funding government that makes more sense?

Challenge #39: Is physical transparency with cyberspace secrecy a stable situation?

In the physical world, we are being observed and our actions recorded any time we enter a government building, shop in a public area, or drive on the streets. Cameras are widespread and more are being deployed on a daily basis. Eventually, sensing and recording technology will be so cheap that individuals will record their entire lives, including analyzing the DNA they encounter.

In the digital world, we can make and break a new identity on that same daily basis, if we so choose. Encryption technology can be applied, if desired, and the odds are that more and more people will choose to take advantage of encryption as time passes. Eventually even money — private currencies — could be encrypted and invisible.

What will the world look like if we have complete physical space transparency, and privacy is only protected within cyberspace? Is this likely to be a stable situation? Is there a limit to the amount of physical transparency that people will tolerate?

Consider that privacy is quite recent for our species — in our traditional tribes and villages, everyone's business was known. Even royalty had retainers walking through their bedrooms. In Japan, practices evolved for enabling socially-constructed privacy despite physical transparency.

With great transparency, today's lawbreakers will be detected, including the huge numbers violating drug prohibition. Do you agree with David Brin that this will force reform of bad laws?

Given the coming of cheap surveillance technology in private hands, Foresight has been taking the attitude that physical transparency is on its way and we might as well get used to the idea. Do you agree, or would you recommend a change of policy on this issue?

"Intellectual Property"

Challenge 40: Is copyright dying, does it matter, and could tipping compensate?

Computers are made to copy information. Attempts to prevent this are ultimately doomed to fail, even with special hardware support. Eventually the information — music, film, etc. — must be delivered to the user, so that experience can be recorded and duplicated.

Napster and its more advanced distributed relatives, such as MojoNation, have turned many people into routine violators of copyright. Our species shares information with our friends, and trying to get us to stop, when the downside is so remote and theoretical, is very difficult.

There is a real downside to turning a large segment of the population into scofflaws — respect for all law diminishes, and civil society is gradually undermined. Confucius claimed that the more laws a nation has, the less law-abiding its people will be.

If technology is pushing copyright into obsolescence, we'd better see how that plays out.

Imagine a world without copyright. What current creations would no longer be made? Would researchers still write up their work, would novelists and others write books, would musicians write music? Would these be published online? (Seems likely, since it's so cheap.) On paper and physical media? Who would pay for this physical publication? Does on-demand publishing provide the answer?

Much of what such creators most desire is credit for their work — and the corresponding higher incomes for their in-person work — rather than royalties, which often are negligible anyway. Could this reputation value be delivered in a world without copyright? Does open source software give us any models here?

Without copyright, would funding of high-expense projects such as expensive films be discouraged, and if so, do we care?

First PayPal, and later Amazon, enabled voluntary micropayments on the web — a form of tipping. Tipping works in restaurants. Could it ameliorate or even solve the challenge of compensating creators in a world where copyright doesn't work?

Can you recommend what Foresight's position should be on the threat to copyright?

Challenge 41: Could intellectual property laws slow nanotechnology applications?

Molecular nanotechnology, sometimes called "strong" nanotechnology, promises to make obsolete a huge number of today's physical technologies.

Major industries are based on these current technologies.

Would it be in the interest of major corporations whose technologies are threatened to buy up patents on nanotechnology work?

If so, would it be likely that this new patented nanotech work would be aggressively developed, or might it be tempting for these companies to put the technologies on the shelf and continue to enjoy their current profits based on earlier technologies?

Do we have any historical examples of this kind of thing actually occurring in the real world, or is this negative scenario just a fantasy?

Could this scenario come about due to the natural incentives on companies, with no malice and no conscious decision to delay the new technology?

If such a thing did happen, is the maximum delay just the 20-year patent lifetime, or could companies use the usual technique of applying for related patents to extend the time of their patent-based monopoly on the new technology?

Would it work to require patents to be actually used in order for them to continue to be valid, or would this only cause pseudo-uses to be invented to keep the patent in force?

If the overall problem described above does occur, which entities in society have an interest to intervene, and do they have the real influence needed to make a change?

Do you think this is a real problem? If so, should Foresight take a position on it, and what should this position be?

Challenge 42: Would a technical AI, good only at technology, cause a sudden drastic concentration of intellectual property-based power?

Put aside the question of whether "real" artificial intelligence can be built, and if so, when. Consider instead a highly-advanced software system which is superior to humans at technological design. Call that a technical AI.

This technical AI can be considered an nth-generation CAD (computer-aided design) tool — so automated that the "designer" using the software need only point in the general direction of what is desired, and the software takes it from there, rapidly suggesting advanced designs. Assume such a system would outperform even a large number of human designers.

Under the current intellectual property (IP) system, the owner of this new tool could rapidly develop an immense portfolio of patents, leading to a possibly unsurpassed concentration of technological monopoly power, and the wealth needed to exploit it. This amount of power would be convertible into political and even military power.

Is this a plausible scenario? If not, why not? If so, what should be done to head it off, if anything?

Do you think Foresight should take a position on this question, and if so, what should it be?

Challenge #43: What happens when humans have internal recording devices?

Already chips are being implanted into humans to give sight to the blind. Project these developments and see what happens to copyright:

It is sometime in the future, and we have developed rudimentary nanotechnology. Medical technology has grown in sophistication and scope. Designs exist for the creation of organic memory chips that are theoretically capable of interfacing with hearing and sight. A broadcast architecture is being built to allow for external download of the data stored on the chip. Experimentation is poised to go from laboratory to clinical studies within months.

An undertow is forming, as groups are uniting to halt the clinical studies of these chips. People are wondering what will happen when every person passing by might be recording even random interactions on the street. Who owns that data, and may it be redistributed?

When a person with a chip implant, giving perfect memory for sound, hears and remembers music, is that making a copy or is it fair use? Can they then distribute the images and sound files they collect — i.e., may they sing the music they remember to others?

What if such a person reads copyrighted material (copyright still being enforced, despite strenuous attempts in early 2005 to remove this aspect of the legal code) and repeats it verbatim to another? Do people have the right to do more than store information for personal access? If not, doesn't that conflict with freedom of thought and speech?

Do you feel that copyright will survive the coming ability for individuals to record and repeat information "perfectly" by today's standards? Or should we move on to another system and if so, what?

Should Foresight actively advocate alternatives to the current copyright laws?

Cooperation-Enabling Software

Challenge #44: How can we improve Idea Futures?

Foresight has been hosting an Idea Futures market for Senior Associates for about two years (http://nanodot.org/if/Trade). To make the market more useful, we need to increase the number of participants and volume of trades. Several aspects of the current experiment could be improved to accomplish this:

  1. Teaching the concept — The operation of an Idea Futures market isn't intuitive, and has usually been introduced to Senior Associates via personal tutorials at the gatherings. How can we reduce barriers to participation? Examples might include: having a tutorial on the site, online help links, etc.
  2. User interface — The existing user interface has been confusing to many people [see screen shots]. How can it be improved and/or simplified, while still allowing access to all the market's features? Or do we need to give up some features?
  3. Claim wording and judging issues — In other IF markets, ambiguous claim wording and contentious judging decisions have adversely affected the market. What claim creation and judging processes should Foresight use to mitigate such risks?
  4. Access to the market — Although the market remains operational between gatherings, it has a low profile. For example, gatherings are the only times at which Senior Associates can easily put more money into the system. Should we increase the market's exposure by newsletters/updates between gatherings, the ability to deposit money throughout the year, etc.?

Output:

  • Suggest specifics on what steps you would recommend to improve the Idea Futures system.
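For workgroups new to the concept, the basic trading mechanic behind an idea futures market can be sketched in a few lines. This is an illustrative toy only — the claim, prices, and class names are hypothetical, and Foresight's actual market adds an order book, bid/ask spreads, and formal judging:

```python
# Toy sketch of idea-futures mechanics. A claim trades between 0 and 100
# in play money; the current price is read as the market's estimated
# probability (in percent) that the claim will eventually be judged true.

class Claim:
    def __init__(self, statement, price):
        self.statement = statement
        self.price = price          # current price, 0-100 "cents"
        self.positions = {}         # trader -> YES coupons held

    def buy_yes(self, trader, quantity):
        """Buy YES coupons at the current price; returns the cost."""
        cost = self.price * quantity
        self.positions[trader] = self.positions.get(trader, 0) + quantity
        return cost

    def settle(self, judged_true):
        """On judging, each YES coupon pays 100 if true, 0 if false."""
        payoff = 100 if judged_true else 0
        return {t: q * payoff for t, q in self.positions.items()}

claim = Claim("Hypothetical claim judged true by 2030", price=15)
cost = claim.buy_yes("alice", 10)         # Alice pays 150 for 10 coupons
payouts = claim.settle(judged_true=True)  # each coupon pays 100
```

Reading the price as a probability — here, a 15% chance the claim is judged true — is the core intuition a site tutorial would need to convey.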

Challenge #45: Does ANY of today's collaboration software work well?

To solve the complex challenges raised by coming powerful technologies, we need to be able to discuss complex issues online. Foresight has been disappointed with the tools now available for doing this.

What software have you tried using for complex online discussions, and has ANY of it worked well for this purpose?

For those systems that worked or partially worked, which features did you find most useful? Which were most used by participants?

Of the systems that didn't work, why do you think they failed? Can you identify specific features they needed to work, or specific features that seemed to cause their failure?

If you can identify a good system, is it cross-platform? Open source? Without these properties it's hard to (1) get everyone participating who needs to be there, and (2) correct problems with the software and add needed features. Discussions in proprietary formats are at risk of becoming unreadable if the vendor drops the product — this outcome is totally unacceptable.

Can the group recommend a specific existing system for Foresight and our extended community to use? If not, can you compile a list of features that you recommend we include in new software to be commissioned by Foresight?

Challenge #47: Can we envision a great 20-person collaboration tool?

To solve the complex challenges raised by coming powerful technologies, we need to be able to discuss complex issues online. Foresight has been disappointed with the tools now available for doing this.

Usually we've focused on large-group tools — systems open to hundreds or thousands of people. But perhaps that's not what's really needed. Especially for design tasks, presumably a smaller group would be more effective.

Perhaps more valuable would be a really great tool for online collaboration on complex issues by a much smaller team — say, about 20 highly cooperative, highly motivated people. The system used by such a team would not need to solve many of the problems faced by public systems.

Have you personally used a system that handled complex online work by a team this size? What is it and why did it work well?

If you can identify a good system, is it cross-platform? Open source? Without these properties it's hard to (1) get everyone participating who needs to be there, and (2) correct problems with the software and add needed features. Discussions in proprietary formats are at risk of becoming unreadable if the vendor drops the product — this outcome is totally unacceptable.

Can the group recommend a specific existing system for Foresight and our extended community to use? If not, can you compile a list of features that you recommend we include in new software to be sponsored by Foresight?

Challenge #48: Can proper credit be given to those who do collaborative work?

It's often claimed that what creative people most want for their work is not money — it's credit, recognition from their peers, reputation value. As society overall becomes more affluent, this seems to be becoming more obvious, as seen in the growing open source software movement.

But the mechanisms for assigning credit are still primitive. Consider a collaboratively-written document, design, or software program — how can the user tell who has contributed what?

Right now it's not easy. How inhibiting is this, in your view?

How can it be fixed? Will this need to vary based on the type of collaborative product?

Is the overhead required to track credit so high that it overwhelms the supposed benefits? Or can it somehow be made an automatic part of the collaborative process?
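As one illustration of making credit automatic, an editing tool could record authorship as a side effect of each contribution, much as version-control "blame" features later did. A minimal sketch, with invented class and method names:

```python
# Illustrative sketch (not an existing tool): attach author metadata to
# each contribution as it is made, so credit accrues automatically
# rather than being declared separately.

from collections import Counter

class SharedDocument:
    def __init__(self):
        self.lines = []      # list of (author, text) pairs

    def append(self, author, text):
        """Record a contribution along with who made it."""
        self.lines.append((author, text))

    def text(self):
        """The document as readers see it, attribution stripped."""
        return "\n".join(t for _, t in self.lines)

    def credit(self):
        """Per-author contribution counts, derived from the edit record."""
        return Counter(a for a, _ in self.lines)

doc = SharedDocument()
doc.append("alice", "Introduction to the design.")
doc.append("bob", "First draft of the protocol.")
doc.append("alice", "Revised protocol details.")
# doc.credit() -> Counter({'alice': 2, 'bob': 1})
```

The overhead here is near zero, because attribution is captured by the editing operation itself rather than tracked as a separate chore.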

If you think of a system you like, should Foresight advocate it, and perhaps even commission its design and production?

 


Foresight materials on the Web are ©1986–2024 Foresight Institute. All rights reserved. Legal Notices.

Web site developed by Stephan Spencer and Netconcepts; maintained by James B. Lewis Enterprises.