Presenter
Kevin Esvelt, MIT Media Lab
Kevin Michael Esvelt is an American biologist. He is currently an assistant professor at the MIT Media Lab, where he leads the Sculpting Evolution group. After receiving a B.A. in chemistry and biology from Harvey Mudd College, he completed his PhD at Harvard University as a Hertz Fellow, developing phage-assisted continuous evolution (PACE) in David R. Liu's laboratory. As a Wyss Technology Fellow, Esvelt was involved in the development of gene drive technology, and he now focuses on the bioethics and biosafety of gene drives. In 2016, Esvelt was named an Innovator Under 35 by MIT Technology Review.
Summary:
Civilization is demonstrably vulnerable to pandemic pathogens, which are becoming increasingly accessible due to our growing skill at programming biology and to poorly aligned AI assistants capable of walking untrained individuals through complex tasks. Worse, natural pandemic pathogens are not optimized to cause harm, so future engineered agents could be far more destructive. These are symptoms of an overarching problem: expanding access to increasingly powerful technologies can permit individuals to threaten civilization. One possible solution is to build trustworthy systems that can privately receive dangerous knowledge and use it to mitigate harms without disclosure to fallible humans. SecureDNA is an automated cryptographic platform designed to screen global DNA synthesis for hazards without learning anything about harmless orders or disclosing what is considered hazardous. Developed by an international team of academic biotechnologists, cryptographers, and information-security specialists, it offers a way to crowdsource threat identification: a scientist need disclose a hazard only to a single system curator, who can then restrict synthesis of that sequence (hidden among numerous decoy sequences) to those with permission to work with the genetic construct in question.
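The core idea of screening orders against a hazard list padded with decoys can be illustrated with a deliberately simplified sketch. This is not SecureDNA's actual protocol (which uses oblivious cryptography so the screener never sees plaintext orders, and whose window size and hash construction differ); all function names, the 30-base window, and the salted-SHA-256 matching below are illustrative assumptions only.

```python
import hashlib
import secrets

WINDOW = 30  # fixed-length subsequences to screen (window size is an assumption)

def windows(seq: str, size: int = WINDOW):
    """Yield every overlapping window of `size` bases from a DNA sequence."""
    for i in range(len(seq) - size + 1):
        yield seq[i:i + size]

def digest(window: str, salt: bytes) -> bytes:
    """Salted hash of one window, so the database alone can't be dictionary-attacked offline."""
    return hashlib.sha256(salt + window.encode()).digest()

def build_hazard_db(hazards: list[str], n_decoys: int, salt: bytes) -> set[bytes]:
    """Hash the windows of real hazards plus random decoy windows, so an
    observer of the database cannot tell which entries are genuinely dangerous."""
    db = {digest(w, salt) for h in hazards for w in windows(h)}
    for _ in range(n_decoys):
        decoy = "".join(secrets.choice("ACGT") for _ in range(WINDOW))
        db.add(digest(decoy, salt))
    return db

def screen(order: str, db: set[bytes], salt: bytes) -> bool:
    """Return True (flag for review) if any window of the order matches the database."""
    return any(digest(w, salt) in db for w in windows(order))
```

In this toy version the screener still sees the order in plaintext; the point of SecureDNA's cryptographic design is precisely to avoid that, while the decoy entries keep the hazard list itself from serving as a map of dangerous sequences.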
Revised challenge (after seeing the example): How can we avoid discovering potentially catastrophic technologies until we have constructed adequate defenses… especially when disclosing the existence of a threat could make it accessible?