We have been busy, and we have a lot to catch you up on. This update covers how, over the past few months, we have been fulfilling our mission of advancing revolutionary technologies: rewarding research excellence, raising awareness of reckless development, and building a community to promote beneficial uses and reduce the risks associated with these technologies.
This update is divided into five parts:
1. Our recent work advancing beneficial technology…
2. …and preventing risks and pitfalls of technology.
3. We’ve been cogitating on hard problems, with the help of our community (which we are grateful for!).
4. We have been steering the change we want to see in the world.
5. Announcements: please meet two new humans we are co-creating positive futures with.
1. We have been bent on advancing beneficial technology…
We announced the winners of the 2018 Feynman Prize!
It was a great honor to announce Lutz, Heinrich, and von Lilienfeld as winners of the 2018 Foresight Institute Feynman Prizes in nanotechnology, presented by two distinguished former fellows: Nobel Laureate Sir Fraser Stoddart and Jonathan Barnes. Find more information in the press release.
Consider helping our mission by applying to the 2018 Atomic Precision for Longevity Workshop.
Are you actively working to extend healthy human lifespan? We invite you to apply to our 2018 Research Workshop on Atomic Precision for Longevity. Following our upcoming Spring workshop with honorary Chair Nobel Laureate Fraser Stoddart, this Fall workshop will focus on novel approaches to extending human longevity. More info and application form here.
2. …and working hard on preventing risks and pitfalls of technology.
Through a multidisciplinary Seminar on Artificial General Intelligence & Corporations:
If we don’t yet know how to align Artificial General Intelligences with our goals, we might gain some insight by studying corporations. Indeed, some argue that corporations already are Artificial Intelligences; legally, at least, we treat them as persons. We spent an afternoon examining AI alignment, and in particular whether the legal status of corporations offers insights into how to align AI goals with human goals. While the meeting focused on AI safety, it drew together AI safety, law, policy, philosophy, and computer security, and is highly relevant for anyone working in or interested in those areas. All the seminar talks and sessions are available here. White paper coming soon.
Keynote speakers included (names link to video recordings):
Brewster Kahle, Founder of the Internet Archive
Tom Kalil, Chief Innovation Officer at Schmidt Futures, former Deputy Director for Technology & Innovation at OSTP
Mark Nitzberg, Executive Director of the UC Berkeley Center for Human Compatible AI
Mark Miller, Senior Fellow of the Foresight Institute, pioneer of agoric computing, designer of several object-capability programming languages
Elizabeth Enayati Powers, Counsel at Turner Boyd LLP
Peter Scheyer, Foresight Institute Fellow in Cybersecurity & Corporate AGI, Cybersecurity Veteran
Allison Duettmann, AI Safety Researcher at Foresight Institute, Advisor to EthicsNet
And through our AI Safety overview at SXSW, which got us featured right alongside Elon Musk:
On March 13th, at SXSW in Austin, Allison Duettmann held a workshop, AI Safety: Why It’s Hard & State of the Art, an overview of the state of the art in AI safety, organized around ethics, technical alignment, cybersecurity, and coordination. You can watch the full video below or access it here.
We were quite excited to see Allison’s workshop featured alongside Elon Musk’s interview and David Chalmers’s session in the AI overview published by SXSW.
3. We’ve been cogitating on hard problems, with the help of our community (and we are grateful for it!).
Our salons aim at helping our civilization endure through a healthy critique of our current paradigms.
The monthly salon series in 2018 focuses on strengthening civilization, an approach we first discussed as a strategy for AI safety in our paper on Decentralized Approaches to Reducing Existential Risks. Whether or not one accepts our proposal that civilization itself is the relevant superintelligence in need of strengthening, an antifragile civilization seems like a valuable goal regardless of which existential risks one considers. Since strengthening civilization is a mouthful, we divide the topic into manageable pieces: each month we debunk one error of civilization, investigate the historical genealogy of the practice at hand, scrutinize the underlying incentives, and examine whether we can do better in the future.
Topics we have discussed to date:
We look forward to continuing the good work, with your help! Our next salon, “Making the most of our humanity” with David Eagleman and Arvind Gupta is on June 13th at IndieBio – the salon has the very practical goal of establishing a roadmap and moral framework for the future of Biotechnology and Neurotechnology. Come, question, participate: the invite is just below.
We brought together scientists & technologists to work across disciplines, and it was delightful.
Foresight Institute Vision Weekend gathered 200 future-minded members, makers, and doers, along with the brightest minds in key fields, to debate what humanity’s best path could look like and what we personally could do to help steer things in that direction. It was candy for the mind. Keynote debates were held at Gray Area on Saturday, December 2, followed by breakout sessions at The Laundry on Sunday, December 3. You can watch all four keynote sessions, and see the Vision Weekend summary below for an outline of the event.
Foresight Vision Weekend summary.
4. We have been steering the change we want to see out in the world.
Allison Duettmann has been all over the place, inspiring a positive long-term future for humanity:
Mark S. Miller, Foresight Senior Fellow on Computation, and his team launched Agoric, backed by Zooko Wilcox and Naval Ravikant.
Together with Mark, Christine Peterson and Allison Duettmann have long been advocating for tackling our unsafe cybersecurity foundations. We are therefore very excited for this company, which goes after blatant cyber vulnerabilities in today’s smart contracts. Read the Coindesk article announcing the launch. For more information, watch Allison Duettmann in discussion with Mark S. Miller on cybersecurity during our April seminar on AGI, and read our paper on decentralized approaches to mitigating cyber risk on Google Research.
5. We are excited to collaborate with two humans to co-create positive futures.
Sonia Arrison has joined the Foresight Institute Board of Directors:
We are honored that Sonia Arrison is joining the Foresight team as a member of our Board of Directors. Sonia is a best-selling author, analyst, entrepreneur, and investor. She is the founder of 100 Plus Capital, co-founder of Unsugarcoat Media, and an associate founder of and advisor to Singularity University in Mountain View, California. Her research focuses on exponentially growing technologies and their impact on society. Her most recent book, 100 Plus: How the Coming Age of Longevity Will Change Everything, From Careers and Relationships to Family and Faith, addresses the social, economic, and cultural impacts of radical human longevity. It gained national best-seller status and keeps Sonia busy speaking all over the world. She has inspired Foresight to start planning a series of salons on the topic of longevity.
Lou Viquerat, Director of Community:
The Foresight team is growing, and Lou Viquerat is joining the ranks as Director of Community. Since Lou is editing the present update, she can tell you very directly how thrilled she is to be joining Foresight’s team, grateful for the opportunity to make a difference in driving positive change in the world, and looking forward to helping advance technologies beneficial to the human future – with all of you.