Foresight Update: June 2018

This update describes how, over the past month, we have been fulfilling our mission of advancing revolutionary technologies by rewarding research excellence, warning against recklessness, and building a community to promote beneficial uses and reduce the risks associated with these technologies.

We brought top experts together to coordinate progress on Artificial General Intelligence safety

After the encouraging response to last year’s Artificial General Intelligence (AGI) strategy meeting on Timelines & Policy (white paper), this year Foresight Institute organized another one-day AGI strategy meeting alongside the Effective Altruism Global conference, June 8-10, to gather important AGI safety organizations.

In addition to examining strategies for impact, this meeting served as a point of contact for coordinating efforts with other AI safety organizations.

Given the deserved attention this topic has recently received from Good AI, FHI, FLI, OpenPhil, and other organizations (via grants, prizes, and open positions), we dedicated this year’s strategy meeting to discussing AI/AGI coordination, especially among great powers.

Topics addressed:

  • Near-term opportunities to influence AI policy
  • Emergent race dynamics between great powers, e.g., between the US & China
  • Historical strategies for achieving cooperation between different players in different games
  • Advantages of multilateral vs. unilateral scenarios in AI development
  • General concerns regarding open vs. closed research
  • Potential for increased coordination between AI safety organizations

Stay tuned: the white paper is coming soon!

The participants truly made this meeting a productive venture into coordination. A heartfelt thank you to:

Olga Afanasjeva – Good AI
Stuart Armstrong – Future of Humanity Institute
Seth Baum – Global Catastrophic Risk Institute
Haydn Belfield – Centre for the Study of Existential Risk
Rob Bensinger – Machine Intelligence Research Institute
Malo Bourgon – Machine Intelligence Research Institute
Niel Bowerman – 80,000 Hours
Ryan Braley – Lightbend
Tom Brown – Google Brain
Samo Burja – Bismarck Analysis
Ryan Carey – Ought, Foresight Fellow
Betsy Cooper – Center for Long-Term Cybersecurity
Owen Cotton-Barratt – Future of Humanity Institute
Miron Cuperman – Base Zero
Jessica Cussins – Future of Life Institute, Center for Long-Term Cybersecurity
Jeffrey Ding – Future of Humanity Institute
Allison Duettmann – Foresight Institute
Peter Eckersley – Electronic Frontier Foundation
Kevin Fischer – Crypto Lotus
Carrick Flynn – Future of Humanity Institute
Benjamin Garfinkel – Future of Humanity Institute
Melody Guan – Stanford
Geoffrey Irving – OpenAI
De Kai – Hong Kong University of Science & Technology
Alex Kotran – AI Initiative, Harvard Kennedy School
Victoria Krakovna – DeepMind
Janos Kramar – DeepMind
Tony Lai – Legal.io
Jade Leung – Future of Humanity Institute
Matthew Liston – ConsenSys
Terah Lyons – Partnership on AI
Matthijs Maas – FLI, Global Catastrophic Risk Institute
Richard Mallah – Future of Life Institute
Fiona Mangan – Justice and Security in Transitions
Tegan McCaslin – AI Impacts
Joe McReynolds – China Security Studies Fellow, Jamestown Foundation
Eric Michaud – Rift Recon
Mark Miller – Foresight Institute, Agoric
Ali Mosleh – John Garrick Institute for the Risk Sciences
Mark Nitzberg – Center for Human Compatible AI
Jim O’Neill – Mithril Capital
Catherine Olsson – Google Brain
Michael Page – OpenAI
Christine Peterson – Foresight Institute
Peter Scheyer – Foresight Fellow
Carl Shulman – Future of Humanity Institute
Tanya Singh – Future of Humanity Institute
Jaan Tallinn – Future of Life Institute, Centre for the Study of Existential Risk
Alyssa Vance – Apprente
Michael Webb – Stanford
Qiang Xiao – School of Information, UC Berkeley, China Digital Times
Mimee Xu – UnifyID
Roman Yampolskiy – University of Louisville, Foresight Fellow

We had a fantastic time probing the minds of David Eagleman and Arvind Gupta on the future of Neurotech and Biotech

Representing IndieBio, David Eagleman and Arvind Gupta joined Foresight agents Allison Duettmann and Lou Viquerat on a mission to explore the paths forward for human biology.

The evening had two practical purposes: (a) laying out a roadmap for biotech and neurotech, in order to (b) establish an ethical framework to guide the development of those technologies in the very near future. All participants were encouraged to inform the discussion, and our audience’s questions proved to be on point! Watch both salon videos and enjoy this engaging, participative discussion.

Part A: Roadmap for Human Biology

Part B: Ethics of Future Human Biology

Our salons aim to help our civilization endure through a healthy critique of our current paradigms.

The monthly salon series in 2018 focuses on strengthening civilization, an approach we first discussed as a strategy for AI safety in our paper on Decentralized Approaches to Reducing Existential Risks. Since that is a mouthful, we divided the topic into manageable pieces.

Topics we have discussed to date (all salon videos are available in this YouTube playlist):

  • Debunking Social Motives with Robin Hanson – video here.
  • AI & Human Morality with Allison Duettmann – video here.
  • Creating Counterculture with Joon Yun – video here.
  • Updating Rationality with Allison Duettmann – event here.

We cannot wait to read “Artificial Intelligence Safety and Security”

Foresight Fellow R. Yampolskiy’s book features contributions from Eric Drexler, Nick Bostrom, Max Tegmark, Bill Joy, Ray Kurzweil, Eliezer Yudkowsky, Ian Goodfellow, David Brin, Kevin Warwick, Edward Frenkel, Samy Bengio, and many more.

Roman V. Yampolskiy, 2018 Fellow in Artificial Intelligence Safety & Security, is publishing a book on AI. We recommend pre-ordering now, lest you forget!

Features
 

  • Introduces AI Safety and Security and defines concepts necessary to formalize its study
  • Describes a number of AI Safety and Security mechanisms
  • Defines the field of AI Safety and Security, which sits at the intersection of computer security and AI
  • Serves as a reference for cybersecurity experts working on AI
  • Addresses reward engineering and the theory of Value Alignment

Summary

“The history of robotics and artificial intelligence in many ways is also the history of humanity’s attempts to control such technologies. From the Golem of Prague to the military robots of modernity, the debate continues as to what degree of independence such entities should have and how to make sure that they do not turn on us, its inventors. Numerous recent advancements in all aspects of research, development and deployment of intelligent systems are well publicized but safety and security issues related to AI are rarely addressed. This book is proposed to mitigate this fundamental problem. It is comprised of chapters from leading AI Safety researchers addressing different aspects of the AI control problem as it relates to the development of safe and secure artificial intelligence. The book is the first edited volume dedicated to addressing challenges of constructing safe and secure advanced machine intelligence.”

The chapters vary in length and technical content from broad interest opinion essays to highly formalized algorithmic approaches to specific problems. All chapters are self-contained and could be read in any order or skipped without a loss of comprehension.

About the Author: Roman V. Yampolskiy, Foresight Fellow in Artificial Intelligence Safety & Security
 

Roman is a tenured Associate Professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and the author of many books, including Artificial Superintelligence: A Futuristic Approach. Roman holds a PhD from the Department of Computer Science and Engineering at the University at Buffalo. He was a recipient of a four-year NSF (National Science Foundation) IGERT (Integrative Graduate Education and Research Traineeship) fellowship.

And we wanted to share Ryan Carey’s latest research with you

Check out this paper from Ryan Carey, 2018 Fellow in Artificial Intelligence & Safe Machine Learning. “Incorrigibility in the CIRL Framework” discusses whether redirectable agents can be built via various mechanisms.

Abstract

A value learning system has incentives to follow shutdown instructions, assuming the shutdown instruction provides information (in the technical sense) about which actions lead to valuable outcomes. However, this assumption is not robust to model mis-specification (e.g., in the case of programmer errors). We demonstrate this by presenting some Supervised POMDP scenarios in which errors in the parameterized reward function remove the incentive to follow shutdown commands. These difficulties parallel those discussed by Soares et al. 2015 in their paper on corrigibility. We argue that it is important to consider systems that follow shutdown commands under some weaker set of assumptions (e.g., that one small verified module is correctly implemented; as opposed to an entire prior probability distribution and/or parameterized reward function). We discuss some difficulties with simple ways to attempt to attain these sorts of guarantees in a value learning framework.
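To make the shutdown-incentive argument above concrete, here is a toy numerical sketch. It is our own illustration, not code or a model from the paper, and the payoffs are invented for the example: a value learner that is genuinely uncertain about a task’s utility expects to gain from deferring to an overseer who can shut it down, while an agent with a confidently mis-specified reward model sees no such gain.

```python
import numpy as np

rng = np.random.default_rng(0)
# Robot's prior belief over the task's true utility u (assumed standard normal).
u = rng.normal(0.0, 1.0, size=100_000)

# Well-specified value learner: the human observes u and shuts the robot down
# whenever u < 0 (shutdown payoff 0), so deferring is worth E[max(u, 0)] > E[u],
# and the robot has an incentive to remain correctable.
print("act now:", round(u.mean(), 3))
print("defer  :", round(np.maximum(u, 0.0).mean(), 3))

# Mis-specified reward model: a programmer error pins the estimate at u = +0.5,
# so deferring no longer looks strictly better and the shutdown incentive vanishes.
u_hat = 0.5
print("act now (mis-specified):", u_hat)
print("defer   (mis-specified):", max(u_hat, 0.0))
```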

Ryan Carey, Foresight Fellow in Artificial Intelligence & Safe Machine Learning

Ryan is a research contractor at Ought Inc. His current work focuses on aggregating answers from safe question-answering systems. Previously, he worked on predicting slow human judgments. In his past work at the Machine Intelligence Research Institute, he touched on issues such as how systems ought to behave if they have bugs in their code, and how systems ought to learn and explore if they occasionally encounter catastrophes.

Donate today for a better tomorrow

Your support enables us to push for better futures on three frontiers:
1. Advocating for neglected risks arising from technologies
2. Selectively advancing beneficial technologies
3. Fostering ongoing debate to decide which risks to advocate for and which beneficial technologies to advance

Because we value accountability, here you will find the list of our 2017 achievements at a glance – and how we could continue each project in 2018 with your support.

Donate to Foresight

If you would like to keep up with Foresight Institute’s events and publications, follow us on Twitter and LinkedIn, befriend us on Facebook, or subscribe to our YouTube channel.


Thank you,

Foresight Institute 
