Presenter
Jonathan Passerat-Palmbach, Imperial College
Jonathan is a senior research scientist at [Flashbots](https://www.flashbots.net/). He explores the application of Privacy Enhancing Technologies to hard problems such as decentralised collaborative learning and Maximal Extractable Value (MEV) in blockchains. He has built strong expertise in Secure Computing (Trusted Execution Environments - TEEs, FHE, ...), Federated Learning, and Verifiable Computing (TEEs, Zero-Knowledge Proofs, ...). Jonathan is also a research fellow at Imperial College London ([BioMedIA](https://biomedia.doc.ic.ac.uk)) and City, University of London ([CitAI](https://cit-ai.net)), where he co-supervises research students on the topics of Privacy-Preserving Machine Learning and Federated Learning. He formerly led the R&D arm of ConsenSys / [Equideum Health](https://equideum.health/), where the team focused on bringing together privacy-preserving machine learning and blockchains to build a new generation of healthcare systems.
Summary:
GPT-3 is a foundation model underpinning many AI applications. AI systems such as GPT-3 are highly centralized for a number of logistical and security reasons. However, Jonathan believes such centralized AI wastes resources and is inherently unfair. Building a decentralized AI will require the right governance, privacy, and incentives, and Jonathan works on building models to reason about these concepts in the context of cryptography. Beyond the underlying technical design, one of the larger practical problems for decentralized AI is how to finance it as a public good.