Presenter
Chhi’mèd Künzang, Protocol Labs
Protocol Labs works on a new breed of zero-knowledge proofs – SNARKs (Succinct Non-Interactive ARguments of Knowledge) – used to verify computations over cryptographically secured data. Building on its previous work creating decentralized data storage via the InterPlanetary File System, the team is improving SNARKs by using recursion, so that one SNARK can validate other SNARKs. This new system is being ...
Joel Thorstensson, Ceramic Network
Joel is the CTO and co-founder of 3Box, a user-centric data system built on distributed web technology. He got into blockchain while studying complex adaptive systems at Chalmers in 2015. He later joined ConsenSys to work on uPort, the first self-sovereign identity system based on Ethereum. Most recently he helped co-create the Ceramic protocol, a decentralized network for an interconnected data web and user-centric data control.
Jonathan Passerat-Palmbach, Imperial College
Jonathan is a senior research scientist at Flashbots (https://www.flashbots.net/). He is exploring the application of Privacy Enhancing Technologies to hard problems such as decentralised collaborative learning and Maximal Extractable Value (MEV) in blockchains. He has developed strong expertise in Secure Computing (Trusted Execution Environments - TEEs, FHE, ...), Federated Learning and Verifiable Computing (TEEs, Zero-Knowledge Proofs, ...) ...
Deepak Maram, Cornell University
I am broadly interested in computer security and applied cryptography. My recent focus has largely been on decentralized identity. Some research highlights:
- DECO is a privacy-preserving oracle protocol licensed from Cornell by Chainlink.
- CanDID is a Sybil-resistant decentralized identity system that builds on DECO to port legacy credentials.
- GoAT is a file geolocation protocol useful to prove the location of files.
Remco Bloemen, Worldcoin
Head of Blockchain at Worldcoin
Summary:
What are you trying to do?
We want to facilitate and incentivize large-scale cooperative machine learning while preserving individual data and
model privacy.
How is it done today? What are the limitations of the current system?
Current federated learning applies privacy-enhancing techniques, but coordination remains centralized, and users rightly distrust the central party.
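The federated setup described above can be sketched in a few lines: each client trains on its own private data and shares only model parameters, which a coordinator averages (the classic FedAvg pattern). The toy one-parameter linear model and data below are illustrative assumptions, not the project's actual models.

```python
# Minimal federated averaging (FedAvg) sketch: clients never share raw
# data, only locally updated weights, which are averaged each round.
# The 1-D linear model y ~ w*x and the datasets are toy assumptions.

def local_step(w, data, lr=0.1):
    """One gradient step of mean squared error for y ~ w*x on a client's data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(global_w, client_datasets):
    """Each client trains locally; the coordinator averages the results."""
    local_ws = [local_step(global_w, d) for d in client_datasets]
    return sum(local_ws) / len(local_ws)

clients = [[(1.0, 2.0), (2.0, 4.0)],   # client A's private data (y = 2x)
           [(3.0, 6.0), (4.0, 8.0)]]   # client B's private data (y = 2x)
w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
# w converges to the shared optimum (2.0) without pooling the raw data.
```

In the decentralized setting the project targets, the averaging step itself would be distributed rather than run by a trusted coordinator.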
What is new in your approach and why do you think it will be successful?
The full realisation of our project would create a decentralised environment in which autonomous and human actors contribute to improving, and leverage, various instances of artificial intelligence.
If successful, what difference will it make?
We have seen a growing number of initiatives in the scientific community deploying various forms of collaborative learning, highlighting early adopters' desire to train on more than just their own data sources and thereby obtain more powerful, less biased models.
How long will it take?
The mid-term goals will take 6-12 months, while the full vision may require multiple years to complete.
What are the mid-term and final exams to check for completeness?
Verifiable evaluation of ML models (via succinct zero-knowledge proofs) will enable innovative use cases such as an iterated prediction market (to establish a model's value) and a market for model queries (verified use of known models as a service).
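A succinct zero-knowledge proof is beyond a short snippet, but the commit-then-verify pattern underlying a market for model queries can be illustrated with a plain hash commitment: the seller is bound to specific model weights up front, so a buyer can later check which model produced an answer. This is a toy stand-in under our own assumptions, not the protocol itself; a real system would prove correct evaluation without revealing the weights.

```python
# Toy stand-in for "verified use of known models as a service": a hash
# commitment binds the seller to specific weights. NOTE: this reveals
# the weights at verification time; a succinct zk proof would avoid
# that. All names and values here are illustrative assumptions.
import hashlib, json

def commit(weights):
    """Deterministic commitment to a list of model weights."""
    return hashlib.sha256(json.dumps(weights).encode()).hexdigest()

def predict(weights, x):
    """Toy linear model: dot product of weights and input."""
    return sum(w * xi for w, xi in zip(weights, x))

# Seller publishes a commitment to the model before selling queries.
model = [0.5, -1.25, 2.0]
commitment = commit(model)

# Later, the seller answers a query and reveals the weights; the buyer
# verifies the revealed weights against the original commitment.
answer = predict(model, [1.0, 2.0, 3.0])
assert commit(model) == commitment  # buyer's verification step
```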
The final product is fully decentralised model training. Model IP will be protected via either FHE or MPC. The owner obtains strong guarantees that the decentralised training was performed according to their preset conditions.
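One MPC building block that could protect individual contributions during decentralised training is additive secret sharing: each participant splits its model update into random shares, so no single server ever sees an individual update, yet the sum (and hence the average) is recovered exactly. The field size and server layout below are illustrative assumptions, not the project's design.

```python
# Additive secret sharing sketch: an update is split into random shares
# that sum to the secret modulo a prime P. Any subset of servers smaller
# than the full set learns nothing about an individual update.
import random

P = 2**61 - 1  # a large prime; all arithmetic is modulo P (an assumption)

def share(secret, n):
    """Split an integer secret into n additive shares modulo P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

updates = [5, 7, 9]  # each client's integer-encoded model update
n_servers = 3
server_inboxes = [[] for _ in range(n_servers)]
for u in updates:
    # Each client sends exactly one share to each server.
    for inbox, s in zip(server_inboxes, share(u, n_servers)):
        inbox.append(s)
# Each server publishes only the sum of the shares it received.
partial_sums = [sum(inbox) % P for inbox in server_inboxes]
total = reconstruct(partial_sums)  # equals 5 + 7 + 9, yet no server saw any update
```

Real fixed-point encodings of model weights and robustness against dropped or malicious servers are the hard parts this sketch omits.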
Individual contributors and institutions are rewarded in accordance with the significance of their contribution.
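One simple way to make rewards track the significance of a contribution is leave-one-out valuation: score each contributor by how much model quality drops when their data is removed. The toy utility function and datasets below are illustrative assumptions; a production system would use a trained model's validation accuracy (or Shapley-style averaging) instead.

```python
# Leave-one-out (LOO) contribution scoring sketch: a contributor whose
# data is redundant earns nothing; unique, plentiful data earns more.
# The utility function here (count of distinct points) is a stand-in
# for real model accuracy.

def utility(datasets):
    """Toy utility: number of distinct training points across datasets."""
    return len({p for d in datasets for p in d})

def loo_scores(datasets):
    """Marginal value of each dataset: utility lost when it is removed."""
    full = utility(datasets)
    return [full - utility(datasets[:i] + datasets[i + 1:])
            for i in range(len(datasets))]

contributors = [[(1, 2), (2, 4)],           # contributor A
                [(2, 4)],                   # contributor B (fully redundant)
                [(3, 6), (4, 8), (5, 10)]]  # contributor C
scores = loo_scores(contributors)  # rewards proportional to these scores
```

Here contributor B's score is zero because every point it supplies is already provided by A, which is exactly the incentive property the summary describes.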