Summary
- In the first half of this year, we started from scratch: what is cooperation, why is it important, what does it allow humans to do?
- Then we addressed what could happen if we port the cooperative arrangements that we already have into the digital world enhanced by crypto commerce.
- Now, what will happen to these same cooperative arrangements when there are artificial intelligences in the mix? And what sorts of arrangements will artificial intelligences enter into themselves?
- Before diving into that, we revisit the idea that civilization is ALREADY a superintelligence composed of human and artificial intelligences cooperating towards various goals.
- Civilization's emergent phenomena don't fully satisfy the traits of a true superintelligence, but there may be ways to measure society's intelligence by its ability to accomplish various goals and solve certain problems.
- We don't generally think of entities like companies and countries as "intelligent", but they nevertheless interact and behave in ways that exhibit intelligence: coordinating resources and accomplishing goals together.
- Today we hear from Richard Craib, the founder of a company called Numerai, a decentralized hedge fund acting as a superintelligent collective of stock traders.
This meeting is part of the Intelligent Cooperation Group and accompanying book draft.
Presenters
Richard Craib
Richard Craib is the founder and CEO of Numerai, a hedge fund built by a network of data scientists. Previously, Craib was a data scientist at an asset management firm with $15 billion under management. He has a degree in pure mathematics, with a focus on abstract algebra, from Cornell University….
Presentation
- Numerai is the first hedge fund that gives away all its data, so its users can build models on that data and contribute information back to improve the hedge fund.
- Publishing the data publicly creates a tension: the community can use it to improve the hedge fund's model for everyone, but non-team-players could also take the data and start their own fund.
- To mitigate this, the data is obfuscated: there are signal identifiers but no descriptions (such as "the P/E ratio of a stock" or which stock is which). Users submit predictive models built on this data; the fund uses the predictions to earn, and users are paid rewards for models that do well (see the sketch after this list).
- If you had a site like Numerai for building backtests, you'd have a problem: it's easy to build a good backtest but very hard to know whether it will hold up on live data.
- In a machine learning setting like detecting faces, the machine is always fitting to static data and will nearly always converge, but in finance the future (and the data) is always changing.
- But model builders have a sense of whether they are overfitting to certain signals, and they can build models that compete on the same data by predicting the future. How do we surface that information?
- One thing that happened: when we started Numerai, we were Sybil-attacked. Thousands of accounts were created, swamping the signal market with random signals in the hope of getting lucky.
- In response, a cryptocurrency and staking system was introduced to align incentives and deter trolls: users staked their NMR tokens against their models and could have their stake slashed if the model performed poorly.
- What's become clear is that the models with the most stake tend to be the best performing, and the aggregate of the unslashed, best-performing models yields the best market signals.
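As an illustration of the workflow above, here is a minimal Python sketch. The file names, the `feature_*`/`target` column names, and the model choice are placeholders assumed for illustration, not Numerai's actual schema or API; the point is only that modelers work with anonymized signal identifiers and contribute predictions rather than raw insights.

```python
# Minimal sketch (hypothetical file names, column names, and model choice; not
# Numerai's actual schema or API): train on obfuscated features and submit
# predictions rather than human-readable insights.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Obfuscated data: feature columns are anonymous IDs with no human-readable
# meaning (no "P/E ratio", no ticker); only a numeric target is provided.
train = pd.read_csv("train.csv")   # placeholder path
live = pd.read_csv("live.csv")     # placeholder path

features = [c for c in train.columns if c.startswith("feature_")]

model = GradientBoostingRegressor()
model.fit(train[features], train["target"])

# The predictions themselves are what the modeler contributes back to the fund;
# the modeler never learns which rows correspond to which stocks.
predictions = pd.Series(model.predict(live[features]), index=live.index, name="prediction")
predictions.to_csv("predictions.csv")
```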
Q&A
One big complaint about backtests is that they don't take into account the market impact of the trades they simulate, transaction fees, and other costs. Is that an issue for Numerai?
- A: Backtests are always simulations, and many don’t work well on live data because they’ve simulated something poorly. We mitigate that by only trading the most liquid stocks in the world so large orders have limited impact.
How do you decide how much to slash bad models?
- A: Payouts are tied to the model's correlation with the targets: if your model is 1% correlated with the target, your stake increases by that much; if it's -1% correlated, you lose that much (a sketch follows below).
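A rough sketch of that payout rule (the function and its details are assumptions for illustration; the real payout formula may include caps, burn rates, and other terms):

```python
# Illustrative payout rule, assumed for the sketch: the stake changes in
# proportion to the model's correlation with the target.
import numpy as np

def round_payout(stake: float, predictions: np.ndarray, target: np.ndarray) -> float:
    """Change in stake for one scoring round, proportional to correlation with the target."""
    corr = float(np.corrcoef(predictions, target)[0, 1])
    return stake * corr

# E.g. with 100 NMR staked: +0.01 correlation -> roughly +1 NMR earned,
# -0.01 correlation -> roughly 1 NMR slashed.
```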
How does the Numerai model compare to general prediction markets with no AI in the mix?
- A: Any market is a kind of money-weighted estimate of what the thing should be worth. That's what we're trying to do with staking: if there's a lot staked on a prediction, we want to believe it, because the stakers have a lot to lose.
- But the form of Numerai is all quantitative. There are no sentimental or behavioral analyses of companies; it's strictly crunching numbers.
Do you think there are any other data realms for this type of system besides financial?
- A: I don’t actually. Finance lends itself to this kind of model because the margins can be extremely tight: if you can get a model from 51% to 52%, that’s a huge difference. This works less well for things like detecting cancer from imaging.
- Crowdsourced models aren’t useful for every realm of machine learning, but staking is absolutely a huge deal for almost any company. The ability to create a negative incentive online is a big deal as a way to combat bad actors.
We’ve taken a look at ErasureBay and AMIX, information exchanges that incorporate staking bounties to reward information.
Can I buy a stake and benefit from others' models?
- A: No, you are always staking on your own predictions; you're putting money down on your own models.
Numerai seems like an interesting hybrid of a centralized and decentralized system. You have a fleet of data scientists competing on models, with your AI metamodel processing the results of the competition. How would you describe Numerai's makeup?
- A: We’re distributed for sure. And the NMR tokens and the staking are running on a blockchain that is outside of our control, so that’s decentralized.
- Stocks aren't traded on the blockchain yet, so the company Numerai is the entity with financial access to trading. Once stocks can be traded on the blockchain, Numerai could become a totally decentralized DAO running on-chain with no employees.
How does your metamodel work?
- A: There are very few things in finance that are "laws", but one is that if there are two uncorrelated models, you want to trade both to lower your volatility and increase your returns.
- Towards this, Numerai is shooting to be the fund with the most models, and the metamodel is the stake-weighted average of all the models working on the Numerai platform (sketched below).
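A minimal sketch of such a stake-weighted aggregation (array shapes and the blending rule are assumptions for the sketch; Numerai's production aggregation may differ):

```python
# Illustrative stake-weighted metamodel: each model's predictions are averaged,
# weighted by the NMR staked on that model.
import numpy as np

def stake_weighted_metamodel(predictions: np.ndarray, stakes: np.ndarray) -> np.ndarray:
    """predictions: (n_models, n_stocks); stakes: (n_models,) NMR staked per model."""
    weights = stakes / stakes.sum()
    return weights @ predictions  # (n_stocks,) blended signal

# Two models with different stakes: the second model carries 3x the weight.
preds = np.array([[0.2, 0.8, 0.5],
                  [0.6, 0.4, 0.7]])
stakes = np.array([100.0, 300.0])
print(stake_weighted_metamodel(preds, stakes))  # -> [0.5  0.5  0.65]
```

Blending uncorrelated models this way is also the point made above: the combined signal tends to have lower volatility than any single model.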
Do you have any thoughts on how to better fund longevity research?
- A: It seems like anyone doing longevity research faces the same problem: regulatory environments.
- There is a lot of overlap between people who made money on cryptocurrency and people interested in longevity. So now that countries have begun to court the crypto-rich and change laws to be more attractive to them, there may be a chance.
Can you make any medium- or longer-term predictions (maybe 5-7 years) about qualitatively different insights arising from the Numerai model?
- A: I think that's possible. In the quant finance world, a lot of it is very low-tech: simple arbitrages and such, not highly intelligent. Now, with machine learning models in finance, it's getting weird. Models are making money in ways whose risk can't be explained.
- A: In regard to AGI, when the AGI folks are pressed they will eventually say “it doesn’t even need to be general, it could be a narrow AI that trades the stock market”. So I built a platform for whoever finds that narrow AI to trade on 🙂
- Traditional quant funds all track each other; new machine learning funds are decoupling in interesting ways. In 3 years Numerai could have billions in the fund and a much different impact on the market.
Could the biggest earners on Numerai become non-human model makers?
- A: Yeah, we're already seeing automated model building, retraining, and restaking, so it's a short step to fully automated AI traders.
You use "intelligence" both in the way a military might (gathering bits of intelligence for strategizing) and in the sense that this group focuses on.
- A: Each model has a "metamodel contribution", calculated by whether the model actually contributes new information to the metamodel or is merely a subset of it.
- We are always looking for the hole in our knowledge; that's always the most important thing to learn, and we pay based on this to encourage the discovery of local knowledge (a sketch of such a contribution score follows below).
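One plausible way to operationalize a "metamodel contribution" score, assumed here for illustration rather than taken from Numerai's actual formula: remove the part of a model's predictions already explained by the metamodel, then measure how well the residual still predicts the target.

```python
# Hedged sketch of a "metamodel contribution"-style score (one plausible
# formulation, assumed for illustration; not necessarily Numerai's exact
# calculation).
import numpy as np

def contribution_score(model_preds: np.ndarray, metamodel_preds: np.ndarray,
                       target: np.ndarray) -> float:
    p = model_preds - model_preds.mean()
    mm = metamodel_preds - metamodel_preds.mean()
    # Project out the component of the model that lies along the metamodel.
    residual = p - (p @ mm) / (mm @ mm) * mm
    if np.allclose(residual, 0):
        return 0.0  # the model is effectively a subset of the metamodel
    return float(np.corrcoef(residual, target)[0, 1])
```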
This reminds me of Decentralized Autonomous Hiveminds: maybe we can tackle the problem of AGI being too centralized by incentivizing a group of DAOs to bring together local human knowledge to out-compete it.
- A:
If you’re obfuscating the data, does that prevent users from doing qualitative stuff like natural language processing on news articles and such?
- A: The data we give out contains many fundamental variables, but you can't do qualitative analysis because of the obfuscation. Users with one or two market insights aren't really useful to the kind of quant strategy the fund is taking.
If you could do an attention-based model instead of just a simple stake-weighted model, you could potentially make a system for finding these predictions.
- A: We have a new thing called Numerai Signals that moves toward what you're describing: it brings in new data in new ways, because the normal platform is constrained by the data that Numerai obfuscates and shares.
- We’ve tried to beat the simple stake-weighted model… and it’s always a little less robust and a little worse.
Seminar summary by James Risberg.