Hyrum Anderson is a Distinguished ML Engineer at Robust Intelligence. He received his PhD in Electrical Engineering from the University of Washington, with an emphasis on signal processing and machine learning, and BS and MS degrees in Electrical Engineering from Brigham Young University. Much of his career has focused on defense and security: he has directed research projects at MIT Lincoln Laboratory, Sandia National Laboratories, and Mandiant, and served as Chief Scientist at Endgame (acquired by Elastic) and as Principal Architect of Trustworthy Machine Learning at Microsoft. While at Microsoft, he organized Microsoft’s AI Red Team and, as chair of the AI Red Team governing board, oversaw the first exercises on production AI systems. Hyrum cofounded the Conference on Applied Machine Learning in Information Security (CAMLIS) and co-organizes the ML Security Evasion Competition (mlsec.io) and the ML Model Attribution Challenge (mlmac.io). He has spoken at numerous academic and industry conferences at the intersection of security and machine learning, including RSA, Black Hat, and DEF CON. He has authored over 60 peer-reviewed academic publications and coauthored the book “Not With a Bug, But With a Sticker: Attacks on Machine Learning Systems and What To Do About Them”.
The newfound convenience of developing AI applications has outpaced most organizations’ ability to secure them. Among the security concerns are the AI supply chain risks that developers face when using software, data, and models from third-party sources. While the software supply chain has long been a known and growing risk, the AI supply chain can be trickier to navigate: it inherits the vulnerabilities of the software supply chain, but adds further risks that organizations must manage as they embrace AI and pass it along to consumers. In this talk, we’ll review three facets of AI supply chain risk and provide participants with tools to begin managing it.
- Developing a risk management culture in AI development
- Technical solutions to measure and mitigate security risks for AI