Jacob Steinhardt, Stanford University

Friday, February 16, 2018, 10:30 to 11:30
Room 3195, Pavillon André-Aisenstadt

DIRO Colloquium

Provably Secure Machine Learning

The widespread use of machine learning systems creates a new class of computer security vulnerabilities where, rather than attacking the integrity of the software itself, malicious actors exploit the statistical nature of the learning algorithms. For instance, attackers can add fake data (e.g. by creating fake user accounts), or strategically manipulate inputs to the system once it is deployed. So far, attempts to defend against these attacks have focused on empirical performance against known sets of attacks. I will argue that this is a fundamentally inadequate paradigm for achieving meaningful security guarantees. Instead, we need algorithms that are provably secure by design, in line with best practices for traditional computer security. To achieve this goal, we take inspiration from robust statistics and robust optimization, but with an eye towards the security requirements of modern machine learning systems. Motivated by the trend towards models with thousands or millions of features, we investigate the robustness of learning algorithms in high dimensions. We show that most algorithms are brittle to even small fractions of adversarial data, and then develop new algorithms that are provably robust. Additionally, to accommodate the increasing use of deep learning, we develop an algorithm for certifiably robust optimization of non-convex models such as neural networks.
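The abstract's claim that standard estimators break down under even a small fraction of adversarial data is easy to demonstrate numerically. The Python/NumPy sketch below is illustrative only: the dimensions, the planted 5% contamination, and the spectral filtering heuristic are assumptions of this example, not details taken from the talk. It compares the sample mean, whose L2 error grows like eps * sqrt(d) under contamination, against a toy filter in the spirit of robust mean estimation.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not from the talk): d-dimensional Gaussian data
# with true mean zero, plus a small planted adversarial fraction.
d, n, eps = 500, 5_000, 0.05
clean = rng.normal(size=(n, d))
outliers = np.full((int(eps * n), d), 2.0)   # all planted at (2, ..., 2)
X = np.vstack([clean, outliers])

# Sample mean: each coordinate shifts by about eps * 2, so the L2 error
# grows like eps * sqrt(d); the brittleness worsens with dimension.
print("sample mean error: ", np.linalg.norm(X.mean(axis=0)))

def filtered_mean(X, max_iter=20, frac=0.01, tol=2.0):
    # Toy spectral filter: repeatedly drop the points with the largest
    # projection onto the top eigenvector of the empirical covariance,
    # stopping once no direction has unusually large variance. This is
    # a simplified sketch in the spirit of robust mean estimation, not
    # the specific provably robust algorithm presented in the talk.
    for _ in range(max_iter):
        mu = X.mean(axis=0)
        evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
        if evals[-1] < tol:     # covariance close to identity: done
            break
        scores = ((X - mu) @ evecs[:, -1]) ** 2
        keep = np.argsort(scores)[: int((1 - frac) * len(X))]
        X = X[keep]
    return X.mean(axis=0)

print("filtered mean error:", np.linalg.norm(filtered_mean(X)))

On synthetic data like this, the planted points inflate the variance along one direction, so the filter identifies and removes them and the remaining error is ordinary sampling noise, while the sample mean's error keeps scaling with sqrt(d) for any fixed contamination fraction.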
