Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01k3569738d
Title: The Role of Data Geometry in Adversarial Machine Learning
Authors: Bhagoji, Arjun Nitin
Advisors: Mittal, Prateek
Contributors: Electrical Engineering Department
Keywords: Machine Learning; Security
Subjects: Artificial intelligence
Issue Date: 2020
Publisher: Princeton, NJ : Princeton University
Abstract: As machine learning (ML) systems become ubiquitous, it is critically important to ensure that they are secure against adversaries. This is the focus of the recently developing sub-field of adversarial machine learning, which aims to analyze and defend ML systems. In this thesis, we uncover the crucial role that data geometry plays in adversarial ML. We show that it helps craft effective attacks against real-world ML systems, enables defenses that are robust to adaptive attacks, and is instrumental in deriving fundamental bounds on the robustness of ML systems. Our focus is mainly on evasion attacks carried out using adversarial examples, which are maliciously modified inputs that cause catastrophic failures of ML systems at test time.

The first part of the thesis deals with black-box attacks on ML systems, which are carried out by adversaries with only query access to the systems under attack. Nevertheless, we show that these are as pernicious as attacks with full knowledge of the system, demonstrating that adversarial examples do indeed represent a serious threat to deployed ML systems. We use data geometry to increase the query efficiency of these attacks and leverage this to carry out, in an ethical manner, the first effective attack on a commercially deployed ML system.

The second part of the thesis considers the use of dimensionality reduction to defend against evasion attacks. These defenses are effective against a variety of attacks, crucially including those with full knowledge of the defense. We use Principal Component Analysis to carry out this dimensionality reduction and also propose a variant of it known as anti-whitening; both improve the security-utility trade-off for ML systems.

The third part of the thesis steps away from the attack-defense arms race to develop fundamental limits on learning in the presence of evasion attacks. Our first result uses the underlying geometry of the data and the theory of optimal transport to derive an upper bound on classifier performance in the presence of an adversary. We provide exact results for the case of Gaussian distributions, completely characterizing adversarial learning in this setting, and use these results to demonstrate the gap from optimality for current defenses. The second result looks at how such classifiers can be learned. We extend the theory of PAC-learning to account for an adversary and derive sample complexity bounds for learning under an evasion attack by defining the Adversarial VC-dimension. We characterize learning with linear classifiers exactly and provide examples showing how the Adversarial VC-dimension differs from the standard VC-dimension.

In the final part of the thesis, we relax two critical assumptions about adversaries made throughout, expanding the scope of possible attacks. First, we introduce out-of-distribution adversarial examples, which relax the assumption that adversarial examples must be generated from the same data distribution used during training; this allows us to analyze the security properties of open-world ML systems. Second, we consider the impact of training-time adversaries on practical distributed learning systems. Since these systems aggregate models from multiple clients during the learning process, they are particularly vulnerable to an adversary controlling malicious clients. We show that, using model poisoning attacks, the learned model can be induced to misclassify points chosen by the adversary.

In summary, we have shown how data geometry can be leveraged to find, analyze, and mitigate the vulnerability of ML systems to evasion attacks. Nevertheless, as the scope of possible attacks increases, new theoretical insights and defenses must be developed to engineer truly robust ML systems. We hope this thesis points the way forward for fundamental and actionable research in this domain.
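Note: The sketch below is an illustrative reconstruction, not code from the dissertation. It shows the general idea behind the dimensionality-reduction defense mentioned in the abstract: projecting inputs onto their top principal components before classification. The dataset (scikit-learn digits), number of components (16), and downstream classifier (logistic regression) are placeholder assumptions chosen only for a self-contained example.

# Illustrative sketch (not from the thesis): PCA-based dimensionality
# reduction as a preprocessing step before classification.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)  # 64-dimensional image features
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Project onto the top-k principal components, then classify in the
# reduced space; k trades off clean accuracy (utility) against the
# input dimensions available to an evasion adversary.
model = make_pipeline(PCA(n_components=16), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("clean test accuracy:", model.score(X_test, y_test))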
URI: http://arks.princeton.edu/ark:/88435/dsp01k3569738d
Alternate format: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: catalog.princeton.edu
Type of Material: Academic dissertations (Ph.D.)
Language: en
Appears in Collections: Electrical Engineering

Files in This Item:
File: Bhagoji_princeton_0181D_13515.pdf
Size: 5.3 MB
Format: Adobe PDF

