Title: Security Meets Deep Learning
Authors: He, Zecheng
Advisors: Lee, Ruby B.
Contributors: Electrical Engineering Department
Keywords: Anomaly Detection
Deep Learning
Security and Privacy
Subjects: Computer science
Issue Date: 2021
Publisher: Princeton, NJ : Princeton University
Abstract: Recent years have witnessed the rapid development of deep learning in many domains, and these successes have inspired its use in security. However, at least two main challenges arise when deep learning meets security. First, attack data are scarce: it is difficult to build a model that works well with limited examples of attacks. Second, deep learning systems are themselves vulnerable to various attacks, raising new concerns when they are used to improve the security of computer systems. To address the first challenge, this dissertation shows how deep learning techniques can improve the security of computer systems with limited or no attack data. To address the second challenge, we show how to protect the security and privacy of deep learning systems. Specifically, the first part of this dissertation considers a practical scenario in which no attack data are available, i.e., anomaly detection. We propose a new methodology, Reconstruction Error Distribution (RED), for real-time anomaly detection. Our key insight is that the normal behavior of a computer system can be captured by temporal deep learning models; deviation from normal behavior indicates an anomaly. We show that the proposed methodology detects attacks in real time with high accuracy on power-grid controller systems and general-purpose cloud computing servers. The second part of this dissertation focuses on protecting the security and privacy of deep learning itself. We first show that in a Machine Learning as a Service (MLaaS) system, the integrity of a deep learning model in the cloud can be checked dynamically through a type of carefully designed input, called Sensitive-Samples. In another scenario, e.g., distributed learning in edge-cloud systems, we demonstrate that an attacker in the cloud can reconstruct an edge device's input data with high fidelity, even under successively weaker attacker capabilities. We also propose a new defense against these attacks. In summary, we hope the work in this dissertation sheds light on using deep learning to improve security and helps harden deep learning systems against attacks.
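The reconstruction-error idea in the abstract can be sketched in miniature: train a temporal model only on normal behavior, estimate the distribution of its prediction errors, and flag inputs whose error falls far outside that distribution. The sketch below is purely illustrative and is not the dissertation's implementation — it substitutes a linear autoregressive predictor fit by least squares for the temporal deep learning model, and a synthetic noisy sine wave for real system telemetry; the window size, threshold, and helper names are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" behavior: a noisy sine wave standing in for system telemetry.
t = np.arange(2000)
normal = np.sin(0.1 * t) + 0.05 * rng.standard_normal(t.size)

def make_windows(series, w):
    # Sliding windows: predict sample i from the w samples preceding it.
    X = np.stack([series[i:i + w] for i in range(len(series) - w)])
    y = series[w:]
    return X, y

W = 20
X, y = make_windows(normal, W)

# Illustrative stand-in for a temporal deep model: ordinary least squares
# mapping a window of past samples to the next sample.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Reconstruction-error distribution estimated on normal data only.
err = y - X @ coef
mu, sigma = err.mean(), err.std()

def is_anomalous(window, nxt, k=4.0):
    # Flag nxt if its prediction error lies more than k sigma from the mean.
    e = nxt - window @ coef
    return abs(e - mu) > k * sigma

# A normal continuation stays inside the learned error distribution.
print(is_anomalous(normal[-W:], np.sin(0.1 * 2000)))  # prints False
# An injected out-of-band value yields a large reconstruction error.
print(is_anomalous(normal[-W:], 5.0))                 # prints True
```

A real deployment would replace the least-squares predictor with a recurrent or other temporal deep model and fit the error distribution per monitored signal; the detection rule — thresholding deviation from the normal-behavior error distribution — stays the same.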
URI: http://arks.princeton.edu/ark:/88435/dsp01tx31qm83d
Alternate format: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: catalog.princeton.edu
Type of Material: Academic dissertations (Ph.D.)
Language: en
Appears in Collections: Electrical Engineering

Files in This Item:
File: He_princeton_0181D_13904.pdf (14.07 MB, Adobe PDF)


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.