The architecture of a Deep Neural Network (DNN) model is valuable, sensitive intellectual property for a company. Knowledge of a DNN's exact architecture allows an adversary to build a substitute model and use it to launch devastating adversarial attacks. In side-channel-based DNN architecture stealing, an outsider extracts a DNN's architecture from side-channel information leakage. Specifically, when a company hosts its application on a third-party cloud computing platform or on a local device with GPU support, it opens the application up to architecture stealing through side-channel attacks.
Previous efforts to prevent DNN architecture stealing have focused on hardware solutions that eliminate information leakage. Though hardware modifications are effective countermeasures, they cannot protect existing devices and carry high performance overhead. What is needed is a full-stack tool that obfuscates neural network execution to effectively mitigate neural architecture stealing.
Researchers at Arizona State University (ASU) have developed a full-stack obfuscation tool that protects model architecture information from being stolen by adversaries with access to leaky side channels. ASU's tool can be deployed on both cloud-based GPUs and edge computing devices. The tool consists of a set of obfuscating knobs chosen across the scripting, optimization, and scheduling stages of neural network model compilation. To achieve the best obfuscation with minimal overhead, the tool uses a genetic algorithm to find the combination of obfuscating knobs that maximizes obfuscation performance within the user's latency budget. The result is a neural network that has the same functionality as the original trained model but generates a hardware trace that an adversary cannot use to derive the model architecture.
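The structure of such a budget-constrained genetic search can be pictured with the minimal sketch below. The knob count, latency model, and obfuscation score here are hypothetical stand-ins used only to illustrate the algorithm, not NeurObfuscator's actual knobs or metrics.

```python
import random

# Hypothetical obfuscating knobs, one per compilation decision; each knob takes
# a small integer setting (0 = off). The real knob set spans the scripting,
# optimization, and scheduling stages of the compiler.
NUM_KNOBS = 8
KNOB_LEVELS = 4
LATENCY_BUDGET = 0.02  # user-allowed latency overhead (2%)

def latency_overhead(knobs):
    """Toy latency model: each enabled knob level adds a small, fixed cost."""
    return sum(k * 0.004 for k in knobs)

def obfuscation_score(knobs):
    """Toy proxy for how far the hardware trace diverges from the original.
    A real tool would instead measure an attacker model's architecture-
    prediction error on the obfuscated trace."""
    return sum(k ** 0.5 for k in knobs)

def fitness(knobs):
    over = latency_overhead(knobs) - LATENCY_BUDGET
    if over > 0:
        return -over  # infeasible settings: steer the search back to budget
    return obfuscation_score(knobs)

def mutate(knobs, rate=0.2):
    return [random.randrange(KNOB_LEVELS) if random.random() < rate else k
            for k in knobs]

def crossover(a, b):
    point = random.randrange(1, NUM_KNOBS)
    return a[:point] + b[point:]

def genetic_search(pop_size=40, generations=50):
    population = [[random.randrange(KNOB_LEVELS) for _ in range(NUM_KNOBS)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = genetic_search()
print("best knob settings:", best)
print("latency overhead: %.3f" % latency_overhead(best))
```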
Related publication: NeurObfuscator: A Full-stack Obfuscation Tool to Mitigate Neural Architecture Stealing
Potential Applications:
- Provides security for:
  - Deep Neural Networks
  - Artificial Intelligence
Benefits and Advantages:
- Purely software-based tool that applies easily to existing hardware, with no hardware upgrades needed
- Provides both sequence obfuscation and dimension obfuscation
- Suitable for use on both cloud-based platforms and edge computing devices
- Adds architecture obfuscation during DNN compilation to prevent DNN architecture stealing
- Experiments demonstrate low overhead and effectiveness on common GPU devices
- For example, given a 2% latency budget, ASU's obfuscation tool can obfuscate a ResNet-18 ImageNet model into an entirely different architecture (with a 44-layer difference) and can obfuscate a convolution layer with 64 input and 128 output channels into a layer that appears to have 207 input and 93 output channels (see the sketch below).
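The channel-count change in the last item is an instance of dimension obfuscation. One simple way a layer's dimensions can be changed without altering its function is to widen it with zero-weight decoy channels, illustrated below with a 1x1 convolution treated as a plain matrix multiply. The widening amounts are arbitrary, and this is only one illustration of the functionality-preserving idea; the actual tool's transforms (which can also shrink dimensions, as in the 128-to-93 example above) may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Original layer: a 1x1 convolution viewed as a weight matrix of shape (out, in).
C_IN, C_OUT = 64, 128
W = rng.standard_normal((C_OUT, C_IN))
x = rng.standard_normal(C_IN)

# Widen the layer: extra input channels get zero weights (their activations,
# whatever they carry, cannot change the result), and extra output channels
# are computed but discarded downstream. An observer of the hardware trace
# sees a (136, 80) layer instead of the true (128, 64) one.
EXTRA_IN, EXTRA_OUT = 16, 8
W_obf = np.zeros((C_OUT + EXTRA_OUT, C_IN + EXTRA_IN))
W_obf[:C_OUT, :C_IN] = W
x_obf = np.concatenate([x, rng.standard_normal(EXTRA_IN)])  # junk channels

y = W @ x
y_obf = (W_obf @ x_obf)[:C_OUT]  # drop the decoy output channels

assert np.allclose(y, y_obf)  # functionality is preserved exactly
print("obfuscated weight shape:", W_obf.shape, "vs. original:", W.shape)
```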