• Safety-critical deep neural networks
• Autonomous vehicles
Benefits and Advantages
• Over 125× improvement in resistance to bit-flip attacks
• 2-8% improvement in clean accuracy (i.e., accuracy with no adversarial perturbation)
Researchers at Arizona State University have developed a neural network that significantly improves robustness against the bit-flip attack (BFA) while also increasing clean accuracy (i.e., accuracy with no attack). Defense against BFA is achieved through the completely binary nature of the neural network. Typically, the improved robustness gained by binarization comes at the expense of lower clean accuracy; here, that trade-off is avoided through a novel and efficient two-stage network-growing method.
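To illustrate why binarization limits an attacker's leverage, here is a minimal sketch (an illustrative assumption, not the inventors' implementation): a binary weight occupies a single memory bit, so the worst a single bit flip can do is negate that one weight.

```python
def flip_binary_weight(w):
    """Hypothetical illustration: a binarized weight takes values +1 or -1
    and is stored in one memory bit. Flipping that bit simply negates the
    weight, so the perturbation per flipped bit is bounded."""
    assert w in (-1, +1)
    return -w

print(flip_binary_weight(+1))  # -> -1
```

Contrast this bounded change with a multi-bit weight, where flipping a single high-order bit can shift the stored value by a large amount.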
Recently, deep neural networks (DNNs) have been deployed in many safety-critical applications. The security of DNN models can be compromised by adversarial input examples, where an adversary maliciously crafts and adds input noise to fool a DNN model. The perturbation of model parameters (e.g., weights) is another security concern, one that relates to the robustness of the DNN model itself.
An adversarial weight attack occurs when an attacker perturbs the parameters of a target DNN model in computing hardware to achieve malicious goals. Among the most prevalent adversarial weight attacks is the bit-flip attack (BFA), which has proven highly successful at hijacking DNN functionality (e.g., degrading accuracy to as low as random guessing) by flipping an extremely small number (e.g., tens out of millions) of weight memory bits.
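A short sketch of why so few flips can be so damaging, assuming weights quantized to signed 8-bit two's-complement values (a common setting in the BFA literature, not necessarily this model's exact format): flipping the sign bit of a small weight swings its value across nearly the whole representable range.

```python
def flip_bit(weight, bit):
    """Flip one bit of a signed 8-bit (two's-complement) weight.

    `weight` is an integer in [-128, 127]; `bit` is the bit position (0-7).
    This is an illustrative model of a memory bit flip, not the attack's
    actual bit-search procedure.
    """
    u = weight & 0xFF            # view the weight as an unsigned byte
    u ^= (1 << bit)              # flip the chosen bit
    return u - 256 if u >= 128 else u  # convert back to signed

w = 3                            # stored as 0b00000011
print(flip_bit(w, 7))            # flipping the sign bit: 3 -> -125
print(flip_bit(w, 0))            # flipping the lowest bit: 3 -> 2
```

A real BFA searches for the most damaging bits to flip; the point here is only that one high-order flip can change a weight's magnitude by orders of magnitude, which is what a fully binary representation rules out.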