Hardware-assisted Black-box Adversarial Attack Evaluation Framework on Binarized Neural Network

Article
Navid Khoshavi, Arman Roohi, Yu Bi
41st IEEE Symposium on Security and Privacy
Publication year: 2020

Entropy-Based Modeling for Estimating Soft Errors Impact on Binarized Neural Network Inference

Article
Navid Khoshavi, Saman Sargolzaei, Arman Roohi, Connor Broyles, Yu Bi
arXiv preprint
Publication year: 2020

Abstract

Over the past years, easy access to large-scale datasets has significantly shifted the paradigm toward developing highly accurate prediction models driven by neural networks (NNs). These models can be affected by radiation-induced transient faults that may lead to the gradual degradation of a long-running NN inference accelerator. The key observation from our rigorous vulnerability assessment of the NN inference accelerator is that the weights and activation functions are unevenly susceptible to both single-event upsets (SEUs) and multi-bit upsets (MBUs), especially in the first five layers of our selected convolutional neural network. In this paper, we present relatively accurate statistical models that delineate the impact of SEUs and MBUs both across and within the layers of the selected NN. These models can be used to evaluate the error resiliency of an NN topology before adopting it in safety-critical applications.


Keywords

  • Fault Injection
  • Deep Neural Network Accelerator
  • Machine Learning
  • Soft Error
  • Statistical Model
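The kind of fault-injection experiment the abstract describes can be illustrated in a few lines. The sketch below is not the authors' framework: it simulates an SEU (one flipped bit) or an MBU (several flipped bits) in the stored bit pattern of a toy layer's 8-bit quantized weights, then counts how many sign-activation outputs change. The layer, shapes, and bit widths are assumptions for illustration only.

```python
import numpy as np

# Illustrative sketch (not the authors' framework): inject single-event
# upsets (SEUs) and multi-bit upsets (MBUs) into the bit representation
# of a layer's int8 weights, then compare clean vs. faulty outputs.

rng = np.random.default_rng(0)

def inject_upset(weights_q, n_bits=1):
    """Flip `n_bits` randomly chosen bits in an int8 weight tensor.
    n_bits=1 models an SEU; n_bits > 1 models an MBU."""
    flat = weights_q.view(np.uint8).ravel().copy()
    for _ in range(n_bits):
        idx = rng.integers(flat.size)       # which byte to corrupt
        bit = rng.integers(8)               # which bit within that byte
        flat[idx] ^= np.uint8(1 << bit)     # flip it
    return flat.view(np.int8).reshape(weights_q.shape)

# Toy layer with a sign activation, y = sign(W x), as in a binarized NN
W = rng.integers(-128, 128, size=(4, 8), dtype=np.int8)
x = rng.integers(-128, 128, size=8, dtype=np.int16)

clean = np.sign(W.astype(np.int32) @ x)
seu   = np.sign(inject_upset(W, n_bits=1).astype(np.int32) @ x)
mbu   = np.sign(inject_upset(W, n_bits=4).astype(np.int32) @ x)

print("outputs changed by SEU:", int(np.sum(clean != seu)))
print("outputs changed by MBU:", int(np.sum(clean != mbu)))
```

Repeating such injections per layer and tallying how often the output flips is one simple way to build the kind of per-layer vulnerability statistics the paper models.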

Reconfigurable Spintronic Fabric using Domain Wall Devices

Article
Ronald F DeMara, Ramtin Zand, Arman Roohi, Soheil Salehi, Steven Pyle
Publication year: 2014

Abstract

While spintronic-based neuromorphic architectures offer analog computation strategies [2], in this proposal we exploit reconfigurability and associative processing using a Logic-In-Memory (LIM) paradigm. LIM is compatible with conventional computing algorithms and integrates logical operations with data storage, making it an ideal choice for parallel SIMD operations that eliminate frequent memory accesses, a major contributor to energy consumption. Spin-based LIM architectures can increase computational throughput, reduce die area, provide instant-on functionality, and reduce static power consumption [3]. The feasibility of a low-power spintronic LIM chip has recently been demonstrated for database applications [4].
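The benefit LIM targets, evaluating logic over many stored records without shuttling each one through the processor, has a rough software analogy in bulk bitwise processing. The sketch below is purely illustrative (the attribute names and the database-style query are invented): it packs one Boolean attribute per bit and answers a conjunctive query with a handful of word-wide AND operations instead of a record-by-record scan.

```python
import numpy as np

# Illustrative analogy of LIM-style bulk bitwise processing (not the
# proposed hardware): pack one Boolean attribute per bit position and
# evaluate a query 8 records per byte with wide logic operations.

rng = np.random.default_rng(1)
n_records = 256

# Two hypothetical Boolean attributes, one value per record
is_active  = rng.integers(0, 2, n_records, dtype=np.uint8).astype(bool)
is_premium = rng.integers(0, 2, n_records, dtype=np.uint8).astype(bool)

# Pack into byte arrays: 256 records -> 32 bytes per attribute
a = np.packbits(is_active)
b = np.packbits(is_premium)

# "In-memory" query: active AND premium, computed on packed words
match_bits = np.bitwise_and(a, b)
matches = np.unpackbits(match_bits).astype(bool)[:n_records]

# Same answer as the per-record scan it replaces
assert np.array_equal(matches, is_active & is_premium)
print("matching records:", int(matches.sum()))
```

In actual spintronic LIM fabrics the AND would be performed where the bits are stored, which is what removes the memory-access traffic; the packed-word trick above only mimics that data layout in software.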