APEX Reviews

Neuromorphic computing enabled by physics of electron spins: Prospects and perspectives

Abhronil Sengupta and Kaushik Roy

Published 14 February 2018 © 2018 The Japan Society of Applied Physics

Citation: Abhronil Sengupta and Kaushik Roy 2018 Appl. Phys. Express 11 030101. DOI: 10.7567/APEX.11.030101

Abstract

"Spintronics" refers to the understanding of the physics of electron spin-related phenomena. While most of the significant advancements in this field has been driven primarily by memory, recent research has demonstrated that various facets of the underlying physics of spin transport and manipulation can directly mimic the functionalities of the computational primitives in neuromorphic computation, i.e., the neurons and synapses. Given the potential of these spintronic devices to implement bio-mimetic computations at very low terminal voltages, several spin-device structures have been proposed as the core building blocks of neuromorphic circuits and systems to implement brain-inspired computing. Such an approach is expected to play a key role in circumventing the problems of ever-increasing power dissipation and hardware requirements for implementing neuro-inspired algorithms in conventional digital CMOS technology. Perspectives on spin-enabled neuromorphic computing, its status, and challenges and future prospects are outlined in this review article.


1. Introduction

In this era of data deluge, with information being generated by an increasing number of wearables and Internet of Things devices, embedding on-chip intelligence in these devices is becoming a crucial requirement. While research in designing brain-inspired networks and algorithms has produced artificial-intelligence platforms capable of outperforming humans at several cognitive tasks,1) an often-overlooked cost is the extensive computational expense of running these algorithms on hardware.

The human brain performs a host of activities — including reasoning, decision-making, and recognition — concurrently at a mere few tens of watts. In comparison, CMOS hardware implementing brain-inspired algorithms has energy requirements orders of magnitude higher, because of the significant mismatch between the computational units of the brain and the architecture of the underlying CMOS transistors. As on–off switches, CMOS transistors are more suited to implement Boolean logic than neuro-inspired algorithms. Bridging this efficiency gap requires the exploration of alternative device physics where the neural and synaptic functionalities can be naturally matched to the device operation.

Recent research efforts have demonstrated that simple spintronic device structures based on experiments performed in lateral or vertical spin valves can offer direct mapping to the neural and synaptic units. A brief discussion on the computational primitives of these neuro-inspired algorithms is presented in order to outline the potential advantages that spin-based neuromorphic technology has to offer. Neuromorphic architectures consist of memory-coupled compute units where the computing elements (neurons) receive inputs through synaptic memory junctions (synapses) that encode the importance value of various inputs. While such brain-inspired computing models have gradually evolved over the years with a growing understanding of neuroscience mechanisms in the brain, in this text we focus on the most bio-realistic computing model: spiking neural networks (SNNs).2) In SNNs, neurons receive and transmit binary signals, or spikes, as a function of time. Each spiking neuron maintains a membrane potential that accumulates the weighted summation of synaptic inputs at each time step, and the neuron fires an output spike whenever this potential crosses a threshold. Consequently, hardware implementation of spiking neurons would involve an accumulator and a comparator, thereby requiring a few tens of transistors for each neuron. Additionally, synaptic units require a 6-T/8-T static random access memory (SRAM) cell per bit, thus requiring more than 20 transistors even for a 4-bit discretization. With future neuromorphic chips aimed at achieving brain-scale connectivity, i.e., on the order of 10,000 synapses per neuron, nanoelectronic devices mimicking synaptic functionality would eventually become a necessity. Furthermore, direct emulation of neuron functionality by an electronic device at low terminal voltages would enable the fan-in synaptic devices to be operated at a lower voltage, thereby reducing the energy consumption of the entire system.
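As a concrete illustration of the integrate-and-fire behavior described above, the following minimal sketch accumulates weighted binary spikes into a membrane potential and emits an output spike when a threshold is crossed. The weights, threshold, and input spike trains are illustrative assumptions, not parameters of any particular hardware.

```python
import numpy as np

def integrate_and_fire(spike_trains, weights, v_th):
    """Minimal integrate-and-fire neuron: accumulate the weighted sum of
    incoming binary spikes at each time step, fire when the membrane
    potential crosses the threshold, then reset."""
    v_mem = 0.0
    out_spikes = []
    for t in range(spike_trains.shape[1]):
        v_mem += np.dot(weights, spike_trains[:, t])  # synaptic accumulation
        if v_mem >= v_th:
            out_spikes.append(1)   # output spike
            v_mem = 0.0            # reset after firing
        else:
            out_spikes.append(0)
    return out_spikes

# Example: 3 input synapses, 10 time steps of random binary spikes
rng = np.random.default_rng(0)
inputs = rng.integers(0, 2, size=(3, 10))
print(integrate_and_fire(inputs, weights=np.array([0.4, 0.3, 0.6]), v_th=1.0))
```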

In addition, neuromorphic architectures involve a significant number of memory-intensive operations. Therefore, conventional digital CMOS implementations are constrained by the memory-bandwidth limitations involved in moving the synaptic weights from an SRAM module to the neural computing core. In contrast, non-volatile spintronic devices can be used to store the synaptic weights and may be arranged in a crossbar array to perform "in-memory" computing with interfaced spintronic neurons. Drastic reductions in area (due to the direct mapping of core neuron and synaptic functionalities to single spintronic devices), coupled with the low operating voltages of spin devices, can potentially lead to a quantum leap in the efficiency of brain-inspired neuromorphic systems. Herein, we review recent developments in this field and evaluate the prospects of the related technologies.

2. Device preliminaries

The basic device structure that this article focuses on is the magnetic tunnel junction (MTJ), which consists of two nanomagnets sandwiching a spacer layer (typically an oxide such as MgO). Let us initially consider each magnet to be mono-domain, i.e., the spin polarization of each layer is uniform in a particular direction. While the magnetization of one of the layers is magnetically "pinned" or "hardened" in a particular direction, the magnetization of the other layer can be switched by an external stimulus, such as a spin current or a magnetic field. The two layers are denoted as the "Pinned" layer and the "Free" layer, respectively, in Fig. 1(a). Depending on the relative orientation of the two magnets, the device exhibits a high-resistance anti-parallel (AP) state (when the magnetizations of the two layers point in opposite directions) and a low-resistance parallel (P) state (when the magnetizations of the two layers point in the same direction).3) The stability of the magnet in the two extreme resistive states is determined by the energy barrier height, which scales with the product of the magnet's anisotropy and volume. The magnetization of the "Free" layer can be switched by passing a charge current through the magnetic stack, owing to the spin-transfer torque effect.3)
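A minimal numerical sketch of the two quantities just mentioned follows: the two resistive states are commonly related through the tunneling magnetoresistance (TMR) ratio, and the barrier separating them is usually quoted as the anisotropy-volume product in units of the thermal energy. All numerical values below are illustrative assumptions rather than parameters of a specific stack.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant (J/K)

def mtj_resistances(r_p, tmr):
    """High/low resistance states of an MTJ, relating R_AP to R_P through the
    TMR ratio: R_AP = R_P * (1 + TMR)."""
    return r_p, r_p * (1.0 + tmr)

def thermal_stability(k_u, volume, temperature=300.0):
    """Barrier separating the two stable states, expressed in units of k_B*T
    as the anisotropy-volume product divided by the thermal energy."""
    return k_u * volume / (K_B * temperature)

# Illustrative numbers only (not parameters of a specific stack):
r_p, r_ap = mtj_resistances(r_p=3e3, tmr=1.0)              # 3 kOhm, 100% TMR
disk_volume = math.pi * (20e-9) ** 2 / 4 * 1.5e-9          # 20 nm dia., 1.5 nm thick
print(r_p, r_ap, round(thermal_stability(k_u=3e5, volume=disk_volume), 1))
```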

Fig. 1. (a) An MTJ consists of a "Free" layer separated from a "Pinned" layer by a tunneling oxide barrier. The device can be switched from a high-resistance AP state to a low-resistance P state by passing charge current from the "Free" layer to the "Pinned" layer, and vice versa. The two extreme resistive states are stabilized by an energy barrier. (b) Multi-level resistive states can be encoded in an MTJ structure where the "Free" layer consists of an FM with a domain wall separating two magnetic domains of opposite spin polarity.


While the mono-domain magnet is characterized by only two stable states, as shown in Fig. 1(a), multiple resistive states can be encoded in an MTJ device structure where the "Free" layer magnet has an elongated shape that stabilizes a transition region, or domain wall, between two oppositely polarized magnetic domains in the "Free" layer. Depending on the position of the domain wall, the relative proportions of the P and AP domains in the MTJ vary, modulating the device conductance. The domain-wall position can be programmed by passing an appropriate amount of charge current between the WRITE and GND terminals [Fig. 1(b)], because the domain-wall displacement increases in proportion to the magnitude of the input charge current. The direction of the domain-wall motion follows the direction of electron flow; hence, the device conductance can be increased or decreased by choosing the direction of the input current appropriately.
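The mapping from domain-wall position to conductance, and from programming current to displacement, can be captured by the simple behavioral sketch below. The linear conductance interpolation follows from treating the P and AP portions of the junction as conducting in parallel; the mobility constant, pulse amplitudes, and resistances are assumed, illustrative values.

```python
def dw_conductance(x, length, g_p, g_ap):
    """Conductance of a domain-wall MTJ whose free layer is split into a
    parallel domain of length x and an anti-parallel domain of length
    (length - x). The two regions conduct in parallel, so the total
    conductance moves linearly between g_ap (x = 0) and g_p (x = length)."""
    frac = x / length
    return frac * g_p + (1.0 - frac) * g_ap

def program_dw(x, i_write, dt, length, mobility=5e5):
    """Displace the wall in proportion to the write-current pulse (sign sets
    the direction) and clamp at the nanowire edges. 'mobility' lumps the
    current-to-velocity conversion into a single assumed constant (m per A*s)."""
    x_new = x + mobility * i_write * dt
    return min(max(x_new, 0.0), length)

# Illustrative programming sequence: two potentiating pulses, one depressing pulse
L, x = 100e-9, 50e-9
for i_pulse in (20e-6, 20e-6, -40e-6):
    x = program_dw(x, i_pulse, dt=1e-9, length=L)
    g = dw_conductance(x, L, g_p=1 / 3e3, g_ap=1 / 6e3)
    print(round(x * 1e9), "nm ->", round(g * 1e6, 1), "uS")
```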

To reduce the critical current density required for magnetization reversal or domain-wall displacement, and thereby mitigate undesirable power dissipation and heating effects, alternative device physics for switching are currently being investigated. Spin–orbit torque4–7) is one such spin-torque phenomenon that has recently been employed as an energy-efficient mechanism for magnet switching in ferromagnet–heavy metal (FM–HM) bilayer structures. In such magnetic heterostructures, passing an input in-plane charge current through the HM layer results in the repeated scattering of in-plane spin-polarized electrons at the magnet–HM interface, thereby transferring multiple units of spin angular momentum to the nanomagnet lying on top. The direction of the spin polarization of the injected electrons is transverse to the directions of the spin injection and the input current flow, as a consequence of the spin-Hall effect.8) An MTJ device based on spin–orbit torque-driven switching is shown in Fig. 2(a). One of the major advantages of such a device structure is the decoupled "write" and "read" current paths, which allow independent optimization of the "write" and "read" circuits. Additionally, the input current density required for magnetization reversal (owing to repeated spin scattering at the magnet–HM interface) and the input resistance during the "write" operation (as the "write" current flows through the low-resistance HM instead of the tunneling junction) are greatly reduced compared with standard spin-transfer torque-induced switching in two-terminal MTJ structures. Typical HMs under exploration are platinum, tantalum, and tungsten in the β phase.
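One way to see why this mechanism is energy-efficient is the geometric estimate sketched below: the spin current delivered to the magnet can exceed the input charge current because the magnet footprint is much longer than the HM is thick. The expression and the β-W parameter values are commonly used assumptions from device-modeling studies, not measurements tied to this article.

```python
import math

def spin_current_gain(theta_sh, l_fm, t_hm, lambda_sf):
    """Approximate ratio of injected spin current to input charge current in a
    FM-HM bilayer: spin-Hall angle times the ratio of magnet length to HM
    thickness, reduced by a spin-diffusion correction. A rough estimate, not
    an exact device model."""
    return theta_sh * (l_fm / t_hm) * (1.0 - 1.0 / math.cosh(t_hm / lambda_sf))

# Illustrative beta-W parameters (assumed values)
print(round(spin_current_gain(theta_sh=0.3, l_fm=60e-9, t_hm=3e-9, lambda_sf=1.5e-9), 2))
```

For these assumed numbers the injected spin current exceeds the input charge current by roughly a factor of four, which is the sense in which spin–orbit torque switching can be more efficient than passing the full current through the tunnel junction.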

Fig. 2. (a) "Write" current, Jq, flowing between terminals T2 and T3 results in the injection of an in-plane polarized spin current, Js, at the FM–HM interface. In-plane spin-polarized electrons scatter at the FM–HM interface and transfer multiple units of spin angular momentum to the magnet. The "read" current between terminals T1 and T3 is used to detect the final FM magnetization after the "write" operation. The "write" and "read" current paths are decoupled. (b) Spin–orbit torque-driven domain-wall motion has been observed in FM–HM bilayers with perpendicular magnetic anisotropy, where the DMI stabilizes chiral Néel domain walls. The figure presents a left-handed chiral system where domain-wall displacement occurs in the direction of charge current flow.


Spin–orbit torque-driven domain-wall displacement has also been observed in magnetic nanowires with perpendicular magnetic anisotropy.9,10) In FM–HM bilayers, spin–orbit coupling and broken inversion symmetry result in the stabilization of a chiral Néel domain wall owing to an exchange interaction termed the Dzyaloshinskii–Moriya interaction (DMI).9–11) DMI-stabilized domain walls can be displaced by the spin–orbit torque generated by current flowing through the underlying HM. In samples with negligible DMI and a Bloch wall orientation, domain-wall displacement has also been achieved experimentally in the presence of an external in-plane magnetic field.12) While most of the experiments on domain-wall motion have been based on Hall-bar structures, multi-level MTJ resistive states were recently observed in FeB–MgO junctions where the "Free" layer consisted of a domain wall separating oppositely polarized domains.13)

In addition to spin–orbit torque-induced magnetization switching, alternative device physics such as the magnetoelectric effect,14) the voltage-controlled magnetic anisotropy effect,15) and skyrmionic motion16) are currently under exploration. Although spin–orbit torque is the main underlying effect that we consider in this text for spin manipulation, the core concepts behind the neural and synaptic device structures can be easily extended to incorporate these physical phenomena.

3. Synaptic and neuronal device structures: Toward all-spin networks

In this section, we discuss synaptic and neural device structures based on domain-wall motion in nanomagnets. Device structures based on mono-domain magnets (exploiting thermal noise-induced probabilistic switching) are discussed in a later section. As shown in Fig. 3(a), spin–orbit torque-driven domain-wall motion can be utilized to envision a device structure where the position of the domain wall determines the conductance of the device.17) Such a device can directly emulate the synaptic scaling operation, as the current flowing through the synaptic device is weighted by the device conductance (for a fixed voltage applied across the "read" terminals T1 and T3). The domain-wall position can be programmed by passing a "write" current between terminals T2 and T3 of the device through the underlying HM. An alternative synaptic device based on Bloch domain-wall motion has also been proposed.18) Such device structures can also be used as multi-level memory units for on-chip cache applications19) and as receivers for long-distance charge-based interconnects.20)

Fig. 3. (a) Three-terminal synaptic device structure where the programmed domain-wall position encodes the device conductance. The domain-wall position is controlled by the magnitude and direction of the current flowing through the underlying HM of the device between terminals T2 and T3. (b) Integrate-fire (IF) spiking neuron functionalities can be implemented in a slightly modified device structure (Neuron MTJ) where the MTJ is located at the extreme edge of the magnetic "Free" layer. The Neuron MTJ is interfaced with a Reference MTJ to drive an output inverter. Input spikes (x) and output spikes (y) are shown with respect to time (t). The domain-wall position (v) integrates the incoming current pulses and undergoes proportionate displacement. The domain-wall position is reset to the opposite edge whenever it reaches the right edge of the magnet (vth).


The current-integrating property of domain-wall motion corresponds directly to the IF characteristics of a spiking neuron. Input current pulses, or spikes, flowing through the HM displace the domain wall in proportion to the magnitude of the spikes. Instead of spanning the entire "Free" layer, the MTJ can now be located at the extreme edge to detect whether the domain wall has been displaced to the opposite edge of the FM [Fig. 3(b)].21) The Neuron MTJ is interfaced with a Reference MTJ (whose orientation is fixed) to form a voltage divider (activated during the "read" cycle following a "write" cycle) that drives an output inverter. The inverter generates an output high level, or spike, when the Neuron MTJ switches, i.e., when the neuron fires. The domain-wall position is reset to the left edge of the magnet (by passing current in the opposite direction) after every spiking event.
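The read path just described can be summarized by the following sketch: the Neuron MTJ and the Reference MTJ form a resistive divider whose midpoint drives the output inverter, so a spike is registered once the Neuron MTJ has switched to its low-resistance state. The divider arrangement, resistance values, and inverter trip point are assumptions chosen for illustration.

```python
def neuron_read(r_neuron, r_ref, v_read, v_inv_trip):
    """Voltage-divider read-out of the Neuron MTJ against a Reference MTJ.
    The inverter input is the divider midpoint; an output spike (logic high)
    is produced when that node drops below the inverter trip point, i.e. when
    the Neuron MTJ has switched to its low-resistance state."""
    v_mid = v_read * r_neuron / (r_neuron + r_ref)
    return 1 if v_mid < v_inv_trip else 0

# Illustrative values: unswitched (AP) vs switched (P) Neuron MTJ
R_AP, R_P, R_REF = 6e3, 3e3, 4.5e3
for r in (R_AP, R_P):
    print(neuron_read(r, R_REF, v_read=1.0, v_inv_trip=0.5))  # 0 then 1
```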

Spintronic neurons and synapses can be arranged in a crossbar fashion, as shown in Fig. 4, to realize an "All-Spin" spiking neural architecture. Such an "in-memory" computing architecture efficiently implements the parallel dot-product computing kernel typically required in neuromorphic recognition platforms. Inputs are encoded as voltages applied along the rows of the crossbar array. Because the input resistance of the magneto-metallic spin neurons is much lower than the synaptic resistances in the crossbar array, the current flowing through each synapse is weighted by the device conductance. The resultant synaptic currents are summed along each column and serve as the input "write" current through the corresponding neuron. Consecutive "write" and "read" cycles of the spin neurons implement the successive time steps of the spiking network.
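In the ideal case, the column currents reduce to a conductance-weighted dot product of the row voltages, as in the sketch below; wire resistance, sneak paths, and the finite neuron input resistance are ignored, and the conductance and voltage values are illustrative.

```python
import numpy as np

def crossbar_currents(conductances, row_voltages):
    """Ideal crossbar dot-product: column current I_j = sum_i G_ij * V_i,
    assuming the neuron input resistance is negligible compared with the
    synaptic resistances (wire resistance and sneak paths ignored)."""
    return conductances.T @ row_voltages

# 3 input rows x 2 neuron columns, illustrative conductances in siemens
G = np.array([[2e-4, 1e-4],
              [3e-4, 2e-4],
              [1e-4, 3e-4]])
V = np.array([0.1, 0.0, 0.1])   # spiking rows driven at 100 mV, silent rows at 0
print(crossbar_currents(G, V))  # "write" currents delivered to the two neurons
```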

Fig. 4. "All-Spin" neuromorphic architecture where magneto-metallic spin neurons enable the low-voltage operation of the crossbar array of spintronic synapses. Input spikes or voltages drive the rows of the array that causes the neurons to receive a weighted summation of the incoming spikes. The neurons generate spikes (high voltage level) at the output of the inverters of the neuron MTJs. Detailed synaptic connections for synaptic learning are shown for a 2 × 2 portion of the array for pre-neurons A and B connecting to post-neurons C and D. The control signals and learning mechanism are described in detail in Sect. 4.


Output spikes are generated at the output inverters of the crossbar array, which can be latched to provide spike inputs for the fan-out crossbar arrays. Interested readers are referred to Ref. 22 for an extensive discussion on the design space exploration of such "All-Spin" networks. The energy efficiency of the system stems mainly from two factors:

  • (i)   
    Magneto-metallic spin neurons inherently require very low currents for switching. This enables large crossbar arrays of spintronic synapses to be operated at very low terminal voltages (typically 100 mV).
  • (ii)   
    Unlike CMOS neuromorphic architectures, where a significant amount of energy is expended in accessing the memory used to store the synaptic weights and in leakage, crossbar-based "in-memory" computing architectures comprising non-volatile spintronic synapses alleviate the memory-access bottleneck.

Our analyses indicate that "All-Spin" neuromorphic architectures can achieve energy consumption almost 2–3 orders of magnitude lower than that of conventional digital CMOS implementations.23) Alternative non-spiking neural computing models have also been explored in the literature, including "Step" transfer-function perceptrons (based on lateral spin-valve structures,24) the spin–orbit torque-based clocking effect,25) and even single mono-domain magnets26)) and "Non-Step" models (based on domain-wall motion17)).

4. Synaptic plasticity

The discussion thus far has not accounted for how the magnitudes of the synaptic weights are determined. The plasticity, or learning, of the synaptic junctions lends cognitive abilities to these neuromorphic architectures. In this article, we focus on the implementation of unsupervised online learning based on spike-timing-dependent plasticity (STDP), which stems from the seminal work by Bi and Poo.27) The rule is based on measurements performed on rat hippocampal glutamatergic synapses and postulates that the synaptic junction joining a pre-neuron to a post-neuron potentiates (depresses) if the post-neuron spikes after (before) the pre-neuron, thereby strengthening (weakening) the time-domain correlation between the neurons. The magnitude of potentiation or depression decays exponentially with the magnitude of the time difference between the pre-neuron and post-neuron spikes (Fig. 6).
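The exponential timing window described above is commonly written as a pair-based update rule, sketched below. The amplitudes and time constants are illustrative values, not the parameters fitted to the measurements of Ref. 27.

```python
import math

def stdp_weight_update(delta_t, a_plus=0.1, a_minus=0.12,
                       tau_plus=20e-3, tau_minus=20e-3):
    """Pair-based STDP: potentiate when the post-neuron fires after the
    pre-neuron (delta_t = t_post - t_pre > 0), depress otherwise. The update
    magnitude decays exponentially with the spike-timing difference."""
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau_plus)      # potentiation
    return -a_minus * math.exp(delta_t / tau_minus)        # depression

for dt in (5e-3, 20e-3, -5e-3, -20e-3):                    # timing differences in s
    print(round(stdp_weight_update(dt), 4))
```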

Let us explain the implementation of the positive timing window for synaptic potentiation (post-neuron firing after the pre-neuron). We previously mentioned that the domain-wall displacement is directly proportional to the magnitude of the programming current. Furthermore, because the MTJ resistance is a parallel combination of the P and AP domain resistances, the equivalent device conductance is a linear function of the domain-wall position. Consequently, implementing STDP requires the programming current to vary in a similar (exponential) fashion with the spike-timing difference. As shown in Fig. 5(a), this is achieved by interfacing the device with a transistor MSTDP biased in subthreshold saturation (where the supplied current varies exponentially with the gate voltage of the transistor). Transistors MA1–MA4 are used to decouple the "write" and "read" current paths of the device, while POST is the "write" control signal (activated when the post-neuron fires). Whenever the pre-neuron spikes, the spike voltage VSPIKE passes through the MTJ to the post-neuron and is modulated by the MTJ conductance. Concurrently, the gate voltage PRE of the programming transistor MSTDP starts increasing linearly (achieved by charging a capacitor with a constant current source). Considering that the rise time of the PRE signal (∼µs) is much longer than the duration of the POST signal (∼ns, which ensures that the programming current is approximately constant during the "write" operation), the magnitude of the programming current flowing through the HM is exponentially related to the timing difference between the pre- and post-neuron spikes. The corresponding timing diagram is shown in Fig. 5(b). Synaptic depression can be achieved with an additional interfaced transistor that passes current through the "write" terminals of the device in the opposite direction. The synapses can be arranged in a crossbar fashion, as depicted in Fig. 4, interfaced with neurons and programming circuits to perform unsupervised learning.29) The access transistors MA2 and MA4 can be shared by the devices in a particular row of the array. The potential advantages offered by such spintronic synapses are the decoupled "write" and "read" current paths (crucial for online learning) and the low programming energies for the learning operation (an order of magnitude lower than those of SRAM synapses and other typical memristive technologies29)).

It is worth noting here that, in addition to non-volatile STDP learning, other plasticity mechanisms, such as volatile short-term plasticity (STP) and its consolidation into long-term potentiation (LTP), can potentially be implemented in mono-domain magnets.28) Considering the time-domain evolution of the magnetization of a mono-domain magnet, the magnetization, and hence the MTJ conductance, integrates each incoming stimulus and starts switching toward the opposite stable state (Fig. 7). However, if the strength or duration of the pulses is insufficient, the magnetization leaks back toward its initial state. Such a mechanism can be engineered to implement a frequency-dependent STP–LTP mechanism in MTJ synapses.28) We conclude this section by noting that such leaky-integrate magnetization dynamics can be utilized for realizing spiking neural functionalities as well. In addition, the thermal noise prevalent in nanomagnets at non-zero temperatures enables the abstraction of such magnet functionalities as a stochastic spiking neuron30,31) (discussed in the next section).
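The reasoning in the paragraph above, namely that a linear PRE ramp combined with a subthreshold-biased programming transistor yields an exponential dependence on the spike-timing difference, can be sketched as follows. The subthreshold parameters and ramp rate are generic textbook-style assumptions, not values from the referenced design, and the transistor is assumed to be connected so that the ramp reduces its gate drive.

```python
import math

def programming_current(delta_t, i_0=1e-6, ramp_rate=5e4, n=1.5, v_t=0.026):
    """Programming current sampled when the post-neuron fires, delta_t seconds
    after the pre-neuron. Assumption: the linearly ramping PRE voltage
    (ramp_rate in V/s) reduces the gate drive of the subthreshold-biased
    programming transistor, so the current decays exponentially with the ramp,
    i.e. with the spike-timing difference, reproducing the decaying STDP window."""
    v_ramp = ramp_rate * delta_t           # how far PRE has ramped (V)
    return i_0 * math.exp(-v_ramp / (n * v_t))

for dt_us in (1, 2, 4, 8):                 # spike-timing differences in microseconds
    print(dt_us, "us ->", round(programming_current(dt_us * 1e-6) * 1e9, 1), "nA")
```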

Fig. 5. (a) The spintronic synapse is interfaced with access transistors MA1–MA4 to decouple the "write" and "read" current paths. The peripheral circuits are shown only for the positive window of the STDP implementation. The transistor responsible for implementing STDP, MSTDP, is biased in the sub-threshold saturation regime. (b) Detailed timing diagrams explaining the STDP implementation. Whenever the pre-neuron spikes, the PRE voltage (the gate voltage of the programming transistor MSTDP) starts increasing linearly. Whenever the post-neuron fires (POST signal activated), a programming current flows through the device whose magnitude is related exponentially to the delay between the pre-neuron and post-neuron spikes.

Fig. 6. STDP measurements taken on rat hippocampal glutamatergic synapses.27) The synaptic weight is updated depending on the timing difference of pre- and post-neuron spikes.

Fig. 7. (a) Incoming stimulus and (b) MTJ conductance variation with respect to time for the typical magnetic stack described in Ref. 28. The magnetization dynamics integrates incoming current pulses and leaks back to the initial stable state in the absence of the stimulus. It switches to the opposite stable state only if the frequency of the input stimulus is sufficiently high, thereby implementing frequency-dependent synaptic plasticity.28)


5. Stochastic neural inference and synaptic learning

Although artificial neural and synaptic models inspired by neuroscience studies have traditionally been deterministic, the brain is characterized by stochastic components that perform computation and inference probabilistically. There is evidence that neurons in the human brain perform probabilistic Bayesian inference.33) This disparity between deterministic neuromorphic models and the actual stochastic cognitive computing occurring in the brain can be attributed mainly to the fact that the underlying CMOS hardware used to implement such models has traditionally been deterministic.

Interestingly, spintronic devices exhibit increasing stochasticity as device dimensions are scaled down, owing to inherent time-varying thermal noise. Hence, neuromorphic computing platforms that embrace the device stochasticity (rather than viewing it as a disadvantage) are currently under exploration. A potential pathway to such brain-like stochastic neural computing is to replace the multi-bit information encoded in neurons and synapses with single-bit values that are updated probabilistically over time. This allows the neural and synaptic operations to be implemented by compact mono-domain magnets instead of the domain-wall motion-based devices discussed previously. For instance, Fig. 8 depicts typical probabilistic MTJ switching characteristics as a function of the input "write" current magnitude. The implementation of stochastic neuron30,31) and STDP functionalities32) in single-domain FM–HM bilayer structures has been explored in the literature. Rate-encoded stochastic neurons can be implemented using such devices because the probability of the neuron producing an output spike at each time step can be modulated by the rate and magnitude of the input spikes.30,31) The multi-bit STDP formulation can be modified in the stochastic single-bit scenario to represent the probability of a synaptic state change in response to the spike-timing difference.32) The synaptic state-change probability can be modulated by appropriate peripheral circuitry (similar to that described for the domain-wall motion-based devices) that ensures the proper variation of the programming current magnitude with the spike-timing difference. The operation of a crossbar array of stochastic synapses driving stochastic neurons is similar to that of the array described for domain-wall motion-based devices (depicted in Fig. 4), except that the core neuron and synaptic devices have single-bit resolution. Reference 34 demonstrates an "All-Spin" stochastic SNN where a crossbar array of stochastically plastic synapses drives spintronic neurons performing probabilistic inference. The concepts remain valid for stochastic magnets scaled to the superparamagnetic regime but require a re-thinking of the peripheral design owing to their telegraphic switching behavior.35) We conclude this section by mentioning that alternative neuromorphic computing paradigms enabled by spintronic devices, such as associative memory operations36) and audio processing,37) have also been explored.
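A behavioral sketch of the stochastic primitives described above is given below. The switching probability is modeled as a sigmoidal function of the write-current magnitude as a stand-in for the measured curves of Fig. 8, and the neuron and synapse each update a single bit with that probability; the critical current, sharpness, amplitudes, and time constant are assumed fitting parameters, not measured values.

```python
import numpy as np

rng = np.random.default_rng(1)

def switch_probability(i_write, i_c=60e-6, sharpness=10e-6):
    """Stand-in for the probabilistic switching curve of Fig. 8: probability
    rises non-linearly (here, sigmoidally) with the write-current magnitude."""
    return 1.0 / (1.0 + np.exp(-(i_write - i_c) / sharpness))

def stochastic_neuron(input_current):
    """Single-bit stochastic neuron: emits a spike with a probability set by
    the instantaneous input current at each time step."""
    return int(rng.random() < switch_probability(input_current))

def stochastic_synapse_update(delta_t, tau=20e-3, p_max=0.1):
    """Single-bit stochastic STDP: the synapse flips its binary state with a
    probability that decays exponentially with the spike-timing difference."""
    return int(rng.random() < p_max * np.exp(-abs(delta_t) / tau))

print([stochastic_neuron(70e-6) for _ in range(10)])   # mostly spikes
print([stochastic_neuron(40e-6) for _ in range(10)])   # mostly silent
```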

Fig. 8. Probabilistic switching characteristics of an MTJ as a function of the "write" current magnitude for a typical magnetic stack described in Refs. 30–32. The switching probability increases non-linearly with increasing "write" current magnitude. The characteristics become more dispersed as the duration of the "write" current, Tw, is reduced. Such characteristics can be exploited to implement stochastic single-bit neurons30,31) and synapses.32)


6. Conclusions

To conclude, spintronic devices can enable low-power, compact neuromorphic computing that is nearly two orders of magnitude more efficient than conventional CMOS implementations. Improving the readability of the magnet state through the tunneling magnetoresistance effect and exploring alternative device physics for more efficient magnetization manipulation are possible pathways toward implementing area- and power-efficient spin-torque-based neuro-mimetic devices. For instance, recent studies have indicated that voltage-driven MTJs based on the magneto-electric effect can be used to implement stochastic spiking neuron functionalities at notably lower energy consumption.38) Furthermore, continued device scaling can provide a potential avenue toward brain-like stochastic cognition functionalities that can be directly emulated by the underlying spintronic hardware.

Acknowledgments

The work was supported in part by the Center for Spintronic Materials, Interfaces, and Novel Architectures (C-SPIN), a MARCO and DARPA sponsored StarNet center, by the Semiconductor Research Corporation, the National Science Foundation, Intel Corporation, and by the US Department of Defense Vannevar Bush Faculty Fellowship.


Biographies

Abhronil Sengupta

Abhronil Sengupta received the B.E. degree from Jadavpur University, India, in 2013. He has been pursuing the Ph.D. degree in Electrical and Computer Engineering at Purdue University since Fall 2013, and is presently a Research Assistant to Professor Kaushik Roy at the Nanoelectronics Research Laboratory, Purdue University. He worked as a summer intern and DAAD (German Academic Exchange Service) Fellow at the University of Hamburg, Germany, in 2012. He also worked as an intern at Intel Labs in 2016 and at Oculus Research, Facebook, in 2017. His primary research interests lie in low-power neuromorphic computing using spintronic and emerging devices. He received the Birck Fellowship from Purdue University in 2013 for his academic excellence.

Kaushik Roy

Kaushik Roy received the B.Tech. degree in electronics and electrical communications engineering from the Indian Institute of Technology, Kharagpur, India, and the Ph.D. degree from the electrical and computer engineering department of the University of Illinois at Urbana–Champaign in 1990. He was with the Semiconductor Process and Design Center of Texas Instruments, Dallas, where he worked on FPGA architecture development and low-power circuit design. He joined the electrical and computer engineering faculty at Purdue University, West Lafayette, IN, in 1993, where he is currently the Edward G. Tiedemann Jr. Distinguished Professor. His research interests include spintronics, device-circuit co-design for nano-scale silicon and non-silicon technologies, low-power electronics for portable computing and wireless communications, and new computing models enabled by emerging technologies. Dr. Roy has published more than 600 papers in refereed journals and conferences, holds 15 patents, has supervised 75 Ph.D. dissertations, and is co-author of two books on Low Power CMOS VLSI Design (John Wiley & McGraw Hill). He received the National Science Foundation Career Development Award in 1995, the IBM Faculty Partnership Award, the AT&T/Lucent Foundation Award, the 2005 SRC Technical Excellence Award, the SRC Inventors Award, the Purdue College of Engineering Research Excellence Award, the Humboldt Research Award in 2010, the 2010 IEEE Circuits and Systems Society Technical Achievement Award, the Distinguished Alumnus Award from the Indian Institute of Technology (IIT), Kharagpur, the Fulbright-Nehru Distinguished Chair, the DoD National Security Science and Engineering Faculty Fellowship (2014–2019), and the Semiconductor Research Corporation Aristotle Award in 2015, as well as best paper awards at the 1997 International Test Conference, the 2000 IEEE International Symposium on Quality of IC Design, the 2003 IEEE Latin American Test Workshop, 2003 IEEE Nano, the 2004 IEEE International Conference on Computer Design, and the 2006 IEEE/ACM International Symposium on Low Power Electronics & Design, the 2005 IEEE Circuits and Systems Society Outstanding Young Author Award (with Chris Kim), the 2006 IEEE Transactions on VLSI Systems Best Paper Award, the 2012 ACM/IEEE International Symposium on Low Power Electronics and Design Best Paper Award, and the 2013 IEEE Transactions on VLSI Systems Best Paper Award. He was a Purdue University Faculty Scholar (1998–2003), a Research Visionary Board Member of Motorola Labs (2002), and held the M. K. Gandhi Distinguished Visiting Faculty position at the Indian Institute of Technology Bombay. He has served on the editorial boards of IEEE Design and Test, IEEE Transactions on Circuits and Systems, IEEE Transactions on VLSI Systems, and IEEE Transactions on Electron Devices. He was a Guest Editor for the Special Issue on Low-Power VLSI in IEEE Design and Test (1994), IEEE Transactions on VLSI Systems (June 2000), IEE Proceedings—Computers and Digital Techniques (July 2002), and the IEEE Journal on Emerging and Selected Topics in Circuits and Systems (2011). He is a Fellow of the IEEE.
