Impact of On-chip Interconnect on In-memory Acceleration of Deep Neural Networks | 1 | 0.36 | 2022 |
XST: A Crossbar Column-wise Sparse Training for Efficient Continual Learning | 0 | 0.34 | 2022 |
Hybrid RRAM/SRAM in-Memory Computing for Robust DNN Acceleration | 0 | 0.34 | 2022 |
Temperature-Resilient RRAM-Based In-Memory Computing for DNN Inference | 0 | 0.34 | 2022 |
A 1.23-GHz 16-kb Programmable and Generic Processing-in-SRAM Accelerator in 65nm | 0 | 0.34 | 2022 |
Improving DNN Hardware Accuracy by In-Memory Computing Noise Injection | 0 | 0.34 | 2022 |
Contrastive Dual Gating: Learning Sparse Features With Contrastive Learning | 0 | 0.34 | 2022 |
Sparse and Robust RRAM-based Efficient In-memory Computing for DNN Inference | 0 | 0.34 | 2022 |
A 177 TOPS/W, Capacitor-based In-Memory Computing SRAM Macro with Stepwise-Charging/Discharging DACs and Sparsity-Optimized Bitcells for 4-Bit Deep Convolutional Neural Networks | 0 | 0.34 | 2022 |
Impact of Multilevel Retention Characteristics on RRAM based DNN Inference Engine | 0 | 0.34 | 2021 |
An Energy-Efficient Deep Convolutional Neural Network Accelerator Featuring Conditional Computing and Low External Memory Access | 0 | 0.34 | 2021 |
A Survey on the Optimization of Neural Network Accelerators for Micro-AI On-Device Inference | 0 | 0.34 | 2021 |
Leveraging Noise and Aggressive Quantization of In-Memory Computing for Robust DNN Hardware Against Adversarial Input and Weight Attacks | 2 | 0.44 | 2021 |
PIMCA: A 3.4-Mb Programmable In-Memory Computing Accelerator in 28nm for On-Chip DNN Inference | 0 | 0.34 | 2021 |
Structured Pruning of RRAM Crossbars for Efficient In-Memory Computing Acceleration of Deep Neural Networks | 3 | 0.39 | 2021 |
FixyFPGA: Efficient FPGA Accelerator for Deep Neural Networks with High Element-Wise Sparsity and without External Memory Access | 0 | 0.34 | 2021 |
SIAM: Chiplet-Based Scalable In-Memory Acceleration With Mesh For Deep Neural Networks | 3 | 0.39 | 2021 |
Hybrid In-Memory Computing Architecture For The Training Of Deep Neural Networks | 0 | 0.34 | 2021 |
Regulation Control Design Techniques for Integrated Switched Capacitor Voltage Regulators | 0 | 0.34 | 2020 |
Automatic Compilation of Diverse CNNs onto High-Performance FPGA Accelerators | 4 | 0.42 | 2020 |
Deep Convolutional Neural Network Accelerator Featuring Conditional Computing and Low External Memory Access | 0 | 0.34 | 2020 |
Noise-based Selection of Robust Inherited Model for Accurate Continual Learning | 0 | 0.34 | 2020 |
C3SRAM: An In-Memory-Computing SRAM Macro Based on Robust Capacitive-Coupling Computing Mechanism | 16 | 0.78 | 2020 |
A Latency-Optimized Reconfigurable NoC for In-Memory Acceleration of DNNs | 3 | 0.40 | 2020 |
A Smart Hardware Security Engine Combining Entropy Sources of ECG, HRV, and SRAM PUF for Authentication and Secret Key Generation | 0 | 0.34 | 2020 |
ECG Authentication Neural Network Hardware Design with Collective Optimization of Low Precision and Structured Compression | 1 | 0.34 | 2020 |
FPGA-based low-batch training accelerator for modern CNNs featuring high bandwidth memory | 0 | 0.34 | 2020 |
Vesti: Energy-Efficient In-Memory Computing Accelerator for Deep Neural Networks | 6 | 0.44 | 2020 |
Interconnect-Aware Area and Energy Optimization for In-Memory Acceleration of DNNs | 3 | 0.40 | 2020 |
Efficient and Modularized Training on FPGA for Real-time Applications | 0 | 0.34 | 2020 |
Compressing LSTM Networks with Hierarchical Coarse-Grain Sparsity | 0 | 0.34 | 2020 |
XNOR-SRAM: In-Memory Computing SRAM Macro for Binary/Ternary Deep Neural Networks | 25 | 0.91 | 2020 |
An 8.93 TOPS/W LSTM Recurrent Neural Network Accelerator Featuring Hierarchical Coarse-Grain Sparsity for On-Device Speech Recognition | 2 | 0.49 | 2020 |
Deep Neural Network Training Accelerator Designs in ASIC and FPGA | 1 | 0.35 | 2020 |
Impact of Read Disturb on Multilevel RRAM based Inference Engine: Experiments and Model Prediction | 2 | 0.36 | 2020 |
Guest Editors' Introduction to the Special Section on Hardware and Algorithms for Energy-Constrained On-chip Machine Learning | 0 | 0.34 | 2019 |
Monolithically Integrated RRAM- and CMOS-Based In-Memory Computing Optimizations for Efficient Deep Learning | 5 | 0.53 | 2019 |
FixyNN: Efficient Hardware for Mobile Computer Vision via Transfer Learning | 1 | 0.35 | 2019 |
XNOR-SRAM: In-Bitcell Computing SRAM Macro based on Resistive Computing Mechanism | 1 | 0.35 | 2019 |
Automatic Compiler Based FPGA Accelerator for CNN Training | 2 | 0.46 | 2019 |
Cases for Analog Mixed Signal Computing Integrated Circuits for Deep Neural Networks | 0 | 0.34 | 2019 |
FixyNN: Energy-Efficient Real-Time Mobile Computer Vision Hardware Acceleration via Transfer Learning | 0 | 0.34 | 2019 |
Custom Sub-Systems and Circuits for Deep Learning: Guest Editorial Overview | 1 | 0.36 | 2019 |
Guest Editors' Introduction: Hardware and Algorithms for Energy-Constrained On-Chip Machine Learning (Part 2) | 0 | 0.34 | 2019 |
Inference engine benchmarking across technological platforms from CMOS to RRAM | 0 | 0.34 | 2019 |
Vesti: An In-Memory Computing Processor For Deep Neural Networks Acceleration | 0 | 0.34 | 2019 |
Joint Optimization Of Quantization And Structured Sparsity For Compressed Deep Neural Networks | 0 | 0.34 | 2019 |
Power, Performance, and Area Benefit of Monolithic 3D ICs for On-Chip Deep Neural Networks Targeting Speech Recognition | 0 | 0.34 | 2018 |
Large-Scale Neuromorphic Spiking Array Processors: A Quest to Mimic the Brain | 7 | 0.61 | 2018 |
Fully Parallel RRAM Synaptic Array for Implementing Binary Neural Network with (+1, -1) Weights and (+1, 0) Neurons | 4 | 0.45 | 2018 |