Name: JAE-SUN SEO
Affiliation: Univ Michigan, Ann Arbor, MI 48109 USA
Papers: 109
Collaborators: 263
Citations: 536
PageRank: 56.32
Referers: 1679
Referees: 1601
References: 577
Title | Citations | PageRank | Year
Impact of On-chip Interconnect on In-memory Acceleration of Deep Neural Networks | 1 | 0.36 | 2022
XST: A Crossbar Column-wise Sparse Training for Efficient Continual Learning | 0 | 0.34 | 2022
Hybrid RRAM/SRAM in-Memory Computing for Robust DNN Acceleration | 0 | 0.34 | 2022
Temperature-Resilient RRAM-Based In-Memory Computing for DNN Inference | 0 | 0.34 | 2022
A 1.23-GHz 16-kb Programmable and Generic Processing-in-SRAM Accelerator in 65nm | 0 | 0.34 | 2022
Improving DNN Hardware Accuracy by In-Memory Computing Noise Injection | 0 | 0.34 | 2022
Contrastive Dual Gating: Learning Sparse Features With Contrastive Learning | 0 | 0.34 | 2022
Sparse and Robust RRAM-based Efficient In-memory Computing for DNN Inference | 0 | 0.34 | 2022
A 177 TOPS/W, Capacitor-based In-Memory Computing SRAM Macro with Stepwise-Charging/Discharging DACs and Sparsity-Optimized Bitcells for 4-Bit Deep Convolutional Neural Networks | 0 | 0.34 | 2022
Impact of Multilevel Retention Characteristics on RRAM based DNN Inference Engine | 0 | 0.34 | 2021
An Energy-Efficient Deep Convolutional Neural Network Accelerator Featuring Conditional Computing and Low External Memory Access | 0 | 0.34 | 2021
A Survey on the Optimization of Neural Network Accelerators for Micro-AI On-Device Inference | 0 | 0.34 | 2021
Leveraging Noise and Aggressive Quantization of In-Memory Computing for Robust DNN Hardware Against Adversarial Input and Weight Attacks | 2 | 0.44 | 2021
PIMCA: A 3.4-Mb Programmable In-Memory Computing Accelerator in 28nm for On-Chip DNN Inference | 0 | 0.34 | 2021
Structured Pruning of RRAM Crossbars for Efficient In-Memory Computing Acceleration of Deep Neural Networks | 3 | 0.39 | 2021
FixyFPGA: Efficient FPGA Accelerator for Deep Neural Networks with High Element-Wise Sparsity and without External Memory Access | 0 | 0.34 | 2021
Siam: Chiplet-Based Scalable In-Memory Acceleration With Mesh For Deep Neural Networks | 3 | 0.39 | 2021
Hybrid In-Memory Computing Architecture For The Training Of Deep Neural Networks | 0 | 0.34 | 2021
Regulation Control Design Techniques for Integrated Switched Capacitor Voltage Regulators | 0 | 0.34 | 2020
Automatic Compilation of Diverse CNNs onto High-Performance FPGA Accelerators | 4 | 0.42 | 2020
Deep Convolutional Neural Network Accelerator Featuring Conditional Computing and Low External Memory Access | 0 | 0.34 | 2020
Noise-based Selection of Robust Inherited Model for Accurate Continual Learning | 0 | 0.34 | 2020
C3SRAM: An In-Memory-Computing SRAM Macro Based on Robust Capacitive Coupling Computing Mechanism | 16 | 0.78 | 2020
A Latency-Optimized Reconfigurable NoC for In-Memory Acceleration of DNNs | 3 | 0.40 | 2020
A Smart Hardware Security Engine Combining Entropy Sources of ECG, HRV, and SRAM PUF for Authentication and Secret Key Generation | 0 | 0.34 | 2020
ECG Authentication Neural Network Hardware Design with Collective Optimization of Low Precision and Structured Compression | 1 | 0.34 | 2020
FPGA-based low-batch training accelerator for modern CNNs featuring high bandwidth memory | 0 | 0.34 | 2020
Vesti: Energy-Efficient In-Memory Computing Accelerator for Deep Neural Networks | 6 | 0.44 | 2020
Interconnect-Aware Area and Energy Optimization for In-Memory Acceleration of DNNs | 3 | 0.40 | 2020
Efficient and Modularized Training on FPGA for Real-time Applications | 0 | 0.34 | 2020
Compressing LSTM Networks with Hierarchical Coarse-Grain Sparsity | 0 | 0.34 | 2020
XNOR-SRAM: In-Memory Computing SRAM Macro for Binary/Ternary Deep Neural Networks | 25 | 0.91 | 2020
An 8.93 TOPS/W LSTM Recurrent Neural Network Accelerator Featuring Hierarchical Coarse-Grain Sparsity for On-Device Speech Recognition | 2 | 0.49 | 2020
Deep Neural Network Training Accelerator Designs in ASIC and FPGA | 1 | 0.35 | 2020
Impact of Read Disturb on Multilevel RRAM based Inference Engine: Experiments and Model Prediction | 2 | 0.36 | 2020
Guest Editors' Introduction to the Special Section on Hardware and Algorithms for Energy-Constrained On-chip Machine Learning | 0 | 0.34 | 2019
Monolithically Integrated RRAM- and CMOS-Based In-Memory Computing Optimizations for Efficient Deep Learning | 5 | 0.53 | 2019
FixyNN: Efficient Hardware for Mobile Computer Vision via Transfer Learning | 1 | 0.35 | 2019
XNOR-SRAM: In-Bitcell Computing SRAM Macro based on Resistive Computing Mechanism | 1 | 0.35 | 2019
Automatic Compiler Based FPGA Accelerator for CNN Training | 2 | 0.46 | 2019
Cases for Analog Mixed Signal Computing Integrated Circuits for Deep Neural Networks | 0 | 0.34 | 2019
FixyNN - Energy-Efficient Real-Time Mobile Computer Vision Hardware Acceleration via Transfer Learning | 0 | 0.34 | 2019
Custom Sub-Systems and Circuits for Deep Learning: Guest Editorial Overview | 1 | 0.36 | 2019
Guest Editors' Introduction: Hardware and Algorithms for Energy-Constrained On-Chip Machine Learning (Part 2) | 0 | 0.34 | 2019
Inference engine benchmarking across technological platforms from CMOS to RRAM | 0 | 0.34 | 2019
Vesti: An In-Memory Computing Processor For Deep Neural Networks Acceleration | 0 | 0.34 | 2019
Joint Optimization Of Quantization And Structured Sparsity For Compressed Deep Neural Networks | 0 | 0.34 | 2019
Power, Performance, and Area Benefit of Monolithic 3D ICs for On-Chip Deep Neural Networks Targeting Speech Recognition | 0 | 0.34 | 2018
Large-Scale Neuromorphic Spiking Array Processors: A quest to mimic the brain | 7 | 0.61 | 2018
Fully parallel RRAM synaptic array for implementing binary neural network with (+1, -1) weights and (+1, 0) neurons | 4 | 0.45 | 2018