Title |
---|
Accelerating Deep Neural Network In-Situ Training With Non-Volatile and Volatile Memory Based Hybrid Precision Synapses |
Abstract |
---|
Compute-in-memory (CIM) with emerging non-volatile memories (eNVMs) is time- and energy-efficient for deep neural network (DNN) inference. However, challenges remain for DNN in-situ training with eNVMs due to asymmetric weight update behavior, high programming latency, and energy consumption. To overcome these challenges, a hybrid precision synapse combining eNVMs with a capacitor has been p... |
Year | DOI | Venue |
---|---|---|
2020 | 10.1109/TC.2020.3000218 | IEEE Transactions on Computers |

Keywords | DocType | Volume |
---|---|---|
Training, Synapses, Random access memory, Capacitors, Acceleration, Energy efficiency, Energy consumption | Journal | 69 |

Issue | ISSN | Citations |
---|---|---|
8 | 0018-9340 | 3 |

PageRank | References | Authors |
---|---|---|
0.39 | 0 | 2 |
Name | Order | Citations | PageRank |
---|---|---|---|
Yandong Luo | 1 | 17 | 2.82 |
Shimeng Yu | 2 | 490 | 56.22 |