Title |
---|
Deflate-inflate: Exploiting hashing trick for bringing inference to the edge with scalable convolutional neural networks |
Abstract |
---|
With each passing year, the need to bring deep learning models to the edge grows more pressing, as does the gap in resource demand between these models and Internet of Things edge devices. This article applies an old trick from the book, "deflate and inflate," to bridge that gap. The proposed system deflates the model using the hashing trick and inflates it at runtime using either a uniform hash function or a neighborhood function. Experimental results show that the neighborhood function approximates the original parameter space better than the uniform hash function. Compared to existing techniques for distributing the VGG-16 model over a Fog-Edge platform, the proposed deployment strategy achieves a 1.7x-7.5x speedup with only 1-4 devices, owing to reduced memory accesses and better resource utilization. |
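The deflate-inflate scheme the abstract describes can be sketched as follows: a small shared-parameter vector is stored (the deflated model), and the full weight matrix is reconstructed at runtime by hashing each virtual weight position into that vector, in the spirit of HashedNets. The hash mix, sizes, and function names below are illustrative assumptions, not the paper's exact construction.

```python
# Sketch of "deflate and inflate" via the hashing trick (HashedNets-style
# weight sharing). Hash constants and layer sizes are illustrative only.
import numpy as np

def bucket(i, j, n_buckets, seed=0):
    # Uniform hash: map virtual weight position (i, j) to one of
    # n_buckets shared parameters. A deterministic integer mix is used
    # because Python's built-in hash() is not stable across runs.
    return (i * 2654435761 + j * 40503 + seed) % n_buckets

def inflate(shared, rows, cols, seed=0):
    # Reconstruct ("inflate") the full weight matrix at runtime from the
    # deflated shared-parameter vector.
    W = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            W[i, j] = shared[bucket(i, j, shared.size, seed)]
    return W

rng = np.random.default_rng(42)
shared = rng.standard_normal(16)   # deflated model: 16 real parameters
W = inflate(shared, 8, 8)          # virtual 8x8 layer: 64 weights, 16 unique
```

The paper additionally reports that a neighborhood function, which relates nearby weight positions rather than hashing them independently, recovers the parameter space better than the uniform hash shown here; that variant is not sketched.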
Year | DOI | Venue |
---|---|---
2022 | 10.1002/cpe.6593 | Concurrency and Computation: Practice and Experience
Keywords | DocType | Volume
---|---|---
deep learning, distributed inference, fog, HashedNets, parallelization, parameter recovery | Journal | 34
Issue | ISSN | Citations
---|---|---
3 | 1532-0626 | 0
PageRank | References | Authors
---|---|---
0.34 | 0 | 3
Name | Order | Citations | PageRank |
---|---|---|---
Azra Nazir | 1 | 0 | 0.34 |
Roohie Naaz Mir | 2 | 0 | 0.34 |
Shaima Qureshi | 3 | 0 | 0.34 |