Benchmarking Ultra-Low-Power µNPUs
Josh Millar, Yushan Huang, Sarab Sethi, Hamed Haddadi and Anil Madhavapeddy.
Working paper at arXiv.
Efficient on-device neural network (NN) inference has various advantages over cloud-based processing, including predictable latency, enhanced privacy, greater reliability, and reduced operating costs for vendors. This has sparked the recent rapid development of microcontroller-scale NN accelerators, often referred to as neural processing units (µNPUs), designed specifically for ultra-low-power applications.
In this paper, we present the first comparative evaluation of a number of commercially available µNPUs, as well as the first independent benchmarks for several of these platforms. We develop and open-source a model compilation framework to enable consistent benchmarking of quantized models across diverse µNPU hardware. Our benchmark targets end-to-end performance and covers model inference latency, power consumption, and memory overhead, alongside other factors. The resulting analysis uncovers both expected performance trends and surprising disparities between hardware specifications and actual performance, including µNPUs exhibiting unexpected scaling behavior as model complexity increases. Our framework provides a foundation for further evaluation of µNPU platforms and offers valuable insights for both hardware designers and software developers in this rapidly evolving space.
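As a rough illustration of what such an end-to-end latency measurement can look like in practice (a minimal sketch, not the paper's open-sourced framework), the snippet below times a single quantized-model inference on a Cortex-M-class host using TensorFlow Lite Micro and the DWT cycle counter. The model buffer, arena size, and operator set are placeholder assumptions, and exact TFLM API details vary across versions.

```cpp
// Illustrative latency-measurement sketch; not the paper's framework.
// Assumes a Cortex-M target (CMSIS device header) running TFLite Micro.
#include <cstddef>
#include <cstdint>
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"
// Plus the CMSIS header for your specific MCU, e.g. "stm32f7xx.h".

extern const unsigned char g_model_data[];  // quantized .tflite flatbuffer
constexpr size_t kArenaSize = 64 * 1024;    // placeholder; size per model
static uint8_t tensor_arena[kArenaSize];

uint32_t MeasureInvokeCycles() {
  const tflite::Model* model = tflite::GetModel(g_model_data);

  // Register only the operators the model actually uses (placeholder set).
  tflite::MicroMutableOpResolver<3> resolver;
  resolver.AddConv2D();
  resolver.AddFullyConnected();
  resolver.AddSoftmax();

  tflite::MicroInterpreter interpreter(model, resolver,
                                       tensor_arena, kArenaSize);
  interpreter.AllocateTensors();
  // interpreter.arena_used_bytes() gives one view of memory overhead.

  // (Populate interpreter.input(0) with a test input here.)

  // Enable and reset the Cortex-M DWT cycle counter (CMSIS registers).
  CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;
  DWT->CYCCNT = 0;
  DWT->CTRL |= DWT_CTRL_CYCCNTENA_Msk;

  const uint32_t start = DWT->CYCCNT;
  interpreter.Invoke();                     // end-to-end inference
  return DWT->CYCCNT - start;               // cycles; divide by core clock
}
```

Averaging over repeated invocations, and sampling supply current during `Invoke()`, would extend a sketch like this toward the latency and power measurements the paper reports.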