Benchmarking Ultra-Low-Power μNPUs
In Proceedings of the 31st Annual International Conference on Mobile Computing and Networking.
Abstract
Efficient on-device neural network (NN) inference offers predictable latency, improved privacy and reliability, and lower operating costs for vendors than cloud-based inference. This has sparked the recent development of microcontroller-scale NN accelerators, also known as neural processing units (μNPUs), designed specifically for ultra-low-power applications. We present the first comparative evaluation of a number of commercially-available μNPUs, including the first independent benchmarks for multiple platforms. To ensure fairness, we develop and open-source a model compilation pipeline supporting consistent benchmarking of quantized models across diverse microcontroller hardware.
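Consistent latency benchmarking of the kind described above ultimately reduces to a careful host-side measurement loop: warm up the device, time many inference invocations, and report robust statistics rather than a single run. The sketch below illustrates that pattern in Python; the function names and the stand-in workload are hypothetical, not the paper's actual tooling.

```python
import statistics
import time

def benchmark(invoke, warmup=5, runs=30):
    """Time repeated inference invocations and report latency stats in ms.

    `invoke` is any zero-argument callable wrapping one model inference.
    """
    for _ in range(warmup):  # warm caches and trigger lazy allocations
        invoke()
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        invoke()
        samples.append((time.perf_counter() - t0) * 1e3)
    samples.sort()
    return {
        "median_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * (len(samples) - 1))],
    }

# Hypothetical stand-in for a quantized-model inference call on hardware.
def fake_invoke():
    sum(i * i for i in range(1000))

stats = benchmark(fake_invoke)
```

Reporting a median and a high percentile, rather than a mean, guards against outliers from interrupts or power-state transitions that are common on embedded targets.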
Our resulting analysis uncovers both expected performance trends and surprising disparities between hardware specifications and actual performance, including certain μNPUs exhibiting unexpected scaling behaviors with model complexity. This work provides a foundation for the ongoing evaluation of μNPU platforms, while offering practical insights for both hardware and software developers in this rapidly evolving space.
