Title: Evaluating the power efficiency of deep learning inference on embedded GPU systems
Authors: Kanokwan Rungsuptaweekoon; Vasaka Visoottiviseth; Ryousei Takano
Affiliations: Mahidol University; National Institute of Advanced Industrial Science and Technology
Type: Conference Paper
Citation: Proceeding of 2017 2nd International Conference on Information Technology, INCIT 2017, Vol. 2018-January (2018), pp. 1-5
DOI: 10.1109/INCIT.2017.8257866
Scopus ID: 2-s2.0-85049446409
Handle: https://repository.li.mahidol.ac.th/handle/20.500.14594/45658
Date issued: 2018-01-12
Date available: 2019-08-23
Indexed in: SCOPUS
Subject: Computer Science

Abstract: © 2017 IEEE. Deep learning inference on embedded systems requires not only high throughput but also low power consumption. To address this challenge, this paper evaluates the power efficiency of image recognition with YOLO, a real-time object detection algorithm, on the latest NVIDIA embedded GPU systems, the Jetson TX1 and TX2. For this evaluation, we deployed the Low-Power Image Recognition Challenge (LPIRC) system and integrated YOLO, a power meter, and the target hardware into it. The experimental results show that the Jetson TX2 in Max-N mode has the highest throughput, while the Jetson TX2 in Max-Q mode has the highest power efficiency. These findings indicate that the trade-off between throughput and power efficiency can be adjusted on the Jetson TX2. Therefore, the Jetson TX2 offers greater advantages for image recognition on embedded systems than the Jetson TX1 or a PC server with an NVIDIA Tesla P40.
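As an illustration of the power-efficiency metric the abstract refers to (throughput normalized by power draw), the following Python sketch computes throughput in images per second and power efficiency in images per joule from per-image processing times and power-meter readings. The function names and sample values are hypothetical and are not taken from the paper; the authors' LPIRC-based setup uses its own measurement pipeline.

```python
from statistics import mean

def throughput_images_per_sec(processing_times_sec):
    """Throughput = number of processed images / total processing time."""
    return len(processing_times_sec) / sum(processing_times_sec)

def power_efficiency(processing_times_sec, power_samples_watt):
    """Power efficiency = throughput (images/s) / average power draw (W),
    i.e. images processed per joule of energy."""
    avg_power_watt = mean(power_samples_watt)
    return throughput_images_per_sec(processing_times_sec) / avg_power_watt

# Hypothetical measurements: per-image YOLO inference times (s) and
# power-meter samples (W) recorded during the same run.
times = [0.042, 0.040, 0.043, 0.041]
power = [11.8, 12.1, 12.0, 11.9]

print(f"throughput: {throughput_images_per_sec(times):.1f} images/s")
print(f"efficiency: {power_efficiency(times, power):.2f} images/J")
```

For context, the Max-N and Max-Q operating points mentioned in the abstract are the power modes exposed by NVIDIA's nvpmodel utility on the Jetson TX2 (e.g. `sudo nvpmodel -m 0` for Max-N, `sudo nvpmodel -m 1` for Max-Q); the record does not state how the modes were configured in the experiments.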