Publication: Evaluating the power efficiency of deep learning inference on embedded GPU systems
Issued Date
2018-01-12
Other identifier(s)
2-s2.0-85049446409
Rights
Mahidol University
Rights Holder(s)
SCOPUS
Bibliographic Citation
Proceedings of the 2017 2nd International Conference on Information Technology (INCIT 2017). Vol. 2018-January, (2018), 1-5
Suggested Citation
Kanokwan Rungsuptaweekoon, Vasaka Visoottiviseth, Ryousei Takano. Evaluating the power efficiency of deep learning inference on embedded GPU systems. Proceedings of the 2017 2nd International Conference on Information Technology (INCIT 2017). Vol. 2018-January, (2018), 1-5. doi:10.1109/INCIT.2017.8257866 Retrieved from: https://repository.li.mahidol.ac.th/handle/20.500.14594/45658
Abstract
© 2017 IEEE. Deep learning inference on embedded systems requires not only high throughput but also low power consumption. To address this challenge, this paper evaluates the power efficiency of image recognition with YOLO, a real-time object detection algorithm, on the latest NVIDIA embedded GPU systems: the Jetson TX1 and TX2. For this evaluation, we deployed the Low-Power Image Recognition Challenge (LPIRC) system and integrated YOLO, a power meter, and the target hardware into the system. The experimental results show that the Jetson TX2 in Max-N mode achieves the highest throughput, while the Jetson TX2 in Max-Q mode achieves the highest power efficiency. These findings indicate that the trade-off between throughput and power efficiency can be adjusted on the Jetson TX2. Therefore, the Jetson TX2 offers greater advantages for image recognition on embedded systems than the Jetson TX1 and a PC server with an NVIDIA Tesla P40.
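The power-efficiency metric described in the abstract can be illustrated with a minimal sketch. This assumes efficiency is computed as throughput per watt (images per second divided by average power draw, i.e. images per joule); the function name and the numbers in the usage example are hypothetical placeholders, not measurements from the paper.

```python
def power_efficiency(images_processed: int, elapsed_s: float, avg_power_w: float) -> float:
    """Throughput per watt: (images / second) / watts = images per joule.

    Assumed formulation, not taken verbatim from the paper:
    a higher value means more inferences completed per unit of energy.
    """
    throughput = images_processed / elapsed_s  # images per second
    return throughput / avg_power_w            # images per second per watt


# Hypothetical illustration (placeholder numbers, not the paper's results):
# 1000 images in 50 s at an average 10 W draw
eff = power_efficiency(images_processed=1000, elapsed_s=50.0, avg_power_w=10.0)
# throughput = 20 img/s, efficiency = 2.0 images per joule
```

Under this formulation, a mode like Max-Q can win on efficiency even while a mode like Max-N wins on raw throughput, since lowering clocks reduces the denominator (watts) faster than the numerator (images per second).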