Publication: Evaluating the power efficiency of deep learning inference on embedded GPU systems
dc.contributor.author | Kanokwan Rungsuptaweekoon | en_US |
dc.contributor.author | Vasaka Visoottiviseth | en_US |
dc.contributor.author | Ryousei Takano | en_US |
dc.contributor.other | National Institute of Advanced Industrial Science and Technology | en_US |
dc.contributor.other | Mahidol University | en_US |
dc.date.accessioned | 2019-08-23T10:58:08Z | |
dc.date.available | 2019-08-23T10:58:08Z | |
dc.date.issued | 2018-01-12 | en_US |
dc.description.abstract | © 2017 IEEE. Deep learning inference on embedded systems requires not only high throughput but also low power consumption. To address this challenge, this paper evaluates the power efficiency of image recognition with YOLO, a real-time object detection algorithm, on the latest NVIDIA embedded GPU systems: Jetson TX1 and TX2. For this evaluation, we deployed the Low-Power Image Recognition Challenge (LPIRC) system and integrated YOLO, a power meter, and the target hardware into the system. The experimental results show that Jetson TX2 in Max-N mode has the highest throughput, while Jetson TX2 in Max-Q mode has the highest power efficiency. These findings indicate that the trade-off between throughput and power efficiency can be adjusted on Jetson TX2. Therefore, Jetson TX2 offers greater advantages for image recognition on embedded systems than Jetson TX1 or a PC server with an NVIDIA Tesla P40. | en_US |
dc.identifier.citation | Proceedings of 2017 2nd International Conference on Information Technology, INCIT 2017. Vol.2018-January, (2018), 1-5 | en_US |
dc.identifier.doi | 10.1109/INCIT.2017.8257866 | en_US |
dc.identifier.other | 2-s2.0-85049446409 | en_US |
dc.identifier.uri | https://repository.li.mahidol.ac.th/handle/20.500.14594/45658 | |
dc.rights | Mahidol University | en_US |
dc.rights.holder | SCOPUS | en_US |
dc.source.uri | https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85049446409&origin=inward | en_US |
dc.subject | Computer Science | en_US |
dc.title | Evaluating the power efficiency of deep learning inference on embedded GPU systems | en_US |
dc.type | Conference Paper | en_US |
dspace.entity.type | Publication | |
mu.datasource.scopus | https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85049446409&origin=inward | en_US |