Classification of Sugarcane Leaf Diseases Using Vision Transformers and CNN Models
Issued Date
2025-01-01
Scopus ID
2-s2.0-105032460483
Journal Title
JCSSE 2025 - 22nd International Joint Conference on Computer Science and Software Engineering
Start Page
164
End Page
168
Rights Holder(s)
SCOPUS
Bibliographic Citation
JCSSE 2025 - 22nd International Joint Conference on Computer Science and Software Engineering (2025), 164-168
Suggested Citation
Silapachote P., Srisuphab A., Wutthiumphol K., Tanprathumwong Y., Pohboonchuen T. Classification of Sugarcane Leaf Diseases Using Vision Transformers and CNN Models. JCSSE 2025 - 22nd International Joint Conference on Computer Science and Software Engineering (2025), 164-168. doi:10.1109/JCSSE67377.2025.11297933. Retrieved from: https://repository.li.mahidol.ac.th/handle/123456789/115738
Title
Classification of Sugarcane Leaf Diseases Using Vision Transformers and CNN Models
Abstract
A globally prominent economic crop, sugarcane is an indispensable raw material for over 80% of sugar production worldwide. In Thailand, the sugarcane and sugar industry holds a top position in export markets. Crop losses due to disease are a devastating problem that cannot be overstated: not only do they affect the national economy, but sugarcane is also the primary source of income for many farmers in the provinces. To prevent the wide spread of any disease, farmers have long relied heavily on visual inspection and their own expertise to detect signs of disease as early as possible. To assist farmers, this work applies computer vision and machine learning to classify sugarcane diseases from leaf images. Deployed on mobile devices, our application lets farmers send a photo of suspect sugarcane leaves to our chatbot and receive a real-time response specifying the name of the disease, or none if the leaves are deemed healthy. Trained and fine-tuned on public datasets, our classifier, a vision transformer model, outperformed previous works. Tested on a newly collected local dataset, it achieved an accuracy of 79.64%.
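As a rough illustration of the fine-tuning pipeline the abstract describes, the sketch below loads an ImageNet-pretrained Vision Transformer and fine-tunes it on a folder of labeled sugarcane leaf images. It is a minimal sketch under stated assumptions, not the authors' implementation: the backbone (`vit_base_patch16_224` via `timm`), the dataset path, the class count, and all hyperparameters are placeholders, since the record does not specify the paper's exact configuration.

```python
# Minimal sketch: fine-tune a pretrained Vision Transformer for sugarcane
# leaf disease classification. Assumes an ImageFolder-style dataset with
# one subdirectory per class (e.g. healthy/, red_rot/, rust/, ...).
# Model choice, paths, and hyperparameters are illustrative assumptions.
import timm
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

NUM_CLASSES = 5          # assumed: four diseases plus "healthy"
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

# Standard ViT preprocessing: resize to 224x224 and normalize with
# ImageNet statistics, matching the pretrained backbone's training setup.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("data/sugarcane/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Load an ImageNet-pretrained ViT and replace its classification head
# with a new one sized for our classes.
model = timm.create_model("vit_base_patch16_224", pretrained=True,
                          num_classes=NUM_CLASSES).to(DEVICE)

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # assumed epoch count
    for images, labels in train_loader:
        images, labels = images.to(DEVICE), labels.to(DEVICE)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```

At inference time, a deployment like the chatbot described above would apply the same preprocessing to an incoming photo, take the argmax over the model's class logits, and map the predicted index back to a disease name via `train_set.classes`.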
