Publication: Classification of Terrain Types in Unmanned Aerial Vehicle Images
Issued Date
2018-07-02
Other identifier(s)
2-s2.0-85065096591
Rights
Mahidol University
Rights Holder(s)
SCOPUS
Bibliographic Citation
2018 International Joint Symposium on Artificial Intelligence and Natural Language Processing, iSAI-NLP 2018 - Proceedings. (2018)
Suggested Citation
Inon Wiratsin, Veerapong Suchaiporn, Pojchara Trainorapong, Jirachaipat Chaichinvara, Sakwaroon Rattanajitdamrong, Narit Hnoohom Classification of Terrain Types in Unmanned Aerial Vehicle Images. 2018 International Joint Symposium on Artificial Intelligence and Natural Language Processing, iSAI-NLP 2018 - Proceedings. (2018). doi:10.1109/iSAI-NLP.2018.8692953 Retrieved from: https://repository.li.mahidol.ac.th/handle/20.500.14594/45613
Abstract
© 2018 IEEE. This work presents the classification of terrain images taken from an unmanned aerial vehicle (UAV). The objective is to classify terrain into five types: building, green zone, car park, road, and canal. The processing flow begins by stitching sets of four images into large field-of-view images that cover the area of interest. The stitched images were then divided into grids, and each grid was manually labeled as one of the five terrain types. Feature extraction was performed on each grid; the features consist of the percentage of pixels whose color falls within a certain range in the HSV color space, the mean pixel value of each of the B, G, and R channels separately, the mean pixel value of all channels together, and the number of contours detected in binary images produced by simple thresholding and by Otsu's method. Three classifiers were evaluated: k-nearest neighbor, decision tree, and extra trees. Two datasets were used to train the classifiers: a raw dataset, in which the number of grids of each type was imbalanced due to the nature of the terrain in the area of interest, and an augmented dataset, in which we artificially increased the number of grids by random flips and rotations so that every class had exactly the same number of grids. A total of six stitched images were reserved for the test set. Experimental results show that the best accuracy, 85.5%, was achieved by extra trees. The results also show that augmenting the training data did not improve performance.
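As a rough illustration of the per-grid features the abstract describes, the sketch below computes the fraction of pixels inside an HSV range, the per-channel and overall BGR means, and an Otsu threshold. This is a minimal NumPy sketch, not the authors' code: the array shapes, the HSV range, and the function names are assumptions, and the contour count is omitted since it would need a contour finder such as OpenCV's `cv2.findContours`.

```python
import numpy as np

def otsu_threshold(gray):
    """Exhaustively search the 8-bit threshold that maximizes the
    between-class variance, as in Otsu's method. `gray` is a 2-D uint8 array."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = gray.size
    sum_all = np.dot(np.arange(256), hist)   # sum of all pixel values
    best_t, best_var = 0, -1.0
    w0, sum0 = 0.0, 0.0                      # weight and value-sum of class 0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0                       # mean of class 0 (<= t)
        m1 = (sum_all - sum0) / w1           # mean of class 1 (> t)
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def grid_features(bgr, hsv, lo, hi):
    """Per-grid feature vector: [HSV in-range fraction,
    mean B, mean G, mean R, mean over all channels].
    `bgr` and `hsv` are (H, W, 3) arrays; `lo`/`hi` bound the HSV range."""
    mask = np.all((hsv >= lo) & (hsv <= hi), axis=-1)
    in_range = mask.mean()                       # fraction of in-range pixels
    chan_means = bgr.reshape(-1, 3).mean(axis=0) # B, G, R means separately
    overall_mean = bgr.mean()                    # mean over all channels
    return np.concatenate([[in_range], chan_means, [overall_mean]])
```

In the paper's pipeline these features would be computed for every labeled grid cell and fed to the classifiers; the illustrative HSV range would have to be tuned per terrain class.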