Authors: Inon Wiratsin; Veerapong Suchaiporn; Pojchara Trainorapong; Jirachaipat Chaichinvara; Sakwaroon Rattanajitdamrong; Narit Hnoohom
Affiliation: Mahidol University
Date issued: 2018-07-02
Date available: 2019-08-23
Source: 2018 International Joint Symposium on Artificial Intelligence and Natural Language Processing, iSAI-NLP 2018 - Proceedings (2018)
Scopus ID: 2-s2.0-85065096591
URI: https://repository.li.mahidol.ac.th/handle/20.500.14594/45613

Abstract: © 2018 IEEE. Classification of terrain images taken from an unmanned aerial vehicle (UAV) is presented in this work. The objective is to classify terrain into five types: building, green zone, car park, road, and canal. The processing flow begins by stitching sets of four images into large field-of-view images that cover the area of interest. The stitched images were then divided into grids, and each grid was manually labeled as one of the five terrain types. Feature extraction was performed on each grid; the features consist of the percentage of pixels whose color falls within a certain range in the HSV color space, the mean pixel value of each of the B, G, and R channels separately, the mean pixel value of all channels together, and the number of contours detected in binary images produced by simple thresholding and by Otsu's method. Three different classifiers were evaluated: k-nearest neighbors, decision tree, and extra trees. Two different datasets were used for training the classifiers: a raw dataset, in which the number of grids of each type was imbalanced due to the nature of the terrain in the area of interest, and an augmented dataset, in which we artificially increased the number of grids by random flips and rotations so that every class has exactly the same number of grids. A total of six stitched images were reserved for the test set. Experimental results show that the best accuracy, 85.5%, was achieved by the extra-trees classifier.
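The per-grid color features described above can be sketched roughly as follows in NumPy. The HSV range and saturation threshold are illustrative assumptions (the abstract does not give the authors' values), and the contour-count features are omitted:

```python
import numpy as np

def grid_features(bgr, sat_min=0.3, hue_range=(0.2, 0.45)):
    """Sketch of the per-grid color features from the abstract:
      - fraction of pixels whose HSV color falls in a target range
      - mean of each B, G, R channel separately
      - mean over all channels together
    The thresholds are illustrative, not the authors' values.
    """
    img = bgr.astype(np.float64) / 255.0
    b, g, r = img[..., 0], img[..., 1], img[..., 2]
    mx = img.max(axis=-1)
    mn = img.min(axis=-1)
    delta = mx - mn
    # Hue in [0, 1), via the standard RGB -> HSV formula.
    hue = np.zeros_like(mx)
    nz = delta > 0
    rmax = nz & (mx == r)
    gmax = nz & (mx == g) & ~rmax
    bmax = nz & ~rmax & ~gmax
    hue[rmax] = (((g - b)[rmax] / delta[rmax]) % 6) / 6.0
    hue[gmax] = (((b - r)[gmax] / delta[gmax]) + 2) / 6.0
    hue[bmax] = (((r - g)[bmax] / delta[bmax]) + 4) / 6.0
    sat = np.where(mx > 0, delta / np.maximum(mx, 1e-12), 0.0)
    in_range = (hue >= hue_range[0]) & (hue <= hue_range[1]) & (sat >= sat_min)
    return np.array([
        in_range.mean(),               # fraction of pixels in the HSV range
        b.mean(), g.mean(), r.mean(),  # per-channel means
        img.mean(),                    # mean over all channels together
    ])
```

Each grid then yields one fixed-length feature vector that can be fed to any of the three classifiers.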
The results also show that augmenting the training data did not improve performance.

Title: Classification of Terrain Types in Unmanned Aerial Vehicle Images
Subjects: Computer Science; Medicine
Type: Conference Paper
Indexed in: SCOPUS
DOI: 10.1109/iSAI-NLP.2018.8692953
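The class-balancing augmentation described in the abstract (oversampling minority classes with random flips and rotations until every class has the same number of grids) might be sketched like this; the function name and sampling details are our assumptions, not the authors' implementation:

```python
import numpy as np

def balance_by_augmentation(grids, labels, rng=None):
    """Oversample minority classes with random flips and 90-degree
    rotations so every class ends with the same number of grid images.
    A sketch of the augmentation idea; details are illustrative.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()
    out_x, out_y = list(grids), list(labels)
    for cls, n in zip(classes, counts):
        idx = np.flatnonzero(labels == cls)
        for _ in range(target - n):
            g = grids[rng.choice(idx)]
            if rng.random() < 0.5:
                g = np.flip(g, axis=rng.integers(2))  # horizontal or vertical flip
            g = np.rot90(g, k=int(rng.integers(4)))   # random multiple of 90 degrees
            out_x.append(g)
            out_y.append(cls)
    return out_x, np.array(out_y)
```

Flips and right-angle rotations are label-preserving for top-down aerial grids, which is presumably why the authors chose them; per the abstract, however, this balancing did not improve test accuracy.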