Publication: Complementary neural networks for regression problems
Issued Date
2009-11-10
Resource Type
Other identifier(s)
2-s2.0-70350705978
Rights
Mahidol University
Rights Holder(s)
SCOPUS
Bibliographic Citation
Proceedings of the 2009 International Conference on Machine Learning and Cybernetics. Vol.6, (2009), 3442-3447
Suggested Citation
Pawalai Kraipeerapun, Sathit Nakkrasae, Somkid Amornsamankul, Chun Che Fung. Complementary neural networks for regression problems. Proceedings of the 2009 International Conference on Machine Learning and Cybernetics. Vol.6, (2009), 3442-3447. doi:10.1109/ICMLC.2009.5212716 Retrieved from: https://repository.li.mahidol.ac.th/handle/20.500.14594/27488
Abstract
In this paper, complementary neural networks (CMTNN) are used to solve regression problems. CMTNN consist of a pair of opposite neural networks: the first is trained to predict the degree of truth, and the second is trained to predict the degree of falsity. The two networks are complementary to each other since they deal with pairs of complementary output values. To obtain more accurate outputs, each pair of truth and falsity values is aggregated using two techniques: equal weight combination and dynamic weight combination. The first technique is simple averaging, whereas the second accounts for errors that occur in the prediction. We evaluate our approach on classical benchmark problems, including housing, concrete compressive strength, and computer hardware, from the UCI machine learning repository. We find that complementary neural networks improve prediction performance compared to a traditional single backpropagation neural network and to support vector regression trained to predict only truth values. Furthermore, the difference between the predicted truth value and the complement of the predicted falsity value can be used as an uncertainty indicator to support confidence in predictions for unknown input data. © 2009 IEEE.
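The equal weight combination and the uncertainty indicator described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the truth network outputs `y_truth` and the falsity network (trained on complemented targets) outputs `y_falsity`, both normalized to [0, 1]; the function names are hypothetical.

```python
def equal_weight_combine(y_truth: float, y_falsity: float) -> float:
    """Average the truth output with the complement of the falsity output.

    With two perfectly complementary networks, y_falsity == 1 - y_truth
    and the combination reduces to y_truth itself.
    """
    return (y_truth + (1.0 - y_falsity)) / 2.0


def uncertainty(y_truth: float, y_falsity: float) -> float:
    """Gap between the truth output and the complemented falsity output.

    A larger gap means the two networks disagree more, signalling lower
    confidence in the prediction for that input.
    """
    return abs(y_truth - (1.0 - y_falsity))


# Toy example: truth net says 0.80, falsity net says 0.25 (complement 0.75).
print(equal_weight_combine(0.80, 0.25))  # 0.775
print(uncertainty(0.80, 0.25))           # ~0.05
```

The dynamic weight combination mentioned in the abstract would instead weight each network's output by its estimated prediction error; the paper's specific weighting scheme is not reproduced here.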