Publication: Bagging of complementary neural networks with double dynamic weight averaging
Issued Date: 2010-09-01
Other Identifier(s): 2-s2.0-77956044634
Rights: Mahidol University
Rights Holder(s): SCOPUS
Bibliographic Citation: Proceedings - 11th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing, SNPD2010 (2010), 173-178
Suggested Citation: Sathit Nakkrasae, Pawalai Kraipeerapun, Somkid Amornsamankul, Chun Che Fung. Bagging of complementary neural networks with double dynamic weight averaging. Proceedings - 11th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing, SNPD2010 (2010), 173-178. doi:10.1109/SNPD.2010.34. Retrieved from https://repository.li.mahidol.ac.th/handle/123456789/29009
Abstract
Ensemble techniques have been widely applied to regression problems. This paper proposes a novel ensemble of Complementary Neural Networks (CMTNN) using double dynamic weight averaging. To enhance diversity in the ensemble, different training datasets created by the bagging technique are applied to an ensemble of pairs of feed-forward back-propagation neural networks, which predict truth and falsity values. To improve accuracy, uncertainties in the predicted truth and falsity values are used to weight the prediction results in two steps: in the first step, the weights are used to average the truth and falsity values, whereas in the second step they are used to calculate the final regression output. The proposed approach has been tested on benchmark UCI data sets. The results show that our technique improves prediction performance compared to a traditional ensemble of neural networks that predicts based only on truth values. Furthermore, our novel approach outperforms the existing ensemble of complementary neural networks. © 2010 IEEE.
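The two-step weighted averaging described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the per-pair certainty measure (1 − |truth + falsity − 1|, i.e. how close the two predictions are to being complements) and the step-2 agreement-based weights are assumed stand-ins for the paper's formulas, and the function name and toy inputs are ours.

```python
import numpy as np

def bagged_cmtnn_predict(truth, falsity, eps=1e-12):
    """Two-step uncertainty-weighted averaging over m CMTNN pairs.

    truth, falsity: arrays of shape (m, n) holding each bagged pair's
    truth and falsity predictions (values in [0, 1]) for n samples.
    The certainty measures below are illustrative stand-ins, not the
    paper's exact formulas.
    """
    # Per-pair, per-sample certainty: truth and falsity predictions
    # should be complementary, so take 1 - |T + F - 1| (assumption).
    c = 1.0 - np.abs(truth + falsity - 1.0)
    w = c / (c.sum(axis=0, keepdims=True) + eps)  # step-1 dynamic weights

    # Step 1: weight-average the truth values and the falsity values
    # separately across the bagged ensemble members.
    t_bar = (w * truth).sum(axis=0)
    f_bar = (w * falsity).sum(axis=0)

    # Step 2: combine the averaged truth value with the complement of
    # the averaged falsity value. The weights here come from each
    # side's ensemble agreement (another assumed stand-in: smaller
    # spread across members => larger weight).
    ct = 1.0 / (1.0 + truth.std(axis=0))
    cf = 1.0 / (1.0 + falsity.std(axis=0))
    return (ct * t_bar + cf * (1.0 - f_bar)) / (ct + cf)

# Toy usage: two bagged pairs, two test samples.
truth = np.array([[0.3, 0.7],
                  [0.4, 0.6]])
falsity = np.array([[0.7, 0.3],
                    [0.6, 0.5]])
prediction = bagged_cmtnn_predict(truth, falsity)
```

When a pair's truth and falsity outputs sum close to 1, it is treated as certain and dominates the average; this mirrors the abstract's idea of letting prediction uncertainty drive both weighting steps.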