
Learners Reliability Estimated Through Neural Networks Applied to Build a Novel Hybrid Ensemble Method

Published in: Neural Processing Letters

Abstract

In this paper, a novel hybrid ensemble method aimed at improving model accuracy in regression tasks is presented. The basic idea of the approach is the creation of an ensemble learner composed of a strong learner, which is trained on the whole training dataset, and a set of specialised weak learners trained on data coming from limited regions of the input space, determined by means of a clustering based on a self-organising map. In this context, different methods have been tested for the design of the learners, including a hierarchical approach. In the simulation phase, the strong and weak learners operate according to their punctual self-estimated reliabilities, so as to exploit their strengths and overcome their weaknesses. The method has been tested on literature and real-world datasets, achieving competitive results: it outperforms other ensemble methods on most of the tested datasets and reduces the average absolute error by up to 10%.
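The combination scheme described in the abstract can be illustrated with a minimal sketch. This is an assumption-laden toy, not the authors' implementation: ordinary least squares stands in for the strong and weak learners, a fixed partition of the input space stands in for the SOM-based clustering, and the inverse RMSE on each region stands in for the neural-network reliability estimator; all function names (`fit_linear`, `reliability`, `ensemble_predict`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data with two distinct regimes of the input space.
X = rng.uniform(-1, 1, size=(200, 1))
y = np.where(X[:, 0] < 0, 2.0 * X[:, 0], -1.0 * X[:, 0]) + rng.normal(0, 0.05, 200)

def fit_linear(X, y):
    """Least-squares fit with a bias term; returns the weight vector."""
    A = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def predict_linear(w, X):
    A = np.hstack([X, np.ones((len(X), 1))])
    return A @ w

# Stand-in for the SOM-based clustering: two fixed input regions.
labels = (X[:, 0] >= 0).astype(int)

# Strong learner on the whole set; one weak learner per region.
strong = fit_linear(X, y)
weak = [fit_linear(X[labels == k], y[labels == k]) for k in (0, 1)]

def reliability(w, X, y):
    """Crude reliability proxy: inverse RMSE of the learner on a region."""
    err = np.sqrt(np.mean((predict_linear(w, X) - y) ** 2))
    return 1.0 / (err + 1e-9)

def ensemble_predict(x):
    """Reliability-weighted average of the strong and regional weak learner."""
    x = np.atleast_2d(x)
    k = int(x[0, 0] >= 0)                # region of the query point
    Xk, yk = X[labels == k], y[labels == k]
    r_s = reliability(strong, Xk, yk)    # strong learner reliability there
    r_w = reliability(weak[k], Xk, yk)   # weak learner reliability there
    p = r_s * predict_linear(strong, x) + r_w * predict_linear(weak[k], x)
    return float(p[0] / (r_s + r_w))
```

On such piecewise data the global linear fit is systematically biased inside each region, so the region-specific weak learner earns a much higher reliability and dominates the weighted average, which is the intended behaviour of the hybrid scheme.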



Author information

Authors and Affiliations

  1. TeCIp Institute, Scuola Superiore Sant’Anna, Via Moruzzi, 1, 56124, Pisa, Italy

    Marco Vannucci, Valentina Colla & Silvia Cateni

Authors

  1. Marco Vannucci
  2. Valentina Colla
  3. Silvia Cateni

Corresponding author

Correspondence to Marco Vannucci.


About this article


Cite this article

Vannucci, M., Colla, V. & Cateni, S. Learners Reliability Estimated Through Neural Networks Applied to Build a Novel Hybrid Ensemble Method. Neural Process Lett 46, 791–809 (2017). https://doi.org/10.1007/s11063-017-9586-6



[8]ページ先頭

©2009-2025 Movatter.jp