
A Selection Method for Computing the Ensemble Size of Base Classifier in Multiple Classifier System

  • Conference paper
  • First Online:
Applied Computer Vision and Image Processing

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 1155)

Abstract

Machine learning has been widely adopted by researchers across many domains. A large number of classifiers is already available, and the count keeps growing as the field advances. However, choosing the best classifier among several similar candidates for a given problem is difficult. One recent approach to this issue is the Multiple Classifier System (MCS). It falls under the umbrella of ensemble learning and generally yields better and more consistent results than a single classifier. An MCS has two layers: (i) a base layer, containing a number of ML classifiers appropriate for the task at hand, and (ii) a meta-learner layer, which aggregates the outputs of the base classifiers using techniques such as Voting and Stacking. Yet selecting the appropriate classifiers, from a pool or from a family of classifiers, for a specific classification or prediction task on a given dataset remains an open problem. This work focuses on characterizing a selection method for the base classifiers in an MCS, and on determining which meta-learner, Stacking or Voting, aggregates the results better for different numbers of base classifiers.
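
The abstract describes a two-layer architecture: a base layer of several classifiers and a meta-learner layer that aggregates their outputs by Voting or Stacking. The sketch below, in Python with scikit-learn, is a minimal illustration of that arrangement under assumptions of our own: the specific base classifiers, the breast-cancer dataset, and the logistic-regression meta-learner are illustrative choices, not the configuration studied in the paper.

# Minimal sketch of a two-layer Multiple Classifier System (illustrative
# assumptions only, not the authors' exact setup): a base layer of
# classifiers and a meta-learner layer that uses Voting or Stacking.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Dataset chosen only for demonstration.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base layer: any number of task-appropriate classifiers.
base_classifiers = [
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("nb", GaussianNB()),
    ("knn", KNeighborsClassifier()),
]

# Meta layer, option 1: majority voting over the base predictions.
voting = VotingClassifier(estimators=base_classifiers, voting="hard")
voting.fit(X_train, y_train)

# Meta layer, option 2: stacking, where a meta-learner is trained on the
# base classifiers' cross-validated predictions.
stacking = StackingClassifier(
    estimators=base_classifiers,
    final_estimator=LogisticRegression(max_iter=1000),
)
stacking.fit(X_train, y_train)

print("Voting accuracy:  ", voting.score(X_test, y_test))
print("Stacking accuracy:", stacking.score(X_test, y_test))

Repeating such a comparison while varying the number of base-layer classifiers is one straightforward way to probe how ensemble size interacts with the choice between Voting and Stacking, which is the question posed at the end of the abstract.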

Author information

Corresponding author

Correspondence to Santosh Kumar.

Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Tomer, V., Caton, S., Kumar, S., Kumar, B. (2020). A Selection Method for Computing the Ensemble Size of Base Classifier in Multiple Classifier System. In: Iyer, B., Rajurkar, A., Gudivada, V. (eds) Applied Computer Vision and Image Processing. Advances in Intelligent Systems and Computing, vol 1155. Springer, Singapore. https://doi.org/10.1007/978-981-15-4029-5_23

  • DOI: https://doi.org/10.1007/978-981-15-4029-5_23

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-4028-8

  • Online ISBN: 978-981-15-4029-5

  • eBook Packages: Engineering, Engineering (R0)
