
Weighted Hybrid Decision Tree Model for Random Forest Classifier

  • Original Contribution
  • Published in: Journal of The Institution of Engineers (India): Series B

Abstract

Random Forest is a supervised ensemble machine learning algorithm: it generates many classifiers and combines their outputs by majority voting, using decision trees as base classifiers. In decision tree induction, an attribute split/evaluation measure decides the best split at each node of the tree. The generalization error of a forest of tree classifiers depends on the strength of the individual trees and the correlation among them. The work presented in this paper concerns attribute split measures and proceeds in two steps. First, a theoretical study of five selected split measures is carried out and a comparison matrix is generated to capture the pros and cons of each measure. These theoretical results are then verified empirically: a random forest is generated with each of the five split measures in turn, i.e., a random forest using information gain, a random forest using gain ratio, and so on. Second, based on this theoretical and empirical analysis, a new hybrid decision tree model for the random forest classifier is proposed, in which the individual decision trees of the forest are grown using different split measures. The model is augmented by weighted voting based on the strength of each individual tree. The new approach shows a notable increase in the accuracy of random forest.
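To make the split measures named in the abstract concrete, here is a minimal sketch of how information gain and gain ratio can be computed for a candidate split, with the Gini index included for comparison. The code is illustrative only and is not taken from the paper; the function names are hypothetical.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (base 2) of a vector of class labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gini(labels):
    """Gini impurity of a vector of class labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def information_gain(parent, children):
    """Parent entropy minus the size-weighted entropy of the child nodes."""
    n = len(parent)
    return entropy(parent) - sum(len(c) / n * entropy(c) for c in children)

def gain_ratio(parent, children):
    """C4.5-style gain ratio: information gain normalised by split information."""
    n = len(parent)
    w = np.array([len(c) / n for c in children])
    split_info = -np.sum(w * np.log2(w))
    return information_gain(parent, children) / split_info if split_info > 0 else 0.0

# A ten-example node split cleanly into its two classes.
parent = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
left, right = parent[:4], parent[4:]
print(information_gain(parent, [left, right]))  # ~0.971 bits
print(gain_ratio(parent, [left, right]))        # 1.0 for this pure split
print(gini(parent))                             # 0.48 at the parent node
```

The hybrid model itself can be sketched in the same spirit. scikit-learn's DecisionTreeClassifier exposes only the Gini and entropy (information gain) split criteria, so the sketch below rotates between those two rather than the paper's five measures, and it uses out-of-bag accuracy as a stand-in for the paper's notion of the strength of an individual tree; both choices are assumptions made for illustration, not the authors' method.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def fit_hybrid_forest(X, y, n_trees=25, criteria=("gini", "entropy"), seed=0):
    """Grow trees on bootstrap samples, cycling through the split criteria,
    and weight each tree by its out-of-bag (OOB) accuracy (assumed proxy
    for 'strength'; the paper's exact weighting scheme is not shown here)."""
    rng = np.random.RandomState(seed)
    n = len(X)
    trees, weights = [], []
    for i in range(n_trees):
        idx = rng.randint(0, n, n)               # bootstrap sample
        oob = np.setdiff1d(np.arange(n), idx)    # rows this tree never saw
        tree = DecisionTreeClassifier(
            criterion=criteria[i % len(criteria)],  # hybrid: alternate measures
            max_features="sqrt",                    # random feature subsets
            random_state=rng.randint(1 << 30),
        ).fit(X[idx], y[idx])
        w = accuracy_score(y[oob], tree.predict(X[oob])) if len(oob) else 0.5
        trees.append(tree)
        weights.append(w)
    return trees, np.array(weights)

def predict_weighted(trees, weights, X, n_classes):
    """Weighted majority vote: each tree's vote counts in proportion to its weight."""
    votes = np.zeros((len(X), n_classes))
    for tree, w in zip(trees, weights):
        votes[np.arange(len(X)), tree.predict(X)] += w
    return votes.argmax(axis=1)

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
trees, weights = fit_hybrid_forest(X_tr, y_tr)
print(accuracy_score(y_te, predict_weighted(trees, weights, X_te, n_classes=3)))
```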



Author information


Corresponding author

Correspondence to Vrushali Y. Kulkarni.


About this article


Cite this article

Kulkarni, V.Y., Sinha, P.K. & Petare, M.C. Weighted Hybrid Decision Tree Model for Random Forest Classifier. J. Inst. Eng. India Ser. B 97, 209–217 (2016). https://doi.org/10.1007/s40031-014-0176-y

