Published by De Gruyter, August 7, 2020

Model-based random forests for ordinal regression

Muriel Buri and Torsten Hothorn

Abstract

We study and compare several variants of random forests tailored to prognostic models for ordinal outcomes. Models of the conditional odds function are employed to understand the various random forest flavours. Existing random forest variants for ordinal outcomes, such as Ordinal Forests and Conditional Inference Forests, are evaluated in the presence of a non-proportional odds impact of prognostic variables. We propose two novel random forest variants in the model-based transformation forest family, only one of which explicitly assumes proportional odds. These two novel transformation forests differ in the specification of the split procedures for the underlying ordinal trees. One of these split criteria is able to detect changes in non-proportional odds situations and the other one focuses on finding proportional-odds signals. We empirically evaluate the performance of the existing and proposed methods using a simulation study and illustrate the practical aspects of the procedures by a re-analysis of the respiratory sub-item in functional rating scales of patients suffering from Amyotrophic Lateral Sclerosis (ALS).


Corresponding author: Torsten Hothorn, Institut für Epidemiologie, Biostatistik und Prävention, Universität Zürich, Zürich, Switzerland

Funding source: Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung

Award Identifier / Grant number: 200021_184603

Funding source: Horizon 2020 Framework Programme

Award Identifier / Grant number: 681094

Funding source: Swiss State Secretariat for Education, Research and Innovation (SERI)

Award Identifier / Grant number: 15.0137

  1. Author contribution: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.

  2. Research funding: This project received funding from the Horizon 2020 Research and Innovation Programme of the European Union under grant agreement number 681094, and is supported by the Swiss State Secretariat for Education, Research and Innovation (SERI) under contract number 15.0137. Torsten Hothorn received funding from the Swiss National Science Foundation under grant number 200021_184603.

  3. Conflict of interest statement: The authors declare no conflicts of interest regarding this article.

A Additional results of the empirical evaluation

A.1 True log-likelihood vs. random forest based log-likelihoods

A.1.1 Log-likelihood differences

Figure 4 is the same as Figure 1, but based on forests of 2000 trees instead of 250.

A.1.2 Log-likelihood

Figures 1 and 4 present log-likelihood differences; for the same simulations, the raw log-likelihoods (without centring with respect to the log-likelihood of the true data-generating process) are given in Figures 5 and 6 with 250 and 2000 trees, respectively.

Figure 4: 
Log-likelihood differences for the quantification of the performance of the five competitors. Same as Figure 1 but with forests of 2000 instead of 250 trees.

Figure 5: 
Log-likelihoods for the quantification of the performance of the five competitors: Ordinal Forests (equal), Ordinal Forests (proportional), Conditional Inference Forests (CForest), ordinal transformation forests assuming proportional odds (OTF(α)), and ordinal transformation forests allowing for non-proportional odds (OTF(ϑ)). Values close to the True Log-Likelihood are preferable. The four types of effects (absent effect “No”, proportional odds “PO”, non-proportional odds “Non-PO”, or a combination of PO and Non-PO “Combined”) are simulated with 100 repetitions for low- and high-dimensional prognostic variables. Each grey line connects the results obtained for a particular repetition. Each forest is an aggregation of 250 trees.

Figure 6: 
Log-likelihoods for the quantification of the performance of the five competitors. Same as Figure 5 but with forests of 2000 instead of 250 trees.

A.2 Kullback-Leibler divergence

The out-of-sample log-likelihood assesses the quality of a predictive distribution obtained from any of the variants of Random Forests for Ordinal Regression in light of an independent validation sample, in which the ordinal response was sampled from the underlying data-generating process. This comparison might be unfair because Ordinal Forests were designed to focus on predictions of the outcome categories, not on predictive distributions. Hence, an external referee suggested using an alternative performance measure that assesses the quality of a random forest based on the predicted outcome categories.

Let p(c_k | x_ı) = P(Y_ı = c_k | x_ı) denote the true conditional probability density of the ıth observation in the validation sample over the K possible categories c_1, …, c_K, which is known in the case of our simulation study for a configuration x_ı of the explanatory variables. With p̂(c_k | x_ı) = P̂(Y_ı = c_k | x_ı) we denote the predictive conditional probability density obtained from any of the random forest variants studied in this paper.

The Kullback-Leibler divergence evaluated for the Ñ validation samples,

$$\text{KL} = \sum_{\imath=N+1}^{N+\tilde{N}} \sum_{k=1}^{K} p(c_k \mid x_\imath) \log\left(\frac{p(c_k \mid x_\imath)}{\hat{p}(c_k \mid x_\imath)}\right), \tag{7}$$

allows a direct comparison of true and estimated conditional probability densities. The results for each of the eight simulation scenarios for 250 and 2000 trees are given in Figures 7 and 8. The same conclusions as drawn in the main text hold, i.e., OTF(α) performed best in the proportional odds scenario and OTF( ϑ ) seemed to be able to detect deviations from this model assumption. Ordinal Forests performed less well.
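The computation in Equation (7) reduces to a double sum over validation samples and categories. The following minimal NumPy sketch (the function name `kl_divergence` and the toy probabilities are illustrative, not taken from the paper's code) computes it, treating terms with p = 0 as zero by the usual 0 · log 0 = 0 convention:

```python
import numpy as np

def kl_divergence(p, p_hat, eps=1e-12):
    """Kullback-Leibler divergence between true (p) and predicted (p_hat)
    conditional category probabilities, summed over validation samples.
    Rows index observations, columns the K ordinal categories."""
    p = np.asarray(p, dtype=float)
    # Clip the prediction away from zero to avoid log(0).
    p_hat = np.clip(np.asarray(p_hat, dtype=float), eps, None)
    # Terms with p == 0 contribute 0 by convention (0 * log 0 = 0).
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log(p / p_hat), 0.0)
    return terms.sum()

# Toy check: identical distributions give divergence 0,
# any other prediction gives a strictly positive value.
p = np.array([[0.2, 0.5, 0.3], [0.6, 0.3, 0.1]])
print(kl_divergence(p, p))                          # 0.0
print(kl_divergence(p, np.full((2, 3), 1 / 3)) > 0)  # True
```

Clipping p̂ away from zero guards against infinite divergence when a forest assigns vanishing probability to a category that has positive true probability.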

Figure 7: 
Kullback-Leibler divergence KL comparing the true and estimated conditional probability densities according to Equation (7). The performance of the five competitors: Ordinal Forests (equal), Ordinal Forests (proportional), Conditional Inference Forests (CForest), ordinal transformation forests assuming proportional odds (OTF(α)), and ordinal transformation forests allowing for non-proportional odds (OTF(ϑ)) is assessed. Small values are preferable. The four types of effects (absent effect “No”, proportional odds “PO”, non-proportional odds “Non-PO”, or a combination of PO and Non-PO “Combined”) are simulated with 100 repetitions for low- and high-dimensional prognostic variables. Each grey line connects the results obtained for a particular repetition. Each forest is an aggregation of 250 trees.

Figure 8: 
Kullback-Leibler divergence KL comparing the true and estimated conditional probability densities according to Equation (7). Same as Figure 7 but with forests of 2000 instead of 250 trees.

As a special case, the predicted Dirac distribution putting mass one on category k′ of the K categories is denoted by d_k′(c_k | x_ı) = I(k = k′). This distribution represents the empirical density of an observation Y_ı = c_k′ from the true data-generating process or a point prediction Ŷ_ı = c_k′.

When replacing the true conditional probability density p with a realisation Y_ı from this density, i.e., putting empirical mass one on an observation Y_ı with explanatory variables x_ı, the Kullback-Leibler divergence is equivalent to the negative log-likelihood

$$\text{KL}_1 = \sum_{\imath=N+1}^{N+\tilde{N}} \sum_{k=1}^{K} d_{Y_\imath}(c_k \mid x_\imath) \log\left(\frac{d_{Y_\imath}(c_k \mid x_\imath)}{\hat{p}(c_k \mid x_\imath)}\right) = -\sum_{\imath=N+1}^{N+\tilde{N}} \log\left(\hat{p}(Y_\imath \mid x_\imath)\right). \tag{8}$$
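The reduction of KL₁ to a negative log-likelihood can be checked numerically. In this sketch (the name `kl1` and the toy values are illustrative), the Dirac weights simply select the predicted probability of the observed category:

```python
import numpy as np

def kl1(y, p_hat, eps=1e-12):
    """KL_1 of Equation (8): Kullback-Leibler divergence with empirical
    mass one on each observed category y_i; this reduces to the negative
    log-likelihood of the predictive densities p_hat."""
    p_hat = np.clip(np.asarray(p_hat, dtype=float), eps, None)
    n = len(y)
    # Pick out p_hat(Y_i | x_i) for each observation and sum -log.
    return -np.log(p_hat[np.arange(n), y]).sum()

# Observed categories (0-based) and predictive densities for two samples.
y = np.array([1, 0])
p_hat = np.array([[0.2, 0.5, 0.3], [0.6, 0.3, 0.1]])
# The Dirac-based divergence equals the negative log-likelihood:
print(np.isclose(kl1(y, p_hat), -(np.log(0.5) + np.log(0.6))))  # True
```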

For the sake of completeness, Figures 9 and 10 are printed in addition to Figures 5 and 6, again for 250 and 2000 trees.

Figure 9: 
Results of the Kullback-Leibler divergence KL₁ according to Equation (8). KL₁ compares the realisation Y_ı of the true conditional probability density p, i.e., putting empirical mass one on an observation Y_ı with explanatory variables x_ı, with the estimated conditional probability density p̂(c_k | x_ı). The performance of the five competitors: Ordinal Forests (equal), Ordinal Forests (proportional), Conditional Inference Forests (CForest), ordinal transformation forests assuming proportional odds (OTF(α)), and ordinal transformation forests allowing for non-proportional odds (OTF(ϑ)) is assessed. Small values are preferable. The four types of effects (absent effect “No”, proportional odds “PO”, non-proportional odds “Non-PO”, or a combination of PO and Non-PO “Combined”) are simulated with 100 repetitions for low- and high-dimensional prognostic variables. Each grey line connects the results obtained for a particular repetition. Each forest is an aggregation of 250 trees.

Figure 10: 
Results of the Kullback-Leibler divergence KL₁ according to Equation (8). Same as Figure 9 but with forests of 2000 instead of 250 trees.

Finally, we can compare a point prediction Ŷ_ı (the default output of predict.ord() from ordinalForest [5]) with the true probability density p by using a Dirac distribution putting mass one on Ŷ_ı in

$$\text{KL}_2 = \sum_{\imath=N+1}^{N+\tilde{N}} \sum_{k=1}^{K} d_{\hat{Y}_\imath}(c_k \mid x_\imath) \log\left(\frac{d_{\hat{Y}_\imath}(c_k \mid x_\imath)}{p(c_k \mid x_\imath)}\right) = -\sum_{\imath=N+1}^{N+\tilde{N}} \log\left(p(\hat{Y}_\imath \mid x_\imath)\right). \tag{9}$$

This measure provides a fair comparison with Ordinal Forests, because it requires no additional effort to obtain a predictive distribution. The results were, however, in line with the findings from the other two Kullback-Leibler divergence metrics and the log-likelihood evaluations (see Figures 11 and 12).
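A corresponding sketch for Equation (9) (again with illustrative names and toy values, not the paper's code) evaluates the true density at the predicted categories, so it rewards point predictions that land on high-probability categories:

```python
import numpy as np

def kl2(y_hat, p, eps=1e-12):
    """KL_2 of Equation (9): Dirac mass on each point prediction y_hat_i,
    evaluated against the true conditional density p; this reduces to the
    negative log true probability of the predicted category."""
    p = np.clip(np.asarray(p, dtype=float), eps, None)
    n = len(y_hat)
    return -np.log(p[np.arange(n), y_hat]).sum()

# True conditional densities for two validation samples (3 categories).
p_true = np.array([[0.2, 0.5, 0.3], [0.6, 0.3, 0.1]])
# KL_2 is minimised by predicting the modal category of p_true:
print(kl2(np.array([1, 0]), p_true) < kl2(np.array([2, 2]), p_true))  # True
```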

Figure 11: 
Results of the Kullback-Leibler divergence KL₂ according to Equation (9). KL₂ compares the point prediction Ŷ_ı estimated by the random forest algorithm with the true probability density p by using a Dirac distribution putting mass one on Ŷ_ı. The performance of the five competitors: Ordinal Forests (equal), Ordinal Forests (proportional), Conditional Inference Forests (CForest), ordinal transformation forests assuming proportional odds (OTF(α)), and ordinal transformation forests allowing for non-proportional odds (OTF(ϑ)) is assessed. Small values are preferable. The four types of effects (absent effect “No”, proportional odds “PO”, non-proportional odds “Non-PO”, or a combination of PO and Non-PO “Combined”) are simulated with 100 repetitions for low- and high-dimensional prognostic variables. Each grey line connects the results obtained for a particular repetition. Each forest is an aggregation of 250 trees.

Figure 12: 
Results of the Kullback-Leibler divergence KL₂ according to Equation (9). Same as Figure 11 but with forests of 2000 instead of 250 trees.

References

1. Whegang, SY, Basco, LK, Gwét, H, Thalabard, JC. Analysis of an ordinal outcome in a multicentric randomized controlled trial: Application to a 3-arm anti-malarial drug trial in Cameroon. BMC Med Res Methodol 2010;10:58. https://doi.org/10.1186/1471-2288-10-58.

2. Roozenbeek, B, Lingsma, HF, Perel, P, Edwards, P, Roberts, I, Murray, GD, et al. The added value of ordinal analysis in clinical trials: An example in traumatic brain injury. Crit Care 2011;15:R127. https://doi.org/10.1186/cc10240.

3. Tanadini, LG, Steeves, JD, Curt, A, Hothorn, T. Autoregressive transitional ordinal model to test for treatment effect in neurological trials with complex endpoints. BMC Med Res Methodol 2016;16:149. https://doi.org/10.1186/s12874-016-0251-y.

4. Peterson, RL, Vock, DM, Powers, JH, Emery, S, Cruz, EF, Hunsberger, S, et al. Analysis of an ordinal endpoint for use in evaluating treatments for severe influenza requiring hospitalization. Clin Trials 2017;14:264–76. https://doi.org/10.1177/1740774517697919.

5. Hornung, R. Ordinal forests. J Classif 2019. https://doi.org/10.1007/s00357-018-9302-x.

6. Hothorn, T, Hornik, K, Zeileis, A. Unbiased recursive partitioning: A conditional inference framework. J Comput Graph Stat 2006;15:651–74. https://doi.org/10.1198/106186006x133933.

7. Moons, KGM, Royston, P, Vergouwe, Y, Grobbee, DE, Altman, DG. Prognosis and prognostic research: What, why, and how? The BMJ 2009;338. https://doi.org/10.1136/bmj.b375.

8. Royston, P, Moons, KG, Altman, DG, Vergouwe, Y. Prognosis and prognostic research: Developing a prognostic model. The BMJ 2009;338:b604. https://doi.org/10.1136/bmj.b604.

9. Hemingway, H, Croft, P, Perel, P, Hayden, JA, Abrams, K, Timmis, A, et al. Prognosis research strategy (PROGRESS) 1: A framework for researching clinical outcomes. The BMJ 2013;346. https://doi.org/10.1136/bmj.e5595.

10. Riley, RD, Hayden, JA, Steyerberg, EW, Moons, KG, Abrams, K, Kyzas, PA, et al. Prognosis research strategy (PROGRESS) 2: Prognostic factor research. PLoS Med 2013;10:e1001380. https://doi.org/10.1371/journal.pmed.1001380.

11. Steyerberg, EW, Moons, KG, van der Windt, DA, Hayden, JA, Perel, P, Schroter, S, et al. Prognosis research strategy (PROGRESS) 3: Prognostic model research. PLoS Med 2013;10:e1001381. https://doi.org/10.1371/journal.pmed.1001381.

12. Hingorani, AD, van der Windt, DA, Riley, RD, Abrams, K, Moons, KGM, Steyerberg, EW, et al. Prognosis research strategy (PROGRESS) 4: Stratified medicine research. The BMJ 2013;346. https://doi.org/10.1136/bmj.e5793.

13. Hothorn, T, Jung, HH. RandomForest4Life: A random forest for predicting ALS disease progression. Amyotroph Lateral Scler Frontotemporal Degener 2014;15:444–52. https://doi.org/10.3109/21678421.2014.893361.

14. Ong, ML, Tan, PF, Holbrook, JD. Predicting functional decline and survival in Amyotrophic Lateral Sclerosis. PLoS ONE 2017;12:e0174925. https://doi.org/10.1371/journal.pone.0174925.

15. Pfohl, SR, Kim, RB, Coan, GS, Mitchell, CS. Unraveling the complexity of Amyotrophic Lateral Sclerosis survival prediction. Front Neuroinform 2018;12:12. https://doi.org/10.3389/fninf.2018.00036.

16. Beaulieu-Jones, BK, Greene, CS, The Pooled Resource Open-Access ALS Clinical Trials. Semi-supervised learning of the electronic health record for phenotype stratification. J Biomed Inform 2016;64:168–78. https://doi.org/10.1016/j.jbi.2016.10.007.

17. Seibold, H, Zeileis, A, Hothorn, T. Individual treatment effect prediction for Amyotrophic Lateral Sclerosis patients. Stat Methods Med Res 2018;27:3104–25. https://doi.org/10.1177/0962280217693034.

18. Hothorn, T, Zeileis, A. Transformation forests. Technical report, arXiv 1701.02110, v2; 2017. URL: https://arxiv.org/abs/1701.02110.

19. Agresti, A. Categorical data analysis, 2nd ed. Hoboken, NJ: John Wiley & Sons; 2002. https://doi.org/10.1002/0471249688.

20. Winell, H, Lindbäck, J. A general score-independent test for order-restricted inference. Stat Med 2018;37:3078–90. https://doi.org/10.1002/sim.7690.

21. Breiman, L, Friedman, JH, Olshen, RA, Stone, CJ. Classification and regression trees. California: Wadsworth; 1984.

22. Atassi, N, Berry, J, Shui, A, Zach, N, Sherman, A, Sinani, E, et al. The PRO-ACT database: Design, initial analyses, and predictive features. Neurology 2014;83:1719–25. https://doi.org/10.1212/wnl.0000000000000951.

23. Chiò, A, Logroscino, G, Hardiman, O, Swingler, R, Mitchell, D, Beghi, E, et al., on behalf of the Eurals Consortium. Prognostic factors in ALS: A critical review. Amyotroph Lateral Scler 2009;10:310–23. https://doi.org/10.3109/17482960802566824.

24. Kimura, F, Fujimura, C, Ishida, S, Nakajima, H, Furutama, D, Uehara, H, et al. Progression rate of ALSFRS-R at time of diagnosis predicts survival time in ALS. Neurology 2006;66:265–67. https://doi.org/10.1212/01.wnl.0000194316.91908.8a.

25. Zoccolella, S, Beghi, E, Palagano, G, Fraddosio, A, Guerra, V, Samarelli, V, et al. Analysis of survival and prognostic factors in amyotrophic lateral sclerosis: A population based study. J Neurol Neurosurg Psychiatr 2008;79:33–7. https://doi.org/10.1136/jnnp.2007.118018.

26. Fujimura-Kiyono, C, Kimura, F, Ishida, S, Nakajima, H, Hosokawa, T, Sugino, M, et al. Onset and spreading patterns of lower motor neuron involvements predict survival in sporadic amyotrophic lateral sclerosis. J Neurol Neurosurg Psychiatr 2011;82:1244–9. https://doi.org/10.1136/jnnp-2011-300141.

27. Beaulieu-Jones, BK, Greene, CS, The Pooled Resource Open-Access ALS Clinical Trials. Semi-supervised learning of the electronic health record for phenotype stratification. J Biomed Inform 2016;64:168–78. https://doi.org/10.1016/j.jbi.2016.10.007.

28. Mandrioli, J, Rosi, E, Fini, N, Fasano, A, Raggi, S, Fantuzzi, AL, et al. Changes in routine laboratory tests and survival in Amyotrophic Lateral Sclerosis. Neurol Sci 2017;38:2177–82. https://doi.org/10.1007/s10072-017-3138-8.

29. Brooks, BR, Sanjak, M, Ringel, S, England, J, Brinkmann, J, Pestronk, A, et al. The amyotrophic lateral sclerosis functional rating scale: Assessment of activities of daily living in patients with amyotrophic lateral sclerosis. Arch Neurol 1996;53:141–7. https://doi.org/10.1001/archneur.1996.00550020045014.

30. Cedarbaum, JM, Stambler, N, Malta, E, Fuller, C, Hilt, D, Thurmond, B, et al. The ALSFRS-R: A revised ALS functional rating scale that incorporates assessments of respiratory function. J Neurol Sci 1999;169:13–21. https://doi.org/10.1016/s0022-510x(99)00210-5.

31. Athey, S, Tibshirani, J, Wager, S. Generalized random forests. Ann Stat 2019;47:1148–78. https://doi.org/10.1214/18-AOS1709.

32. Schlosser, L, Hothorn, T, Stauffer, R, Zeileis, A. Distributional regression forests for probabilistic precipitation forecasting in complex terrain. Ann Appl Stat 2019;13:1564–89. https://doi.org/10.1214/19-aoas1247.

33. Hothorn, T, Lausen, B, Benner, A, Radespiel-Tröger, M. Bagging survival trees. Stat Med 2004;23:77–91. https://doi.org/10.1002/sim.1593.

34. Meinshausen, N. Quantile regression forests. J Mach Learn Res 2006;7:983–99. URL: http://jmlr.org/papers/v7/meinshausen06a.html.

35. Lin, Y, Jeon, Y. Random forests and adaptive nearest neighbors. J Am Stat Assoc 2006;101:578–90. https://doi.org/10.1198/016214505000001230.

36. Hothorn, T, Möst, L, Bühlmann, P. Most likely transformations. Scand J Stat 2018;45:110–34. https://doi.org/10.1111/sjos.12291.

37. Schmid, M, Hothorn, T, Maloney, KO, Weller, DE, Potapov, S. Geoadditive regression modeling of stream biological condition. Environ Ecol Stat 2011;18:709–33. https://doi.org/10.1007/s10651-010-0158-4.

38. R Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2020. URL: https://www.R-project.org/.

39. Hornung, R. ordinalForest: Ordinal forests: Prediction and variable ranking with ordinal target variables; 2019. R package version 2.3-1. URL: https://CRAN.R-project.org/package=ordinalForest.

40. Hothorn, T, Hornik, K, Strobl, C, Zeileis, A. party: A laboratory for recursive partytioning; 2019. R package version 1.3-3. URL: https://CRAN.R-project.org/package=party.

41. Hothorn, T. trtf: Transformation trees and forests; 2019. R package version 0.3-6. URL: https://CRAN.R-project.org/package=trtf.

42. Friedman, JH. Multivariate adaptive regression splines. Ann Stat 1991;19:1–67. https://doi.org/10.1214/aos/1176347963.

43. Küffner, R, Zach, N, Norel, R, Hawe, J, Schoenfeld, D, Wang, L, et al. Crowdsourced analysis of clinical trial data to predict amyotrophic lateral sclerosis progression. Nat Biotechnol 2015;33:51–7. https://doi.org/10.1038/nbt.3051.

44. Cohen, J. Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit. Psychol Bull 1968;70:213. https://doi.org/10.1037/h0026256.

45. McCullagh, P. Regression models for ordinal data. J Roy Stat Soc B Stat Methodol 1980;42:109–27. https://doi.org/10.1111/j.2517-6161.1980.tb01109.x.

46. Agresti, A. Analysis of ordinal categorical data, vol 656. Hoboken: John Wiley & Sons; 2010. https://doi.org/10.1002/9780470594001.

47. Tutz, G. Regression for categorical data, vol 34. New York: Cambridge University Press; 2011. https://doi.org/10.1017/CBO9780511842061.

48. Doksum, KA, Gasko, M. On a correspondence between models in binary regression analysis and in survival analysis. Int Stat Rev 1990;58:243–52. https://doi.org/10.2307/1403807.

49. Korepanova, N, Seibold, H, Steffen, V, Hothorn, T. Survival forests under test: Impact of the proportional hazards assumption on prognostic and predictive forests for ALS survival. Stat Methods Med Res 2019. https://doi.org/10.1177/0962280219862586.

50. Hothorn, T. TH.data: TH's data archive; 2019. R package version 1.0-10. URL: https://CRAN.R-project.org/package=TH.data.

51. Hothorn, T, Zeileis, A. partykit: A modular toolkit for recursive partytioning in R. J Mach Learn Res 2015;16:3905–9. URL: http://jmlr.org/papers/v16/hothorn15a.html.

52. Hothorn, T. Most likely transformations: The mlt package. J Stat Software 2020;92:1–68. https://doi.org/10.18637/jss.v092.i01.

Received: 2019-05-31
Accepted: 2020-03-30
Published Online: 2020-08-07

© 2020 Walter de Gruyter GmbH, Berlin/Boston
