Measuring Educational Outcomes for At-Risk Children and Youth: Issues with the Validity of Self-Reported Data

  • Original Paper
  • Published in Child & Youth Care Forum

Abstract

Background

Youth programs often rely on self-reported data without clear evidence of the accuracy of these reports. Although the validity of self-reporting has been confirmed among some high school and college-age students, the extant literature lacks a serious investigation among younger children. Moreover, there is theoretical evidence suggesting that findings on older students may not generalize to younger populations.

Objective

The purpose of this study is to examine the validity of academic and attendance self-reporting among children and youth.

Method

This study relies on original data collected from 288 children and youth using Big Brothers Big Sisters enrollment and assessment data, paired with school records from two local school divisions. We first used percent agreement, validity coefficients, and average-measures ICC scores to assess the response validity of self-reported academic and attendance measures. We then estimated the effects of several moderating factors on reporting agreement (using standardized difference scores). Finally, we modeled cross-informant associations with child-reported GPA using a moderated multiple regression model.
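The first two agreement statistics named above (percent agreement and the validity coefficient) can be sketched in a few lines. The data below are invented for illustration only, with letter grades coded A = 4 through F = 0; they are not drawn from the study, and the ICC and regression steps are omitted.

```python
from statistics import fmean

# Hypothetical toy data: self-reported vs. school-record grades (A=4 ... F=0).
# These values are made up for illustration and do not come from the study.
self_report = [4, 3, 3, 2, 4, 1, 3, 2]
school_record = [4, 3, 2, 2, 3, 1, 3, 1]

# Percent agreement: share of exact matches between the two informants.
agreement = sum(s == r for s, r in zip(self_report, school_record)) / len(self_report)

def pearson_r(x, y):
    """Validity coefficient: Pearson correlation between the two reports."""
    mx, my = fmean(x), fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r(self_report, school_record)
print(f"percent agreement = {agreement:.2f}, validity coefficient r = {r:.2f}")
```

Note the contrast the two statistics capture: the toy reports correlate strongly yet match exactly on only five of eight cases, which is why the study reports both measures rather than either alone.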

Results

Findings indicate that children and youth report their individual grades and attendance poorly. In particular, younger and lower-performing children are more likely to report inaccurately. However, there is some evidence that a mean construct measure of major-subject GPA is a slightly more valid indicator of academic achievement.

Conclusion

Findings suggest that researchers and practitioners should exercise caution in using self-reported grades and attendance indicators from young and low-performing students.

Notes

  1. Social desirability has been defined as “the desire to revise a response before communicating it to a researcher to protect self-image or inaccurately project an image of academic performance” (Cole et al. 2012, p. 2).

  2. While most researchers attribute overreporting to social desirability, others have attributed inaccuracies to recall failure and biases created by the positive reconstruction of memory (Bahrick et al. 1993, 1996).

  3. Frequently reported disabilities were ADHD, speech and communication delays, and teacher-reported learning delays. Common emotional problems, as reported by parents, included having family problems, displaying anger or anxiety issues, or receiving counseling services.

  4. In tests conducted among adults, psychometric tests of the depression inventory indicate scale reliability and validity: Cronbach’s alpha ranged from .85 in community samples to .90 in psychiatric samples, and test–retest reliability shows moderate correlations (r = .51–.67).

  5. The Director of Programs reviewed all determinations prior to survey administration, which would mitigate any issues with multiple rater consistency in sampling.

  6. Cicchetti (1994) provides commonly cited ICC cutoffs for qualitative ratings of agreement. Values less than .40 are considered weak, values between .40 and .59 fair, values between .60 and .74 good, and values .75 and higher excellent.

  7. Our criteria for assessing the validity coefficient were .00–.3 as weak, .3–.59 as moderate, and .6 or above as strong.
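The cutoffs in notes 6 and 7 amount to simple banded classifications, sketched below. How the exact boundary values (e.g. whether .3 itself is weak or moderate) should be assigned is an assumed interpretation, since the notes leave it ambiguous.

```python
def icc_rating(icc):
    """Qualitative rating of an ICC using Cicchetti's (1994) cutoffs (note 6)."""
    if icc < 0.40:
        return "weak"
    if icc < 0.60:
        return "fair"
    if icc < 0.75:
        return "good"
    return "excellent"

def validity_rating(r):
    """Rating of a validity coefficient per the authors' criteria (note 7).

    Handling of the .3 and .6 boundaries is an assumption, not stated in the note.
    """
    if r < 0.30:
        return "weak"
    if r < 0.60:
        return "moderate"
    return "strong"
```

For example, an ICC of .55 would be rated "fair" under Cicchetti's scheme, while a validity coefficient of .55 would be "moderate" under the authors' scheme.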

References

  • Alexander, K. L., Entwisle, D. R., & Bedinger, S. D. (1994). When expectations work: Race and socioeconomic differences in school performance. Social Psychology Quarterly, 57, 283–299. http://spq.sagepub.com/

  • Anaya, G. (1999). Accuracy of self-reported test scores. College and University, 75(2), 13–19.

  • Arthur, M. W., Hawkins, J. D., Pollard, J., Catalano, R. F., & Baglioni, A. J, Jr. (2002). Measuring risk and protective factors for substance use, delinquency, and other adolescent problem behaviors: The communities that care youth survey. Evaluation Review, 26, 575. doi:10.1177/0193841X0202600601.

  • Bahrick, H. P., Hall, L. K., & Berger, S. A. (1996). Accuracy and distortion in memory for high school grades. Psychological Science, 7, 265–271. doi:10.1111/j.1467-9280.1996.tb00372.x.

  • Bahrick, H. P., Hall, L. K., & Dunlosky, J. (1993). Reconstructive processing of memory content for high versus low test scores and grades. Applied Cognitive Psychology, 7, 1–10. doi:10.1002/acp.2350070102.

  • Blatchford, P. (1997). Students’ self assessment of academic attainment: Accuracy and stability from 7 to 16 years and influence of domain and social comparison group. Educational Psychology: An International Journal of Experimental Educational Psychology, 17(3), 345–359. doi:10.1111/j.2044-8279.1997.tb01235.x.

  • Bowman, N. A., & Hill, P. L. (2011). Measuring how college affects students: Social desirability and other potential biases in college student self-reported gains. New Directions for Institutional Research, 150, 73–85. doi:10.1002/ir.390.

  • Butler, R. (1990). The effects of mastery and competitive conditions on self-assessment at different ages. Child Development, 61, 201–210. doi:10.1111/j.1467-8624.1990.tb02772.x.

  • Cassady, J. C. (2001). Self-reported GPA and SAT: A methodological note. Practical Assessment, Research & Evaluation 7(12). http://pareonline.net/Home.htm

  • Catalano, R. F., Berglund, M. L., Ryan, J. A. M., Lonczak, H. S., & Hawkins, J. D. (2002). Positive youth development in the United States: Research findings on evaluations of positive youth development programs. Prevention & Treatment, 5 (15). doi: 10.1037/1522-3736.5.1.515a

  • Cicchetti, D. V. (1994). Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychological Assessment 6(4), 284–290. http://www.apa.org/pubs/journals/pas/

  • Cole, J. S., Rocconi, L., & Gonyea, R. M. (2012). Accuracy of self-reported grades: Implications for research. Paper presented at the annual meeting of the Association for Institutional Research, New Orleans, Louisiana. http://cpr.iub.edu/uploads/AIR%202012%20Cole%20Rocconi%20Gonyea.pdf

  • Connell, J., & Ilardi, B. C. (1987). Self-system concomitants of discrepancies between children’s and teachers’ evaluations of academic competence. Child Development, 58, 1297–1307. doi:10.2307/1130622.

  • Crockett, L. J., Schulenberg, J. E., & Peterson, A. C. (1987). Congruence between objective and self-reported data in a sample of young adolescents. Journal of Adolescent Research, 2(4), 383–392. doi:10.1177/074355488724006.

  • Crowne, D. P., & Marlowe, D. (1964). The approval motive: Studies in evaluative dependence. New York: Wiley.

  • De Los Reyes, A. (2011). More than measurement error: Discovering meaning behind informant discrepancies in clinical assessment of children and adolescents. Journal of Clinical Child and Adolescent Psychology, 40, 1–9. doi:10.1080/15374416.2011.533405.

  • De Los Reyes, A., & Kazdin, A. E. (2004). Measuring informant discrepancies in clinical child research. Psychological Assessment, 16, 330–334. doi:10.1037/1040-3590.16.3.330.

  • De Los Reyes, A., & Kazdin, A. E. (2005). Informant discrepancies in the assessment of child psychopathology: A critical review, theoretical framework, and recommendations for further study. Psychological Bulletin, 131(4), 483–509. doi:10.1037/0033-2909.131.4.483.

  • De Los Reyes, A., Thomas, S. A., Goodman, K. L., & Kundey, S. M. A. (2013). Principles underlying the use of multiple informants’ reports. Annual Review of Clinical Psychology, 9, 123–149. doi:10.1146/annurev-clinpsy-050212-185617.

  • Dobbins, G. H., Farh, J. L., & Werbel, J. D. (1993). The influence of self-monitoring on inflation of grade-point averages for research and selection purposes. Journal of Applied Social Psychology, 23(4), 321–334. doi:10.1111/j.1559-1816.1993.tb01090.x.

  • DuBois, D. L., Holloway, B. E., Valentine, J. C., & Cooper, H. (2002). Effectiveness of mentoring programs for youth: A meta-analytic review. American Journal of Community Psychology, 30(2), 157–197.

  • DuBois, D. L., Portillo, N., Rhodes, J. E., Silverthorn, N., & Valentine, J. C. (2011). How effective are mentoring programs for youth? A systematic assessment of the evidence. Psychological Science in the Public Interest, 12(2), 57–91. doi:10.1177/1529100611414806.

  • Dunnette, M. D. (1952). Accuracy of students’ reported honor point averages. Journal of Applied Psychology, 26, 20–22.

  • Escribano, C., & Díaz-Morales, J. F. (2014). Are self-reported grades a good estimate of academic achievement?/Son las notas auto-informadas una buena estimación del rendimiento académico? Estudios de Psicología: Studies in Psychology, 35(1), 168–182. doi:10.1080/02109395.2014.893650.

  • Fetters, W. B., Stowe, P. S., & Owings, J. A. (1984). Quality of responses of high school students to questionnaire items. Washington, DC: National Center for Education Statistics. http://nces.ed.gov.

  • Försterling, F., & Binser, M. J. (2002). Depression, school performance, and the veridicality of perceived grades and causal attributions. Personality and Social Psychology Bulletin, 28(10), 1441–1449. doi:10.1177/014616702236875.

  • Frucot, V. G., & Cook, G. L. (1994). Further research on the accuracy of students’ self-reported grade point averages, SAT scores, and course grades. Perceptual and Motor Skills, 79, 743–746.

  • Goldman, B. A., Flake, W. L., & Matheson, M. B. (1990). Accuracy of college students’ perceptions of their SAT scores, high school and college grade point averages relative to their ability. Perceptual and Motor Skills, 70, 514.

  • Gonyea, R. M. (2005). Self-reported data in institutional research: Review and recommendations. New Directions for Institutional Research, 127, 73–89. doi:10.1002/ir.156.

  • Grossman, J. B. (2009). Evaluating Mentoring Programs. Public/Private Ventures. Retrieved April 15 2014. http://ppv.issuelab.org

  • Hall, G., Yohalem, N., Tolman, J., & Wilson, A. (2003). How afterschool programs can most effectively promote positive youth development as a support to academic achievement: A report commissioned by the Boston After-School for All Partnership. National Institute on Out-of-School Time. Retrieved July 6 2014. http://www.vamentoring.org

  • Hamilton, L. C. (1981). Sex differences in self-report errors: A note of caution. Journal of Educational Measurement, 18(4), 221–228. doi:10.1111/j.1745-3984.1981.tb00855.

  • Herrera, C., DuBois, D. L., & Grossman, J. B. (2013). The role of risk: Mentoring experiences and outcomes for youth with varying risk profiles. MDRC. www.mdrc.org

  • Kaderavek, J. N., Gillam, R. B., Ukrainetz, T. A., Justice, L. M., & Eisenberg, S. N. (2004). School-age children’s self-assessment of oral narrative production. Communication Disorders Quarterly, 26(1), 37–48. doi:10.1177/15257401040260010401.

  • Kraemer, H. C., Measelle, J. R., Ablow, J. C., Essex, M. J., Boyce, W. T., & Kupfer, D. J. (2003). A new approach to integrating data from multiple informants in psychiatric assessment and research: Mixing and matching contexts and perspectives. American Journal of Psychiatry, 160, 1566–1577. doi:10.1176/appi.ajp.160.9.1566.

  • Kuncel, N. R., Credé, M., & Thomas, L. L. (2005). The validity of self-reported grade point averages, class ranks, and test scores: A meta-analysis and review of the literature. Review of Educational Research, 75(1), 63–82. doi:10.3102/00346543075001063.

  • Laird, R. D., & De Los Reyes, A. (2013). Testing informant discrepancies as predictors of adolescent psychopathology: Why difference scores cannot tell you what you want to know and how polynomial regression may. Journal of Abnormal Child Psychology, 41, 1–14. doi:10.1007/s10802-012-9659-y.

  • Laird, R. D., & Weems, C. F. (2011). The equivalence of regression models using difference scores and models using separate scores for each informant: Implications for the study of information discrepancies. Psychological Assessment, 23, 388–397. doi:10.1037/a0021926.

  • Martin, C. L., & Nagao, D. H. (1989). Some effects of computerized interviewing on job applicant responses. Journal of Applied Psychology, 74, 72–80. http://psycnet.apa.org

  • Mayer, R. E., Stull, A. T., Campbell, J., Almeroth, K., Bimber, B., Chun, D., & Knight, A. (2007). Overestimation bias in self-reported SAT scores. Educational Psychology Review, 19(4), 443–454. doi:10.1007/s10648-006-9034-z.

  • Radloff, L. S. (1977). The CES-D scale: A self-report depression scale for research in the general population. Applied Psychological Measurement, 1(3), 385–401. doi:10.1177/014662167700100306.

  • Radloff, L. S., & Locke, B. Z. (1986). The community mental health assessment survey and the CES-D Scale. In M. M. Weissman, J. K. Myers, & C. E. Ross (Eds.), Community surveys of psychiatric disorders (pp. 177–189). New Brunswick, NJ: Rutgers University Press.

  • Ross, J. A. (2006). The reliability, validity, and utility of self-assessment. Practical Research, Evaluation & Assessment, 11(10), 1–13. http://pareonline.net

  • Sawyer, R., Laing, J., & Houston, W. (1988). Accuracy of self-reported high school courses and grades of college-bound students. ACT Research Report Series, 88(1). Iowa City, IA: American College Testing Program. www.act.org

  • Schiel, J., & Noble, J. (1991). Accuracy of self-reported course work and grade information of high school sophomores. ACT Research Report Series. 91(6). Iowa City, IA: American College Testing Program. www.act.org

  • Shaw, E. J., & Mattern, C. D. (2009). Examining the accuracy of self-reported high school grade point average. College Board Research Report No. 2009-5. http://research.collegeboard.org

  • Shepperd, J. A. (1993). Student derogation of the Scholastic Aptitude Test: Biases in perceptions and presentations of College Board scores. Basic and Applied Social Psychology, 14, 455–473. http://www.psych.ufl.edu

  • Talento-Miller, E., & Peyton, J. (2006). Moderators of the accuracy of self-report grade point average. Graduate Management Admission Council Research Reports RR-06-IO. McLean, Virginia. Retrieved June 30 2014 from http://www.gmac.com

  • Thompson, L. A., & Kelly-Vance, L. (2001). The impact of mentoring on academic achievement of at-risk youth. Children and Youth Services Review, 23(3), 227–242. doi:10.1016/S0190-7409(01)00134-7.

  • Weems, C. F., Taylor, L. K., Marks, A., & Varela, R. E. (2010). Anxiety sensitivity in childhood and adolescence: Parent reports and factors that influence associations with child reports. Cognitive Therapy and Research, 34, 303–315. doi:10.1007/s10608-008-9222-x.

  • Zimmerman, M. A., Caldwell, C. A., & Bernat, D. H. (2002). Discrepancy between self-report and school-record grade point average: Correlates with psychosocial outcomes among African American adolescents. Journal of Applied Social Psychology, 32(1), 86–109. doi:10.1111/j.1559-1816.2002.tb01421.x.


Acknowledgments

This project was funded by a Mentoring Best Practices Research Grant award from the Office of Juvenile Justice and Delinquency Prevention (Grant No. Q215F120107). The authors would like to acknowledge the hard work and dedication of the staff at Big Brothers Big Sisters Harrisonburg Rockingham County, their graduate assistants in the James Madison University Master of Public Administration program, and Dr. Gary Kirk, who was a Principal Investigator on the project from 2011 to 2013.

Corresponding author

Correspondence to Liliokanaio Peaslee.

Cite this article

Teye, A.C., Peaslee, L. Measuring Educational Outcomes for At-Risk Children and Youth: Issues with the Validity of Self-Reported Data. Child Youth Care Forum 44, 853–873 (2015). https://doi.org/10.1007/s10566-015-9310-5
