Understanding variation in classroom quality within early childhood centers: Evidence from Colorado's quality rating and improvement system

https://doi.org/10.1016/j.ecresq.2013.05.001

Highlights

  • We use administrative data from Colorado with measures of quality for all classrooms.

  • We decompose the variation in the classroom environment rating scale (ERS).

  • 26–28% of the overall variation in the ERS occurs across classrooms within centers.

  • A center's rating in a quality rating system may depend on which rooms are assessed.

  • Classroom selection rules can lower measurement costs but raise the risk of classification errors.

Abstract

This study examines variability in quality across classrooms within early childhood centers and its implications for how quality rating systems (QRSs) capture center-level quality. We used data collected for administrative purposes by Qualistar Colorado, which include environment rating scale (ERS) scores for all classrooms in the 433 centers participating in Colorado's QRS between 2008 and 2010. We conducted variance components analyses of the ERS and found that between 26% and 28% of the variation in quality captured by the ERS occurred across classrooms within the same center serving children in the same age range. This finding indicates that a center-level average ERS will often mask important within-center quality differences, and it points to the merits of combining “no score below” rules with rating tier cutpoints when determining a center-level ERS. Most QRSs assess center-level quality using a randomly selected subset of classrooms. To test the implications of cross-classroom quality variation for this practice, we simulated four classroom selection strategies in current use: selecting 50% of the rooms, 33% of the rooms, two rooms, or one room. In general, the larger the share of classrooms measured under a selection rule, the lower the chance that a center's rating tier will be misclassified. The error rates under each selection rule also depend on the extent of cross-classroom quality variability, how centers are distributed by size, and the QRS structure. QRS designers therefore need to weigh the costs of measuring more classrooms in each center against the costs of misclassifying centers. The paper quantifies the magnitude of these tradeoffs using the Colorado data and two illustrative QRSs. The implications of our findings for QRS designers, parents, and other stakeholders are discussed.
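
As a concrete illustration of the classroom-selection simulation described in the abstract, the sketch below assigns rating tiers to synthetic centers and compares the tier obtained from all classrooms with the tier obtained from a randomly selected subset under each of the four selection rules. The tier cutpoints, the “no score below” floor, the number and size of centers, and the score distributions are all assumed values chosen for illustration; they are not the paper's estimates or Colorado's actual rating rules.

    # Minimal sketch of the classroom-selection simulation; all constants are
    # illustrative assumptions, not the paper's values or Qualistar's rules.
    import random
    import statistics

    CUTPOINTS = [3.0, 4.0, 5.0, 6.0]   # hypothetical boundaries between five tiers
    NO_SCORE_BELOW = 3.5               # hypothetical floor applied to the lowest room

    def tier(scores):
        """Assign a tier (0-4) from the classroom ERS scores observed for a center."""
        avg = statistics.mean(scores)
        t = sum(avg >= c for c in CUTPOINTS)
        # "No score below" rule: cap the tier if any observed room falls under the floor.
        if min(scores) < NO_SCORE_BELOW:
            t = min(t, 2)
        return t

    def select(scores, rule):
        """Apply one of the four classroom-selection rules to a center."""
        n = len(scores)
        k = {"50%": max(1, round(0.5 * n)),
             "33%": max(1, round(n / 3)),
             "two rooms": min(2, n),
             "one room": 1}[rule]
        return random.sample(scores, k)

    def misclassification_rate(centers, rule, reps=200):
        """Share of ratings from sampled rooms that differ from the all-room rating."""
        errors = trials = 0
        for scores in centers:
            true_tier = tier(scores)
            for _ in range(reps):
                trials += 1
                errors += tier(select(scores, rule)) != true_tier
        return errors / trials

    # Synthetic centers: per-classroom ERS drawn around a center-specific mean,
    # clamped to the 1-7 scale; spreads chosen only for illustration.
    random.seed(1)
    centers = []
    for _ in range(400):
        center_mean = random.gauss(5.0, 0.7)
        n_rooms = random.randint(2, 8)
        centers.append([min(7, max(1, random.gauss(center_mean, 0.6)))
                        for _ in range(n_rooms)])

    for rule in ["50%", "33%", "two rooms", "one room"]:
        print(rule, round(misclassification_rate(centers, rule), 3))

Running the sketch reproduces the qualitative pattern stated above: rules that sample a larger share of rooms misclassify fewer centers, and the error rates shift with the assumed within-center spread and the distribution of center sizes.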

Section snippets

Issues in measuring center-level quality in QRSs

In the last decade or so, QRSs have gained currency across the states as a favored mechanism for measuring and improving quality in early care and education (ECE) settings. As of early 2011, 24 states and the District of Columbia had designed and implemented systems that combine standards, accountability measures, program and practitioner outreach and support, financing incentives, and parent/consumer education efforts (National Child Care Information and Technical Assistance Center [NCCIC], …

Research questions

This paper analyzes a unique administrative dataset that includes assessments of quality for all classrooms serving children from birth to five in the center-based programs that participated in Colorado's voluntary QRS between 2008 and 2010. As one of four states that assess quality for each classroom in center-based settings as part of its QRS, Colorado has followed this policy longer than the other three states. With these data, the paper addresses two central questions. First, how much …

Methods

To answer our research questions, we analyzed an administrative dataset collected by Qualistar Colorado, a nonprofit agency that administers Colorado's QRS. Licensed centers and family child care homes are recruited from throughout the state to participate in Qualistar. The five-tier rating system, which includes a lowest provisional tier followed by tiers associated with one to four stars, includes programs that serve children from birth to kindergarten entry. Ratings are valid for two years, …

Variance decomposition analyses for a key quality indicator

Our first objective exploits the availability of ERS measures for all classrooms in the Colorado Qualistar centers. Table 3 presents the results of our variance decomposition analysis for the ERS using all classrooms for the two analysis groups: centers with multiple ITERS-R rooms and centers with multiple ECERS-R rooms. Results are also shown for the combined group of centers with multiple rooms. The average ERS in the rooms serving the preschool-age children is higher by about 0.4 scale …
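
For readers who want to see the mechanics, the sketch below applies a standard one-way ANOVA (method-of-moments) variance-components estimator to synthetic classroom scores grouped by center, yielding a between-center component, a within-center component, and the within-center share of total variance. The synthetic means and spreads are assumptions for illustration only, and the share it prints is not the 26–28% estimated from the Qualistar data.

    # Minimal sketch of a one-way variance-components decomposition of classroom
    # ERS scores into between-center and within-center pieces, using the standard
    # ANOVA (method-of-moments) estimator on synthetic data.
    from collections import defaultdict
    import random

    def variance_components(scores_by_center):
        """Return (between-center, within-center) variance component estimates."""
        groups = [s for s in scores_by_center.values() if len(s) > 1]  # multi-room centers only
        k = len(groups)
        n_total = sum(len(g) for g in groups)
        grand_mean = sum(sum(g) for g in groups) / n_total

        # Within-center (error) sum of squares and mean square.
        ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
        ms_within = ss_within / (n_total - k)

        # Between-center sum of squares and mean square.
        ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
        ms_between = ss_between / (k - 1)

        # Effective group size for unbalanced data, then the between component.
        n0 = (n_total - sum(len(g) ** 2 for g in groups) / n_total) / (k - 1)
        var_between = max(0.0, (ms_between - ms_within) / n0)
        return var_between, ms_within

    # Synthetic example: classroom ERS scores keyed by center id.
    random.seed(2)
    data = defaultdict(list)
    for center in range(200):
        mu = random.gauss(5.0, 0.7)
        for _ in range(random.randint(2, 6)):
            data[center].append(random.gauss(mu, 0.5))

    vb, vw = variance_components(data)
    print("within-center share of variance:", round(vw / (vb + vw), 2))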

Within-center heterogeneity in ECE quality

The centers in our dataset, voluntary participants in Colorado's QRS, are, on average, of fairly high quality, as might be expected given their willingness to participate in a QRS and their involvement in continuing quality improvement (QI) efforts. In general, as measured by the ERS, quality is higher in the classrooms serving preschool-age children (the ECERS-R rooms) than in those serving infants and toddlers (the ITERS-R rooms).

Most important for this investigation, the Colorado data further strengthen the evidence …

Acknowledgements

We are grateful to Qualistar Colorado for providing us with their administrative data. We also acknowledge the research support from the Office of Planning, Research and Evaluation, U.S. Department of Health and Human Services under Grant #90YE0124/01.

References (25)

  • K.A. Clarke-Stewart et al. Do regulable features of child-care homes affect children's development? Early Childhood Research Quarterly (2002)

  • C. Howes et al. Ready to learn? Children's pre-academic achievement in pre-kindergarten programs. Early Childhood Research Quarterly (2008)

  • S. Scarr et al. Measurement of quality in child care centers. Early Childhood Research Quarterly (1994)

  • D.M. Bryant. Observational measures of quality in center-based early care and education programs (2010)

  • D.M. Bryant et al. Empirical approaches to strengthening the measurement of quality: Issues in the development and use of quality measures in research and applied settings

  • M.R. Burchinal et al. Quality of center child care and infant cognitive and language development. Child Development (1996)

  • P. Burchinal et al. Early care and education quality and child outcomes (2009)

  • R. Clifford. Structure and stability in the Early Childhood Environment Rating Scale

  • D. Early et al. Pre-kindergarten in eleven states: NCEDL's multi-state study of pre-kindergarten and Study of State-Wide Early Education Programs (SWEEP) (2005)

  • B.K. Hamre et al. Best practices for conducting program observations as part of quality rating and improvement systems (2011)

  • T. Harms et al. Early Childhood Environment Rating Scale (2005)

  • T. Harms et al. Infant/toddler Environment Rating Scale (2006)
Cited by (18)

  • How much variability is there in children's experiences with different educators?

    2021, Early Childhood Research Quarterly
    Citation excerpt:

    Yet little is known at present about the levels of quality in ECEC centers. Several studies (Karoly, Zellman, & Perlman, 2013; Sabol, Ross, & Frost, 2019) provide examples of research that has examined variance in quality in classrooms and centers. Fewer studies have been carried out at the educator level.

  • “Quality” assurance features in state-funded early childhood education: A policy brief

    2020, Children and Youth Services Review
    Citation excerpt:

    Partnering in this way helps push researchers to focus in on the needs of practitioners and generate findings that are most likely to be used and incorporated into changes to both policy and practice (Coburn & Penuel, 2016). While waiting for more definitive guidance from large scale evaluations on how to design and re-engineer early learning programs, our review of the limited literature suggests that policymakers might work to embed and improve observational measures of classroom instruction and on-site technical assistance for Pre-K teachers into TQRISs (Elicker & McConnell, 2011; Isner et al., 2011; Karoly et al., 2016; Karoly et al., 2013; Snell et al., 2013; Zellman et al., 2008). The limited literature on TQRISs implies that technical assistance and other forms of teacher professional development should focus on sustained intervention and collaboration, be grounded in practice, and linked to information about standards and children’s growth (Diamond & Powell, 2011; Weiland & Yoshikawa, 2013).

  • She's supporting them; who's supporting her? Preschool center-level social-emotional supports and teacher well-being

    2016, Journal of School Psychology
    Citation excerpt:

    Thus, to some extent, aspects of teachers' own psychological well-being and perceptions of their centers can be viewed as characteristics of the center. This finding is in line with the sentiments expressed by teachers in our prior qualitative work (Zinsser & Zinsser, 2016) and with prior quantitative studies of organizational climate and classroom quality (e.g., Bloom, 2010, Karoly et al., 2013; Zinsser & Curby, 2014). This finding supports the further investigation of center-level characteristics as meaningful contributors to teachers' workplace well-being and confirms our dynamic systems theoretical orientation.
