
Assessing the quality of care for children attending health facilities: a systematic review of assessment tools
  1. Alicia Quach1,2,
  2. Shidan Tosif1,3,
  3. Herfina Nababan4,
  4. Trevor Duke1,3,
  5. Stephen M Graham1,5,
  6. Wilson M Were6,
  7. Moise Muzigaba6,
  8. Fiona M Russell1,2
  1. 1Department of Paediatrics, The University of Melbourne Faculty of Medicine Dentistry and Health Sciences, Melbourne, Victoria, Australia
  2. 2Asia Pacific Health Group, Murdoch Childrens Research Institute, Parkville, Victoria, Australia
  3. 3The Royal Children's Hospital Melbourne, Parkville, Victoria, Australia
  4. 4Health System Strengthening Unit, World Health Organisation Country Office for Indonesia, Jakarta, Indonesia
  5. 5International Child Health Group, Murdoch Childrens Research Institute, Parkville, Victoria, Australia
  6. 6Maternal, Newborn, Child and Adolescent Health and Ageing Department, World Health Organization, Geneva, Switzerland
  1. Correspondence to Dr Alicia Quach; alicia.quach@mcri.edu.au

Abstract

Introduction Assessing quality of healthcare is integral in determining progress towards equitable health outcomes worldwide. Using the WHO ‘Standards for improving quality of care for children and young adolescents in health facilities’ as a reference standard, we aimed to evaluate existing tools that assess quality of care for children.

Methods We undertook a systematic literature review of publications/reports between 2008 and 2020 that reported use of quality of care assessment tools for children (<15 years) in health facilities. Identified tools were reviewed against the 40 quality statements and 510 quality measures from the WHO Standards to determine the extent to which each tool was consistent with the WHO Standards. The protocol was registered with PROSPERO (ID: CRD42020175652).

Results Nine assessment tools met inclusion criteria. Two hospital care tools developed by WHO-Europe and WHO-South-East Asia Offices had the most consistency with the WHO Standards, assessing 291 (57·1%) and 208 (40·8%) of the 510 quality measures, respectively. Remaining tools included between 33 (6·5%) and 206 (40·4%) of the 510 quality measures. The WHO-Europe tool was the only tool to assess all 40 quality statements. The most common quality measures absent were related to experience of care, particularly provision of educational, emotional and psychosocial support to children and families, and fulfilment of children’s rights during care.

Conclusion Quality of care assessment tools for children in health facilities are missing some key elements highlighted by the WHO Standards. The WHO Standards are, however, extensive and applying all the quality measures in every setting may not be feasible. A consensus of key indicators to monitor the WHO Standards is required. Existing tools could be modified to include priority indicators to strengthen progress reporting towards delivering quality health services for children. In doing so, a balance between comprehensiveness and practical utility is needed.

PROSPERO registration number CRD42020175652.

  • child health
  • health systems evaluation
  • paediatrics
  • public health
  • systematic review


This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Key questions

What is already known?

  • There are no universally agreed indicators to assess quality of health care.

  • Previous reviews on quality of health care for children in low-income and middle-income countries (LMICs) tend to concentrate on system input measures such as physical infrastructure, availability of essential medicines, equipment and human resources.

  • There has been no systematic review of existing assessment tools for quality of health care for children in health facilities.

What are the new findings?

  • This is the first systematic review to compare existing quality of care assessment tools against the WHO ‘Standards for improving the quality of care for children and young adolescents in health facilities’; it found that existing tools do not adequately assess the WHO Standards in their current format.

  • Most assessment tools were more comprehensive in assessing provision of care and available human and physical resources, but deficient in assessing experience of care.

  • Most assessment tools focused more on input and process measures than outcome measures.

What do the new findings imply?

  • No existing assessment tool can comprehensively assess all the indicators in the WHO Standards; however, the indicators are extensive, and comprehensively assessing them all may not be feasible in LMICs.

  • Future endeavours should focus on identifying and obtaining consensus on a selection of key indicators in the assessment of quality of health care for children in health facilities. Harmonisation of key indicators embedded within existing assessment tools will enable regular monitoring and comparable data in order to report progress in the quality of health care for children at local and national levels.

Introduction

Ending preventable child deaths by 2030 is a major focus for the Sustainable Development Goals (SDGs).1 A crucial factor in achieving this is Universal Health Coverage (UHC), which ensures that all people, including children, have access to quality essential healthcare services without being pushed into financial hardship. Quality healthcare is defined by WHO as health services which are ‘effective, efficient, accessible, patient centred, equitable and safe’.2 To further reduce child deaths, many countries will need to find ways to expand UHC while delivering quality healthcare for children.

Determining progress in quality of healthcare delivery for children requires monitoring and tracking of measurable indicators. However, there are no universally agreed indicators for quality of care (QoC). To better understand the complex multidimensional nature of quality healthcare, WHO developed a framework to identify domains to assess, improve and monitor the quality of paediatric care in health facilities, an extension of the earlier framework for improving maternal and newborn care in health facilities.3 4 The framework encompasses three broad categories of QoC: (A) provision of care—evidence-based practices, effective information systems and referral pathways; (B) experience of care—effective communication, recognition of child rights and appropriate emotional and psychological support; and (C) available human and physical resources to meet the best interests of children. The broad categories are subdivided into eight domains to provide a structured approach when addressing QoC at all levels of the health system (online supplemental appendix A). These eight domains reflect the eight quality standards (QSd) in the WHO ‘Standards for improving the QoC for children and young adolescents in health facilities’ which are further detailed in 40 priority statements and 510 measurable indicators.4 The WHO Standards can, therefore, be used as a standard point of reference when assessing QoC for children in a healthcare facility.

Supplemental material

Historically, various tools have been developed to assess QoC for children. We sought to understand if these tools adequately assess all aspects of QoC as outlined by the WHO Standards for children and young adolescents. A recent review identified and compared five existing assessment tools to the WHO ‘Standards for improving quality of maternal and newborn care in health facilities’.5 6 The percentage of indicators outlined in the WHO Standards that the five tools were able to assess ranged from 12% to 62%.6 There has been no systematic review of existing assessment tools for QoC for children <15 years. There is an urgent need to better understand the capacity of readily available tools to assess the QoC for children, in order to meet the SDG targets for child health.

The aim of this systematic literature review is to identify existing tools used to assess QoC for children and young adolescents in health facilities and assess the extent to which they represent the domains in the WHO QSd.

Methods

Search strategy and selection criteria

A systematic review of the literature was undertaken in August 2020 using Preferred Reporting Items for Systematic Reviews and Meta-Analyses reporting guidelines, to identify assessment tools available globally that evaluate QoC for children attending health facilities.7 MEDLINE (Ovid) database was searched using Medical Subject Heading terms and/or keywords. PubMed was searched using keywords, to retrieve items not indexed on MEDLINE. The PubMed search strategy was adapted for use in Global Health (Commonwealth Agricultural Bureaux direct) database and the International Journal for Quality in Health Care. Additional peer-reviewed publications were identified through handsearching of reference lists of key articles. Grey literature was identified by conducting a keyword search using the World Bank and WHO library databases. The search strategies and results yielded are available from online supplemental appendix B.

The inclusion criteria were publications/reports that reported the use of an assessment tool to evaluate QoC in a primary, secondary or tertiary level healthcare facility. An assessment tool was deemed eligible if it had been used in more than one country and included at least one module/component evaluating QoC in children. A child was defined as aged 0–14 years, to align with the definition of ‘birth up to 15 years’ used in the WHO Standards. To identify tools more likely to be in current use and available globally, the search period was limited to the ten years preceding publication of the WHO Standards through to the present (2008–2020), and to publications in the English language. Exclusion criteria were publications/reports of assessment tools that: evaluated only newborns (<1 month old) or only adolescents (10–19 years old); were developed only for research purposes; evaluated QoC for a specific disease; or evaluated a niche component of QoC (eg, antimicrobial prescribing practice). For any assessment tools not publicly available, the authors and/or original developers of the tool were contacted.

The screening process was performed independently by two reviewers using Covidence systematic review software.8 Titles and abstracts were screened and excluded if inclusion/exclusion criteria were not met. Full texts of remaining publications/reports were assessed for eligibility. The assessment tools from full texts were retrieved and further assessed to ensure that they met eligibility criteria. The assessment tools that were identified in multiple reports were grouped together as one unique tool for analysis. Conflicts in determining whether an article/assessment tool met eligibility criteria were resolved by discussion between the two reviewers. If consensus was not reached, a third reviewer was consulted.

Data analysis

Each quality assessment tool included was compared against the WHO ‘Standards for improving QoC for children and young adolescents in health facilities’.4 The WHO Standards comprise eight overarching ‘QSd’, each corresponding to one domain of the framework for improving quality of paediatric care. Each QSd is composed of priorities or ‘Quality Statements (QSt)’ (40 in total) for improving QoC for children. The QSt are further subdivided into 510 ‘Quality Measures (QM)’, comprising 235 input, 169 process/output and 106 outcome measures (figure 1). A full list of the QSd, QSt and QM is provided in online supplemental appendix C.

Figure 1

Structure of the WHO ‘Standards for improving the quality of care for children and young adolescents in health facilities’.
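To make this hierarchy concrete, the following minimal sketch (in Python) represents the WHO Standards as nested records: quality standards containing quality statements, each containing quality measures typed as input, process/output or outcome. The counts in the closing comment come from the WHO Standards as described above; the class and field names are illustrative only and are not taken from the WHO documents.

```python
from dataclasses import dataclass, field
from typing import List, Literal

MeasureType = Literal["input", "process/output", "outcome"]

@dataclass
class QualityMeasure:
    qm_id: str             # eg, "1.1.1"
    text: str
    measure_type: MeasureType

@dataclass
class QualityStatement:
    qst_id: str            # eg, "1.1"
    text: str
    measures: List[QualityMeasure] = field(default_factory=list)

@dataclass
class QualityStandard:
    qsd_id: int            # 1-8, one per domain of the WHO QoC framework
    text: str
    statements: List[QualityStatement] = field(default_factory=list)

# In full, the WHO Standards comprise 8 QSd, 40 QSt and 510 QM
# (235 input, 169 process/output and 106 outcome measures).
```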

For each assessment tool, we used the most recent English-language version in its generic format. Each tool was composed of various modules to assess quality (eg, direct clinical observations, health worker interviews, inventory checklists). We evaluated all modules and excluded those not relevant to our review (eg, antenatal care services). For the remaining modules, we reviewed each question/statement and, where applicable, matched it to the relevant QM in the WHO Standards. Two paediatricians (AQ and ST) performed the matching process independently to decrease the risk of bias. Any conflicts were discussed between the two reviewers and a third reviewer was consulted for any unresolved conflicts.

We used a scoring system similar to that used by Brizuela et al in their review of facility assessment tools for maternal and newborn QoC in health facilities.6 A question/statement from the tool was considered a match to a WHO QM if any component of the QM was included. If a WHO QM was matched, a score of 1 was allocated. Although multiple questions could match the same QM, each QM could only score a maximum of 1. For a QM that consisted of more than one subcomponent, a question/statement from the assessment tool only had to fulfil one subcomponent to be considered a match. For example, QM 1.1.1: ‘health facility maintains an up-to-date 24 hours staff duty roster, with a functioning contact mechanism for finding additional support, which ensures that staff responsible for paediatric triage are available at all times’, would be matched by a question asking whether the health facility has a 24-hour staff roster.4 Conversely, a single question/statement could also be matched to more than one QM. For example, an assessment tool with a checklist of available antibiotics in the health facility would match the WHO QM detailing adequate supplies of antibiotics to treat pneumonia (QM 1.3.3), sepsis (QM 1.5.4) and neonatal infections (QM 1.2.2), while also matching the QM detailing adequate stocks of essential medicines (QM 8.4.4).
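As a worked illustration of these matching rules, the sketch below uses hypothetical questions (not the actual review data) mapped to the QM identifiers mentioned above. Matches are stored as a set of QM identifiers, so a QM matched by several questions still scores at most 1, while a single question can contribute matches to several QM.

```python
# Hypothetical mapping: each assessment-tool question lists the WHO QM it matches.
# A question may match several QM; several questions may match the same QM.
question_to_qms = {
    "Does the facility keep a 24-hour staff duty roster?": ["1.1.1"],
    "Checklist: amoxicillin in stock?": ["1.3.3", "1.5.4", "1.2.2", "8.4.4"],
    "Checklist: gentamicin in stock?": ["1.5.4", "1.2.2", "8.4.4"],
}

# Each matched QM scores 1 at most, regardless of how many questions match it.
matched_qms = {qm for qms in question_to_qms.values() for qm in qms}
scores = {qm: 1 for qm in matched_qms}

print(len(matched_qms), "QM matched")   # -> 5 QM matched
print(sorted(matched_qms))              # -> ['1.1.1', '1.2.2', '1.3.3', '1.5.4', '8.4.4']
```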

Descriptive statistics were used to calculate the percentage of matched QM, QSt and QSd for each assessment tool. The assessment tools were ranked according to the total percentage of WHO QM assessable. Assessment tools were also categorised according to whether they were able to completely assess (100%), partially assess (1%–49% and 50%–99%), or not assess (0%) any of the QSd and QSt.
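Under the same illustrative assumptions, a minimal sketch of how the summary percentages and the not/partially/completely assessable categories described above could be derived from a tool's matched QM (the identifiers and totals shown are invented for illustration):

```python
def assessable_category(matched: int, total: int) -> str:
    """Categorise a QSd/QSt by the share of its QM a tool can assess."""
    pct = 100 * matched / total
    if pct == 0:
        return "not assessable (0%)"
    if pct == 100:
        return "completely assessable (100%)"
    return "partially assessable (50%-99%)" if pct >= 50 else "partially assessable (1%-49%)"

def percent_matched(matched_qms: set, group_qms: set) -> float:
    """Percentage of a group's QM (eg, one QSt or QSd) matched by a tool."""
    return 100 * len(matched_qms & group_qms) / len(group_qms)

# Illustrative only: a QSt with 6 QM, of which a tool matches 2.
qst_qms = {"1.1.1", "1.1.2", "1.1.3", "1.1.4", "1.1.5", "1.1.6"}
matched = {"1.1.1", "1.1.4"}
print(round(percent_matched(matched, qst_qms), 1))                # 33.3
print(assessable_category(len(matched & qst_qms), len(qst_qms)))  # partially assessable (1%-49%)
```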

Patient and public involvement

No patients or public were involved in this study.

Results

The search strategy identified 1180 publications/reports after duplicates were removed (figure 2). The screening process excluded 1035 publications/reports. The remaining 145 full-text manuscripts were assessed for eligibility, of which 39 publications/reports were deemed eligible. These publications/reports were further evaluated to collate duplicate assessment tools, with 10 unique tools identified as eligible. One tool could not be obtained from its authors, leaving nine unique assessment tools for analysis (figure 2).

Figure 2

PRISMA flow diagram for the selection of assessment tools. PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses.

Table 1 summarises the nine assessment tools and the modules that were evaluated as part of our analysis. All tools were developed for use in low-income and middle-income countries (LMICs), except for the Child Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS).9 All tools were available in English but could be adapted/translated to the local context. All tools were structured questionnaires/interviews with checklist-style questions but varied in length and composition of modules. The shortest tool was the Child HCAHPS, with 62 multiple-choice questions. Others, such as the Service Provision Assessment (SPA) and the WHO Hospital Care assessment tools, ran to over 100 pages and contained multiple modules, each with over 100 questions/checklist items.10–12

Table 1

Summary of assessment tools reviewed for quality of care for children in health facilities

Table 2 summarises the percentages of WHO QM within each QSd assessable by each tool. Overall, QM related to the domains of provision of care and available human and physical resources were more widely assessable than those related to experience of care. QSd 1: ‘Every child receives evidence-based care and management of illness according to WHO guidelines’ was the most comprehensively assessable across all tools, apart from the Child HCAHPS, which did not assess it at all.4 9 QSd 6: ‘All children and their families are provided with educational, emotional and psychosocial support that is sensitive to their needs and strengthens their capability’ was absent from five of the nine included tools.4 All assessment tools were able to partially assess QSd 7 (staff availability) and QSd 8 (health facility physical infrastructure, waste management, supplies and equipment).

Table 2

Percentage of WHO Quality Standards assessable by each quality assessment tool

Overall, the tools were more comprehensive at assessing the input QM (median 32·8%, IQR 16·4%–45·3%) and process/output QM (median 21·3%, IQR 12·4%–41·7%) than the outcome QM (median 16·0%, IQR 3·8%–28·3%). Table 3 summarises the proportion of QSt that had at least one each of input, process and outcome QM assessable within each QSd. No single tool was able to fulfil this for all 40 WHO QSt. The percentages of QM that were assessable within each of the 40 QSt are summarised in figure 3. The WHO (Europe regional office) ‘Hospital care for children: quality assessment and improvement tool’ was the only tool able to partially assess every QSt. Of the remaining tools, 5–27 of the 40 QSt were not assessable at all. The SPA and the Health Resources Availability Mapping System (HeRAMS) were the only tools able to assess 100% of any one QSt. The HeRAMS, however, failed to assess any QM in 15 other QSt.

Table 3

Proportion of WHO Quality Statements with at least one each of input, process and outcome Quality Measure assessable by each quality assessment tool

Figure 3

Percentage of WHO Quality Statements* assessable by each quality assessment tool. *‘Quality Statements’ are 40 concise statements of the priorities for improving quality of care for children as documented in the WHO Standards. Each quality statement contains from 6 to 22 quality measures.4 Not assessable = the assessment tool did not assess any quality measures in the quality statement. Partially assessable = the assessment tool assessed at least one of the quality measures in the quality statement. Completely assessable = the assessment tool assessed all of the quality measures in the quality statement. HRBF, Health Results Based Financing impact evaluation toolkit; HeRAMS, Health Resources Availability Mapping System; HFS-IMCI, Health Facility Survey—using Integrated Management of Childhood Illness clinical guidelines; HCAHPS, Hospital Consumer Assessment of Healthcare Providers and Systems; r-HFA, rapid Health Facility Assessment; SPA, Service Provision Assessment; SARA, Service Availability and Readiness Assessment.

Figure 4 shows the overall percentage of QM assessable by each tool. The WHO-Europe tool was the most comprehensive, with 291 (57·1%) of the 510 WHO QM assessable.11 The remaining tools varied from 6·5% to 40·8% in their capacity to assess the QM. Table 4 shows the percentage of assessable QM in each QSt. Most tools assessed less than half the QM in any single QSt. The WHO-Europe and WHO-South-East Asia tools were the most comprehensive, able to assess more than half the QM in 23 and 13 of the 40 QSt, respectively. The Child HCAHPS had the largest number of gaps, leaving 27 QSt completely unassessed, but was able to assess 9 (69%) of the 13 QM for QSt 4.1 (effective communication with children and their carers), which is a key objective of that tool.

Figure 4

Overall percentage of WHO Quality Measures* assessable by each quality assessment tool. HRBF, Health Results Based Financing impact evaluation toolkit; HeRAMS, Health Resources Availability Mapping System; HFS-IMCI, Health Facility Survey—using Integrated Management of Childhood Illness clinical guidelines; HCAHPS, Hospital Consumer Assessment of Healthcare Providers and Systems; r-HFA, rapid Health Facility Assessment; SPA, Service Provision Assessment; SARA, Service Availability and Readiness Assessment.

Table 4

Percentage of WHO Quality Measures within each Quality Statement assessable by each quality assessment tool

Discussion

This is the first systematic review to compare QoC assessment tools against the current WHO Standards for children and young adolescents in health facilities. Three of the nine assessment tools included questions from all eight of the WHO QSd, but only one (WHO-Europe Hospital Care assessment tool) was able to address all 40 QSt. Despite being the most comprehensive tool, the WHO-Europe Hospital Care assessment tool still only included about half of all QM. QSd that included evidence-based management, staffing and physical infrastructure and resources (QSd 1, 7 and 8) were more widely covered across the tools than those which encompassed health information systems (HIS), referral processes, communication, psychosocial support and child rights (QSd 2–6).

Previous assessments of QoC in LMICs have often centred on input measures, as these are seen as more tangible, objective measures.6 13–15 This was reflected in this review, with most tools assessing more input measures and fewer process or outcome measures, and no tool assessing at least one input, process and outcome measure in all 40 QSt. Mortality data were largely absent from almost all tools. The Donabedian quality framework describes the relationship between the three components of input/structure, process and outcomes.16 Although structural input measures are important to healthcare delivery, it is the flow-on effects to processes, such as appropriate care delivery and adequate communication, and to outcome measures, such as morbidity and mortality data and patient satisfaction, that determine how successful a health facility is. Including all three components is therefore crucial for assessing the level of QoC being provided for children in hospitals.

The WHO Standards for improving QoC for children and young adolescents were used in this review as a reference standard for assessing QoC for children attending health facilities. They are comprehensive and include all three components of the Donabedian quality framework, while aligning with the SDG emphasis on equity. They have been developed as a resource for healthcare professionals and managers at the health facility level, through to government bodies and technical partners responsible for policy and programme development at the national level, to support quality improvement practices for children.4 How the WHO Standards are implemented in practice, and which tools are used to assess quality of healthcare for children, are decisions for the local context. However, it is unlikely that a single tool could encompass all 510 QM and remain feasible in most LMIC settings.

The purpose and context of the assessment tools need to be considered when examining how comprehensive they are in comparison with the WHO Standards. The WHO Hospital Care assessment tools were found to be more comprehensive than the other tools in their ability to capture the WHO Standards. These tools were first developed in 2001 to provide governments and stakeholders with guidance on evaluating the quality of healthcare practices in order to identify key areas for improvement.17 They have since been revised multiple times and adapted to multiple settings, with the most recent European version including indicators for child rights, communication and alignment with evidence-based practice as outlined in the WHO Pocket Book of Hospital Care for Children.18 The WHO Standards for improving QoC have been modelled on similar frameworks, which may explain the overlap of indicators between the WHO Hospital Care assessment tools and the WHO Standards.

Of the three broad arms of QoC in the WHO Standards, provision of care (QSd 1–3) and available human and physical resources (QSd 7–8) were more widely covered by the tools than experience of care (QSd 4–6). This may be because provision of care and system inputs have more definitive QM, which makes them amenable to assessment through checklist-style questionnaires. The SPA and Service Availability and Readiness Assessment were developed to provide nationwide data on the capacities of health facilities to provide quality services and have been used in over 30 countries.10 19 These surveys are designed to be repeated at periodic intervals, every 1–5 years, to monitor progress and inform the development of national health policies and programmes. It is therefore not surprising that their strengths lie in assessing the availability of concrete measures such as physical infrastructure and human resources, with less emphasis on subjective components such as communication and emotional support. The HeRAMS was similarly developed to collect information on the availability of health resources and services. Designed to be implemented in humanitarian and emergency response settings, rapid reporting is essential to its purpose of securing the supplies and resources required for basic healthcare needs.

The child-specific assessment tools, such as the WHO Hospital Care tools, HFS-IMCI and Child HCAHPS, had relative strengths in assessing experience of care when compared with the remaining tools. This may reflect recognition that communicating with parents/carers is a large component of paediatric healthcare. The Child HCAHPS survey was developed for the sole purpose of obtaining parent/guardian feedback on their experience of their child's care in hospital. Health service delivery has traditionally revolved around disease diagnosis and treatment. However, there has been a gradual global shift towards integrated people-centred health services, where people and communities are seen as active participants in, as well as beneficiaries of, responsive health systems. In 2016, the WHO adopted the ‘Framework on integrated people-centred health services’ to help drive change in national policies on health service delivery to include cross-sectoral collaboration and community involvement and empowerment in decision-making processes.20 Involving people in their own care, especially marginalised subpopulations, is considered essential to achieving equitable access and QoC towards UHC. So, although a tool such as the Child HCAHPS may not be suitable for assessing all aspects of QoC for children, it can be a useful adjunct for assessing communication skills and patient experience of care, which may otherwise be lacking in existing quality assessment frameworks.

Feasibility was not formally assessed as part of this review. The more comprehensive WHO Hospital Care assessment tools are extensive and labour intensive, yet cover only about half the QM in the WHO Standards. Our search identified only two original peer-reviewed publications that used the WHO Hospital Care assessment tools.21 22 It is possible that the tools are used for auditing and quality improvement practices with information only being disseminated within countries; however, there is little anecdotal evidence of this occurring. Without clear quality frameworks in place and with limited resources, health facilities would likely find it challenging to implement these assessment tools effectively. For LMICs to effectively implement quality improvement practices and to sustainably assess and monitor them, key indicators need to be clear and manageable. We recommend that the QM in the WHO Standards be simplified and that key indicators to monitor in each QSd be highlighted. Key indicators should be agreed by global consensus and adhere to a measurement framework to ensure that they are relevant, acceptable, achievable and robust. Existing core assessment tools could then be combined and simplified to incorporate the key indicators, with flexibility for other QM to be included as prioritised by individual health facilities. This would be more achievable and constructive at the local level, while assisting reporting of national progress against uniform child health indicators in LMICs.

An alternative to using explicit QoC assessment tools to evaluate and monitor quality improvement practices would be to embed key indicators in other routine data collection systems. In the USA, there are existing monitoring systems of QM embedded in HIS. Data are collated from multiple databases by agencies such as the Agency for Healthcare Research and Quality to produce annual National Healthcare Quality and Disparities Reports, which include indicators overlapping with many of the WHO Standards.23 In LMICs, paper-based surveys and medical records have traditionally been the main way to collect health data. Routine HIS, which could potentially monitor key indicators for QoC, are variable in their levels of data recording and quality, and are seldom used to evaluate programme interventions and policy changes, often because of a lack of capacity.24 25 The introduction of electronic health records and platforms is providing more reliable sources of data management and analysis in LMICs.26 27 However, a range of challenges has meant that electronic HIS have not yet replaced paper-based records or survey tools in most LMIC settings.28–31 Until routine HIS become more robust and reliable, using existing, comprehensive, low-cost tools to assess QoC will continue to be more feasible.

Our review focused on assessing QoC for children and young adolescents. However, it is important to recognise that quality healthcare is required throughout the continuum of the life course, including the antenatal and perinatal periods, to ensure improved outcomes for all children. The WHO Standards for children and adolescents are an extension of the WHO Standards for maternal and newborn care in health facilities, with both developed using the same framework. A previous review comparing existing assessment tools with the WHO Standards for maternal and newborn care drew similar conclusions to our assessment—that current tools had gaps in assessing experience of care and that there should be global consensus on core data to be collected.6 Although each life stage has its own unique healthcare needs (eg, obstetric care for women; immunisations for children), comparable themes for quality healthcare practices are applicable across the life course. Having similar frameworks to assess and monitor quality of healthcare across the life course would make quality improvement practices and assessment tools easier to develop. It would also foster collaboration between health sectors in the development of common goals towards achieving better health outcomes for all.

Our systematic review had several limitations. In the matching process, clinical judgement determined whether questions/items from assessment tools matched the QM in the WHO Standards; this subjective process could have led to bias in some indicators. Our selection criteria were aimed at identifying stand-alone assessment tools, which may have inadvertently excluded quality assessment tools integrated within other health data collection activities. We also included only English-language publications, which may have excluded other existing tools used in non-English-speaking countries. Our review evaluated only the survey instruments and did not assess feasibility of implementation, which would include preplanning, training, supervision, evaluation and feedback of data. These pragmatic factors would affect the capacity of a tool to reliably assess the WHO Standards and would need to be considered in future activities assessing QoC.

Conclusion

This review found that although the WHO Standards are comprehensive, no single existing tool can adequately assess all the QM in their current format. Furthermore, operational use of extensive assessment tools is seldom seen, owing to a lack of resources and organisational frameworks. Existing tools tend to emphasise input measures, and few adequately assess experience of care. Consensus on, and harmonisation of, select key indicators from the WHO Standards, integrated into simplified assessment tools, would make assessment more achievable in LMICs. Comparable data on key indicators for monitoring within and between countries would also assist national and global reporting on progress in child health outcomes. Further research into the feasibility of modified tools with key indicators to assess QoC, and into their impact on health outcomes, is therefore an important next step in establishing equitable access to quality healthcare.

Data availability statement

Data are available on reasonable request. All data collected for this systematic review can be made available through contact with the primary author.

Ethics statements

Patient consent for publication

Ethics approval

Patient consent and ethics approval for publication not required.

References


Footnotes

  • Handling editor Seye Abimbola

  • Twitter @QuachAlicia

  • Contributors AQ conceived the study, wrote the first draft and is the guarantor of the manuscript. ST and HN were second reviewers for screening and data analysis. FMR and SMG provided feedback on the first draft of the manuscript. All authors including TD, WMW and MM provided feedback on subsequent versions of the manuscript and approved the final version.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests We declare no competing interests. The authors alone are responsible for the views expressed in this article and they do not necessarily represent the views, decisions or policies of the institutions with which they are affiliated.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.