
Key learnings from Institute for Clinical and Economic Review’s real-world evidence reassessment pilot

Published online by Cambridge University Press:  14 March 2022

Ashley Jaksa* (Scientific Research, Aetion, Inc., New York, NY, USA), Lisa Bloudek (The Comparative Health Outcomes, Policy, and Economics Institute, University of Washington, Seattle, WA, USA), Josh J. Carlson (The Comparative Health Outcomes, Policy, and Economics Institute, University of Washington, Seattle, WA, USA), Kanya Shah (Institute for Clinical and Economic Review, Boston, MA, USA), Yilin Chen (The Comparative Health Outcomes, Policy, and Economics Institute, University of Washington, Seattle, WA, USA), Amanda R. Patrick (Scientific Research, Aetion, Inc., New York, NY, USA), Avery McKenna (Institute for Clinical and Economic Review, Boston, MA, USA), and Jon D. Campbell (Institute for Clinical and Economic Review, Boston, MA, USA)

*Author for correspondence: Ashley Jaksa, E-mail: ashley.jaksa@aetion.com

Abstract

Health technology assessment (HTA) agencies are considering adopting a lifecycle approach to assessments to address uncertainties in the evidence base at launch and to revisit the clinical and economic value of therapies in a dynamic clinical landscape. For reassessments of therapies post launch, HTA agencies are looking to real-world evidence (RWE) to enhance the clinical and economic evidence base, though challenges and concerns in using RWE in decision making exist. Stakeholders are embarking on demonstration projects to address these challenges and concerns and to further define when and how RWE can be used in HTA decision making. The Institute for Clinical and Economic Review piloted a 24-month observational RWE reassessment. Key learnings from this pilot include the benefits of and challenges with using RWE in reassessments and considerations for prioritizing and selecting topics relevant for RWE updates.

Type
Article Commentary
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2022. Published by Cambridge University Press

Background

Health technology assessment (HTA) agencies assess the clinical effectiveness, safety, and in some cases cost-effectiveness of a therapy to inform health care resource allocation and promote high-quality health systems. Pivotal clinical trials used for regulatory submissions form the backbone of the clinical evidence assessed by HTAs. Reliance on clinical trials alone can result in the efficacy–effectiveness gap, that is, limited evidence of how a therapy will perform in real-world clinical practice, and provides an incomplete picture of the long-term clinical and economic impact of a new therapy in actual practice (Reference Nordon, Karcher and Groenwold1).

The limited evidence base at the time of assessment leads to challenges and uncertainties for HTA agencies. Foremost, there is often a lack of effectiveness data, insufficient follow-up, use of surrogate end points, and a lack of appropriate comparators, as the comparators used in clinical studies (if an active comparator is included at all) are not always the relevant standard of care in the HTA agency's region. These uncertainties propagate into the economic model population and structure (e.g., target populations and comparators included), parameter uncertainty (i.e., precision around specific model inputs), and key assumptions (e.g., duration of treatment effect), which affect the model results and the subsequent value assessment. Value assessments operate using a "best available evidence" paradigm and are therefore subject to all the limitations inherent in the available data; that is, the model is only as good as the data upon which it is built. Real-world evidence (RWE) studies (e.g., noninterventional or observational studies) can supplement the evidence from randomized controlled trials (RCTs) to address some uncertainties relevant for HTAs (Reference Kent, Salcher-Konrad and Boccia2). With often substantial uncertainty in the evidence base at the time of launch, HTA organizations, including Health Technology Assessment International, are deliberating how best to use RWE in assessing therapies (Reference Oortwijn, Sampietro-Colom and Trowman3) and how to evaluate decisional uncertainty and implement dynamic HTA programs that are responsive to uncertainty in evaluations (Reference Siebert, Rochau and Claxton4).

One approach is to update the assessment paradigm to move from a single assessment at the time of launch to lifecycle management, that is, assessing a therapy at each stage of its development lifecycle (e.g., phase II/III development, market launch, postapproval, and managed exit) (Reference Oortwijn, Sampietro-Colom and Trowman3). The goals of these additional assessments, often referred to as "reassessments," are to address challenges and uncertainties present at market launch and to update the understanding of the value of a therapy in a changing clinical landscape. Many jurisdictions, including the United States and several European countries, have implemented coverage with evidence development (CED), which is a step toward lifecycle management (Reference Dabbous, Chachoua, Caban and Toumi5;6). CED decisions allow for temporary reimbursement while additional data (typically from RCTs or registries) is generated to address clinical and economic uncertainty. This new evidence triggers a reassessment of the therapy for a final reimbursement decision. In the United Kingdom, the National Institute for Health and Care Excellence allows temporary reimbursement of certain therapies (e.g., cancer therapies) for up to 2 years while more evidence is developed to inform cost-effectiveness evaluations (7). Although on the surface CED seems like a logical approach to addressing uncertainty, challenges exist; manufacturers must understand the evidence requirements (especially from multiple stakeholders) and what types of study designs are most suitable (Reference Dabbous, Chachoua, Caban and Toumi5;Reference Trueman, Grainger and Downs8). The additional studies are often lengthy and costly and are frequently not finished in the agreed-upon timeframe. When the new evidence does not resolve uncertainties or does not demonstrate effectiveness or cost-effectiveness, agencies struggle to remove reimbursement (Reference Dabbous, Chachoua, Caban and Toumi5;Reference Trueman, Grainger and Downs8). The lifecycle management approach goes beyond CED by promoting continuous exchange of evidence among all stakeholders, from the early stages of development through the therapy's maturation to managed exit/disinvestment (9).

RWE, or evidence generated from real-world data (RWD; e.g., claims, electronic health records (EHRs), and registries), has propelled the shift toward a lifecycle management approach (Reference Oortwijn, Sampietro-Colom and Trowman3). Throughout a therapy's lifecycle, RWE can address both how (and in whom) a therapy is used and the effectiveness of the therapy in clinical practice (Reference Facey, Espin and Kent10); RWE can address context-specific questions, such as real-world treatment patterns, safety, and comparative effectiveness, especially in patient populations not included in RCTs (Reference Facey, Espin and Kent10).

Global HTA agencies have acknowledged the benefits of incorporating RWE into their decision making and are actively working on generating guidance on its use (11–13). However, concerns exist regarding internal validity, reporting bias, data quality, lack of randomization, and diminished transparency in RWE (Reference Berger, Sox and Willke14;Reference Sherman, Anderson and Dal Pan15). Many stakeholders have embarked on demonstration projects to address these concerns and inform future guidance (Reference Jaksa and Mahendraratnam16). These projects establish methodological recommendations for the generation and reporting of high-quality RWE studies (e.g., publishing RWE study protocols a priori) and explore the ideal uses for RWE in decision making.

One such demonstration project is the Institute for Clinical and Economic Review (ICER) 24-month observational RWE update pilot (11). The aim is to supplement the limited clinical evidence base at launch with RWE and to test the impact of using RWE 24 months after the initial assessment at product launch to further refine the understanding of the clinical and economic value of the therapies. The focus is to examine how RWE generation may or may not overcome challenges like those in CED policies and in economic modeling in the face of uncertainty, and how RWE contributes to a lifecycle approach to HTA. In August 2021, ICER completed its first 24-month RWE update pilot, on prophylaxis therapies for hereditary angioedema (HAE) (Reference Bloudek, Jaksa and McKenna17). This paper focuses on key learnings from this pilot with regard to the strengths and limitations of using RWE in the clinical and economic evaluation of therapies 24 months or more postlaunch.

ICER’s assessment process and the HAE assessments

ICER's assessment methodology (18) and the results from the ICER 2018 HAE assessment (Reference Lin, Agboola and Samur19) and the 2021 Observational RWE Update (Reference Bloudek, Jaksa and McKenna17) have been detailed elsewhere. An overarching aim of ICER's work is to help the United States evolve toward a health care system that provides sustainable access to high-value care for all patients (18). Briefly, ICER's assessment process includes stakeholder engagement and a review of the following data sources: systematic reviews and meta-analyses, RCTs, cohort studies, patient surveys, and other published RWE studies. This evidence serves as the basis for summarizing the patient and caregiver perspectives and evaluating the comparative clinical effectiveness of a therapy. Further, the comparative clinical effectiveness evidence is used alongside economic evidence and modeling methods to estimate the long-term cost-effectiveness and to frame contextual considerations and potential other benefits for patients. The patient and caregiver perspectives, the comparative clinical effectiveness, the long-term cost-effectiveness, and the contextual considerations and potential other benefits are discussed and deliberated during a public meeting prior to a vote by an independent panel on the long-term value for money.

In the 2018 HAE assessment of three prophylactic treatments, all were considered to be clinically efficacious in reducing HAE attacks and improving quality of life for HAE patients, but none were considered cost-effective based on the commonly cited threshold of $100,000 or $150,000 per quality-adjusted life year (QALY) gained (Reference Lin, Agboola and Samur19). The 2018 report identified uncertainties in the evidence and key model assumptions that influenced the cost-effectiveness findings. One of the most consequential inputs in the 2018 model was the frequency and severity of attacks at baseline, derived at the time from clinical trials. As demonstrated in the 2018 report, small differences in the assumed attack rate resulted in a wide range of cost-effectiveness results. With a lower mean baseline frequency of attacks, fewer attacks would be averted with prophylaxis, resulting in fewer cost-offsets and less quality-of-life gain, and thus resulting in a higher incremental cost-effectiveness ratio. In 2018, it was unclear how representative the baseline attack rates of RCT participants would be of those treated with prophylaxis in clinical practice. Other uncertainties included the frequency of dose de-escalation among patients prescribed one of the prophylactic therapies (lanadelumab), and resource utilization and costs to treat HAE attacks in outpatient, emergency department, and inpatient settings.

In the 2021 Observational RWE Update, the original cost-effectiveness analysis was revisited to assess what influential model inputs could be reliably analyzed in RWD. A feasibility assessment was completed and a formal RWE study protocol and model analysis plan were developed a priori to inform these inputs (20;21). The main RWE analysis centered on estimating the baseline attack rates of patients initiating the three HAE prophylactic therapies. The RWE study also evaluated healthcare resource utilization, cost of care estimates, and proportion of patients who de-escalated dosing for lanadelumab. The main finding from the RWE analysis was that the baseline attack rate in the real-world was lower than that used in the 2018 model (1.88 attacks per patient per month based on observational RWE vs. 3.39 based on RCTs). With inclusion of the RWE inputs, including the new lower baseline attack rates, but continuing to use trial-based relative reductions in attacks, the incremental cost-effectiveness ratios increased, suggesting larger price discounts are needed to reach commonly cited thresholds compared to the 2018 report.
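
To illustrate the mechanism, the sketch below is a minimal, hypothetical calculation, not ICER's actual model: it holds the trial-based relative reduction, drug cost, attack cost, and attack disutility fixed at placeholder values and varies only the baseline attack rate between the two figures reported above. The lower real-world rate mechanically produces fewer attacks averted, smaller cost offsets, smaller quality-of-life gains, and therefore a higher incremental cost-effectiveness ratio.

    # Minimal, illustrative sketch (not ICER's HAE model): why a lower baseline
    # attack rate raises the incremental cost-effectiveness ratio when the relative
    # reduction in attacks, drug price, attack costs, and attack disutility are held fixed.
    # Only the two baseline attack rates come from the reports; every other input
    # is a hypothetical placeholder.

    def icer(baseline_attacks_per_month,
             relative_reduction=0.80,        # trial-based relative reduction (placeholder)
             annual_drug_cost=500_000,       # hypothetical annual acquisition cost
             cost_per_attack=10_000,         # hypothetical cost of treating one attack
             qaly_loss_per_attack=0.005):    # hypothetical QALY decrement per attack
        attacks_averted_per_year = baseline_attacks_per_month * 12 * relative_reduction
        incremental_cost = annual_drug_cost - attacks_averted_per_year * cost_per_attack
        incremental_qalys = attacks_averted_per_year * qaly_loss_per_attack
        return incremental_cost / incremental_qalys

    # Baseline attack rates reported in the HAE reassessment:
    # 3.39 attacks/patient/month (RCT-based, 2018 model) vs. 1.88 (observational RWE, 2021 update)
    for rate in (3.39, 1.88):
        print(f"baseline rate {rate:.2f}/month -> ICER ~${icer(rate):,.0f} per QALY")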

Key learnings

Benefits of RWE to Inform Cost-effectiveness Models

As health care systems navigate their fixed budgets, more relevant real-world estimates of cost-effectiveness are essential. Although RCTs have excellent internal validity, because they are purposely designed to minimize bias surrounding the effects of an intervention, they are often not generalizable to real-world clinical practice. For example, patients enrolled in RCTs are typically more homogeneous than patients in the real world (Reference Blumenthal, Yu-Isenberg, Yee and Jena22) and are more likely to be adherent to therapy (Reference van Onzenoort, Menger and Neef23). These are key arguments for supplementing clinical trials with RWE, especially to inform cost-effectiveness models, which attempt to estimate the value of a therapy in clinical practice. In the HAE case, there are two examples where using RWE in the economic model better aligns the cost-effectiveness findings with how HAE prophylactic treatments are used in clinical practice. First, baseline HAE attack rates observed in the real world were lower than those observed in the RCTs. When the cost-effectiveness model was updated with this lower real-world baseline attack rate, the trial-based relative reductions in HAE attacks translated into fewer attacks averted, yielding less favorable cost-effectiveness findings. Second, prescribing patterns in clinical practice can vary by patient and provider and can differ from the dosing frequency studied in RCTs (Reference Mehta, Chen and Alexander24). When the trial-based dosing used in cost-effectiveness modeling differs from real-world prescribing, the incremental cost-effectiveness ratio may not reflect real-world practice. In the HAE case, the label for lanadelumab notes that the dosing frequency should be reduced from every 2 weeks to every 4 weeks if the patient has been attack free for 6 months (25). This reduction in dosing would halve the acquisition costs of the drug and improve the cost-effectiveness of the product with all other inputs held constant. In the 2018 model, there was insufficient data to include a reasonable estimate of the proportion of patients who would dose reduce in the base case. Therefore, less frequent dosing was included as a scenario based on the proportion of patients who remained attack free at 6 months in the trial, coupled with an assumption about the proportion of those patients who would successfully switch to less frequent dosing. Implementing the switch to less frequent dosing in the 2021 model reduces the ICER for lanadelumab by 18 percent. The true cost-effectiveness of these products as they are used in the real world is likely more aligned with the 2021 RWE-updated model.
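
The dosing arithmetic behind this point is straightforward; the sketch below uses a hypothetical per-dose price (only the labeled every-2-week versus every-4-week schedules are taken from the source) to show why de-escalation halves annual acquisition cost, and hence why the proportion of patients who de-escalate is such an influential model input.

    # Illustrative arithmetic only (hypothetical per-dose price): moving dosing
    # from every 2 weeks to every 4 weeks halves annual acquisition cost.

    price_per_dose = 20_000          # hypothetical placeholder, not the actual price
    doses_q2w = 52 / 2               # every 2 weeks -> 26 doses per year
    doses_q4w = 52 / 4               # every 4 weeks -> 13 doses per year

    annual_cost_q2w = price_per_dose * doses_q2w
    annual_cost_q4w = price_per_dose * doses_q4w
    print(f"q2w: ${annual_cost_q2w:,.0f}/year  q4w: ${annual_cost_q4w:,.0f}/year "
          f"({1 - annual_cost_q4w / annual_cost_q2w:.0%} reduction)")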

Researchers building cost-effectiveness models are often at the mercy of the published literature for inputs. Some of these inputs may not be timely or may not exactly match the criteria needed in the model. Even when published observational studies are used as inputs, the evidence is often outdated (Reference Jaksa, Skornicki and Patrick26). Conducting RWE studies alongside model development allows modelers to control the timeline and exact details of the inputs. For example, patient weight is important for HAE prophylaxis dosing. The 2018 model used weights for the average American published in 2016, whereas the RWE analysis was able to provide weight values for a cohort of HAE patients. Although this did not make a substantial difference in the model results for HAE, there could be therapeutic areas and models where precise estimates of patient characteristics are essential. Although it may be ideal to create the most relevant and timely model inputs from RWE concurrently with model development, researchers and HTA agencies can be constrained by fixed budgets, often mandated timelines, and the feasibility of accessing data that include the most relevant model parameters.

Observational RWE is beneficial for updating evidence and reducing uncertainty. It is also a feasible and efficient approach to addressing many HTA research questions. RWE generation focuses on utilizing the vast amount of data being collected globally by health systems, insurers, devices, and applications. The cost and time to complete an observational RWE study are substantially lower than those of an RCT of similar sample size. For example, an RCT that took 7 years and cost tens of millions of dollars was replicated with RWE in 12 weeks for a hundredth of the price (Reference Fralick, Kesselheim, Avorn and Schneeweiss27). Although we are not advocating for universal replacement of RCTs with RWE studies, some study questions are better suited to RWE studies than to expensive and large-scale RCTs. Admittedly, the measurements available within an RCT versus observational RWE can vary (with less opportunity to tailor measures in observational RWE because it is based on secondary data often collected for other purposes, e.g., billing). When appropriate, observational RWE can efficiently add evidence to HTA processes and support a shift toward estimating the cost-effectiveness of treatments as used in clinical practice.

Challenges to Accessing RWD and Using RWE

One of the main goals of using RWE is to complement clinical trials by addressing uncertainties, but RWE can share similar challenges. Uncertainties in clinical trials sometimes arise for logistical reasons. For example, accelerated approvals are often granted to therapies that treat rare diseases with substantial unmet needs. In these instances, running large and lengthy RCTs can be challenging or impossible because of difficulties with patient recruitment and the appeal of using shorter-term surrogate endpoints. Some of these logistical issues can also carry over to RWE. Selecting the appropriate timeframe post launch to re-evaluate using RWE is important. Although 2 years has become a popular timeframe for reassessment (7;11), it might not be appropriate for all research questions. Disease rarity that limits RCT enrollment can also limit the number of patients available for analysis in RWD. Slow uptake of a new technology can also delay accruing a sufficient sample size for RWD analysis. For HAE, a rare disease, fewer than 100 patients initiating each of the three drugs of interest were eligible for the RWE analysis. Although the most recently approved drug, lanadelumab, was approved in 2018, the average follow-up time for patients initiating it was less than 10 months. HTAs should balance the need to address uncertainties against the time required for RWD to accumulate, given the indication, patient population, and outcomes of interest.

An HTA agency’s ability to access RWD can also be a challenge, even in countries with a centralized health care system. In a survey of 24 HTA agencies in the European Union, rules complicating or prohibiting access and use of RWD were the main reason for not using RWD in decision making (Reference Stromgren and Ponten28). The cost of commercially available data sets may also be prohibitive for HTA agencies with fixed budgets.

RWE studies often rely on secondary data sources, such as claims or EHR data, which were collected for purposes other than research. Updating the most relevant model inputs may not be feasible in RWD or in the sources available to the researcher (e.g., laboratory values such as HbA1c are often not available in claims data). A mismatch between what is available in RWD sources and the data most relevant to updating the assessment is also possible. Another feasibility consideration is whether the secondary data include direct measures of all pertinent study endpoints. In some cases, researchers may apply algorithms to define these endpoints. One of the major challenges of using secondary data is ensuring that the algorithms used to define study endpoints are valid (i.e., reflect the underlying medical concepts that the researcher is trying to capture). Researchers should use endpoints that have been validated both in the data source (e.g., claims) and in the patient population; however, such validated endpoints might not exist, and in fact few do (Reference Mercon, Mahendraratnam and Silcox29). The FDA and other stakeholders (Reference Mercon, Mahendraratnam and Silcox29;30) are working to define best practices for validating RWD endpoints; however, consensus-based comprehensive guidance has not been released (Reference Jaksa, Wu and Jónsson31). In the absence of validated endpoints and best-practice methods to validate them, RWE researchers face considerable challenges in gaining acceptance of the data as fit-for-purpose and of the study results. In the HAE analysis, there was no validated endpoint for measuring HAE attacks. HAE attacks were instead defined based on clinical expertise, published clinical guidelines, manufacturer feedback on the protocol, and assumptions about treatment pathways and methods of treatment delivery to estimate the frequency of attacks. The protocol focused heavily on sensitivity analyses to pressure-test different definitions and their impact on the results. HTAs should work with other stakeholders to develop RWD validation best practices and help facilitate the validation of common endpoints (e.g., overall survival). In the absence of validated endpoints, HTAs should encourage, through official guidance, that researchers take a principled approach to testing the robustness of the results using multiple algorithms for the endpoint and highlight the limitations and need for future validation.
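
As a concrete, entirely hypothetical illustration of this kind of robustness testing (the claim records and attack-definition algorithms below are invented for this sketch and are not those used in the ICER/Aetion protocol), the same claims history can be run through alternative endpoint algorithms and the resulting attack rates compared:

    # Hypothetical sketch of pressure-testing an unvalidated claims-based endpoint.
    from datetime import date

    # Toy claims history for one patient: (service date, claim type)
    claims = [
        (date(2021, 1, 5),  "on_demand_rx"),   # fill of an on-demand HAE therapy
        (date(2021, 1, 6),  "ed_visit"),
        (date(2021, 2, 14), "on_demand_rx"),
        (date(2021, 3, 2),  "office_visit"),
        (date(2021, 3, 20), "on_demand_rx"),
        (date(2021, 3, 21), "on_demand_rx"),   # refill the next day: same attack or a new one?
    ]

    def attacks_any_on_demand(claims):
        """Algorithm A: every on-demand treatment claim counts as an attack."""
        return sum(1 for _, kind in claims if kind == "on_demand_rx")

    def attacks_with_washout(claims, washout_days=3):
        """Algorithm B: on-demand claims within a short washout window collapse into one attack."""
        dates = sorted(d for d, kind in claims if kind == "on_demand_rx")
        attacks, last = 0, None
        for d in dates:
            if last is None or (d - last).days > washout_days:
                attacks += 1
            last = d
        return attacks

    months_of_followup = 3
    for name, fn in [("any on-demand claim", attacks_any_on_demand),
                     ("on-demand claim + 3-day washout", attacks_with_washout)]:
        print(f"{name}: {fn(claims) / months_of_followup:.2f} attacks per month")

Reporting results under each candidate definition, as in this sketch, is one way to make the sensitivity of the findings to the endpoint algorithm transparent.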

Globally, the status quo of pricing and access is not necessarily aligned with a lifecycle approach to health technology management. The lifecycle approach hinges on re-evaluating the value of a product at multiple points within its lifecycle, and if the product no longer offers the same value, either its price or its reimbursement should change. Removing therapies that no longer offer value and/or clinical benefit has historically been challenging and is a noted obstacle in CED decisions. In a study of the Netherlands' CED schemes, the reassessment process advised that two out of ten therapies should be discontinued, but this advice was not implemented in Dutch healthcare practice (Reference Makady, van Veelen and de Boer32). Similarly, in the regulatory space, therapies that receive accelerated approval must confirm clinical benefit in postmarketing confirmatory studies, and authorization should be removed if these therapies do not show a benefit. However, in a study of thirty-five oncology indications that received accelerated approval from the FDA and were re-evaluated, ten did not confirm benefit, yet market authorization was not revoked (Reference Beaver and Pazdur33); of note, many of these have since been voluntarily withdrawn by the manufacturer. Importantly, in the HAE case, we do not advocate for patient access to be affected; the observational RWE update did not refute that patients' health is improved by taking the preventative therapies. However, prices paid for these therapies should change to align with health gains. In the 2018 report, which emphasized RCT evidence, U.S. prices paid for these therapies were not aligned with health gains, which complicates assessing whether the RWE update provided an opportunity to bring prices paid into alignment with health gains.

Recommendations and Considerations for Selecting Topics Relevant for RWE Updates

For HTAs, implementing lifecycle management will require additional resources, as it will potentially increase the number of assessments done for each therapy. Topic selection and prioritization will be important. To determine if a topic is relevant for an RWE update, we recommend the following considerations (Table 1):

  • Topic Selection: First and foremost, HTAs should ask if generating the RWE update can change the status quo and prioritize actionable opportunities.

  • Identifying Impactful Evidence Gaps: Evaluate all uncertainties in the prior assessment(s) and quantify (if possible) how they impact patients, clinical and cost-effectiveness results, and population-level decisions (e.g., tornado diagrams are useful tools for understanding the impact of uncertainty; a minimal sketch follows this list). Communicate with the manufacturers and other stakeholders to determine if these uncertainties are shared and if there are current plans to generate additional evidence to address the knowledge gap.

  • Feasibility: Can the most impactful evidence gaps be validly addressed, and can bias be appropriately controlled, in RWD? HTA agencies should follow best practices in identifying whether research questions are relevant for RWE studies, including data availability and measurement validity (Reference Gatto, Reynolds and Campbell34). Consider the optimal timeframe to evaluate the outcomes of interest and determine if there has been time for the data to mature. Is there a data set that is fit-for-purpose, and can this data be accessed?

  • Study Design: Once it is determined that RWE is an appropriate approach, study design and analytical considerations should be evaluated and published best practices should be followed (Reference Jaksa, Wu and Jónsson31). Determine if there are resources to execute the RWE study in the duration of the reassessment process.

  • Study Execution: For some HTA agencies, another consideration is how the RWE will be generated, which has consequences on timing, budget, and control over study implementation. Will internal resources, external collaborators, or manufacturers be responsible for the study execution?
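
As a minimal sketch of the one-way sensitivity (tornado) analysis mentioned in the "Identifying Impactful Evidence Gaps" consideration above, the following code varies each uncertain input between plausible bounds while holding the others at base case and ranks inputs by the resulting swing in a toy ICER. The model and all parameter values and bounds are hypothetical placeholders, not the HAE model.

    # One-way sensitivity ("tornado") sketch on a toy cost-effectiveness model.
    def toy_icer(params):
        averted = params["baseline_attacks_per_month"] * 12 * params["relative_reduction"]
        inc_cost = params["annual_drug_cost"] - averted * params["cost_per_attack"]
        inc_qalys = averted * params["qaly_loss_per_attack"]
        return inc_cost / inc_qalys

    base = {"baseline_attacks_per_month": 2.5, "relative_reduction": 0.8,
            "annual_drug_cost": 500_000, "cost_per_attack": 10_000,
            "qaly_loss_per_attack": 0.005}
    bounds = {"baseline_attacks_per_month": (1.5, 3.5),
              "relative_reduction": (0.6, 0.9),
              "cost_per_attack": (5_000, 20_000)}

    swings = []
    for name, (lo, hi) in bounds.items():
        results = [toy_icer(dict(base, **{name: value})) for value in (lo, hi)]
        swings.append((abs(results[1] - results[0]), name, min(results), max(results)))

    # Widest bar at the top of the tornado = most influential input
    for swing, name, low, high in sorted(swings, reverse=True):
        print(f"{name:28s} ICER range ${low:,.0f} - ${high:,.0f}")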

After the RWE study is executed and the reassessment process is complete, HTA agencies should reflect on the process to determine if the RWE ultimately impacted its decision and what learnings can be applied to future reassessments. Publication of the RWE results is also important to help others address similar uncertainties and promote the efficient use of resources.

Table 1. Summary of Key Considerations and Recommendations for Selecting Topics Relevant for RWE Updates

HTA, health technology assessment; RWD, real-world data; RWE, real-world evidence.

For the HAE case, attack rates, an outcome important to patients (and clinicians) and influential within the cost-effectiveness findings, provided a strong rationale for choosing this topic. We were confident that severe attack rates, cost inputs, and the frequency of dose de-escalation could be evaluated in RWD and that the study could be executed within a suitable time window to inform an assessment. We consulted with the manufacturers on post launch evidence generation plans, especially RWE studies that were recently published or ongoing, and incorporated their feedback into the RWE study protocol. Because of the sensitivity of the economic model to baseline HAE attack rates, an input that was hypothesized to potentially differ between an often-enriched clinical trial population and a real-world population, we felt the RWE could influence the overall results.

Considerations for Implementing a Reassessment Process

In addition to determining what topics are relevant for an RWE reassessment, there are important considerations for HTAs as they develop a reassessment process. HTAs will have to consider whether an entire refresh of the initial assessment is necessary or whether their process will update only certain aspects of the initial assessment. For the HAE reassessment, we limited the updated systematic literature review to identifying newly published evidence for consideration in the cost-effectiveness model and limited the reassessment to the same interventions included in the original assessment. We created a two-phased modeling approach to separate the incremental effects of the new RCT evidence and the RWE (Reference Bloudek, Jaksa and McKenna17). Although new evidence was limited in HAE, updating an entire assessment can be a substantial resource investment that HTAs should consider. HTAs should also consider which stakeholder engagement processes remain important to reassessments (e.g., industry, regulators, other HTA organizations, and patient groups) and build in such engagements. HTAs should evaluate whether newly generated evidence can be used across different healthcare authorities. Sustainability of the reassessment process is a key consideration, which includes resourcing, budget, and the opportunity to make evidence-based changes; we recommend using pilots to test processes and collect information on resourcing needs, the impact of stakeholder engagement, and challenges.
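
A staged update of this kind can be sketched as follows. This is a hypothetical illustration of the general idea, not the HAE model, and the phase-specific input changes are placeholders: each evidence package is layered onto the prior inputs in sequence and the ICER is recorded after every step, so the incremental effect of each package can be read off directly.

    # Hypothetical sketch of a two-phased (staged) model update.
    def toy_icer(p):
        averted = p["baseline_attacks_per_month"] * 12 * p["relative_reduction"]
        return (p["annual_drug_cost"] - averted * p["cost_per_attack"]) / (averted * p["qaly_loss_per_attack"])

    stages = [
        ("Original base case", {"baseline_attacks_per_month": 3.39, "relative_reduction": 0.80,
                                "annual_drug_cost": 500_000, "cost_per_attack": 10_000,
                                "qaly_loss_per_attack": 0.005}),
        ("Phase 1: + new RCT evidence", {"relative_reduction": 0.85}),       # placeholder update
        ("Phase 2: + observational RWE", {"baseline_attacks_per_month": 1.88,
                                          "cost_per_attack": 8_000}),        # placeholder update
    ]

    inputs = {}
    for label, updates in stages:
        inputs.update(updates)              # each phase layers onto the prior inputs
        print(f"{label:32s} ICER ~${toy_icer(inputs):,.0f} per QALY")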

Conclusions

This pilot touches on the intersection of several salient topics for HTAs as they potentially shift to a lifecycle approach: the usefulness of RWE, the challenges of post launch evidence development and reassessments, and economic modeling in the face of uncertainty. Although there is literature on each of these topics alone, there is less understanding of, and there are fewer concrete examples of, how RWE can facilitate reassessments and reduce uncertainty in economic modeling, both of which will be integral to the shift toward lifecycle management. Lessons learned from this pilot can bolster both the evidence on the use of RWE in HTA to address clinical and cost-effectiveness uncertainty and the case for RWE as a tool for HTAs in lifecycle management. Criteria to consider for future observational RWE updates are feasibility, quality, efficiency, what matters most to patients, what evidence matters most to the HTA's objectives, and providing actionable evidence to policy makers. Lessons from this and other experiences should inform advances to best-practice HTA frameworks.

Funding Statement

The University of Washington has received funding from the Institute for Clinical and Economic Review related to this work.

Conflicts of Interest

Ashley Jaksa and Amanda R. Patrick are employed by and own stock options in Aetion, Inc. Lisa Bloudek, Yilin Chen, and Josh J. Carlson are associated with the University of Washington, which received funding for this work from the Institute for Clinical and Economic Review. Jon D. Campbell, Kanya Shah, and Avery McKenna are employees of the Institute for Clinical and Economic Review.

References

Nordon, C, Karcher, H, Groenwold, RHH, et al (2016) The "efficacy-effectiveness gap": Historical background and current conceptualization. Value Health J Int Soc Pharmacoeconomics Outcomes Res. 19, 75–81.
Kent, S, Salcher-Konrad, M, Boccia, S, et al (2021) The use of nonrandomized evidence to estimate treatment effects in health technology assessment. J Comp Eff Res. 10, 1035–1043. Available at: https://www.futuremedicine.com/doi/10.2217/cer-2021-0108.
Oortwijn, W, Sampietro-Colom, L, Trowman, R (2019) How to deal with the inevitable: Generating real-world data and using real-world evidence for HTA purposes - From theory to action. Int J Technol Assess Health Care. 35, 346–350.
Siebert, U, Rochau, U, Claxton, K (2013) When is enough evidence enough? - Using systematic decision analysis and value-of-information analysis to determine the need for further evidence. Z Evidenz Fortbild Qual Im Gesundheitswesen. 107, 575–584.
Dabbous, M, Chachoua, L, Caban, A, Toumi, M (2020) Managed entry agreements: Policy analysis from the European perspective. Value Health J Int Soc Pharmacoeconomics Outcomes Res. 23, 425–433.
Coverage with Evidence Development | CMS [Internet]. [cited 2022 Jan 20]. Available at: https://www.cms.gov/Medicare/Coverage/Coverage-with-Evidence-Development.
Trueman, P, Grainger, DL, Downs, KE (2010) Coverage with evidence development: Applications and issues. Int J Technol Assess Health Care. 26, 79–85.
CADTH 2018-2021 Strategic Plan, 14.
Facey, KM, Espin, J, Kent, E, et al (2021) Implementing outcomes-based managed entry agreements for rare disease treatments: Nusinersen and Tisagenlecleucel. PharmacoEconomics. 39, 1021–1044. Available at: https://link.springer.com/10.1007/s40273-021-01050-5.
Value Assessment Framework [Internet]. ICER. [cited 2021 Oct 19]. Available at: https://icer.org/our-approach/methods-process/value-assessment-framework/.
NICE. The NICE Strategy 2021 to 2026 [Internet]. NICE. [cited 2021 May 3]. Available at: https://www.nice.org.uk/about/who-we-are/corporate-publications/the-nice-strategy-2021-to-2026.
Developing a Canadian Real-World Evidence Action Plan Across the Drug Life Cycle | CADTH [Internet]. [cited 2021 Oct 19]. Available at: https://www.cadth.ca/news/developing-canadian-real-world-evidence-action-plan-across-drug-life-cycle.
Berger, ML, Sox, H, Willke, RJ, et al (2017) Good practices for real-world data studies of treatment and/or comparative effectiveness: Recommendations from the joint ISPOR-ISPE special task force on real-world evidence in health care decision making. Pharmacoepidemiol Drug Saf. 26, 1033–1039.
Sherman, RE, Anderson, SA, Dal Pan, GJ, et al (2016) Real-world evidence - What is it and what can it tell us? N Engl J Med. 375, 2293–2297.
Jaksa, A, Mahendraratnam, N (2021) Learning from the past to advance tomorrow's real-world evidence: What demonstration projects have to teach us. J Comp Eff Res. 10, 1169–1173.
Bloudek, L, Jaksa, A, McKenna, A, et al (2021) Observational real-world evidence update prophylaxis of hereditary angioedema with Takhzyro and C1 inhibitors: Effectiveness and value [Internet]. ICER. Available at: https://icer.org/wp-content/uploads/2020/10/ICER_Aetion_RWE_HAE_Report_082421.pdf.
Methods & Process [Internet]. ICER. [cited 2021 Oct 19]. Available at: https://icer.org/our-approach/methods-process/.
Lin, GA, Agboola, F, Samur, S, et al (2018) Prophylaxis for hereditary angioedema with lanadelumab and C1 inhibitors: Effectiveness and value [Internet]. Available at: https://icer.org/wp-content/uploads/2020/10/ICER_HAE_Final_Evidence_Report_111518-1.pdf.
ICER (2021) Observational real-world evidence update prophylaxis of hereditary angioedema with lanadelumab and C1 inhibitors: Effectiveness and value modeling analysis plan [Internet]. Available at: https://icer.org/wp-content/uploads/2020/10/ICER_HAE_RWE_MAP_041221.pdf.
ICER (2021) Observational real-world evidence update prophylaxis of hereditary angioedema with Takhzyro and C1 inhibitors: RWE protocol [Internet]. Available at: https://icer.org/wp-content/uploads/2020/10/ICER_Aetion_HAE_RWE_Protocol_041221.pdf.
Blumenthal, DM, Yu-Isenberg, K, Yee, J, Jena, AB (2021) Real-world evidence complements randomized controlled trials in clinical decision making. Health Affairs Blog [Internet]. [cited 2021 Oct 19]. Available at: https://www.healthaffairs.org/do/10.1377/hblog20170927.062176/full/.
van Onzenoort, HAW, Menger, FE, Neef, C, et al (2011) Participation in a clinical trial enhances adherence and persistence to treatment: A retrospective cohort study. Hypertens Dallas Tex 1979. 58, 573–578.
Mehta, RH, Chen, AY, Alexander, KP, et al (2015) Doing the right things and doing them the right way: Association between hospital guideline adherence, dosing safety, and outcomes among patients with acute coronary syndrome. Circulation. 131, 980–987.
Food and Drug Administration (2018) Full prescribing information: Takhzyro [Internet]. [cited 2021 Oct 19]. Available at: https://www.accessdata.fda.gov/drugsatfda_docs/label/2018/761090s001lbl.pdf.
Jaksa, A, Skornicki, M, Patrick, A (2019) Are value assessment frameworks using RWE? A review of NICE and ICER psoriasis assessments. Value Health J Int Soc Pharmacoeconomics Outcomes Res. 22, S611. Available at: https://www.ispor.org/heor-resources/presentations-database/presentation/euro2019-3121/96938.
Fralick, M, Kesselheim, AS, Avorn, J, Schneeweiss, S (2018) Use of health care databases to support supplemental indications of approved medications. JAMA Intern Med. 178, 55–63.
Stromgren, A, Ponten, J. Next generation health technology assessment to support patient-centred, societally oriented, real-time decision-making on access and reimbursement for health technologies throughout Europe [Internet]. HTx Consortium. Available at: www.htx-h2020.eu.
Mercon, K, Mahendraratnam, N, Silcox, C, et al (2020) A roadmap for developing study endpoints in real-world settings [Internet]. Margolis Center for Health Policy. [cited 2021 Jul 12]. Available at: https://healthpolicy.duke.edu/publications/roadmap-developing-study-endpoints-real-world-settings.
Food and Drug Administration (2021) Real-world data: Assessing electronic health records and medical claims data to support regulatory decision making for drug and biological products [Internet]. Food and Drug Administration. [cited 2021 Oct 18]. Available at: https://www.fda.gov/media/152503/download.
Jaksa, A, Wu, J, Jónsson, P, et al (2021) Organized structure of real-world evidence best practices: Moving from fragmented recommendations to comprehensive guidance. J Comp Eff Res. 10, 711–731.
Makady, A, van Veelen, A, de Boer, A, et al (2019) Implementing managed entry agreements in practice: The Dutch reality check. Health Policy Amst Neth. 123, 267–274.
Beaver, JA, Pazdur, R (2021) "Dangling" accelerated approvals in oncology. N Engl J Med. 384, e68.
Gatto, NM, Reynolds, RF, Campbell, UB (2019) A structured preapproval and postapproval comparative study design framework to generate valid and transparent real-world evidence for regulatory decisions. Clin Pharmacol Ther. 106, 103–115.