Health Science Journal

ISSN: 1791-809X

Review Article - (2023) Volume 17, Issue 9

A Review of the ICER Unsupported Price Increase Report and its Potential Use in Health Policy Decisions

Roman Casciano1*, Matthew Brougham1, Ulrich Neumann2 and Bridget Doherty2
 
1Department of Pharmaceuticals, University of Wilmington, Wilmington, USA
2Department of Pharmaceuticals, University of Pittsburgh, Titusville, USA
 
*Correspondence: Roman Casciano, Department of Pharmaceuticals, University of Wilmington, Wilmington, USA, Email:

Received: 22-Nov-2022, Manuscript No. IPHSJ-23-13184; Editor assigned: 25-Nov-2022, Pre QC No. IPHSJ-23-13184 (PQ); Reviewed: 09-Dec-2022, QC No. IPHSJ-23-13184; Revised: 01-Sep-2023, Manuscript No. IPHSJ-23-13184 (R); Published: 29-Sep-2023, DOI: 10.36648/1791-809X.17.9.1057

Abstract

Background: The Institute for Clinical and Economic Review (ICER) recently published its 3rd annual Unsupported Price Increase (UPI) report and positions it as a guide for policy making on prescription drug spending. Lawmakers across the US have shown interest in the UPI approach as a potential tool to optimize healthcare resource allocation.

Methods: The latest UPI methodology and its changes were explicitly documented and then compared and contrasted with the authors' understanding and experience of sound scientific method and logic.

Results: The UPI report’s findings are based on a partial product selection and the rejection of high-quality evidence, offering healthcare decision makers misleading guidance on the value for money of the assessed products.

Conclusion: Our evaluation shows that significant methodological flaws mar the UPI report’s utility and that it is a tool that may cause unintended consequences that healthcare decision makers and patients in the US can ill afford.

Keywords

Prescription drug spending; Unsupported price increase; Policy making; Healthcare; ICER

Introduction

Prescription drug spending remains a priority for federal and state lawmakers. While initiatives such as price transparency and drug affordability boards are now law in several states, a policy under consideration across a number of jurisdictions is the regulation of price increases. Proposed state legislation would impose taxes or penalties when the list price (i.e., Wholesale Acquisition Cost (WAC)) of a branded medicine increases above a pre-specified threshold and that increase is deemed unsupported by clinical evidence. To the extent that such policy proposals are shaped by analyses from private third-party organizations, such as the Institute for Clinical and Economic Review (ICER), those analyses should be subject to rigorous standards and open debate [1].

ICER, a 501(c) non-profit research organization, issues health policy papers and reports on the clinical and cost-effectiveness of pharmaceutical products. ICER's operations are funded through grants, contracts and unrestricted gifts from donors, the largest being Arnold Ventures LLC. Crediting financial support from Arnold Ventures, ICER launched its annual Unsupported Price Increases Report (UPI Report) in 2019 to 'advance the public debate on drug price increases'; its most recent edition was published on November 16, 2021, covering the year 2020. As decision makers consider using the report to define health policy and affect resource allocation, the quality of the analysis, the methodology, the interpretation of the findings and their implications require critical review.

Literature Review

A standard approach to scientific critique was adopted. All versions of the UPI methodology up to the time of analysis were documented step by step, and changes in each step were systematically recorded along with all published reasons for those changes. This analysis was performed separately by two researchers, and all differences were resolved through review by the research team and authors [2].

The most current UPI methodology was then analysed by the research team to determine its ability to answer the question it was designed to address. Weaknesses in the methodology were identified by comparing and contrasting it with the research team's understanding and experience of high-quality scientific method and were captured through a Delphi process run by the project manager over four separate meetings. Each identified weakness was subsequently discussed to determine its importance. The UPI researchers' changes in methodology and, where available, their stated reasons for the changes provided context to help focus attention on methodological flaws and on unsupported and/or implicit assumptions. The key methodological weaknesses identified in this way were recorded for discussion [3].

Discussion

Research premise and assumptions

When assessing the quality of any social science research program, a foundational consideration is whether the research design offers precisely defined terms and avoids logical fallacies regarding the measurement of causation, relationships or associations.

The report's title and study variable of interest, 'unsupported price increases', suggest that the market price increases analyzed were not justified by clinical evidence (not 'supported') and should therefore be subject to enhanced scrutiny by healthcare decision makers. If price is understood as a market dynamic between supply and demand, then such framing is misleading in that it implies that market pricing dynamics rest solely on recently published clinical trial data. However, the US pharmaceutical pricing process reaches far beyond the data ICER deems sufficient. Prices result from a highly competitive and negotiated process between multiple actors involving rebates, channel concessions, volume agreements and utilization restrictions, among other factors. In its assessments, ICER acknowledges the importance of incorporating domains such as utilization management and formulary tiering given their direct association with net pricing, yet it ignores these same domains in its UPI report [4].

Acknowledging the ambiguity around an ill-defined research question, the report states as a limitation that the UPI methodology 'cannot determine whether a price increase for a drug is fully justified by new clinical evidence'. The degree to which the UPI analysis offers only a partial view of a price change's justification remains entirely unaddressed and is seemingly forgotten in the headline. The approach rests on a logical fallacy: ICER's finding of a lack of literature does not necessarily mean a price change is unsupported; various other factors, including the perceived value of the drug, may explain observed price changes. As we know from studies on cause-effect relationships, ignoring other explanatory variables poses a threat to the internal validity of the research.

ICER’s blurred research construct can be restated in simpler terms. Conclusions of the UPI Report are solely based on ICER’s categorical determination of clinical evidence meeting its own expectations. The presence or lack of clinical evidence meeting ICER criteria is then erroneously interpreted as the sole legitimate foundation for pricing dynamics, a misconstruction of drug reimbursement realities justified by no rationale in the economics, strategic pricing or health economics literature [5].

UPI report approach and methodology

Our review shows that ICER’s UPI methodology has not been validated and lacks substantiation in the health economics discipline. It can be best described as a set of sequential analytical steps to derive a ranking of products.

First, net sales data are obtained from a proprietary estimate by SSR Health LLC, an investment research firm, to determine the 250 medicines with the highest net revenues.

Second, changes in the average WAC are calculated for the selected 250 products and those with an increase exceeding the medical consumer price index by 2% qualify for further examination. In the most recent UPI Report, 32 drugs exceeded this threshold.

Third, for those 32 products ICER again utilizes SSR Health estimates to calculate net price changes, gauging net sales and expected budget changes due to the change in net price. ICER then excludes drugs from consideration based on a 'lack of face validity' if the net price is higher than the WAC price. In the most recent report, 11 of the 32 drugs, more than one-third, were excluded due to this inconsistency.

Fourth, ICER allows for the arbitrary addition of three products meeting certain subjective criteria, undermining their primary selection methodology and raising questions about the impartiality of drug selection. For example, the addition of ‘drugs whose price increases raise concerns about the fairness of the price increases’ is particularly problematic and open to interpretation.

Fifth, manufacturers of the 15 highest ranked products are then notified that their drugs will potentially be reviewed for price justification and they are granted three weeks to clarify the calculated estimates of average price changes or budget impact. Disputes must include either the effect of net price changes on change in revenue or average net prices and total volumes of sales for the evidence review period.

Finally, after disputes are resolved, the top 10 drugs remaining on the list are evaluated further. ICER determines all indications that account for at least 10% of the utilization of each drug. For the indications above the 10% threshold, ICER seeks to assess the quality of the clinical evidence, drawing on the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) scale. If ICER analysts rate evidence as 'moderate or high quality' according to their application of the GRADE scale, the magnitude of net benefit is estimated using the ICER evidence rating matrix. Products without new 'moderate or high quality' evidence of benefit are categorized as having unsupported price increases. Importantly, the UPI Report features no rationale for why certain rankings were made and by whom, nor does it fully explain the application of its evidence rating matrix or many of its findings. We also found no reference to the level of inter-rater reliability among the ICER assessors to gauge the reproducibility of the results [6].
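Taken together, these steps amount to a sequential filter-and-rank procedure. The following Python sketch is purely illustrative of the selection logic as described above; the data fields, the ranking criterion used for the manufacturer-notification step and the threshold handling are assumptions introduced for demonstration and do not reproduce ICER's actual implementation or its data sources.

```python
from dataclasses import dataclass
from typing import Iterable, List, Sequence


@dataclass
class Drug:
    """Minimal, hypothetical record for one product (all fields are assumptions)."""
    name: str
    net_revenue: float          # SSR Health estimate of annual net sales (USD)
    wac_change_pct: float       # year-over-year change in average WAC (%)
    wac_price: float            # list price (WAC)
    net_price: float            # estimated net price after rebates and discounts
    net_price_change_pct: float
    has_new_moderate_or_high_quality_evidence: bool


def upi_selection(drugs: Iterable[Drug],
                  medical_cpi_pct: float,
                  manual_additions: Sequence[Drug] = (),
                  notified_n: int = 15,
                  reviewed_n: int = 10) -> List[str]:
    """Illustrative sketch of the UPI selection steps described in the text."""
    # Step 1: take the 250 medicines with the highest net revenues.
    candidates = sorted(drugs, key=lambda d: d.net_revenue, reverse=True)[:250]

    # Step 2: keep products whose WAC increase exceeds the medical CPI by 2%.
    candidates = [d for d in candidates
                  if d.wac_change_pct > medical_cpi_pct + 2.0]

    # Step 3: 'face validity' exclusion -- drop drugs whose estimated net price
    # exceeds the WAC price (11 of 32 drugs in the most recent report).
    candidates = [d for d in candidates if d.net_price <= d.wac_price]

    # Step 4: discretionary additions meeting ICER's subjective criteria.
    candidates = candidates + list(manual_additions)

    # Step 5: notify manufacturers of the 15 highest-ranked products.
    # The ranking criterion used here (a budget-impact proxy: net revenue times
    # net price change) is an assumption; the report does not fully specify it.
    candidates.sort(key=lambda d: d.net_revenue * d.net_price_change_pct,
                    reverse=True)
    notified = candidates[:notified_n]

    # Step 6: after disputes, the top 10 remaining drugs are reviewed; those
    # without new 'moderate or high quality' evidence (per ICER's GRADE
    # application) are labelled as having unsupported price increases.
    reviewed = notified[:reviewed_n]
    return [d.name for d in reviewed
            if not d.has_new_moderate_or_high_quality_evidence]
```

As the sketch makes explicit, every filter operates on price and revenue data alone; clinical evidence enters only at the final step, which is the crux of the critique in the sections that follow.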

Procedural inadequacies

The analytical procedure described above presents a variety of challenges for both the internal and external validity of the research. The UPI report makes no assessment of the overall clinical and economic value of any of the listed medications; it assesses only the context of a price change and ignores the empirical economic realities and market factors that govern drug prices.

ICER lacks a systematic method of data collection. From a research quality perspective, consideration of evidence for a 'justifiable' price change is based on each manufacturer's interpretation of the report criteria, which may result in a vastly dissimilar evidence base for each assessed product. As manufacturer participation is optional, ICER conducts its own independent systematic review of evidence only for products for which it cannot rely on manufacturer submissions. However, that review is limited to published data from Randomized Controlled Trials (RCTs). ICER thereby ignores other crucial types of research, including real-world analyses, meta-analyses and observational data. This data selection process yields a highly fragmented and uneven evidence pool inappropriate to support comparative analysis [7].

By using a narrow definition of 'substantial new evidence,' the UPI report does not fully capture the clinical value of a drug. In the 2021 UPI report assessing price increases in 2020, of the 286 pieces of evidence reviewed, only 21 (7%) were considered 'moderate to high quality' and used in the final analysis; ICER dismissed 137 of the 286 submissions (48%) as studies not meeting its criteria for new moderate-to-high quality evidence, despite their having been peer-reviewed, presented at established conferences and published in scientific journals such as the Journal of the American Medical Association (JAMA), the New England Journal of Medicine and The Lancet (Table 1).

References                2018     2019     2020
Total considered          1393     264      286
Total accepted by ICER    3        9        21
Percentage accepted       0.22%    3.41%    7.34%

Table 1. References considered versus accepted by ICER, by report year.
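As a simple check, the acceptance rates in Table 1, and the cumulative figures cited later in the discussion, follow directly from the reported counts; the short calculation below merely reproduces them.

```python
# Reference counts reported in Table 1, by UPI report year.
considered = {2018: 1393, 2019: 264, 2020: 286}
accepted = {2018: 3, 2019: 9, 2020: 21}

for year in sorted(considered):
    print(f"{year}: {100 * accepted[year] / considered[year]:.2f}% accepted")
# 2018: 0.22%, 2019: 3.41%, 2020: 7.34%

total_considered = sum(considered.values())   # 1,943 references
total_accepted = sum(accepted.values())       # 33 references
rejected_pct = 100 * (1 - total_accepted / total_considered)
print(f"Overall: {rejected_pct:.0f}% of references rejected")  # ~98%
```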

This raises the question of whether clinical experts in the disease states addressed by the evidence would show a similar level of rejection. We find that without transparency on the review process and evidence ratings, questions about the reproducibility and generalizability of the report's findings are heightened.

Exclusion criteria

The UPI report utilizes explicit exclusion criteria that further limit the ability to assess clinical evidence, such as the rejection of clinical evidence for indications that account for less than 10% of drug utilization and exclusion of confirmatory studies that strengthen certainty of clinical impact.

ICER rejected 17 submissions of clinical evidence in the 2020 UPI report on these grounds. By excluding clinical evidence for less-used indications, such as those in underserved populations, including patients with rare or pediatric conditions, regardless of its potential quality or impact, the UPI report appears to discount the voices of patients who may already be overlooked in the healthcare system, and it creates a disincentive to pursue research in underserved populations for already-marketed therapies.

The exclusion of confirmatory studies is particularly perplexing. Confirmatory trials strengthen the certainty around a treatment and ICER itself acknowledged the critical importance of such studies in its recent policy paper on the US Food and Drug Administration (FDA) accelerated approvals pathway. This type of evidence also plays a crucial, validating role in evidence-based medicine and is used by professional societies to determine clinical guidance with a higher degree of confidence [8].

A conspicuous omission is the failure to capture observational data and real-world evidence. ICER declares, 'most high-quality comparative observational studies generate only low-quality evidence using GRADE for the comparison being assessed'. It is misleading to suggest that types of research other than RCTs do not inform clinical care or to imply that the GRADE methodology supports ignoring non-RCT evidence. Not a single real-world evidence study was considered moderate-to-high quality by ICER in its 2020 report. This reductionist approach to evidence is not broadly supported by established scientific methodology, and it could discourage continued research in this critical area. Such a restrictive approach also breaks ranks with the position of the FDA, which now considers these evidence types meaningful even for drug approval, while key stakeholders across the healthcare landscape, including patients, payers and providers, are using real-world research to improve evidence-based care (including ICER, which uses this type of data when conducting its value assessments).

By limiting its attention to RCTs alone, the UPI report deviates from established scientific methodology accepted and encouraged by the FDA and omits crucial high-quality, peer-reviewed data that examine the impact of a drug on outcomes in real-world settings. The UPI report further validates criticisms from patient associations that have challenged ICER on its perceived lack of interest in properly incorporating patient perspectives in its findings [9].

Following previous UPI report trends, ICER rejected all comments and the vast majority of references submitted by manufacturers in the recent 2020 report. Based on our analysis, in the past three UPI assessments, ICER rejected a total of 98% of the evidence submitted by manufacturers and found by ICER in its own searches, dismissing a large volume of peer-reviewed scientific literature. Since the inception of this report, ICER has assessed a total of 1,943 references with only 33 references accepted. The methodological exclusions of peer-reviewed clinical and other high-quality data reduce the ability of the UPI report to achieve even its narrowly stated purpose, that is, to identify new evidence potentially in support of a price increase.

Market influences

The UPI report fails to acknowledge that pricing is heavily influenced by other economic actors such as payers, Pharmacy Benefit Managers (PBMs), wholesalers and distributors. Multiple analyses show that broader medicine uptake, not price increases for patent-protected brands, is the major driver of spending growth.

Scientific innovation in areas of unmet need shifts spending to novel therapies and drug classes, but due to effective genericization and declining net prices, manufacturer revenues after all discounts and rebates have grown by only $56 per capita since 2010. Thus, a focus only on perceived price increases in branded products misses the consistently moderating effect of generics in general and the impact of new generic entries, including biosimilars, in particular [10].

The ICER UPI methodology results in a number of arbitrary findings that may create market distortions. Looking at branded drugs identified by sales volume and price means that widely used products (which may be more widely used because of their clinical value) could be marked as having ‘unsupported’ price increases, while lesser used treatments in the same class with the same price change would not be flagged, even though the aggregate impact in dollar terms may be greater.

The utility of the UPI report in decision-making

Economic research focuses on elucidating relative value for money, misaligned incentives or market failures. The UPI report ignores these dimensions. Yet, its findings are being utilized to advance price control policies by state governments. Recent survey research also suggests that over 6 in 10 commercial payers now consider the results of ICER’s UPI report in formulary decision making.

Analyses of drug spending have their place in budgetary decision making. However, one cannot determine the net benefit or loss of drug price controls by examining whether or not ‘substantial new clinical evidence,’ as narrowly defined by ICER, exists. Visible cost savings for larger (and potentially more valuable) treatments may be completely offset by implicit market distortions created by the UPI report. Policymaking based on the UPI report findings alone without consideration of other costs and benefits could create substantial downsides for allocative efficiency and spending overall. This partial analysis could easily divert payer efforts from actions that may have greater impact on their budgets and fewer unintended consequences.

Using the UPI report findings could result in unfair, uneven or systematically biased price controls, while opening the process to legal challenges. The fundamental methodological flaws we discovered in the ICER process go beyond a mere failure of research design or misguided signals to policymakers and may result in serious consequences for patient access and outcomes.

Bridging scientific research to health policymaking can be challenging. Academics face the dilemma that effectively shaping evidence-based policy often requires deliberate persuasion, intentional positioning and emotional appeal to allow abstract models to fully impact decision makers. Organizations such as ICER straddle difficult territory in their quest to shape policy outcomes. However, where the dividing line between advocacy and academia gets thin, a critical distinction emerges in our analysis: ICER's research process is narrowed to exclude countervailing results in support of a predefined narrative, so as to steer policymaking in a specific direction. Given the range of fundamental methodological flaws, the UPI report is advocacy presented as research, which is in line with the report donor's stated mission to motivate drug price reductions [11].

In times of highly polarized policy debates, academic rigor in health policy analysis gains importance for both long-term scientific credibility and dependable policymaking. Policymakers ultimately bear the responsibility of making decisions that impact the overall health of the people and thus need to uphold patient-centricity as a top priority. Our analysis demonstrates how the methods of the UPI report may result in partial product selection, feature an incomplete analysis of the evidence base and deliver potentially misleading conclusions regarding growth drivers in healthcare spending. Were we to GRADE the evidence set used by ICER to reach its product-specific conclusions, the most applicable rating would be 'very low: the true effect is probably markedly different from the estimated effect.' In a report aspiring to leverage the authority of objective scientific analysis to inform policy choices, major methodological failings mar its utility.

In addition to flawed methodology, the UPI report diverts attention away from true drivers of healthcare spend. In a recent report, IQVIA found that the cost of medicines after all discounts and rebates declined 2.9% in 2020, continuing a downward trend over the past five years. Of the more than 20,000 prescription drugs approved for marketing in the US, the most recent ICER UPI Report highlighted 12 with a meaningful impact on spending due to price changes, of which only eight were deemed to have had unsupported net price increases [12].

Conclusion

Furthermore, one of those eight was responsible for 85% of the identified drug spending increases. The other seven had marginal impacts to any single payer. For budget-constrained payers, the UPI report provides no policy guidance as to the value for money of any of the selected products, no view on the level of market efficiency within any of the drug classes or important impacts on population health. It does offer a blunt tool to rally against market price dynamics for a certain set of drugs, but as it is with blunt tools, caution is advised. Partial and flawed analyses such as the UPI report may cause unintended consequences healthcare decision makers and patients can ill afford.

Funding

This study received funding from Janssen Scientific Affairs, LLC. Employees of the funder had a role in study design, data collection and analysis, decision to publish and preparation of the manuscript, and serve as co-authors.

Declaration of Financial/Other Relationships

RC and MB were employed by Certara and work for multiple biopharmaceutical manufacturers whose products could be evaluated by ICER. UN and BD were employed by Janssen Scientific Affairs, LLC, whose products have fallen under the scope of the UPI report and may do so in the future. This study received funding from Janssen Scientific Affairs, LLC. Employees of the funder had a role in study design, data collection and analysis, decision to publish and preparation of the manuscript and are, consistent with their contributions, listed as co-authors. All authors declare no other competing interests.

Author Contributions

MB and RC conducted the research and performed evidence compilation. MB, RC, UN and BD performed analysis and developed conclusions based on the evidence compiled. MB and UN performed the bulk of the writing with content review and all authors performed revisions.

Acknowledgments

Editorial and writing support was provided by Cheryl Jones, BA, and Shereen Cynthia D'Cruz, PhD, of Certara Synchrogenix, under the direction of the authors, in accordance with good publication practice guidelines, and was funded by Janssen Scientific Affairs, LLC.

References

Citation: Casciano R, Brougham M, Neumann U, Doherty B (2023) A Review of the ICER Unsupported Price Increase Report and its Potential Use in Health Policy Decisions. Health Sci J. Vol. 17 No. 9: 1057.