The Institute for Clinical and Economic Review (ICER) measures the value of new drugs and health technologies, but faces several key challenges: keeping their assessments up to date with new evidence, allowing for different perspectives on treatment value, and incorporating diverse patient preferences. To address these issues, ICER should follow the lead of the tech industry and start building open-source models to measure treatment value.
ICER’s goal is to evaluate “the clinical and economic value of prescription drugs, medical tests, and other healthcare and healthcare delivery innovation”. These value assessments are then turned into value-based price recommendations.
ICER arrives at these recommendations through a 33-week review process in which a value-based price is recommended and a panel of experts votes on the relative clinical and economic merits of new drugs. ICER’s price recommendations have been used by organisations such as CVS and the Department of Veterans Affairs.
While ICER’s reputation has grown, their current approach suffers from four major challenges.
Four challenges with ICER’s current approach
First, ICER’s report-based system is not sufficiently flexible to account for the rapid flow of new information. Because the output of an ICER review is a static report, updating an ICER model with new information is difficult.
ICER does periodically update their evidence assessments either through a brief two- to three-page summary or through their full, eight-month review process.
However, making marginal changes to the model as new evidence arrives in real time is generally not possible. This approach is particularly problematic since evidence is often incomplete at drug launch. For instance, a treatment’s real-world effectiveness will not be known, its impact on patient productivity may not be measured, and how the treatment would affect the lives of real-world caregivers is often uncertain. ICER’s inability to update their value assessments as this evidence accumulates is the first key limitation.
Second, ICER’s approach takes the perspective of a single payer system and fails to account for heterogeneity in different concepts of value across US stakeholders.
For instance, private health insurers may care most about healthcare costs and health outcomes, but ignore a drug’s impact on productivity. Employers may care more about productivity, but may not take into account how a treatment affects the educational outcomes of an enrollee’s child. Policymakers may take a broader societal view and highly value drugs that improve educational attainment, but their perspective on value may vary depending on whether they are interested in maximizing social welfare at the federal, state, or local level.
ICER does now include a modified societal perspective in their 2020 value assessment framework, but even so, many novel components of value are relegated to the “other benefits” and “contextual considerations” sections of the report, and are not included in any quantitative measure of value.
Third, ICER does not allow for different patient preferences to influence the way they measure value. Consider a case with two treatments: Treatment A offers improved survival with a high risk of adverse events, while Treatment B offers inferior survival with a low risk of adverse events. Having both treatments available would be of high value, since some patients would prefer Treatment A’s better survival while others would prefer Treatment B’s fewer adverse events. In ICER’s approach, however, the value of outcomes is measured only from the average patient’s perspective, obscuring heterogeneity in preferences across patients.
Fourth, ICER models are not fully transparent. ICER describes their models in detail in their 100+ page reports, but despite these lengthy descriptions, researchers are often unable to fully replicate their methods. ICER has recently created a pilot program that allows drug manufacturers to access shareable, executable versions of ICER’s models, but access comes at a cost and the models are available only to drug manufacturers, not to the wider research community.
Taking an open-source approach
ICER could overcome many of these obstacles through an open-source approach.
Under an open-source approach, ICER would release all underlying analyses into the public domain. Not only would ICER create a model and write a report, but the underlying code would be sharable with the public.
Further, the creation of user-friendly interactive tools would allow nonacademics to measure treatment value using a variety of alternative assumptions without having to be a coding expert.
In the world of technology, open-source approaches are commonplace. Some of the most popular computer operating systems (Linux) and web browsers (Mozilla Firefox) are open source. My own blog, Healthcare Economist, is written on an open-source software platform: WordPress.
Even in the competitive landscape of artificial intelligence (AI), the software underlying AI is increasingly open source. Google, for instance, open-sourced its AI engine, TensorFlow. This collaborative approach helped other companies and researchers by letting them use Google’s technology, but Google also benefited by being able to learn from other developers and incorporate their improvements into its AI platform.
As applied to value measurement, open-source modeling would solve many of ICER’s current challenges.
First, model updates could be implemented in real time by any researcher. As additional information on treatment benefits, risks, and costs emerges, researchers could readily incorporate it into the open-source model.
If researchers disagreed with a given model structure or parameter assumption, they could add toggles to switch between model structures or parameter values, or create branches from an overall model tree.
Second, users could customise the model to tailor the definition of value to what is meaningful to them. For instance, employers could incorporate productivity benefits simply by checking a box, and policymakers could likewise incorporate educational benefits.
Novel value components — which ICER typically does not include — could be incorporated into the model at the users’ discretion. Further, all of these value components could be incorporated quantitatively into the model rather than qualitatively mentioned in a report.
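To make the idea concrete, here is a minimal sketch of what such a toggle-driven open-source value model could look like. Every number here — the QALY gain, costs, discount rate, and the $100,000-per-QALY threshold — is a hypothetical illustration, not an ICER figure; the point is only that structural assumptions become arguments any user can change.

```python
def incremental_value(
    qaly_gain=0.8,                # quality-adjusted life years gained (assumed)
    incremental_cost=50_000,      # added treatment cost in USD (assumed)
    discount_rate=0.03,           # toggle: alternative discount rates
    horizon_years=10,             # toggle: alternative time horizons
    include_productivity=False,   # toggle: structural (societal-perspective) assumption
    annual_productivity_gain=2_000,  # assumed yearly productivity benefit in USD
):
    """Return the net monetary benefit at a hypothetical $100,000/QALY threshold."""
    threshold = 100_000
    benefit = qaly_gain * threshold
    if include_productivity:
        # Add the discounted stream of productivity gains over the horizon.
        benefit += sum(
            annual_productivity_gain / (1 + discount_rate) ** t
            for t in range(1, horizon_years + 1)
        )
    return benefit - incremental_cost

# A researcher who disagrees with a default simply passes a different value:
base = incremental_value()                              # payer perspective
societal = incremental_value(include_productivity=True) # employer/societal perspective
```

Because each perspective is just a different combination of arguments, an employer, insurer, or policymaker could select their own definition of value without rewriting the model.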
Third, beyond a cost per quality-adjusted life year approach, open-source modeling would facilitate alternative methodologies such as multicriteria decision analysis (MCDA), which can accommodate more heterogeneous patient preferences.
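As an illustration, a simple weighted-sum MCDA can re-rank the Treatment A/Treatment B scenario described earlier under different patient priorities. All criterion scores and weights below are hypothetical:

```python
# Each treatment is scored 0-1 on several criteria (illustrative values only).
treatments = {
    "A": {"survival": 0.9, "safety": 0.3, "convenience": 0.6},
    "B": {"survival": 0.5, "safety": 0.9, "convenience": 0.7},
}

def mcda_score(scores, weights):
    """Weighted-sum MCDA value; weights should sum to 1."""
    return sum(weights[c] * scores[c] for c in weights)

# Two hypothetical patients with different priorities:
survival_focused = {"survival": 0.6, "safety": 0.2, "convenience": 0.2}
safety_focused   = {"survival": 0.2, "safety": 0.6, "convenience": 0.2}

best_for_survival_patient = max(
    treatments, key=lambda t: mcda_score(treatments[t], survival_focused))
best_for_safety_patient = max(
    treatments, key=lambda t: mcda_score(treatments[t], safety_focused))
```

The survival-focused patient's weights favour Treatment A, while the safety-focused patient's weights favour Treatment B — exactly the preference heterogeneity that a single average-patient value estimate obscures.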
Finally, open-source modeling, by nature, is fully transparent. An open-source approach would not only improve ICER’s credibility, but would also reduce the likelihood of errors.
While to many the use of open-source modeling may appear to be an academic pipe dream, open-source value models have already been built. For instance, the Innovation and Value Initiative (IVI) has an Open-Source Value Project (OSVP) that has measured the value of treatments for rheumatoid arthritis and lung cancer.
Similar to ICER’s methodology, OSVP models are developed by a team of researchers and the models are reviewed by a panel of experts. Unlike ICER, however, any individual can access these models, and — if they wish — improve upon them.
Journals such as PLOS ONE already require authors to make their raw data available when submitting an article for peer-reviewed publication. In the future, other journals should require researchers to release their models into the public domain as a condition of publication in a peer-reviewed journal.
If ICER wants to address some of the current limitations of their modeling framework, they should follow the lead of today’s tech companies and move to an open-source approach to value assessment.
About the author
Jason Shafrin, PhD, is vice president, health economics at Precision Health Economics & Outcomes Research.