Keckley: Knowing What Works Is Critical

Posted June 4, 2009 at 3:37pm

The annual U.S. investment in health care is more than 16 percent of its gross domestic product and is projected to increase at 6.2 percent annually through 2018. Although the current annual health care investment is $2.3 trillion, studies have shown that less than 1 percent of that total is used for assessing the comparative effectiveness of available treatments.

Simply stated, clinical decisions about health care interventions, for individuals or populations, are not always informed by adequate evidence of the clinical effectiveness of those interventions. And credible analytics from RAND and others indicate a significant gap exists between evidence and practice.

There are many reasons for the widening gap — lack of accurate or complete information from patients, pressure from patients to get “the latest and greatest” based on what they just read, or lack of helpful information technologies that prompt medical professionals to be more accurate in diagnosing problems and recommending treatments. The biggest reason is simply the lack of available evidence in the teachable moments when clinical decisions are made by medical professionals and consumers.

Health care is complex. Matching patient data (signs and symptoms, risk factors, co-morbidities and genetics) to treatment outcomes requires substantial investments to amass data adequate to evaluate even the most prevalent medical problems. Building a national program to monitor the efficacy and effectiveness of existing and new diagnostic tests, surgical procedures and medications is a major undertaking. The payoff — reduced inappropriate variation, better care and improved efficiency in the delivery of care — may not be realized for many years.

The allocation of $36 billion in the stimulus package to promote adoption of electronic health records by physicians and hospitals is an important step toward narrowing the gap. It’s a start. A second major element is a process whereby approaches to care are systematically evaluated so the most appropriate diagnosis and treatment options are made readily available to medical professionals and patients at the point of care — that’s the essence of the current health reform discussion about “comparative effectiveness” — how it should be done in the pluralistic and complicated environment of the U.S. health system.

At least 16 of the world’s 30 developed health systems have a comparative effectiveness program to guide clinical decisions. The United States is an exception. But those systems are primarily government run and operated. They are not as complex and fragmented as ours.

Patients in these systems are accustomed to the way their health care is provided — typically a general practitioner serves as a gatekeeper to specialty services and hospitals. Funding is through taxes and in some cases employer contributions, and often 10 percent to 20 percent of the populace pursues private insurance to augment or replace the government’s coverage.

In most developed systems, data about patient care is captured, de-identified and analyzed by agencies to assess correlations between diagnostics and therapeutics and to advise their government about what works best. In 16 developed systems of the world, an entity is in place to evaluate new approaches to care, compare them to other options and direct doctors and hospitals to practice per their recommendations.

The current health reform discussion includes prominent attention to comparative effectiveness as a means of reducing costs associated with inappropriate variation. But its implementation will not be easy. Differences between four prominent health systems’ approaches to their comparative effectiveness programs illustrate the challenges:

Scope of Authority. The comparative effectiveness programs in the United Kingdom (National Institute for Health and Clinical Excellence, or NICE) and Australia (Pharmaceutical Benefits Advisory Committee and Medical Services Advisory Committee) have statutory authority to approve coverage requirements for their nationalized health systems; the comparative effectiveness programs in Germany (Institute for Quality and Efficiency in Health Care) and Canada (Canadian Agency for Drugs and Technologies in Health) are advisory only.

Scope of Review Process. Some systems of comparative effectiveness focus on comparisons between surgical options, diagnostic tests and medications; others like the United Kingdom’s NICE focus almost exclusively on comparisons of medications.

Availability of Data. Information about patient care in the United States is scattered across the system. Long-term relationships between physicians and patients are not the norm, and most information about patients is in paper records. Health plans capture data from claims filings, but much of the information about patient signs and symptoms is incomplete, lacking clinical detail accessible only in the medical record. As clinical researchers have learned, knowing what works in the controlled setting of a clinical trial is relatively straightforward — that’s efficacy research. Knowing how an intervention performs in actual use in “the real world” requires different data — that’s effectiveness research. Studies showed Vioxx to be efficacious, but it faced notable legal issues over its real-world effectiveness. Both are important.

Comparative Effectiveness Research Is an Ongoing Process. Every day, more than 80 randomized controlled trials are published in the world’s clinical research literature. Databases for ongoing longitudinal research that feature searchable data sets about patient populations simply do not exist for most of the perplexing health issues facing the U.S. system.

The American Recovery and Reinvestment Act allocates $1.1 billion to comparative effectiveness research. The ARRA established a Federal Coordinating Council for Comparative Effectiveness Research: the 15-member group has until the end of June to make its recommendations on how to implement a comparative effectiveness program in the United States. The role of the council, according to the legislation, is “to advise the president and Congress on strategies with respect to the infrastructure needs of comparative effectiveness research within the Federal Government; and organizational expenditures for comparative effectiveness research by relevant Federal departments and agencies.”

When comparing investments in comparative effectiveness programs in these countries, it’s clear the stimulus investment is only a down payment on the program. It will no doubt require substantial ongoing funding.

Comparative effectiveness is understandably controversial. It is heralded by some as the key to aligning payments to appropriate, evidence-based use of prescription drugs, medical devices, diagnostic tests and surgical interventions for patients. To others, it represents a major threat to innovation in the development of new medical solutions. To physicians, it represents infringement on their professional judgment unless implemented with their oversight. And to consumers, it’s complicated.

As the debate over health reform evolves in coming weeks, questions about comparative effectiveness will no doubt surface:

• Can head-to-head quality comparisons of appropriately juxtaposed drugs, medical diagnostics and surgical interventions be structured in a way meaningful to consumers and caregivers? What’s the methodology?

• Are current data adequate to create a comparative effectiveness program that provides meaningful insight about comparisons based on current, relevant science? How long will it take to have enough information?

• Should head-to-head comparisons also include costs? And how are costs to be defined (over what period of time, inclusive of direct and indirect costs, and method for expensing of research and development costs)?

• Will the comparative effectiveness platform in the U.S. health care system stifle innovation and R&D among drug, medical device and biotech manufacturers?

• Can the United States “cut and paste” CE programs from other countries?

• Who will control the process? How will it be governed?

• Will liability and risk management be aligned with adherence to a comparative effectiveness platform? Will manufacturer liability and provider performance be addressed in such a way as to guard against unnecessary costs for litigation and risk?

• What will it cost and who will pay?

Clearly, the health costs of the U.S. system are not sustainable. But its citizens are accustomed to the latest technologies and fear the government might encroach on the clinical judgment of their physicians. As a result, the public discussion about comparative effectiveness — the scope of its impact on the U.S. health system — promises to be among the most watched of “hot issues” in the summer of health reform.

Paul H. Keckley, Ph.D., is executive director of the Deloitte Center for Health Solutions in Washington, D.C.

Key Terms in the Debate Over
Assessing the Effectiveness of Various Treatments

Evidence-Based Medicine: the application of scientific knowledge to the diagnostic and treatment recommendations that medical professionals make for their patients.

Comparative Effectiveness: the evaluation of the impact of different treatment options available for treating a given medical condition for a particular set of patients. It assumes there is a set of analytic tools that allow for the comparison of one treatment to another.

Clinical Effectiveness Research: the generation of evidence through use of experimental methods to understand which treatment options are most beneficial. Comparative effectiveness research is a type of clinical effectiveness research.

Effectiveness: performance under “real life” practice conditions.

Efficacy: performance under controlled or ideal conditions.

Systematic Reviews and Technology Assessments: the synthesis of evidence gathered across multiple primary comparative or clinical effectiveness studies.