Medicaid Managed Care Transparency: What Can Quality Data Do?

Earlier this week the data transparency door to Medicaid managed care opened.  Not as wide as some of us would hope, but wide enough to start a detailed conversation about the performance of individual MCOs on quality.  It happened at a briefing sponsored by the California Health Care Foundation and took the form of a slide deck summarizing the work of health services researchers from University of California, San Francisco (UCSF).

Dr. Andrew Bindman and his colleagues reviewed quality scores for Medi-Cal managed care plans (MCPs) that were publicly reported by the state Medicaid agency from 2008 to 2017.  (Data for 2018 are expected shortly and will be incorporated into their final paper.)  The scores were based on HEDIS (quality) and CAHPS (patient experience) metrics.  The UCSF researchers found that over the 10-year period, on a statewide basis for all participating MCPs, the HEDIS scores improved on 19 measures, declined on 11 measures, and remained unchanged on 5 measures.

Unfortunately, children’s measures were among those that declined: child access to primary care (ages 1-2, 2-6, and 12-19), the annual well-child visit (ages 3-6), and childhood immunization all dropped between 2008 and 2017.

Behind those statewide averages, the researchers found large differences in quality across MCPs; for example, in 2017 individual plan scores on access to primary care (ages 1-2) ranged from 81% to 98%, while childhood immunization scores ranged from 55% to 83%.

The researchers assessed each MCP on its overall performance on all measures to produce a ranking for 2017.  At the top was a non-profit MCP, Kaiser Southern California; at the bottom was a for-profit MCP, Health Net-Kern; in the middle was a public MCP, Santa Clara Family Health.  The researchers concluded:  “On average, public MCPs and non-profit MCPs rank significantly better than for-profit MCPs.”

There is much more in this research specific to the performance of California’s Medicaid managed care program over the past ten years, and it will give agency officials and state policymakers a good deal to think about.  Among other things, the declines in child access to primary care are, very unfortunately, consistent with the recent State Auditor report “Millions of Children in Medi-Cal Are Not Receiving Preventive Health Services.”

Who knows?  Maybe even CMS will take note; after all, one out of five Medicaid beneficiaries nationally who are enrolled in MCOs lives in California.  (MCOs, or managed care organizations, are what MCPs are called at the federal level.)  Given Administrator Verma’s full-throated support for “rolling back” regulatory requirements on states and MCOs, however, I’m not holding my breath.

In any event, what’s of national significance is that this research demonstrates how performance data, when specific to individual MCOs and publicly available, can be used to differentiate high-performing MCOs from low-performing ones and to put those results in the public domain.  This transparency has at least two important implications for children and families and other Medicaid beneficiaries enrolled in MCOs.

First, transparency gives the public and the media access to at least some of the same information about the performance of individual MCOs that state agency officials have.  This enables the media, the public, and their elected representatives to assess not just the performance of individual MCOs, but also the performance of the state Medicaid agency in managing managed care.  If the data show that there are poor-performing MCOs, is the agency taking the necessary steps to get those MCOs to up their game or, failing that, to terminate them from the program?  In California, these data will allow the public to compare the findings of the UCSF researchers with those of the State Auditor.

Second, the knowledge that individual MCO performance results will be public, and that MCOs will be publicly ranked vis-à-vis their peers, may cause some MCO managers to try to improve their organization’s performance.  In areas where beneficiaries have a choice between two or more MCOs, management may be concerned about the possible loss of market share to a high-performing competitor if beneficiaries are aware of the rankings.  On the other hand, public awareness of an organization’s poor performance may matter less to MCO managers than the factors that drive their own compensation or the financial sanctions a state agency can impose on their organization.  But it certainly can’t hurt.

As the UCSF researchers are quick to acknowledge, the HEDIS and CAHPS measures they used in their analysis do not fully explain why quality performance varies from MCO to MCO.  Even though they adjusted their results for regional differences in the number of doctors per population, the researchers cautioned that there are likely to be important differences in the types of providers available in MCO networks, the degree of integration among network providers, and the level of integration between an MCO and its network providers.  Collecting this type of information and making it publicly available for independent analysis would help advance MCO quality improvement.

But let’s not get too far ahead of ourselves.  Transparency is a journey.  One door at a time.

Andy Schneider is a Research Professor at the Georgetown University McCourt School of Public Policy.
