What is the value of health information to a community?
Potentially, a great deal, in terms of both health care quality and costs. The trick, though, is not to share too much too fast, because information overload can undermine the best of intentions.
That's what David F. Lobach, M.D., Ph.D., of the Division of Clinical Informatics in the Department of Community and Family Medicine at Duke University Medical Center in Durham, N.C., is learning through a research project funded by the Agency for Healthcare Research and Quality (AHRQ).
Early lessons from the ongoing, 3-year project -- which involves providing clinical alerts, feedback reports, and reminders to clinicians and patients -- indicate that sometimes less is more when it comes to health information technology (health IT).
Lobach's study, which involves 18,000 active Medicaid beneficiaries in Durham County, N.C., has already identified one important lesson: Beware of information overload. Lobach and his colleagues have found that health care providers can absorb only so much new information at once. At the same time, the proprietary database used for the study was put under tremendous strain, and had to be migrated to a more robust database platform.
Lobach's project generates a lot of information, through three interventions:
- Clinical alerts sent via email to care providers, informing them of specific actions or follow-up that a patient may need -- whether it's time for a woman to have a Pap smear or whether a patient has had multiple emergency room visits in the past 3 months.
- Performance feedback reports to clinic managers on individual patients' care and follow-up needs. The reports also address "concerning" events, such as an emergency room visit by an asthma patient, and care deficiencies, such as delinquency on a biennial mammogram.
- Reminder letters mailed to patients, alerting them that they are due for a check-up or for cholesterol, blood-sugar, or other preventive care or disease-monitoring tests.
The project's database contains administrative, clinical, care management, and communication data pulled from a variety of sources, including eight primary care clinics, two area hospitals and their emergency departments, and the state Medicaid program. These data are used to generate the three interventions that are being tested.
Each patient in the study has been assigned to a home clinic so that the study researchers could determine which physicians and care managers should receive which patient reports. Initially, each of the 18,000 study patients was randomized into one of six groups: three intervention groups (alert, feedback report, or patient reminder) and three comparison groups that received no intervention during the study's first phase. Under the original study plan, each group would receive new information from an additional intervention every six months.
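The randomization design described above can be sketched in a few lines of code. This is a hypothetical illustration only -- the group names and assignment function are assumptions for the sketch, not the study's actual implementation -- showing 18,000 patients dealt evenly into three intervention groups and three comparison groups:

```python
import random

# Illustrative group labels: three interventions plus three
# no-intervention comparison groups for the first phase.
GROUPS = [
    "alert",
    "feedback_report",
    "patient_reminder",
    "comparison_1",
    "comparison_2",
    "comparison_3",
]

def randomize_patients(patient_ids, seed=0):
    """Shuffle patients, then deal them round-robin into the six groups."""
    rng = random.Random(seed)
    ids = list(patient_ids)
    rng.shuffle(ids)
    return {pid: GROUPS[i % len(GROUPS)] for i, pid in enumerate(ids)}

# 18,000 study patients split evenly across the six groups.
assignment = randomize_patients(range(18000))
counts = {g: 0 for g in GROUPS}
for group in assignment.values():
    counts[group] += 1
# Each group ends up with 3,000 patients.
```

A real trial would typically use stratified randomization (for example, by home clinic or diagnosis) rather than a simple shuffle, but the round-robin deal above conveys the basic structure of the six-arm design.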
But that design proved difficult to implement.
First, clinicians were concerned that they would be bombarded with information that they didn't know how to use effectively. Second, the volume of data required to generate the interventions overwhelmed the project database. The core system database had to be migrated to a more robust database management system, requiring additional, unplanned time for the project.
As a result, the researchers redesigned the study in mid-stream -- essentially slowing it down and simplifying it so that providers could adjust and learn how to make the best use of their new information without being overwhelmed.
Lobach and his team decided to decrease the content of the initial interventions by focusing on the detection of health events "committed" by a patient, such as an emergency department visit. During a second phase, health issues pertaining to aspects of care "omitted" from a patient's care management, such as missing a biennial mammogram, were added. In addition, the first phase of the study was extended from six months to nine to allow more time for the interventions to be accepted and to have measurable impact.
To make up for time delays related to the database migration, Lobach and his colleagues collapsed the three comparison groups into one. Consequently, the project phase in which clinicians were to receive only two interventions was dropped, and arrangements were made for clinicians, clinics, and patients to progress from receiving one intervention to receiving three interventions.
The researchers are reviewing a wide range of outcomes, including emergency department and hospital use; care quality as measured by HEDIS scores for preventive services and chronic disease management; care coordination; costs and revenues; and patient and provider satisfaction.
But Lobach says the researchers are already starting to see some impact from the interventions on the care management teams. "Once the clinicians get used to the information, it becomes a driving force for finding patients with problems and improving quality," he says.
Each week, the care management teams receive email messages with a secure link to their clinical alerts reports. "They use that report as a work list to figure out who to contact that week," Lobach says. For example, frequent emergency department users are a red flag.
Lobach acknowledges that the project has been very labor-intensive: from developing strong partnerships with participating hospitals and clinics, to cleaning and maintaining the data that are collected, to transforming data because of different data codes and standards used by providers.
But the potential payoff is more than academic. Lobach hopes that this project will not only help quantify the value of health information to communities, but also shed light on the unique contributions of different methods for providing health information. In that way, the project could provide valuable insights for the development and implementation of future community-based health information exchanges.