By Leon Katsnelson
We all have our own data governance war stories. Some departments produce data that others consume. Each department has its own parochial interests that may not be in sync with the rest of the enterprise. However, successful governance programs align the interests of the producers and consumers of data to treat information as an enterprise asset.
The first step in any data governance program is to generate organizational awareness. Data producers often do not have visibility into the impact of their decisions on data consumers. The data governance program should bring the producers and consumers of the data together in the same room. A simple process diagram can demonstrate the impact of poor data quality on downstream business processes. In this article, we will map out a simple claims administration process in a health plan and describe the impact of poor data governance on business outcomes.
Figure 1. A simple claims administration process at a health plan.
There are a number of actors in the claims administration process, as shown in Figure 1.
Health plans use claim codes to reimburse providers and hospitals, to benchmark costs and quality of service, and to offer care management services that reduce medical costs. Health plans require their providers to include the appropriate ICD-9 and CPT codes when submitting claims. We will not go into detail about these codes except to say that ICD-9 codes represent diagnoses while CPT codes represent the services rendered.
One large health plan processes 500 million claims per year. Each claims record contains approximately 600 attributes in addition to unstructured text. The health plan decided to focus on claims data governance because it spent about 85 cents of every premium dollar on claims. The business intelligence and medical informatics departments conducted analytics on claims data. This analysis drove several downstream activities, including care management. For example, if an elderly member made multiple doctor visits for ankle pain, a nurse from healthcare services would call the person to consider treatment for arthritis. This proactive approach would improve the quality of life for the member while also reducing medical costs for the health plan.
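The care-management trigger described above can be expressed as a simple rule over claims data. The sketch below is a minimal, hypothetical Python illustration; the claim record layout, the `members_needing_outreach` helper, and the visit threshold are illustrative assumptions, not the health plan's actual logic.

```python
from collections import Counter


def members_needing_outreach(claims, diagnosis_code, min_visits=3):
    """Return member IDs with repeated visits for the same diagnosis.

    claims: iterable of dicts with 'member_id' and 'diagnosis' keys
    (a simplified stand-in for a 600-attribute claims record).
    """
    visits = Counter(
        c["member_id"] for c in claims if c["diagnosis"] == diagnosis_code
    )
    return [member for member, n in visits.items() if n >= min_visits]
```

A nurse outreach list could then be produced per diagnosis of interest, with the threshold tuned by the medical informatics team.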
The business intelligence department noticed that a number of entries in the diagnosis code field were not ICD-9 codes. Upon profiling the data, the business intelligence team determined that the field contained a mix of ICD-9 and CPT codes. The team then met with the network management group responsible for provider relationships. After many meetings, it became clear that network management had allowed providers to submit either ICD-9 or CPT codes, despite stringent guidelines restricting the field to ICD-9 codes only. As a result, claims reports showed inconsistent data, and healthcare services ended up devoting scarce nursing resources to low-risk patients. The inconsistent data also introduced delays and additional costs in claims administration, and medical informatics had to contend with the resulting data quality issues.
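Profiling a diagnosis field for out-of-place codes can be approximated with simple format checks. The following is a minimal Python sketch; the regexes are simplified assumptions about ICD-9 and CPT formats, and real validation would match against the official code sets rather than patterns alone.

```python
import re

# Simplified format patterns -- illustrative only. ICD-9 diagnosis codes
# are 3 digits (or V/E-prefixed) with an optional 1-2 digit decimal part;
# CPT codes are 5 digits.
ICD9_RE = re.compile(r"^(V\d{2}|E\d{3}|\d{3})(\.\d{1,2})?$")
CPT_RE = re.compile(r"^\d{5}$")


def classify_code(value):
    """Classify a diagnosis-field entry by its apparent format."""
    value = value.strip().upper()
    if CPT_RE.match(value):
        return "CPT-like"
    if ICD9_RE.match(value):
        return "ICD-9-like"
    return "unknown"


def profile_diagnosis_field(values):
    """Count diagnosis-field entries by apparent code type."""
    counts = {"ICD-9-like": 0, "CPT-like": 0, "unknown": 0}
    for v in values:
        counts[classify_code(v)] += 1
    return counts
```

A profile like this makes the problem concrete for the governance discussion: it quantifies how many diagnosis entries look like CPT codes instead of ICD-9 codes.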
The business intelligence team also conducted text analytics on the free-form text fields in the claims documents. The team compared the results with the reference data for CPT codes and found several anomalies. For example, the free-form text seemed to indicate that the procedure was “flu shot” but the CPT code was “99214,” which may be used for a physical. They concluded that providers might have been inadvertently entering incorrect procedure codes in the claims documents.
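A crude version of that cross-check might compare words in the free-form text against expected keywords for each CPT code. This is a hypothetical Python sketch: the `CPT_KEYWORDS` table is a tiny, made-up slice for illustration, and a production check would use the licensed CPT descriptions and proper text analytics rather than keyword overlap.

```python
import re

# Hypothetical keyword slice -- NOT the real CPT reference data.
CPT_KEYWORDS = {
    "90658": {"flu", "influenza", "vaccine", "shot"},
    "99214": {"visit", "exam", "evaluation", "physical"},
}


def flag_mismatch(free_text, cpt_code):
    """Return True when the free-form text shares no keywords with the
    expected terms for the claim's CPT code."""
    keywords = CPT_KEYWORDS.get(cpt_code)
    if keywords is None:
        return False  # code not in our reference slice; cannot judge
    words = set(re.findall(r"[a-z]+", free_text.lower()))
    return not (words & keywords)
```

Applied to the example above, a claim whose text says "flu shot" but whose code is 99214 would be flagged for review.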
In this example, providers were the producers of the data. Because network management oversaw the provider relationships, it was treated as the de facto data producer. On the other hand, claims administration, business intelligence, medical informatics, and healthcare services were the consumers of the data. By bringing the various actors together, the data governance program could shine the spotlight on the importance of data governance over claims codes.