Doctors, hospitals and insurance companies are making the switch to electronic health records. Lucy Pemoni/Associated Press

Doctors, hospitals and insurance companies using electronic health records are required by law to report security breaches to patients and the government — but only after they have done their own risk assessment to determine whether the breaches posed “significant harm” to patients.

This standard, established as a temporary regulation by the Department of Health and Human Services’ Office for Civil Rights, came under sharp criticism from Congress and privacy advocates when it was released Aug. 24, 2009. The term “significant harm” is subjective, critics say, and tips the rule toward the health industry at the expense of transparency.

Providers, meanwhile, say they do not want to notify patients every time a security breach occurs because doing so could cause unwarranted panic. It could also cause patients to eventually ignore the large numbers of notifications they receive, including those that really matter.

According to federal data, since 2009 nearly 300 health providers have reported breaches of their systems that each exposed the private medical information of 500 or more people, affecting almost 11 million patients in all.

In October 2009, a bipartisan group of six lawmakers wrote a letter to HHS asking the department to repeal its definition “at the soonest appropriate opportunity.”

The tentative rule, officially called the “interim final rule,” has the same weight as a permanent regulation, or “final rule,” but government agencies often use the designation to meet deadlines set by Congress while they continue to make revisions.

Nearly two years have passed with no final rule, though an HHS spokeswoman recently told iWatch News it was due out this summer. She said that HHS is not able to comment on whether the harm standard will be in the final draft until rulemaking is complete.

Meanwhile, patients’ information could fall into the wrong hands without their knowledge.

Breaches affecting more than 1 million people apiece have struck Health Net, Inc., in California; North Bronx Healthcare Network in New York; AvMed, Inc., in Florida; and Blue Cross Blue Shield of Tennessee.

The Blue Cross Blue Shield of Tennessee breach occurred in October 2009. The company notified more than 1 million members that 57 hard drives containing private medical information had been stolen. The security breach at AvMed in December 2009 was also the result of a theft: two laptops with data on 1.22 million members were stolen. The data included a mix of names, addresses, Social Security numbers, birth dates and health details.

Security rules

The transition from print records to electronic form has been a long-sought goal of health care reformers. Republicans and Democrats have championed the idea for years because it holds the potential to scale back costs, reduce errors, link providers and track the effectiveness of treatments. But it was only in recent years that the federal government created financial incentives for doctors and hospitals to make the change.

The $787 billion economic stimulus package, formally known as the American Recovery and Reinvestment Act (ARRA) of 2009, set aside $27 billion to push the use of electronic medical records.

The Health Information Technology for Economic and Clinical Health (HITECH) Act, the portion of the legislation that outlined the digital record incentive program, financially rewards doctors and hospitals for switching to electronic records and demonstrating that they have followed government guidelines in using the technology to effectively treat patients.

But members of Congress who drafted the legislation knew that consequences could come with the increasingly widespread use of digital records: If doctors and hospitals could someday seamlessly pass patient information from one point to another, the likelihood of private medical information being breached would increase.

Patients often have personal reasons for not disclosing their information to a friend or family member. But on a broader scale, if medical information is shared with unintended groups it can lead to discrimination in a person’s employment, insurance, credit and education. Most dangerously, it can lead to medical identity fraud — when someone uses another person’s name and health information to get medical care.

“I think people really aren’t aware of how their information is being used,” said Michelle De Mooy, senior associate for national priorities at Consumer Action, a nonprofit consumer rights group. “I think the public erroneously believes that their information is being protected.”

Conversely, she added, sometimes patients are fearful about whether their information will stay private, and it impacts what they tell their doctors. “Their fears are not unfounded,” she said.

Amendments to the privacy and security regulations originally established by the 1996 Health Insurance Portability and Accountability Act (HIPAA) were added through the HITECH Act to minimize risks to patients. Authored by the House Committee on Energy and Commerce, the 2009 HIPAA update says providers now must report all security breaches to patients and HHS — which makes the information public on its website — and must often report such breaches to the media.

The federal government also gave state attorneys general increased authority to take action against companies that violate HIPAA laws, and providers now can face up to $1.5 million in penalties if they fail to abide by the policies — a dramatic increase from the $25,000 cap that existed prior to the Recovery Act.

Harm standard

HITECH also required the Office for Civil Rights and HHS to develop more specific security and privacy regulations. HHS published the interim final rule on “Breach Notification for Unsecured Protected Health Information” on Aug. 24, 2009, defining a “breach” to include only cases in which the provider determined the incident had harmed a patient.

To count as a breach, the rule states, an incident must harm someone financially, damage their reputation or harm them in some other way.

If a provider or insurance company decides no harm occurred, it is not required to tell the government or the affected patients that an unauthorized party accessed their private medical information.

This limitation was not what Congress had intended, according to the letter lawmakers sent to HHS. The harm standard was not included in the statute, said Lisa Gallagher, senior director of privacy and security at the Healthcare Information and Management Systems Society (HIMSS), a nonprofit that promotes understanding and use of health information technology.

Though the rule does exempt groups from having to report breaches when the information accessed is encrypted — meaning that it appears unusable, unreadable or indecipherable — the subjective definition of “harm” was seen as problematic because it would not allow consumers to accurately judge the quality of a health provider’s security.

“Some in the health care industry have been more concerned with exploiting patient data for their own economic benefit than addressing the technical and practical tasks required to ensure that Americans’ medical information is protected under the ethical standards we have held for generations,” said William Pewen, former senior health policy adviser to Sen. Olympia J. Snowe, R-Maine, who helped draft the HIPAA legislation.

“The public may not fully understand the threat, [but] they certainly know that they’re the ones at risk,” he said.

Still, the interim final rule went into effect with the harm standard on Sept. 23, 2009. HHS submitted a final rule to the Office of Management and Budget May 10, 2010, but later withdrew it, saying it needed further consideration. It was unclear whether HHS was reconsidering the final rule based on the mounting pressure from Congress and patient privacy groups regarding the harm standard.

“This is a complex issue and the Administration is committed to ensuring that individuals’ health information is secured to the extent possible to avoid unauthorized uses and disclosures, and that individuals are appropriately notified when incidents do occur,” HHS wrote in a release. “We intend to publish a final rule in the Federal Register in the coming months.”

More than a year after this statement, the interim final rule requiring demonstration of harm is still in effect.

The debate

The breach notification portion of HIPAA was put in place to hold health care providers accountable and to give them incentives to secure their databases.

But the notification process is expensive for providers, who want to avoid flooding the government and patients with reports of breaches that posed no real threat.

During the comment period for the interim final rule, prior to when it went into effect, various hospital associations and other groups had suggested adding the harm provision to cut back on unnecessary reporting.

The American Hospital Association was a proponent of the harm standard. “An acquisition, access, use or disclosure that does not compromise the security or privacy of the information is not a breach,” the association wrote during the public comment period. “A breach notice in this circumstance would create unnecessary anxiety and concern [and] also would create an unnecessary and unproductive administrative burden and expense for the covered entity.”

Kaiser Permanente agreed with this view, saying that excluding a harm standard would have “no calculable benefit to either the individual or the covered entity.”

AIDS Healthcare Foundation also supported the “no harm, no foul” provision, as some in the industry have nicknamed the harm standard. Without it, “patients could be unduly alarmed, even panicked, and in some cases may decide to leave care,” the foundation wrote.

The Center for Democracy and Technology (CDT), a nonprofit civil liberties group, has come out in strong opposition to the harm standard.

“The concern over sending too many breach notifications to patients implies that the industry anticipates a high number of breaches,” wrote Harley Geiger, policy counsel for CDT. “The best way for industry to cut down the number of notifications would be to strengthen their privacy and security practices.”

CDT says the number of breaches reported so far likely indicates that health care providers are not encrypting their data — an example of how the industry has had difficulty catching up to the technology of other sectors, such as the financial industry, said Deven McGraw, director of the Health Privacy Project at CDT.

“Most of these breaches have been attributable to lost devices,” she said. “If that data had been encrypted when it was at rest, they never would have had a breach.”
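What “encrypting data at rest” means in practice is easy to sketch. The short Python example below shows how a record saved on a laptop becomes an opaque token that a thief cannot read without the encryption key. It is an illustration only, built on the open-source “cryptography” package; it is not drawn from any provider’s actual system or from HHS’s technical guidance.

```python
# Illustrative sketch only: protecting a record "at rest" so a stolen file
# is unreadable without the key. Requires the third-party "cryptography"
# package (pip install cryptography); the record below is fictional.
from cryptography.fernet import Fernet

# In a real deployment the key would live in a key-management service or
# hardware module, never on the same laptop as the data.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"Jane Doe, 1970-01-01, SSN 000-00-0000, diagnosis: ..."
encrypted = cipher.encrypt(record)

# What a thief finds on the stolen drive: an opaque token, not patient data.
with open("patient_record.enc", "wb") as f:
    f.write(encrypted)

# Only a holder of the key can recover the plaintext.
assert cipher.decrypt(encrypted) == record
```

The catch is key management: encryption at rest only helps if the key is stored somewhere other than the stolen device.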

The harm standard also keeps the public from getting a clear picture of the total number of systems that have been breached. HHS provides the names of hospitals, doctors and insurance companies on its website, but some groups say the list is incomplete — partly because of the harm standard but also for other reasons. Organizations often do not even know that their systems have been breached, for example. Privacy advocates say security breaches go unreported, though it is impossible to guess how many.

“We don’t know what we don’t know,” said Gallagher from HIMSS. “There is no way for us to come up with the numbers.”

Regardless of whether the harm standard makes the final rule, doing a risk assessment is important because organizations should be thinking about the risks for patients, she said. HIMSS data shows 25 percent of providers are not doing these assessments. “People in provider organizations don’t spend a lot of time thinking about what the motivation was behind the breach,” Gallagher said.

In addition, HHS does not track whether medical fraud resulted from individual cases.

Groups on both ends of the debate said they had no sense as to whether the harm standard would be published in HHS’ final rule.

“It’s a question as to whether they believe in the reasons they put it in there to begin with,” Gallagher said. “They are going to do what they think is best.”

Several pieces of privacy and security legislation are being considered in Congress, a sign that these issues extend across industries well beyond health care. Pending legislation includes the Personal Data Privacy and Security Act, sponsored by Sen. Patrick Leahy, D-Vt., and the Secure and Fortify Electronic Data Act, authored by Rep. Mary Bono Mack, R-Calif.

