ONC’s Michael Lipinski Breaks Down the Final Interoperability Rule

Although the focus for every healthcare organization during the past month has been on the COVID-19 pandemic — and rightfully so — there have been other recent developments that will have far-reaching implications for IT leaders. One of those was the finalization of a set of rules designed to “give patients safe, secure access to their health data,” according to HHS’ website. The two rules, issued by ONC and CMS, implement interoperability and patient access provisions of the 21st Century Cures Act (Cures Act), while also supporting the MyHealthEData initiative.

And while many looked forward to the release of the rule so they could gain clarity on what does and does not constitute data blocking, the real crux of it is the accessibility angle, according to Michael Lipinski, Division Director of Regulatory Affairs at ONC. “This rule will make data more readily available; that’s what we’re trying to do — get patients their information,” he said. “We feel good about where it is and what the final policies will hopefully achieve.”

Recently, Lipinski spoke with healthsystemCIO about what he believes are the most important takeaways from the final rule, how care delivery will be affected, and why he’s “excited” about what it means for the industry.

Below are some of the key implications of the rule, with excerpts from the interview:


The final rule is divided into two main parts, the first of which involves the certification process.

As part of the HITECH Act, providers have to participate in the Promoting Interoperability Program to receive incentives or avoid penalties. We’re now updating the 2015 edition of the certification criteria from an interoperability perspective, particularly around APIs, because the standards have matured, and because the 21st Century Cures Act requires us to certify APIs that allow health information to be accessed without special effort. We now have FHIR Release 4, as well as several security standards related to it. We also have the United States Core Data for Interoperability (USCDI), which defines a set of data that should always be made available for exchange or use.
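To make the FHIR R4 / USCDI piece concrete, here is a rough sketch of how a patient-facing app might query a certified API for a USCDI data class. This is not an official ONC example; the base URL, patient ID, and the sample server response below are all hypothetical, and only the FHIR search-URL shape and Bundle structure follow the FHIR R4 specification:

```python
import json

# Hypothetical FHIR R4 endpoint; a real app would discover this from the
# provider's published service base URL.
FHIR_BASE = "https://ehr.example.com/fhir/r4"

def observation_search_url(patient_id: str, loinc_code: str) -> str:
    """Build a FHIR R4 search URL for a patient's observations by LOINC code."""
    return (f"{FHIR_BASE}/Observation?patient={patient_id}"
            f"&code=http://loinc.org|{loinc_code}")

# A trimmed, illustrative sample of the searchset Bundle a server might return.
sample_bundle = json.loads("""
{
  "resourceType": "Bundle",
  "type": "searchset",
  "entry": [
    {"resource": {"resourceType": "Observation",
                  "code": {"coding": [{"system": "http://loinc.org",
                                       "code": "8867-4"}]},
                  "valueQuantity": {"value": 72, "unit": "beats/minute"}}}
  ]
}
""")

def extract_values(bundle: dict) -> list:
    """Pull (value, unit) pairs out of each Observation in a searchset Bundle."""
    return [
        (e["resource"]["valueQuantity"]["value"],
         e["resource"]["valueQuantity"]["unit"])
        for e in bundle.get("entry", [])
        if e["resource"]["resourceType"] == "Observation"
    ]

# 8867-4 is the LOINC code for heart rate.
print(observation_search_url("12345", "8867-4"))
print(extract_values(sample_bundle))  # → [(72, 'beats/minute')]
```

The point of standardizing on FHIR R4 and USCDI is exactly this: one query shape and one payload shape work against any certified product, without developer-specific effort.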

Also important is the EHI export criterion, which pulls out the data from a certified product and provides a data dictionary. It’s a way to understand the data as a flat file. That should help two use cases we’ve heard about from stakeholders: providers who want to switch their EHR system, which we believe will increase competition among developers; and the ability to provide information to patients to help support the right of access under HIPAA.


ONC can regulate developers’ behaviors through real-world testing.

As part of the conditions of certification, which were introduced to provide oversight into the certification process and hold developers accountable, Congress is now allowing ONC to regulate the behaviors of developers through real-world testing. Developers must also attest that they won’t block information and that they won’t restrict any of the product’s certified functionalities.


ONC can ban providers from certification and de-certify products.

The second part of the rule is broader in scope, discussing the new authority given to ONC regarding information blocking. Congress identified three actors: providers, developers of certified health IT, and health information networks/health information exchanges (HINs/HIEs). The distinction here is that ONC would have authority over developers where information blocking is concerned, meaning we could ban the developer from the program for violation of any of the conditions, and we could de-certify the product if necessary. But if those same developers of certified health IT took actions with their non-certified products that were deemed to be information blocking, the HHS Office of Inspector General (OIG) can investigate that, and HHS could take enforcement action against them. The goal is to make sure EHI (Electronic Health Information) is accessible, and available for access, exchange and use.


The HIN definition for data blocking “changes the dynamic.”

The definition for information blocking was selectively revised in response to comments. Now, if a healthcare provider or even a health plan takes certain actions, it can be found to be an information blocker as an HIN. That changes the dynamic in terms of the knowledge standard, because in this instance, you should have known your actions were going to be deemed information blocking. It’s not the same standard that providers have right now: they have to know that their actions are likely to be interference, and that they were unreasonable.


Penalties can range significantly, depending on the business model.

The penalty structure is obviously important. It ranges from appropriate disincentives identified by the Secretary through notice-and-comment rulemaking, which hasn’t yet occurred, to as much as a $1 million penalty per violation. You need to be aware of your business model. For a high percentage of providers, this might not even register, but some may be acting like an HIN, providing the ability, the infrastructure, the policies, and the agreements for exchange to occur among many providers. For example, a hospital system might be allowing a lot of ambulatory clinicians to exchange on its network under certain conditions, and so it needs to be aware of that.


The definition of EHI is much more focused in the final rule.

Before the final rule, the definition of EHI was very broad. Stakeholders told us they weren’t sure how to differentiate it from ePHI, particularly in certain situations. In the final rule, we’ve focused the definition to just the EHI in the designated record set. Most providers that are covered entities have already been collecting that information, maintaining it, and providing it under the right of access, as have developers and even HINs, which are usually business associates. We think that’s a nice dovetail in terms of reducing the burden on providers of complying with both sets of laws: HIPAA and information blocking.


There are exceptions to data blocking that don’t have to be proven.

The last key piece was refining the exceptions for clarity. There’s an understanding and acknowledgement that certain things are just not feasible, and you don’t have to prove that. For example, if there’s an earthquake and your system is either shut down or you can’t provide the information, that’s acknowledged in the infeasibility exception. A more discrete issue is being able to segment the record: sometimes a system just isn’t capable of segmenting out certain information that the patient doesn’t want shared, and that’s now acknowledged in the exception as well.


Meaningful opportunity to consent: the burden isn’t always on providers.

One area that needed clarification in the final rule was the concept of meaningful opportunity to consent or authorize under the privacy exception. A lot of providers asked us if that shifted the burden to them, meaning that if a random entity requests information about a patient, do they have to seek out the patient? For example, let’s say there’s a train wreck, and several people are sent to the local hospital. If a newspaper reporter asks for the records of all the patients involved, does the hospital then have to find the patients and give them a meaningful opportunity to consent to disclosing the information? That was never our intention, but we can understand why there was a concern. And so, in the final rule, we make it clear that this refers to cases where the provider has already received some type of a clear authorization or a consent from the patient.

But that consent may not be perfected. We’re asking providers to take reasonable steps to help correct that, such as providing the correct authorization form for the patient to fill out — to help out in those instances where there is obvious intent to get consent, not in random situations where someone shows up asking for access to a patient’s record.


Privacy practices for third-party apps are addressed.

The goal was about sharing: getting information out and getting it to the patient. When third-party apps receive a patient’s information at the patient’s direction, it raises privacy concerns about what happens with that information once the entity has it, since in most cases these apps aren’t covered by privacy laws.

With this rule, we looked at what is interference and what is not, because part of information blocking is interfering with access, exchange, and use of data. What’s not interference is providers educating patients about the privacy and security risks associated with giving their information to a third party.

And that can be pretty broad. It can be about the privacy practices and policies of that third party: for example, does it encrypt the data once the data is on the application? Does it sell data? In this rule, we set out a minimum set of privacy practices that any third party should follow, including getting express consent before the third party sells or shares the patient’s data.


Before an app can connect to the provider’s API, the patient must authorize it.

We also talked about letting the market lead on what the appropriate privacy practices are. We gave examples in the rule of things like: how do you go about telling the patient? What’s an efficient way to do that? We gave examples of how you can use your API to do this so that it can be automated. Every app gets registered with an API and is assigned a number indicating which privacy practices it has attested to (for example, whether the app said yes or no to a given practice). Before an app can connect to the provider’s API, the patient has to authorize it.
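The register-then-authorize gate described above can be sketched roughly as follows. All the names, flags, and warning text here are hypothetical illustrations of the idea; a real deployment would implement this with OAuth 2.0 / SMART on FHIR authorization rather than a toy registry:

```python
from typing import Optional

class AppRegistry:
    """Toy model of an API app registry with per-app privacy attestations."""

    def __init__(self):
        self._apps = {}               # app_id -> attestation flags
        self._authorizations = set()  # (patient_id, app_id) pairs

    def register(self, app_id: str, attestations: dict) -> None:
        """Register an app with its yes/no answers to each privacy practice."""
        self._apps[app_id] = attestations

    def authorize(self, patient_id: str, app_id: str) -> None:
        """Record that the patient has authorized this app to connect."""
        if app_id not in self._apps:
            raise KeyError(f"unknown app: {app_id}")
        self._authorizations.add((patient_id, app_id))

    def can_connect(self, patient_id: str, app_id: str) -> bool:
        """An app may pull a patient's data only after that patient authorizes it."""
        return (patient_id, app_id) in self._authorizations

    def warning_for(self, app_id: str) -> Optional[str]:
        """Education, not blocking: surface a warning when an app hasn't
        attested to express-consent practices, leaving the choice to the patient."""
        if not self._apps[app_id].get("express_consent_before_sharing", False):
            return ("The app to which you're connecting did not attest to "
                    "getting your express consent before sharing your data. "
                    "Do you still want us to release the data?")
        return None

registry = AppRegistry()
registry.register("health-tracker", {"express_consent_before_sharing": False})
print(registry.can_connect("patient-1", "health-tracker"))  # False until authorized
registry.authorize("patient-1", "health-tracker")
print(registry.can_connect("patient-1", "health-tracker"))  # True
```

Note the design choice this models: the provider never blocks the connection outright over weak privacy attestations — it only withholds data until the patient authorizes, and uses the attestation flags to warn and educate.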

Providers can use education as a tool to help patients make informed choices.

Another way is for patients to enter their credentials, such as a username and password, which then enables the provider to release information into the app. Providers can put up a warning that says something like, ‘The app to which you’re connecting didn’t attest to a privacy policy, or to the parts of the policy where it says it will get your express consent. Do you still want us to release the data to them?’

But it’s not just about the privacy practices. You can educate by informing them that the app they’ve chosen has been fined by the FTC for false and misleading activities, for example. And that can be done in an automated way, which is the least burdensome way for providers. So those are some of the ways education can be used as a tool to help patients make informed choices.

With this rule, the goal was to make everything uniform so that all of these apps are using the same standards; it’s the same data. We can build one app that connects to multiple systems so that if patients see physicians in different states, they can access all of that information in one place, which can help more effectively manage care.


CIOs are encouraged to provide feedback through ONC’s website.

We’re continuing to do outreach. I’d encourage CIOs and others to check out the website we’ve created. It has listings for future webinars, and a Q&A section. We’ll do our best to answer any questions. If we can’t, because we need to run it through certain folks from a quality control perspective, we’ll run it through those processes and publish that content.

We also encourage people to send in any feedback they might have, whether it’s about the actual rule, or a claim of information blocking. We’re going to continue to educate leaders about the rule, and field any questions you might have.


Bottom line: “We’re excited about the rule.”

We’re really excited about the rule and what it can achieve. We think we’ve done what we can within our authority to address, first and foremost, direct concerns through our proposals, whether it’s meaningful opportunity, or the ability to reach market terms and not force people to ‘compulsory license’ their product. We think we’ve done a great job in addressing those concerns, while still getting data out to improve competition and innovation, and enable patients to access information so they can control their care, and improve their care.

