Showing posts with label healthcare IT regulation. Show all posts
A new book has appeared on improving usability of electronic health records.  The result of government-sponsored work, the book is available free for download.  It was announced via an AMIA (American Medical Informatics Association, http://www.amia.org/) listserv, among others:

From: Jiajie Zhang [support@lists.amia.org]
Sent: Tuesday, December 02, 2014 6:00 PM
To: implementation@lists.amia.org
Subject: [Implementation] - New Book on EHR Usability - "Better EHR: Usability, Workflow, and Cognitive Support in Electronic Health Records"

Dear Colleagues,

We are pleased to announce the availability of a free new book from the ONC supported SHARPC project: "Better EHR: Usability, Workflow, and Cognitive Support in Electronic Health Records". The electronic versions (both pdf and iBook) are freely available to the public at the following link: https://sbmi.uth.edu/nccd/better-ehr/


First, this book appears to be a very good resource for understanding issues related to EHR usability.  I particularly like the discussion of cognitive issues.

However, this book also carries messages about the state of the industry, the issue of regulation versus no regulation, and the supposed impairment of innovation:

I think it axiomatic that user-centered design (UCD) is a key area for innovation, especially in life-critical software like clinical IT.  (I would opine that UCD is actually critical to safety and efficacy of these sophisticated information systems in a sociotechnically complex setting.)

I think it indisputable that the health IT industry has been largely unregulated for most of its existence, unlike other healthcare sectors such as pharma and traditional medical devices.

Yet, even in the absence of regulation, the book authors found this, per Section 5 - EHR Vendor Usability Practices:

a)  A research team of human factors, clinician/human factors, and clinician/informatics experts visited eleven EHR vendors and conducted semi-structured interviews about their UCD processes. "Process" was defined as any series of actions that iteratively incorporated user feedback throughout the design and development of an EHR system. Some vendors developed their own UCD processes while others followed published processes, such as ISO or NIST guidelines.

Vendor recruitment. Eleven vendors, chosen based on market position and the type of knowledge that might be gained, were recruited for a representative sample (Table 1). Vendors received no compensation and were assured anonymity.
and

b)  RESULTS
Vendors generally fell into one of three UCD implementation categories:

Well-developed UCD: These vendors had a refined UCD process, including infrastructure and the expertise to study user requirements, an iterative design process, and formative and summative testing. Importantly, these vendors developed efficient means of integrating design within the rigorous software development schedules common to the industry, such as maintaining a network of test participants and remote testing capabilities. Vendors typically employed an extensive usability staff.

Basic UCD: These vendors understood the importance of UCD and were working toward developing and refining UCD processes to meet their needs. These vendors typically employed few usability experts and faced resource constraints making it difficult to develop a rigorous UCD process.

Misconceptions of UCD: These vendors did not have a UCD process in place and generally misunderstood the concept, in many cases believing that responding to user feature requests or complaints constituted UCD. These vendors generally did not have human factors/usability experts on staff. Leadership often held little appreciation for usability.

About a third of our vendor sample fell equally into each category.

In other words, a third of health IT sellers lacked the resources to do an adequate job of UCD and testing; and a third did not even understand the concept.

Let me reiterate:

In an unregulated life-critical industry, a third of these sampled sellers thought 'responding to user feature requests or complaints constituted UCD'.  And another third neglected UCD due to a 'lack of resources'.

I find that nothing short of remarkable.

I opine that this is only possible in an unregulated healthcare sector.

Regulation that enforced good design practices and good manufacturing practices (GMP's) could, it follows, actually improve clinical IT innovation, considering the observations found by these authors: it would ensure that sellers lacking the resources either found them or removed themselves from the marketplace, and that sellers who did not understand such a fundamental concept either became experts in UCD or also left the marketplace.

I can only wonder in what other fundamental(s) other sellers are lacking, hampering innovation, that could be improved through regulation.

As a final point, arguments that regulation hampers innovation seem to assume a baseline level of competency and good practices among those to be freed from regulation. In this case, that turns out to be an incorrect assumption.

As a radio amateur, I often use the term "health IT amateurs" to describe persons and organizations who should not be in leadership roles in health IT, just as I, as a radio amateur, should not be (and would not want to be) in a leadership role in a mission-critical telecommunications project.

I think that, inadvertently, the writers of this book section gave real meaning to my term "health IT amateurs."  User-centered design is not a post-accident or post-mortem activity.

-- SS

12/4/2014 Addendum:

I should add that in the terminology of IT, "we don't have enough resources" - a line I've heard numerous times in my CMIO and other IT-related leadership roles - often meant: we don't want to do extra work and reduce our profits (or miss our budget targets), or hire someone who actually knows what they're doing, because we don't really think the expertise or tasks in question are that important.

In other cases, the expertise is present, but when those experts opine that an EHR product will kill people if released, management finds the expert 'redundant', e.g., http://cci.drexel.edu/faculty/ssilverstein/cases/?loc=cases&sloc=lawsuit.

Put in more colloquial terms, this is a slovenly industry that has always made me uncomfortable, perhaps in part due to my experience having been a medical safety manager in public transit (SEPTA in Philadelphia), where lapses in basic safety processes could, and did, result in bloody train wrecks.

Perhaps some whose sole experience with indolence and incompetence-driven catastrophe has been in discussions over coffee in faculty lounges cannot appreciate that viewpoint.

Academic organizations like AMIA could do, and could have done, a whole lot more to help reform this industry, years ago.

-- SS
2:39 PM
In the past several days the media has been abuzz with stories about the admission, and then the retraction, by a Texas hospital that an EMR "flaw" had caused a man who had been in West Africa and was infected with the Ebola virus to be sent home, instead of admitted and put into isolation.

I wrote about these matters at my Oct. 2, 2014 post "Did Electronic Medical Record-mediated problems contribute to or cause the current Dallas Ebola scare?" (http://hcrenewal.blogspot.com/2014/10/did-electronic-medical-record-mediated.html) and the followup October 4, 2014 post
"Dallas Hospital reverses EHR-related explanation for fumbling Ebola case" (http://hcrenewal.blogspot.com/2014/10/dallas-hospital-reverses-ehr-relarted.htm).

A spectrum of the healthcare IT ecosystem seems represented (see http://cci.drexel.edu/faculty/ssilverstein/cases/?loc=cases&sloc=ecosystem).  The technology enthusiasts and hyper-enthusiasts seem to believe the computer could have done no wrong (and usually lack medical and Medical Informatics expertise).

Some people, such as myself, with specific Medical Informatics experience and who know the failure modes via AHRQ, FDA, the ECRI Institute, etc., believe the EHR was quite likely contributory to, or causative of, the mistake (see my April 9, 2014 post "FDA on health IT risk: 'We don't know the magnitude of the risk, and what we do know is the tip of the iceberg, but health IT is of sufficiently low risk that we don't need to regulate it'" at http://hcrenewal.blogspot.com/2014/04/fda-on-health-it-risk-reckless-or.html).

The reason I have written little after my initial two posts is that the only way to resolve the controversy is to actually examine the EHR: its screens, screen navigation, and behavior, if possible both before and after the hospital's stated "fix" of the problem; the EHR audit trails (automatically generated EHR accounting logs of user accesses, actions taken, time, location, etc.); and the EHR in actual operation, evaluated in context with the clinical setting in which it was installed.
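For readers unfamiliar with audit trails, an entry is conceptually simple: who touched the chart, what they did, and when and where. A toy sketch in Python follows; the field names are hypothetical, not any vendor's actual schema:

```python
# Illustrative shape of an EHR audit-trail record. Real systems log far
# more, but the core fields are who, what, which screen, where, and when.
# All names here are hypothetical, not any vendor's actual schema.
import datetime

def audit_record(user_id, action, screen, workstation):
    """Build one audit-trail entry for a user action in the EHR."""
    return {
        "user_id": user_id,          # who accessed the record
        "action": action,            # e.g. "view", "enter", "modify"
        "screen": screen,            # which EHR screen was accessed
        "workstation": workstation,  # physical location of the access
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

rec = audit_record("rn_jones", "enter", "triage_nursing_note", "ED-12")
print(rec["action"], rec["screen"])
```

Reconstructing what actually happened in a case like this means replaying a sequence of such entries against the screens as they existed at the time - which is exactly why access to the system itself is indispensable.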

Barring that, everything else is speculation, usually biased by the speculator's own beliefs about either the beneficence or the fallibility of information technology in healthcare (and perhaps IT generally), and/or by conflicts of interest.

Unfortunately, considering the health IT industry and environment, the only way I believe such an examination of the EHR can come about is via litigation.  I doubt it will come from the traditional regulators of medical devices and healthcare safety.

I do note the following of interest at Politico:

... While all EHRs are difficult to use, some are set up better than others.

At Mount Sinai Hospital in New York City, information that a patient was feverish and recently flew in from Liberia would have set off an alarm, with the nurse’s screen flashing yellow and giving instructions to immediately isolate the patient, said Jason Shapiro, an emergency room physician and informatics expert at the hospital.

The nurse entering “fever” into the record would “get a hard stop. They immediately have to enter a response to a travel history question. And if there’s fever and the right kind of travel history, the whole isolation mechanism is supposed to swing into play,” Shapiro said.

... Both Mount Sinai and Texas Health Presbyterian have health records systems they purchased for hundreds of millions of dollars from Epic.


At least some users of Epic seem to have a system configured to catch such a problem.  In my mind, this speaks to the need for industry regulation, to ensure all EHRs meet basic standards of safety and reliability and are not haphazardly designed or implemented from one hospital to the next.
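The kind of "hard stop" rule Shapiro describes is, at bottom, a small piece of decision-support logic. A toy sketch follows; every name in it is illustrative, not any vendor's actual API or rule set:

```python
# Hypothetical sketch of the "hard stop" screening rule Shapiro describes:
# fever plus recent travel from an Ebola-risk country must force the
# isolation workflow before the nurse can proceed. All names are
# illustrative, not any vendor's actual API or rule set.
EBOLA_RISK_COUNTRIES = {"Liberia", "Sierra Leone", "Guinea"}

def triage_hard_stop(symptoms, travel_history):
    """Return the action the EHR forces at this point in triage."""
    if "fever" not in symptoms:
        return "continue"                  # no rule triggered
    if travel_history is None:
        # Hard stop: fever entered, travel question not yet answered.
        return "require_travel_history"
    if EBOLA_RISK_COUNTRIES & set(travel_history):
        return "isolate_patient"           # flashing alert, isolation protocol
    return "continue"

print(triage_hard_stop({"fever"}, None))         # require_travel_history
print(triage_hard_stop({"fever"}, {"Liberia"}))  # isolate_patient
```

The logic is trivial; the point is that whether it exists at all, and whether the travel answer actually reaches the rule, is a configuration and design decision made differently from one hospital to the next - which is the problem.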

-- SS

10/9/14 Addendum:  

Prof. Jon Patrick of Australia, cited numerous times on this blog, relates this:

"I always talk about data capture and data reuse, and the reuse is defined by the data flows required in the design of the system. EPIC might well have allowed for the data capture but failed to deal with the data flow to properly effect the required reuse."

The implementers at the hospital in question may likewise have failed at the flows supporting appropriate and fail-safe reuse in a hectic ED environment.

He adds, for further clarification:

A footnote to this point. We separate data flow from work flow. Data flow is the movement of data from one context to its reuse in another context; that is, you collect data on this screen (first context) and then you see it later on another screen (another context).

Workflow is the route staff team members take in moving from one context to another, that is, the movement from using one screen to another screen, most often triggered by clicking a button that moves you to the chosen screen (next context).

The two are very different things and require close thinking in both cases to not trip up with unhelpful and frustrating system solutions.

Historically, Information Systems development has dealt with these issues both poorly and without adequate separate planning. In the past the focus has been on data capture and storage, because the notion of reuse and context shifting has been left behind. This has been OK for many business systems, where contexts have only small variations and workflows are simple or unimportant.

In medicine that just isn’t the case.
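Patrick's distinction can be made concrete in a few lines. The sketch below uses hypothetical names; the point is only that the navigation route and the movement of data are separate design problems:

```python
# Toy sketch of Patrick's data flow vs. workflow distinction.
# All names are hypothetical. The record is the shared store that data
# flows through; the workflow is merely the sequence of screens visited.
record = {}

def capture(field, value):
    """Data captured in one context (screen)..."""
    record[field] = value

def reuse(field):
    """...must flow to another context, or the later screen shows nothing."""
    return record.get(field)  # None if the data flow was never designed

# Workflow: the route from screen to screen (e.g., button clicks).
workflow = ["triage", "travel_screening", "physician_view"]

capture("travel_history", "Liberia")  # entered during triage
print(reuse("travel_history"))        # what physician_view can display
```

A system can have a perfectly smooth workflow (every button goes somewhere) while the data flow is broken: the travel history entered at triage simply never surfaces in the physician's context. That failure mode is consistent with what was alleged in the Dallas case.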

-- SS
12:13 PM
Karen De Salvo has assumed the role of Director of ONC, the Office of the National Coordinator for Health IT at HHS (http://www.healthit.gov/newsroom/dr-karen-desalvo-md).





A pretty face, but here's evidence of the same old tired political hucksterism and spin concerning healthcare information technology. 

In response to perhaps the most candid exposé in the public media to date of the risks and defects of current commercial health IT, industry conflicts of interest, and injuries and deaths, that appeared on July 20, 2014 in the Boston Globe under the title "Hazards tied to medical records rush" (http://tinyurl.com/lm7x34h) by the Globe's Washington bureau chief Christopher Rowland, Ms. De Salvo authored a letter to the editor.

De Salvo is new to the job, but not to the political message of unbridled health IT hyper-enthusiasm and pointless "Safety Centers" instead of formal regulation, as practiced in other mission-critical industries that use IT (including pharma, for one).

Her letter to the Boston Globe attempts to put lipstick on a pig: a technology largely reviled by physicians and nurses due to its poor user experience and its defects (rampant due to the free-for-all of this healthcare sector's unprecedented regulatory accommodation, that is, no regulation) that cause patient endangerment.  See http://hcrenewal.blogspot.com/2010/01/honest-physician-survey-on-ehrs.html, http://hcrenewal.blogspot.com/2014/02/ehrs-real-story-sobering-assessment.html and http://hcrenewal.blogspot.com/2013/11/another-survey-on-ehrs-affinity-medical.html as just a few examples.

What politicians do:  they spin like neutron stars

Her letter, with my comments:

http://www.bostonglobe.com/opinion/editorials/2014/07/26/many-health-success-stories-note/MyyGM3uq2LU0GqGLLqYJ7M/story.html

Letters | CHALLENGES IN THE MOVE TO ELECTRONIC MEDICAL RECORDS

July 27, 2014

Many health-IT success stories to note

I was disappointed to read “Risks, some dire, tied to medical records rush” (Page A1, July 20), as it failed to mention any examples of patients and their health care providers benefiting from the use of health information technology, including electronic health records. 

As to "disappointment": nearly the entire healthcare and lay press is filled with "success stories" and other propaganda.  There is, in fact, no need for artificial and industry-favoring "balance" in the rare article about the downsides; the industry has its own very large mouthpiece.  (This is the response we at HC Renewal give to the critique that we're not "balanced" in every post.  It would be as if every article on the defective avionics and pilot training issues that cause hundreds to die (e.g., Air France Flight 447, http://en.wikipedia.org/wiki/Air_France_Flight_447) had to be accompanied by articles on just how many non-fatal flights there are, or as if every article about criminals had to mention that there are good people, too.)

Such success stories are playing out across the country daily, including in Boston, and their omission from the article incompletely portrays the important role of electronic health records in improving patient safety and outcomes.

Is this an example of far-left "you have to break an egg to make an omelette" thinking (even if the 'egg' is a human being)?   This statement, attempting redirection from the downsides, in effect says: "It's OK to sacrifice 100 in experimentation to (potentially) 'save' 10,000 - or 1,000 to 'save' 100,000."

Problem is, this is not how Western medicine is supposed to work - by HHS's own policies on research ethics, and by international agreements based on work that arose after WW2's medical abuses, no less, e.g., the Nuremberg Code.

In medicine, legal and ethical standards such as the NIH Guidelines for Conduct of Research Involving Human Subjects (http://grants.nih.gov/grants/policy/hs/regulations.htm), the World Medical Association Declaration Of Helsinki (http://www.wma.net/en/30publications/10policies/b3/) and others restrict introduction of new drugs and medical devices without informed consent, and without extensive preclinical and clinical testing and post-marketing surveillance, especially when risks of the technology are unknown.  

And health IT is decidedly experimental, considering we don't even know the true extent of harms, by multiple admissions (by FDA, IOM, ECRI etc., see http://hcrenewal.blogspot.com/2014/04/fda-on-health-it-risk-reckless-or.html).

Also see my post Mar. 12, 2012 post "Human Subjects Experimentation Directives Ignored in the Grand Health IT Experiment?" at http://hcrenewal.blogspot.com/2012/03/human-subjects-experimentation.html.  The highlights, emphases mine:

Directives for Human Experimentation
NUREMBERG CODE
  1. The voluntary consent of the human subject is absolutely essential. This means that the person involved should have legal capacity to give consent; should be so situated as to be able to exercise free power of choice [that is, to opt-out - ed.], without the intervention of any element of force, fraud, deceit, duress, over-reaching, or other ulterior form of constraint or coercion; and should have sufficient knowledge and comprehension of the elements of the subject matter involved as to enable him to make an understanding and enlightened decision. This latter element requires that before the acceptance of an affirmative decision by the experimental subject there should be made known to him the nature, duration, and purpose of the experiment; the method and means by which it is to be conducted; all inconveniences and hazards reasonable to be expected; and the effects upon his health or person [information on HIT risk exists, such as on this blog - ed.] which may possibly come from his participation in the experiment. The duty and responsibility for ascertaining the quality of the consent rests upon each individual who initiates, directs or engages in the experiment. It is a personal duty and responsibility which may not be delegated to another with impunity.
  2. The experiment should be so conducted as to avoid all unnecessary physical and mental suffering and injury.
  3. No experiment should be conducted where there is an a priori reason to believe that death or disabling injury will occur; except, perhaps, in those experiments where the experimental physicians also serve as subjects.
  4. During the course of the experiment the human subject should be at liberty to bring the experiment to an end [go back to paper - ed.] if he has reached the physical or mental state where continuation of the experiment seems to him to be impossible.
  5. During the course of the experiment the scientist in charge must be prepared to terminate the experiment at any stage [go back to paper - ed.], if he has probable cause to believe, in the exercise of the good faith, superior skill and careful judgment required of him that a continuation of the experiment is likely to result in injury, disability, or death to the experimental subject.

Perhaps they don't teach these things at Harvard?

Back to De Salvo's letter:

A fully electronic health system can help identify and prevent potential medical errors. The Office of the National Coordinator for Health IT has taken steps to address the safe use and implementation of electronic health records, including sponsoring the Institute of Medicine report referenced in the story.

And since that 2012 report, which acknowledged that bad health IT causes risks, errors and harms of a definite but unknown magnitude (see the bottom section of my post at http://hcrenewal.blogspot.com/2012/03/doctors-and-ehrs-reframing-modernists-v.html), ONC has done next to nothing. People are exposed to risks, harms and deaths, and that seems just fine to ONC; if it were not, ONC would have acted aggressively - say, as it would if a type of jet plane or nuclear power plant had been revealed to pose dangers to the community.

As Mr. Rowland himself pointed out in the Boston Globe article:

... In 2011, the Institute of Medicine said the lack of a central repository for reporting error-prone software, patient injuries, and deaths, combined with nondisclosure and confidentiality clauses in vendor contracts, “pose unacceptable risks to safety.”

It strongly recommended that the Obama administration mandate that vendors report “deaths, serious injuries, and unsafe conditions” to a centralized, government-designated entity. Such reports should be made available to the public, it said, without information that would identify individual patients and providers.

Three years later, no such reporting system exists.

Instead, ONC takes the GM ignition switch approach (http://en.wikipedia.org/wiki/2014_General_Motors_recall):

... On February 7, 2014, GM recalled about 800,000 of its small cars due to faulty ignition switches, which could shut off the engine during driving and thereby prevent the airbags from inflating ... GM says it expects to charge $1.2 billion against its second quarter earnings as a result of its ongoing recalls, and the charge could get worse as lawsuits and investigations continue. 

The fault had been known to GM for at least a decade prior to the recall being declared.  Some have suggested that the company actually approved the switches in 2002 even though they knew they might not meet safety standards.

The company is facing multiple investigations into why it did not attempt to fix these faulty ignitions sooner, including a federal criminal probe, as well as a probe led by Anton Valukas, the latter of which produced a report which GM made public on June 5, 2014.

Instead of a serious approach to safety, ONC and De Salvo champion window dressing:

Most recently, the Office of the National Coordinator, the Food and Drug Administration, and the Federal Communications Commission issued a proposed plan that would include the creation of a health IT safety center, which would assist in the voluntary reporting of health IT-related medical errors. Many patient advocates, medical professionals, and other stakeholders have expressed support for this approach.

"Many?" - The consensus view, often dominated by industry insiders and others with conflicts of interest, is apparently how ONC and De Salvo think safeguarding the public is to be done.  Those who veer from this "consensus" with facts of risk and harms are to be ignored.

As to the hypocrisy and absurdity of a toothless "health IT safety center", see my April 9, 2014 post "FDA on health IT risk: reckless, or another GM-like political coverup?" at http://hcrenewal.blogspot.com/2014/04/fda-on-health-it-risk-reckless-or.html.

The hundreds of thousands of providers successfully and safely using electronic health records today show that health IT can, and does, improve health and health care.

Dr. Karen DeSalvo
National coordinator for health information technology
Department of Health and Human Services
Washington

Ms. De Salvo apparently never got this Mar. 14, 2014 CMS memo, sent in response to a query by the American Association of Physicians and Surgeons:


CMS: "we do not have any information that supports or refutes claims that a broader adoption of EHRs can save lives."  [But let's spend hundreds of billions of dollars anyway.]

However, in politics, such issues do not seem to matter as compared to passing along the party line. 

In medicine, they do matter.  Very much so (see http://hcrenewal.blogspot.com/2011/06/my-mother-passed-away.html).

-- SS
4:52 PM
Congress has just released an Act "to amend the Federal Food, Drug, and Cosmetic (FD&C) Act to revise and extend the user-fee programs for prescription drugs and medical devices, to establish user-fee programs for generic drugs and biosimilars, and for other purposes."  Health IT provisions are included.

This Act, S. 3187, is entitled the ‘‘Food and Drug Administration Safety and Innovation Act.’’  PDF fulltext is located at this link:  http://www.gpo.gov/fdsys/pkg/BILLS-112s3187enr/pdf/BILLS-112s3187enr.pdf

With regard to health IT, the Act states the following.  A risk-based regulatory framework pertaining to health IT is to be developed (emphases mine):



SEC. 618. HEALTH INFORMATION TECHNOLOGY.


(a) REPORT.—Not later than 18 months after the date of enactment of this Act, the Secretary of Health and Human Services (referred to in this section as the ‘‘Secretary’’), acting through the Commissioner of Food and Drugs, and in consultation with the National Coordinator for Health Information Technology and the Chairman of the Federal Communications Commission, shall post on the Internet Web sites of the Food and Drug Administration, the Federal Communications Commission, and the Office of the National Coordinator for Health Information Technology, a report that contains a proposed strategy and recommendations on an appropriate, risk-based regulatory framework pertaining to health information technology, including mobile medical applications, that promotes innovation, protects patient safety, and avoids regulatory duplication.


(b) WORKING GROUP.—
(1) IN GENERAL.—In carrying out subsection (a), the Secretary may convene a working group of external stakeholders and experts to provide appropriate input on the strategy and recommendations required for the report under subsection (a).

(2) REPRESENTATIVES.—If the Secretary convenes the working group under paragraph (1), the Secretary, in consultation with the Commissioner of Food and Drugs, the National Coordinator for Health Information Technology, and the Chairman of the Federal Communications Commission, shall determine the number of representatives participating in the working group, and shall, to the extent practicable, ensure that the working group is geographically diverse and includes representatives of patients, consumers, health care providers, startup companies, health plans or other third-party payers, venture capital investors, information technology vendors, health information technology vendors, small businesses, purchasers, employers, and other stakeholders with relevant expertise, as determined by the Secretary.


While a welcome development, it remains to be seen whether the Working Group representatives will include critical thinkers without conflicts of interest, whose contributions to the health IT debate in this country are needed far more than those of the traditional hyper-enthusiasts, industry courtiers and opportunists.

I am actually not hopeful.

The "promotes innovation" and "avoids regulatory duplication" phrases are of especially great concern.  As I've written before, "innovation" that involves non-consented experimentation is not innovation at all, it is exploitation, and "regulatory duplication" can become an excuse for milquetoast regulation by the conflicted (e.g., regulatory capture) or poorly qualified.

I also note that this Act, while welcome, is long overdue - another example of putting the cart before the horse (link), with a national project (including CMS penalties for non-adopters) now several years underway.

Final thought:  if health IT were as safe as has been claimed now for decades, or had been made safe through proper development and clinical trials-based testing, we would not need health IT provisions in a "Food and Drug Administration Safety and Innovation Act" in 2012.

-- SS
8:14 AM
There's a health IT meme that just won't die (patients may, but not the meme).

It's the meme that health IT "certification" is a certification of safety.

I expressed concern about the term "certification" being misunderstood even before the meme formally appeared, when the term was adopted by HHS with regard to evaluation of health IT for adherence to the "meaningful use" pre-flight features checklist.  See my mid-2009 post "CCHIT Has Company" where I observed:

HIT "certification." ... is a term I put in quotes since it really is "features qualification" at this point, not certification such as a physician receives after passing Specialty Boards.

The "features qualification" is an assurance that the EHR functions in a way that could enable an eligible provider or eligible hospital to meet the Center for Medicare & Medicaid Services' (CMS) requirements of "Meaningful Use."  No rigorous safety testing in any meaningful sense is done, and no testing under real-world conditions is done at all.

I've seen the meme in various publications and venues.  I've even seen it in legal documents in medical malpractice cases where EHR's were involved, as an attempted defense.

Now the WSJ has fallen for the health IT Certification meme.

An article "There's a Medical App for That—Or Not" was published on May 29, 2012.  Its theme is special regulatory accommodation for health IT in the form of opposition to FDA regulation of devices such as "portable health records and programs that let doctors and patients keep track of data on iPads."

In the article, this assertion about health IT "certification" is made:

... The FDA's approach to health-information technology risks snuffing out activity at a critical frontier of health care. Poor, slow regulation would encourage programmers to move on, leaving health care to roil away for yet another generation, fragmented, disconnected and choking on paperwork.

The process already exists for safeguarding the public for computers in health care. It's not FDA premarket review but the health information technology certification program, established under President George W. Bush and still working fine under the Obama Health and Human Services Department. The government sets the standards and an independent nonprofit [ATCB, i.e., ONC Authorized Testing and Certification Bodies - ed.] ensures that apps meet those standards. It's a regulatory process as nimble as the breakout industry it's meant to monitor. That is where and how these apps should be regulated.

It's a wonderful meme.  Unfortunately, it's wrong.  Dead wrong.

Certification by an ATCB does not "safeguard the public."   Two ONC Authorized Testing and Certification Bodies (ATCB's) admitted this in email, as in my Feb. 2012 post "Hospitals and Doctors Use Health IT at Their Own Risk - Even if Certified".  I had asked them, point-blank:

"Is EHR certification by an ATCB a certification of EHR safety, effectiveness, and a legal indemnification, i.e., certifying freedom from liability for EHR use of clinical users or organizations? Or does it signify less than that?"

I received two replies from major ONC ATCB's indicating that "certification" is merely assurance that HIT meets a minimal set of "meaningful use" guidelines, not that it's been vetted for safety.  For instance:

From: Joani Hughes (Drummond Group)
Sent: Monday, March 05, 2012 1:06 PM
To: Scot Silverstein
Subject: RE: EHR certification question

Per our testing team:

It is less than that. It does not address indemnification although a certification could be used as a conditional part of some other form of indemnification function, such as a waiver or TOA, but that is ultimately out of the scope of the certification itself. Certification in this sense is an assurance that the EHR functions in way that could enable an eligible provider or eligible hospital to meet the CMS requirements of Meaningful Use Stage 1. Or to restate it more directly, CMS is expecting eligible providers or eligible hospitals to use their EHR in “meaningful way” quantified by various quantitative measure metrics and eligible providers or eligible hospitals can only be assured they can do this if they obtain a certified EHR technology.

Please let me know if you have any questions.

Thank you,
Joani.

Joani Hughes
Client Services Coordinator
Drummond Group Inc.

The other ATCB, ICSA Labs, stated that:

... Certification by an ATCB signifies that the product or system tested has the capabilities to meet specific criteria published by NIST and approved by the Office of the National Coordinator. In this case the criteria are designed to support providers and hospitals achieve "Meaningful Use." A subset of the criteria deal with the security and patient privacy capabilities of the system.

Here is a list of the specific criteria involved in our testing:
http://healthcare.nist.gov/use_testing/effective_requirements.html

In a nutshell, ONC-ATCB Certification deals with testing the capabilities of a system, some of them relate to patient safety, privacy and security functions (audit logging, encryption, emergency access, etc.).

What was suggested in the email below (freedom from liability for users of the system, etc.) would be out of scope for ONC-ATCB testing based on the given criteria. [I.e., certification criteria - ed.] I hope that helps to answer your question.

I had noted that:

... My question was certainly answered [by the ATCB responses]. ONC certification is not a safety validation, such as in a document from NASA on aerospace software safety certification, "Certification Processes for Safety-Critical and Mission-Critical Aerospace Software" (PDF) which specifies at pg. 6-7:
In order to meet most regulatory guidelines, developers must build a safety case as a means of documenting the safety justification of a system. The safety case is a record of all safety activities associated with a system throughout its life. Items contained in a safety case include the following:

• Description of the system/software
• Evidence of competence of personnel involved in development of safety-critical software and any
safety activity
• Specification of safety requirements
• Results of hazard and risk analysis
• Details of risk reduction techniques employed
• Results of design analysis showing that the system design meets all required safety targets
• Verification and validation strategy
• Results of all verification and validation activities
• Records of safety reviews
• Records of any incidents which occur throughout the life of the system
• Records of all changes to the system and justification of its continued safety

A CCHIT ATCB juror, a physician informatics specialist, has also done a guest post in Jan. 2012 on HC Renewal about the certification process, reproducing his testimony to HHS on the issue.  That post is "Interesting HIT Testimony to HHS Standards Committee, Jan. 11, 2011, by Dr. Monteith."  Dr. Monteith testified (emphases mine):

... I’m “pro-HIT.” For all intents and purposes, I haven’t handwritten a prescription since 1999.

That said and with all due respect to the capable people who have worked hard to try to improve health care through HIT, here’s my frank message:

ONC’s strategy has put the cart before the horse. HIT is not ready for widespread implementation. 

... ONC has promoted HIT as if there are clear evidence-based products and processes supporting widespread HIT implementation.

But what’s clear is that we are experimenting…with lives, privacy and careers.

... I have documented scores of error types with our certified EHR, and literally hundreds of EHR-generated errors, including consistently incorrect diagnoses, ambiguous eRxs, etc.

As a CCHIT Juror, I’ve seen an inadequate process. Don’t get me wrong, the problem is not CCHIT. The problem stems from MU.

EHRs are being certified even though they take 20 minutes to do a simple task that should take about 20 seconds to do in the field.  [Which can contribute to mistakes and "use error" - ed.] Certification is an “open book” test. How can so many do so poorly?

For example, our EHR is certified, even though it cannot generate eRxs from within the EHR, as required by MU.

To CCHIT’s credit, our EHR vendor did not pass certification. Sadly, our vendor went to another certification body, and now they’re certified.

MU does not address many important issues. Usability has received little more than lip-service. What about safety problems and reporting safety problems? What about computer generated alerts, almost all of which are known to be ignored or overridden (usually for good reason)?
 
The concept of “unintended consequences” comes to mind.

All that said, the problem really isn’t MU and its gross shortcomings, it is ONC trying to do the impossible:

ONC is trying to artificially force a cure for cancer, basically trying to promote one into being, when in fact we need to let one evolve through an evidence-based, disciplined process of scientific discovery and the marketplace.

Needless to say, as was learned at great cost in past decades, a "disciplined process" in medicine includes meaningful safety regulation by objective outside experts.

Further, the certifiers have no authority to take important actions such as forcibly removing dangerous software from the market.  An example is the forced Class 1 recall of a defective system that I wrote about in my Dec. 2011 post "FDA Recalls Draeger Health IT Device Because This Product May Cause Serious Adverse Health Consequences, Including Death".  Class 1 recalls are the most serious type of recall and involve situations in which there is a reasonable probability that use of these products will cause serious adverse health consequences or death.

In that situation, the producer had simply been advising users (in critical care environments, no less) to "work around" defects that could produce incorrect recommended dosage values for critical meds, including a recommended drug dosage up to ten times the indicated dose, as well as corrupted critical cardiovascular monitoring data.  As I observed:

... I find a software company advising clinicians to make sure to "work around" blatant IT defects in "acute care environments" the height of arrogance and contempt for patient safety.

Without formal regulatory authority to take actions such as this FDA recall, "safeguarding the public" is a meaningless platitude.

It's also likely the ATCBs, which are private businesses, would not want the responsibility of "safeguarding the public."  That responsibility would open them up to litigation when patient injury or death was caused, or contributed to, by "certified" health IT.

I have also noted in the past that the use of the term "certification" might have been deliberate, intended to mislead potential buyers into thinking that "certification" is akin to a UL certification of an electrical appliance for safety, or an FAA determination of a new aircraft's airworthiness.

The WSJ needs to clarify and/or retract its statement, as the statement is misinformation.

In my Feb. 2012 post "Health IT Ddulites and Disregard for the Rights of Others" I observed:

Ddulites [HIT hyper-enthusiasts - ed.] ... ignore the downsides (patient harms) of health IT.

This is despite being already aware of, or informed of patient harms, even by reputable sources such as FDA (Internal FDA memo on H-IT risks), The Joint Commission (Sentinel Events Alert on health IT), the NHS (Examples of potential harm presented by health software - Annex A starting at p. 38), and the ECRI Institute (Top ten healthcare technology risks), to name just a few.

In fact, the hyper-enthusiastic health IT technophiles will go out of their way to incorrectly dismiss risk management-valuable case reports as "anecdotes" not worthy of consideration (see "Anecdotes and medicine" essay at this link).

They will also make unsubstantiated, often hysterical-sounding claims that health IT systems are necessary to, or simply will "transform" (into what, exactly, is usually left a mystery) or even "revolutionize" medicine (whatever that means).

Health IT is a potentially dangerous technology.   It requires meaningful regulation to "safeguard the public."  How many incidents like this and this will it take before that is understood by the hyper-enthusiasts?

I've emailed the ATCBs that responded to my aforementioned query, asking for clarification of the WSJ assertion about their role, since that assertion contradicts their earlier replies to me.  I also advised them of the potential liability issues.

However, if it turns out to be true that the ONC-ATCBs do intend to serve as the ultimate watchdog and assurer of public safety related to EHRs, that needs to be known by the public and their representatives.

-- SS
