De-implementation, or “stopping practices that lack supporting evidence,” is a popular topic in infection control circles. In fact, just yesterday I read a discussion in which the authors suggested we no longer need to practice hand hygiene after removing gloves when caring for patients with CDI. Apparently because there aren’t randomized trials – you can’t be serious!
Which brings me to a recent review in the NEJM by Laura Mauri and Ralph D’Agostino titled “Challenges in the Design and Interpretation of Noninferiority Trials.” This review is very well written – perhaps it should be required reading for epidemiology students. In infection control, it is important to recognize that most de-implementation studies are really noninferiority trials. For example, when we discontinue contact precautions, we are really suggesting that “stopping contact precautions” is noninferior to continuing contact precautions in preventing MDRO transmission – of course ignoring that compliance with contact precautions is probably so poor that they are basically the same intervention!
In the contact precautions example, we would be testing whether stopping contact precautions “is not worse than the control (continuing contact precautions) by an acceptably small amount, with a given degree of confidence.” The null hypothesis would be that discontinuing contact precautions leads to higher transmission of MDRO (i.e., is worse), and rejection of the null hypothesis is used to support the claim that discontinuing CP is noninferior. Here I suggest you stare at Figure 1 for a bit (it’s probably easier to read in the paper, alongside the description of each condition, but I have included it below anyway).
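To make the logic concrete, here is a minimal sketch of a noninferiority comparison of transmission risks. All counts, the margin, and the group sizes are invented for illustration – a real trial would prespecify the margin and likely use a more careful interval than the simple Wald approximation shown here.

```python
# Hypothetical sketch: is "stop contact precautions" noninferior to
# "continue contact precautions" for MDRO transmission?
# All numbers below are made up for illustration.
from math import sqrt

# Transmission events / patients observed (hypothetical data)
events_stop, n_stop = 18, 1000   # contact precautions discontinued
events_cont, n_cont = 15, 1000   # contact precautions continued

p_stop = events_stop / n_stop
p_cont = events_cont / n_cont
diff = p_stop - p_cont           # risk difference (stop minus continue)

# Standard error of the risk difference (Wald approximation)
se = sqrt(p_stop * (1 - p_stop) / n_stop + p_cont * (1 - p_cont) / n_cont)

# Upper bound of the two-sided 95% CI (equivalently, one-sided 97.5%)
upper = diff + 1.96 * se

margin = 0.02  # prespecified noninferiority margin (an assumed value)

# Noninferiority is claimed only if the entire confidence interval lies
# below the margin, i.e. the upper bound rules out a difference that large.
noninferior = upper < margin
print(f"risk difference = {diff:.4f}, upper 95% CI bound = {upper:.4f}")
print("noninferior" if noninferior else "noninferiority not shown")
```

The key point the figure makes is visible here: the conclusion hinges on where the whole confidence interval sits relative to the margin, not on whether the point estimate alone looks small.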
Further discussion about the design and analysis of these trials is way beyond the scope of a humble blog post; however, the authors include nice descriptions of methods for deriving noninferiority margins, the “constancy assumption” and statistical analysis approaches. But their 6th and 7th components of noninferiority trials are worth mentioning from an infection control standpoint:
6) Adequate ascertainment of outcomes: The authors write that “incomplete or inaccurate ascertainment of outcomes, as a result of loss to follow-up, treatment crossover or nonadherence, or outcomes that are difficult to measure or subjective, may cause the treatments being compared to falsely appear similar.” I would suggest that studies seeking to de-implement contact precautions that do not include admission/discharge surveillance cultures to detect transmission events fail this criterion.
7) Issues with “Intention-to-Treat” in noninferiority designs: In superiority studies (typical RCTs), intention-to-treat analysis, in which every randomized patient is analyzed in their assigned group even if they received only a single dose, is the gold standard. The authors write: “In a noninferiority study, however, if some patients did not receive the full course of the assigned treatment, an intention-to-treat analysis may produce a bias toward a false positive conclusion of noninferiority by narrowing the difference between the treatments. In some instances, a per-protocol analysis, which excludes patients who did not meet the inclusion criteria or did not receive the randomized, per-protocol assignment, may be preferable in a noninferiority trial. However, a per-protocol analysis may include fewer participants and introduce postrandomization bias. In general, both the intention-to-treat and per-protocol data sets are important. We suggest analyzing both sets and examining the results for consistency.”
Just some things to think about as we read the coming wave of de-implementation studies in infection control, including diagnostic stewardship.