THE RESULTS ARE shocking: Covid-19 infection rates have been subject to a “vast undercount,” by a factor of 50 or more. A pair of studies based on antibody tests in California had just been written up and posted to a preprint server, the first one going up last Friday–and while the results were shaky and preliminary, they still made headline news. The research methods were “dubious,” one infectious disease expert told WIRED, but he was gladdened by the fact that research of this kind was coming out. “Just to see the first bit of data is quite exciting,” he said.
Dubious, yet still quite exciting: We’ve seen this play before. In recent weeks, other deeply flawed–but also meaningful, important, quite exciting–findings about pandemic topics have made their way into the papers. Consider last week’s reporting on a preprint out of New York University, suggesting a link between obesity and severity of Covid-19. Leora Horwitz, a senior author on the study, “cautioned that the findings were preliminary,” according to a write-up in The New York Times, and “noted that some of the data was still incomplete.” Then, as if by magic, in the very next paragraph: “Dr. Horwitz said the implications for patient care were clear.” Preliminary findings, incomplete data, clear implications: Science!
Our evidence is never perfect, and we will always need to make decisions before our understanding of things is complete, especially when people are dying by the thousands every day, from a virus no one’s ever seen before. But a crude idea–if not an ideology–has taken hold amid the panic of this crisis: that any data, however shoddy, is better than none. “It’s not perfect,” admitted one author of the California antibody research, Stanford University’s John Ioannidis, “but it’s the best science can do.” That’s dangerous and wrong. As Ioannidis himself has previously shown, it can be risky to lower the bar for science in normal times. Right now, it may be even worse. It’s vital that we gather knowledge as quickly as we can, in the face of the pandemic–but sacrificing scientific standards won’t do anything to accelerate that process. If anything, it will slow it down.
The inclination to trade off rigor for the sake of speed appears at every level of research. Scientists are rushing to understand the virus and its spread, and to search for ways we might treat, prevent, or control it. Administrators too have brushed away the rules that might impede this research program, signing off on testing tools without the normal vetting process. Physicians are testing candidate drugs in uncontrolled studies, often many at a time, and under “compassionate use” protocols that are meant to be a last resort for critically ill patients who have no other options. And then the “exciting” data from this hasty work are coming out on preprint servers, and being taken up by doctors, reporters, and the public without ever undergoing peer review.
What, exactly, have we gained from this mad abandonment of careful scientific practice? It’s too soon to say for sure, but it’s quite possible that some results from slipshod studies will pan out in the end, just as some higher-quality studies may end up being wrong. In the meantime, though, the downsides are obvious. As careless work proliferates, it seeds journals, preprint servers, and internet rumor mills with unreliable data, making it more difficult for everyone to sort facts from wishful thinking. At the fringes, this can lead people to take up unproven and potentially dangerous treatments–like the Arizona man who died in March after attempting to self-medicate with chloroquine. But the biggest harms are those that spread into the mainstream of our health care system and research. Half-baked studies don’t just produce misleading results–they also steal attention and precious resources from projects that have a real chance of producing actionable information.
When incomplete information points in the wrong direction, it can have enormous costs, according to Jonathan Kimmelman, director of the biomedical ethics unit at McGill University. Obviously you don’t want doctors to deploy an intervention that’s unsafe. But even when a treatment is merely ineffective, using it diverts attention and resources away from research on other interventions that might work better, he says. When medical practice moves toward promising but unproven interventions, it can make high-quality trials more difficult to run. Patients and doctors start believing that an experimental treatment is effective, based on imperfect data from small or poorly designed studies; that makes them more reluctant to participate in bona fide clinical trials, which might assign them to the control group. “Why would you ever enroll in a placebo-controlled trial when you can get a drug from your physician?” Kimmelman asks.
In the worst-case scenario, an unproven intervention becomes fixed in medical practice long before the evidence has come in. That’s in danger of happening with chloroquine, where people appear to be clamoring for the drug in such numbers that patients who need it for approved purposes, like lupus, are finding their medication in short supply. When this has happened in the past, doctors have been much slower to rigorously evaluate those treatments than they would under normal conditions.
The classic example of this is autologous bone marrow transplantation for treating breast cancer. In the early 1990s, some small, uncontrolled studies suggested that giving breast cancer patients high-dose chemotherapy, along with bone marrow transplants to help them tolerate the toxic treatment, could prolong their lives. It made sense that you could better eradicate cancer by blasting it with lots of chemo, and this treatment approach quickly became the standard of care for tens of thousands of women. When results from randomized clinical trials were finally released in 2000, they showed the treatments provided no survival benefit and more toxicity and side effects. (It also eventually came to light that some of the earlier research supportive of the treatment had been fraudulent.)
It’s a story that’s been repeated again and again. “The history of medicine has taught that once a practice has gained traction in a community, it’s very hard for people to accept negative evidence” that overturns it, says Vinay Prasad, a hematologist at Oregon Health and Science University who wrote a whole book on the topic.
The hazards of this corner-cutting aren’t limited to potential treatments for Covid-19. Diagnostic tools are also being rushed through the normal scientific process. The Food and Drug Administration has instituted special guidelines that allow Covid-19 antibody tests to be sold without the regular validation and approvals. This has led to problems all over the country, as buyers of the tests discover that they don’t perform as promised, and produce results that might not be reliable enough to form a basis for clinical or public health decisions. Even an antibody test that’s been found to be 97 percent accurate (and many of the tests available right now fall short of this) will be a disastrous failure if it’s given out to a population with a 3 percent prevalence of the disease. Under those conditions, more than half of the positive results will be mistaken. “We sacrificed quality for speed, and in the end, when it’s people’s lives that are hanging in the balance, safety has to take precedence over speed,” University of Minnesota infectious disease researcher Michael T. Osterholm told The New York Times earlier this week.
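The arithmetic behind that claim is just Bayes’ rule. Here is a minimal sketch, assuming “97 percent accurate” means 97 percent sensitivity and 97 percent specificity and a 3 percent prevalence; the figures are illustrative, not drawn from any particular test on the market:

```python
# Minimal sketch of the base-rate arithmetic above. Assumption: "97 percent
# accurate" is read as 97% sensitivity and 97% specificity, with 3% prevalence.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive result is a true positive (Bayes' rule)."""
    true_pos = sensitivity * prevalence              # infected people who test positive
    false_pos = (1 - specificity) * (1 - prevalence) # uninfected people who test positive
    return true_pos / (true_pos + false_pos)

print(f"{positive_predictive_value(0.97, 0.97, 0.03):.0%}")  # 50%: half of all positives are false
print(f"{positive_predictive_value(0.97, 0.90, 0.03):.0%}")  # 23%: most positives are false
```

Under those assumptions, a positive result is a coin flip, and the share of false alarms only grows as specificity slips or prevalence falls.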
We can do better, if we understand the difficulties ahead. “Even under the best circumstances, it’s really, really hard to do science,” Kimmelman says, adding that “the challenges we have under normal conditions are amplified in a crisis atmosphere.” In a commentary published Thursday in Science, “Against pandemic research exceptionalism,” Kimmelman and his colleague Alex John London argue that when getting to the truth is at a premium, we need to employ our very best tools and methodologies, not relax our standards to let anything go.
We are living in a sea of uncertainty right now, and flooding the environment with data from poorly designed studies pollutes our information ecosystem in a way that makes it harder to figure out what’s true. Prasad points to a paper published April 10 in the New England Journal of Medicine on the promise of the antiviral medicine remdesivir for treating Covid-19. The study describes 61 patients who were given the drug under compassionate use rules, and reports improvements among two-thirds of the 53 patients who completed the study. But there was no comparison group, so it’s hard to know how well similar patients might have done without the drug. Nor was there any explanation of what happened to the eight patients who were somehow lost along the way. (Did they die or stop taking the drug? Given that they were all hospitalized, it’s strange that the researchers couldn’t provide any more information, Prasad says.)
Another problem is that the patients weren’t randomly selected to receive the drug. Most people who get enrolled in a compassionate use program are probably better off to begin with than the average patient: less likely to have other health issues, and wealthy enough to be in the sort of hospital that can push for experimental treatments of this kind. “You can’t draw conclusions from this,” Prasad notes. “Crappy data isn’t better than no data.” If anything, these results create a false sense of knowing. (Data from a clinical trial for remdesivir is imminent.)
It may take some time, but that ersatz knowledge will be outed in the end. Then we may well wonder why we wasted so much precious time. Consider the lessons of the 2014 Ebola outbreak. Numerous studies of treatments and vaccines for that disease were conducted, but “almost all of them failed to use proper comparator groups or randomization,” Kimmelman says. “People made the argument that because we’re in a crisis setting we can’t afford to use randomization, or it would be unethical, or we have to make compromises on our scientific method.”
As a result, when the outbreak was quelled in 2015, scientists were left not knowing much more about the utility of their interventions than they had before the crisis even started. If researchers had implemented more rigorous study designs, Kimmelman adds, “it would have put us in a much better position to know how to manage Ebola than using a lot of small, uncoordinated, poorly designed studies.”