There’s a lot of news to catch up on, and to keep things straight I’ll divide the hydroxychloroquine part out into this post, and cover others in the next one. My previous reviews of the clinical data in this area are here.
First up is this study from France. It’s another very small one, and all the usual warnings apply because of that. It’s from a team at the University of Paris and Saint-Louis Hospital there, and they evaluated 11 consecutive patients admitted there and treated with the same regimen the Marseilles group first reported (hydroxychloroquine 600mg/day and azithromycin, 500mg the first day and 250mg/day thereafter). The mean age of their patients was 58.7 years, and (notably) eight of the 11 had significant comorbidities (two obese, five with various forms of cancer, one with HIV). That’s a tough population, and unfortunately, the HCQ/AZ combination did nothing. One patient died (and two others went on to the ICU), and of the ten remaining, eight were still positive for the virus by nasal swab on days 5/6 after treatment. One patient had to discontinue therapy on day 4 because of QT prolongation, a known side effect of hydroxychloroquine that can lead to fatal heart arrhythmia.
So while this is a small study and not a perfect match, it provides no evidence that the HCQ/AZ combination had any benefit at all. While we’re on the subject of QT prolongation, there’s this preprint from a medical team at NYU that was also treating patients with the same combination of drugs. In 84 patients, they found notable QT prolongation in about 30% of them, and another 11% had prolongation to a level (>500 milliseconds) that put them at high risk for arrhythmia. This group’s mean age was 63, and 74% were male. No cancer patients in this group, but 65% did have hypertension and 20% were diabetic (which, from many reports, is actually a reasonable look at the patients most likely to progress to severe disease). The strongest predictor of dangerous QT numbers was the development of renal trouble while on the drug combination.
There are a couple of other things that need to be noted. One is that hydroxychloroquine itself actually lowers the activity of the innate immune system; that’s why people take it for lupus and for rheumatoid arthritis. Many people are saying that perhaps it will work best if taken early in the course of infection, but this effect (which is mediated through Toll-like receptors) should be kept in mind. Another potentially important point is raised in this preprint – which, it has to be said, is not human data but mouse toxicology. But with that in mind, the authors report what looks like a bad interaction in that species between HCQ and metformin. And by “bad”, I mean about 30% mortality. If this translates at all to humans, it could be bad news, because (as mentioned above) diabetics look like a high-risk group, and many patients may well have been taking metformin when they presented at the hospital. We need more information on this. An investigational drug combination that showed this effect in mice would not move forward in the normal course of things.
Finally, I would like to point out this preprint from a multi-country team (Denmark, Netherlands, UK) which goes back over the original Marseilles report and reanalyzes its statistics. The problems that many noted with that paper show up in detail here, and the lessons that you take from it can vary a great deal depending on the details that were not well reported or characterized:
Performing a Bayesian A/B test, we found that for the original data, there was strong statistical evidence for the positive effect of HCQ mono improving the chances of viral reduction when compared to the comparison group. However, we found that the level of evidence drops down to moderate evidence when including the deteriorated patients, and it drops further to anecdotal evidence when excluding the patients that were not tested on the day of the primary outcome (day 6). For context, anecdotal evidence is generally considered ‘barely worth mentioning’ (Jeffreys, 1961). We were able to qualitatively reproduce the finding of an improvement of HCQ+AZ over HCQ mono. However, although this finding was statistically significant in the original finding, our reanalysis revealed only anecdotal evidence for the positive effect of HCQ+AZ over HCQ mono. However, when we included the deteriorated patients into the analysis, this evidence increased to moderate.
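For readers who haven’t run into this kind of analysis, here’s a minimal sketch of the machinery involved. The counts below are made up for illustration – they are not the Marseilles data – and the reanalysis itself worked with Bayes factors rather than the posterior probability computed here, but the ingredients are the same: a Bayesian comparison of two observed proportions, plus the Jeffreys (1961) verbal scale that the authors use for words like “anecdotal” and “moderate”:

```python
import random

random.seed(0)

def prob_treatment_better(success_t, n_t, success_c, n_c, draws=100_000):
    """Monte Carlo estimate of P(treatment rate > control rate)
    under independent, uninformative Beta(1, 1) priors."""
    wins = 0
    for _ in range(draws):
        p_t = random.betavariate(1 + success_t, 1 + n_t - success_t)
        p_c = random.betavariate(1 + success_c, 1 + n_c - success_c)
        if p_t > p_c:
            wins += 1
    return wins / draws

def jeffreys_label(bf):
    """Rough verbal labels for a Bayes factor, after Jeffreys (1961)."""
    if bf < 3:
        return "anecdotal (barely worth mentioning)"
    if bf < 10:
        return "moderate"
    if bf < 30:
        return "strong"
    return "very strong"

# Hypothetical counts for illustration only, not the trial's data:
# 8 of 14 cleared the virus on treatment vs. 2 of 16 in the comparison group.
print(prob_treatment_better(8, 14, 2, 16))
print(jeffreys_label(2.5))  # a Bayes factor of 2.5 is merely "anecdotal"
```

The point the reanalysis makes is visible even in a toy like this: with samples this small, moving a couple of patients between categories (the “deteriorated” or untested ones) swings the evidence from one verbal category to another.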
It’s no wonder that this work has set off so many arguments: statistically, it’s like a funhouse mirror. Here, though, is where some of the folks pinging me on Twitter and sending me emails tend to get more worked up, especially to that point about anecdotal data. I can see where they’re coming from: if you haven’t done this stuff, you can look at a report of people responding to such a treatment and figure that the answer is here – right here, and anyone who doesn’t see it must have some ulterior motives in ignoring what’s in front of their face. But that’s not how it works.
It’s weird and startling, though, if you haven’t had the opportunity to go back through clinical research (and even patient treatment) and see how many things looked like they worked and really didn’t. It happens again and again. Alzheimer’s drugs, obesity drugs, cardiovascular drugs, osteoporosis drugs: over and over there have been what looked like positive results that evaporated on closer inspection. After you’ve experienced this a few times, you take the lesson to heart that the only way to be sure about these things is to run sufficiently powered controlled trials. No short cuts, no gut feelings – just data.
What do I mean by “sufficiently powered”? That gets to the concept of “effect size”, which is something that most people outside of medical research probably don’t spend much time thinking about. One of the favorite arguing points that I get sent my way is “You don’t have to run a controlled trial to see that parachutes work! What are you going to do, take up a planeload of people and toss half of them out without a chute to prove your point?” Ah, but the effect size of having a parachute at 10,000 feet is very, very large. And the larger the effect size, the smaller a trial can be and still have meaning. In drug research, though, we do not approach parachute levels of difference very often. Drugs help some parts of the patient population, to varying degrees, whereas a parachute helps every single person who’s tossed out of a plane (and the result shows up in a very hard, dramatic, and easily measurable endpoint!)
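The relationship between effect size and trial size can be made concrete with a standard back-of-the-envelope power calculation (the usual normal-approximation formula for comparing two event rates; the rates plugged in below are illustrative, not from any of the studies above):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(p_control, p_treatment, alpha=0.05, power=0.8):
    """Approximate patients needed per arm to detect a difference in
    event rates with a two-sided z-test (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # significance threshold
    z_b = NormalDist().inv_cdf(power)            # desired power
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    return ceil((z_a + z_b) ** 2 * variance / (p_control - p_treatment) ** 2)

# Parachute-sized effect: 99% event rate without, 1% with. One subject per arm.
print(n_per_arm(0.99, 0.01))
# Typical drug-sized effect: cutting a 20% event rate to 15%.
# That takes roughly nine hundred patients per arm to detect reliably.
print(n_per_arm(0.20, 0.15))
```

That’s why an uncontrolled report on a few dozen patients can’t settle the question for a drug with a modest effect, even though nobody needs a trial to validate parachutes.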
It may seem like the Covid-19 epidemic is more like the parachute situation, but consider that many people get infected with the virus without ever really knowing that they’re sick – which is not the case with being heaved out of an airplane, for the most part. Most of the people who get the coronavirus survive – the fatality rate is of course being argued about, but is probably in the 1% range, give or take. Now, from a public health standpoint, that’s an awful figure, ten times as bad as the seasonal flu and with a more infectious virus as well. But for figuring out therapeutic options, it’s tricky: if most everyone will eventually recover with the current standard of care, how do you test your new idea?
Well, you have to look at disease progression, for example: how many go on to the ICU, how long they’re in the hospital total, and so on. These are very important things to improve, and you’ll notice that these are patient-centered endpoints as well. Ideally, that’s what you want, as opposed to surrogate endpoints – in this case, viral load would be an example of one of those. You would think that viral load would correlate with the patient-centered ones, but that has to be proven and it might not be as tight a connection as you would like, either. As more studies collect such data you can start to use surrogate endpoints once you feel that they’re useful, but you should never just assume their utility up front. What counts in this epidemic is how many people get sick, how sick they get, and how quickly they recover. There are a lot of variables involved in all of those, and we need a lot of quality data to see what’s really going on.
One more point: someone last night was trying to tell me that my job was to “bring people hope” and that my attitude wasn’t helping with that task. Let me clear that up. I am not a physician, and I am not a clinician. I have spent my career in very early stage drug discovery, not at the bedside. Unfortunately, my lab skills are not well matched to the current epidemic – my own research has been more oncology-focused, and it’s way back in the pipeline. None of the last three companies I’ve worked for currently have any antiviral research. So as for my contribution to fighting the coronavirus, well, you’re looking at a significant part of it right now. I can curate and annotate the news, add my own opinions after thirty years of drug discovery work and (I hope) make people smarter about what’s going on.
But keep in mind, most of what I’ve done, the vast majority of what I’ve done over those thirty years, has not worked in the clinic. Most things don’t. My job as a researcher has not been to raise people’s hopes without data in hand; my job has been to try to produce such data so as to raise hopes with some reason to do so. When I see something to be hopeful about, I’ll say so, and when I think people are getting ahead of what we know, I’ll say that, too. Go back to the first things I wrote about the hydroxychloroquine/azithromycin work: I called it “potentially very interesting” and called for more data to see if it was real. That’s where I still am. Raising hopes just for the sake of raising hopes is not where I am, though, and in fact I find that whole idea to be cruel. We’re going to defeat this virus, this epidemic, by being as hard-nosed as we can be about collecting real data on real-world outcomes as quickly and efficiently as we can, not by talking vaguely about miracle cures and isn’t it something and wouldn’t it be great. You’ll need to go somewhere else for that. Try Dr. Oz, he’s good at that crap. I’ll stick to what I’m good at here.