A month ago, the media got very excited about an HIV vaccine. Study results, released in Thailand with a maximum of fuss and a minimum of detail, showed that the two-step vaccine might protect about a third of the people who get the shots against HIV. Then the doom-mongers weighed in: without more information, we might be overestimating the effects of the jabs. So, is the syringe half full or half empty?
Now that more details have been released, we’re still looking at half a glass. Hurrah! said some reports. The vaccine really is protective. Boo hoo, said others. Unless you torture the statistics, they don’t confess to much of an impact. I finally got around to combing through the full report of the trial in the New England Journal of Medicine. Both the optimists and the pessimists are right. It really depends on what your hopes and expectations were. If you are a basic scientist (as most of the people involved in the study were) you’d be pretty thrilled by the results, because they show that vaccines might one day work. If you are a public health boffin such as myself, you’d be pretty disappointed, because the study suggests that this vaccine doesn’t work for the people who really need it — a point much underplayed in the official reports.
In their full paper the research team reported three sets of results for this study among young men and women in Northern Thailand; only the most optimistic of these was reported to the press in the initial release of results a month ago. Keith Alcorn of Aidsmap has produced a typically sound and balanced summary of the paper if you want more details. But here’s my more opinionated take.
Analysis 1: Real World.
This is technically known as an “intention to treat analysis”. The final analysis includes everyone who was enrolled in the study, regardless of whether or not they followed all the procedures correctly. This is the analysis which is most interesting to public health boffins, because it comes closest to showing how things might happen in the messy reality of life, where people forget to show up for appointments, get given the wrong dose by mistake, etc. In this analysis, people in the vaccinated group were 26.4% less likely than those in the unvaccinated group to get infected with HIV. There was a 92% probability that this difference was not due simply to chance; that means the difference would not be considered “significant” by those who cleave to the mystical figure of 95% to dictate what is or is not worth considering. Taking into account random differences between the people assigned to the vaccine and the placebo groups, the researchers were 95% sure that the real effect of the vaccine lay somewhere between making you 48 percent less likely to contract HIV and making you four percent more likely to get infected.
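The arithmetic behind numbers like “26.4% less likely, 95% CI from –4% to +48%” is a relative-risk calculation, and it is easy to sketch. Here is a minimal illustration in Python, using one common approach (a confidence interval on the log relative-risk scale); the infection counts below are hypothetical round numbers in the spirit of the trial, not the published tallies.

```python
import math

def efficacy_ci(inf_vax, n_vax, inf_pla, n_pla, z=1.96):
    """Vaccine efficacy = 1 - relative risk, with an approximate
    95% CI computed on the log relative-risk scale."""
    rr = (inf_vax / n_vax) / (inf_pla / n_pla)
    # Standard error of log(RR), the usual large-sample approximation
    se = math.sqrt(1/inf_vax - 1/n_vax + 1/inf_pla - 1/n_pla)
    rr_lo = math.exp(math.log(rr) - z * se)
    rr_hi = math.exp(math.log(rr) + z * se)
    # The interval flips: the lower RR bound gives the upper efficacy bound
    return 1 - rr, 1 - rr_hi, 1 - rr_lo

# Hypothetical counts: 55 infections among 8,200 vaccinated,
# 75 among 8,200 given placebo
eff, lo, hi = efficacy_ci(inf_vax=55, n_vax=8200, inf_pla=75, n_pla=8200)
print(f"efficacy {eff:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
```

With counts in this range, the interval straddles zero, which is exactly the situation the trial found itself in: a point estimate that looks protective, and a confidence interval that cannot rule out no effect at all.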
Analysis 2: Ideal World.
Known as the “per protocol analysis”, this looks only at the people who got all their shots on time, in the right doses. This is more or less the human equivalent of doing things in lab conditions, and is the sort of analysis that is most useful for basic scientists. Only three quarters of all the study subjects qualified. That in itself is worrying to people like me; if we can’t deliver four doses of vaccine to a quarter of the participants in an incredibly well-organised, well-funded study with hugely well-motivated study staff, how the hell are we going to do it in the real world? More worrying to the basic scientists, I would have thought, is the fact that in this sub-population of people who did everything exactly comme il faut, the vaccine did not have a more pronounced effect (26.2%, with a 16% chance that the effect was due to chance). Because numbers were smaller there was an even wider range that might have reflected the “true” outcome, from increasing infections by 13.3 percent to cutting them by 51.9 percent.
Analysis 3: Tidied-up World.
Not a common convention, the “modified intention to treat analysis” essentially reflected the real world with the messiest bits knocked off. In this analysis, the researchers included everyone in the study, except the seven people who, it turned out, were already infected before their first jab. These people were missed in the initial screening test because they were still in the “window period” during which a person has the virus, but not yet the antibodies which cause a test to show up positive. They were discovered because they had turned positive by their last jab; the team then went back and used a (much more expensive) test for the virus itself on the original screening sample and found that they had already been infected. From a basic science point of view, it makes perfect sense to exclude these people from the analysis; obviously, a vaccine can’t protect people who are already infected. From a public health point of view, it’s debatable whether we should tidy up the data like this. If we put huge national vaccine programmes in place, we’re going to be vaccinating people who are in the window period, especially in the early years, and in groups at highest risk. I’d say we want to take that into account when estimating the potential effect of a vaccine. It was this “tidy” analysis that hit the headlines a month ago, and gave the study its only “significant” result — a 31.2% reduction in HIV infection, with a 96% probability that the effect was not a statistical fluke. This time, we could be 95% sure that the vaccine didn’t make things worse, that it reduced infection by at least 1.1%, and perhaps by as much as 51.2%.
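The gap between the “real world” and “tidied-up” figures is just arithmetic: put the window-period infections back into the denominators and numerators and the efficacy estimate gets diluted. A quick sketch, using hypothetical counts; in particular, how the seven pre-infected people split between the two arms is my illustrative assumption, not a figure taken from the paper.

```python
def efficacy(inf_vax, n_vax, inf_pla, n_pla):
    """Point estimate only: efficacy = 1 - relative risk."""
    return 1 - (inf_vax / n_vax) / (inf_pla / n_pla)

# Hypothetical "tidied-up" counts, with the pre-infected people excluded:
# 51 infections among 8,200 vaccinated vs 74 among 8,200 on placebo
tidy = efficacy(51, 8200, 74, 8200)

# Put the 7 window-period infections back, assuming (for illustration only)
# that 5 fell in the vaccine arm and 2 in the placebo arm
real = efficacy(51 + 5, 8200 + 5, 74 + 2, 8200 + 2)

print(f"tidied-up {tidy:.1%} vs real-world {real:.1%}")
```

Because the pre-infected people count as infections the vaccine could never have prevented, the “real world” estimate is always pulled toward zero whenever more of them happen to land in the vaccine arm, which is roughly the pattern behind the 31.2% versus 26.4% headline numbers.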
As a public health nerd, I’m most interested in the Real World Analysis. But I’m even more interested in something that’s buried down at the bottom of Table 2.
Here, the research team looks at the effect of the vaccine on people with different levels of risk behaviour. In a shockingly poor piece of paper writing and editing, it is not actually possible to tell from the Methods section of this paper how the different levels of risk are defined. But the definition for high risk does seem to include at least some of the usual suspects: needle sharing during drug injection, same-sex partners for men, commercial sex etc. And what Table 2 shows is that the vaccine makes no difference at all for those at highest risk. It might cut infection rates by nearly half in that group, or it might increase the chance of getting HIV by nearly three quarters. The best-guess estimate is that it cuts infection rates by under 4% among the people who are most likely to be exposed to the virus. Four percent is as good (or bad) as nothing.
The researchers point out that the study was not designed to look at these differences, but call the results “intriguing”. To an immunologist, they must be. Perhaps the immunity conferred by the vaccine is not strong enough to withstand the repeated assaults suffered by someone who shares needles daily or turns tricks three times a week — I have no idea. But to public health workers, it is not intriguing, it is devastating. If a vaccine doesn’t work for the people who need it most, what’s the point? It depends on costs, of course. But would we really develop something that we could give to people who have a very low probability of exposure, while leaving those who are likely to be at risk for HIV unprotected?
It’s a false dichotomy, of course. This trial is a triumph for basic science, because it gives us something positive to work with. It is very far from being a triumph for public health, and it is not helpful that in the early rush of euphoria it was presented as such. I’d even be wary of the language used by the authors of the NEJM paper: in their headline result, they reported quite wrongly that the study showed that “there was a trend toward the prevention of HIV-1 infection among the vaccine recipients”. A trend is something that develops over time. If anything, the data suggest that the effect of the vaccine weakened over time, so the trend was away from protection, not towards it. But I’m splitting hairs. With vaccines, the basic science has to be right before we even think about the public health questions. This study will send the immunologists back to the drawing board. They need to figure out how we have a possibly successful vaccine that makes no difference to viral loads in those who do get infected. They need to understand why people who are most exposed to HIV are least likely to be protected. They need to parse out the mechanism by which these two sets of shots, each of which has failed on its own, might be working together. If (and it is still only an if) they can do all of that, then develop something that really does work, the public health nerds can start worrying about how to deliver it, and to whom.