A new study of e-cigarettes’ efficacy in smoking cessation has not only pitted some of vaping’s most outspoken scientific supporters against some of its fiercest academic critics, but also illustrates many of the pitfalls facing researchers on the topic and those – including policy-makers – who must interpret their work.
The furore has erupted over a paper published in The Lancet Respiratory Medicine and co-authored by Stanton Glantz, director of the Center for Tobacco Control Research and Education at the University of California, San Francisco, and a former colleague – Sara Kalkhoran, now of Harvard Medical School, who is named as first author but does not enjoy Glantz’s fame (or notoriety) in tobacco control and vaping circles.
Their research sought to compare the success rates in quitting combustible cigarettes of smokers who vape and smokers who don’t: in other words, to find out whether use of e-cigs is correlated with success in quitting, which might well imply that vaping helps smokers quit. To do this they performed a meta-analysis of 20 previously published papers. That is, they didn’t conduct any new research directly on actual smokers or vapers, but instead attempted to combine the results of existing studies to see whether they converge on a likely answer. This is a common and well-accepted method of extracting truth from statistics in many fields, although – as we’ll see – it’s one fraught with challenges.
Their headline finding, promoted by Glantz himself online as well as by the university, is that vapers are 28% less likely to quit smoking than non-vapers – a conclusion which would suggest that vaping is not only ineffective in smoking cessation, but actually counterproductive.
The result has, predictably, been uproar from e-cigarettes’ supporters in the scientific and public health community, especially in Britain. Among the gravest charges are those levelled by Peter Hajek, the psychologist who directs the Tobacco Dependence Research Unit at Queen Mary University of London, calling the Kalkhoran/Glantz paper “grossly misleading”, and by Carl V. Phillips, scientific director of the pro-vaping Consumer Advocates for Smoke-Free Alternatives Association (CASAA) in the United States, who wrote “it is apparent that Glantz was misinterpreting the data willfully, rather than accidentally”.
Robert West, another British psychologist and the director of tobacco studies at a centre run by University College London, said “publication of this study represents a major failure of the peer review system at this journal”. Linda Bauld, professor of health policy at the University of Stirling, suggested the “conclusions are tentative and often incorrect”. Ann McNeill, professor of tobacco addiction at the National Addiction Centre at King’s College London, said “this review is not scientific” and added that “the information included about two studies that I co-authored is either inaccurate or misleading”.
But what, precisely, are the problems these eminent critics find in the Kalkhoran/Glantz paper? To answer that question, it’s necessary to go beneath the sensational 28% figure, and examine what was studied and how.
Meta-analysis is a seductive idea. If (say) you have 100 separate studies, each of 1,000 individuals, why not combine them to create – in effect – a single study of 100,000 people, whose results should be far less prone to any distortions that might have crept into an individual investigation?
(This might happen, for instance, by inadvertently selecting participants with a greater or lesser propensity to quit smoking because of some factor not considered by the researchers – a case of “selection bias”.)
Of course, the statistical side of a meta-analysis is rather more sophisticated than merely averaging out the totals, but that’s the general idea. And even from that simplistic outline, it’s immediately apparent where problems can arise.
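To make that “more sophisticated than averaging” concrete: the standard approach pools each study’s effect size, weighting precise studies more heavily than noisy ones. A minimal sketch in Python, assuming each study reports an odds ratio with a 95% confidence interval – the numbers below are invented for illustration, not taken from the studies in the Kalkhoran/Glantz paper:

```python
import math

# Hypothetical per-study results: odds ratio (OR) of quitting for
# vapers vs non-vapers, with 95% confidence intervals. OR < 1 means
# vapers were less likely to quit in that study.
studies = [
    {"or": 0.61, "ci": (0.40, 0.93)},
    {"or": 1.20, "ci": (0.80, 1.80)},
    {"or": 0.72, "ci": (0.50, 1.04)},
]

def pool_fixed_effect(studies):
    """Fixed-effect, inverse-variance pooling of log odds ratios."""
    num = den = 0.0
    for s in studies:
        log_or = math.log(s["or"])
        lo, hi = s["ci"]
        # Recover the standard error from the 95% CI width on the log scale.
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se**2          # more precise studies get more weight
        num += w * log_or
        den += w
    return math.exp(num / den)   # pooled OR, back on the ratio scale

pooled = pool_fixed_effect(studies)
print(f"pooled OR = {pooled:.2f}")
```

A pooled odds ratio below 1.0 is the kind of result behind a claim like “28% less likely to quit” (an OR of roughly 0.72) – and, as the rest of this piece explains, everything hinges on whether the inputs to that weighted average belong together in the first place.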
If its results are to be meaningful, a meta-analysis needs to somehow take account of variations in the design of the individual studies (they may define “smoking cessation” differently, for example). If it ignores those variations, and tries to shoehorn all the results into a model that many of them don’t fit, it introduces distortions of its own.
Moreover, if the studies it’s based on are inherently flawed in any way, the meta-analysis – however painstakingly conducted – will inherit those same flaws.
This is a charge made by the Truth Initiative, a U.S. anti-smoking nonprofit which generally takes an unfriendly view of e-cigarettes, regarding a previous Glantz meta-analysis which came to conclusions similar to the Kalkhoran/Glantz study’s.
In a submission last year to the United States Food and Drug Administration (FDA), responding to that federal agency’s request for comments on its proposed e-cigarette regulation, the Truth Initiative noted that it had reviewed many studies of e-cigs’ role in cessation and concluded they were “marred by poor measurement of exposures and unmeasured confounders”. Yet, it said, “many of these have been included in a meta-analysis [Glantz’s] that claims to show that smokers who use e-cigarettes are less likely to stop smoking compared to those who do not. This meta-analysis simply lumps together the errors of inference from all of these correlations.”
It added that “quantitatively synthesizing heterogeneous studies is scientifically inappropriate and the findings of the meta-analyses are therefore invalid”. Put bluntly: don’t mix apples with oranges and expect to get an apple pie.
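The charge of “synthesizing heterogeneous studies” can itself be quantified. A standard diagnostic is Cochran’s Q together with the derived I² statistic, which estimates how much of the variation between studies reflects genuine differences rather than chance. A rough sketch, with invented effect sizes chosen to show what an apples-and-oranges collection looks like:

```python
# Hypothetical per-study log odds ratios and their standard errors.
log_ors = [-0.49, 0.18, -0.33, 0.64]
ses     = [0.22, 0.21, 0.19, 0.20]

weights = [1 / se**2 for se in ses]
pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)

# Cochran's Q: weighted squared deviations from the pooled estimate.
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_ors))
df = len(log_ors) - 1

# I^2: the share of total variation attributable to real between-study
# differences rather than sampling error (floored at zero).
i2 = max(0.0, (q - df) / q) * 100

print(f"Q = {q:.1f} on {df} df, I^2 = {i2:.0f}%")
```

An I² in the range conventionally read as “substantial” (above roughly 50–75%) is a signal that the studies may be measuring different things, and that a single pooled number could be misleading – which is essentially the Truth Initiative’s complaint in statistical form.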
Such doubts about meta-analyses are far from rare. Steven L. Bernstein, professor of health policy at Yale, echoed the Truth Initiative’s points when he wrote in The Lancet Respiratory Medicine – the same journal that published this year’s Kalkhoran/Glantz work – that the studies included in their meta-analysis were “mostly observational, often without a control group, with tobacco use status assessed in widely disparate ways”, though he added that “this is no fault of [Kalkhoran and Glantz]; abundant, published, methodologically rigorous studies simply do not exist yet”.
So a meta-analysis can only be as good as the studies it aggregates, and drawing conclusions from it is only valid if the studies it’s based on are constructed in similar ways to one another – or, at least, if any differences are carefully compensated for. Of course, such drawbacks also apply to meta-analyses that are favourable to e-cigarettes, such as the famous Cochrane Review from late 2014.
Other criticisms of the Kalkhoran/Glantz work go beyond the drawbacks of meta-analyses in general, focusing instead on the specific questions posed by the San Francisco researchers and the ways they attempted to answer them.
One frequently-expressed concern has been that Kalkhoran and Glantz were studying the wrong people, skewing their analysis by not accurately reflecting the true number of e-cig-assisted quitters.
As CASAA’s Phillips points out, the e-cigarette users in the two scholars’ number-crunching were all current smokers who had already tried e-cigarettes when the studies of their quit attempts started. Thus, the research by its nature excluded people who had taken up vaping and quickly abandoned smoking; if such people exist in large numbers, counting them would have made e-cigarettes seem a far more successful route to smoking cessation.
Another question was raised by Yale’s Bernstein, who observed that not all vapers who smoke are trying to quit combustibles. Naturally, people who aren’t trying to quit won’t quit, and Bernstein observed that when these people were excluded from the data, it suggested “no effect of e-cigarettes, not that e-cigarette users were less likely to quit”.
Excluding some people who did manage to quit – while including people who had no intention of quitting anyway – would certainly seem likely to affect the outcome of a study purporting to measure successful quit attempts, although Kalkhoran and Glantz argue that their “conclusion was insensitive to a wide range of study design factors, including whether the study population consisted only of smokers interested in smoking cessation, or all smokers”.
But there is also a further, slightly cloudy area which affects much science – not just meta-analyses, and not just these particular researchers’ work – and which, importantly, is frequently overlooked in media reporting, as well as by institutions’ publicity departments.