Homeopathic Study of Cancer Treatment Fails. Homeopaths Conclude It Works.
Given that it is Homeopathy Awareness Week again, I thought it would be worth exploring how homeopaths mislead us about science and evidence.
So, from that site of uniformly misleading health advice, What Doctors Don’t Tell You, we learn that,
Homeopathy has a ‘clinically relevant’ effect way beyond placebo
Critics have always dismissed homeopathy as offering nothing more than a placebo effect – you just think it’s making you better. A new study has proved them wrong. Classical homeopathy has clear benefits to cancer patients that are “clinically relevant and statistically significant”, say researchers.
This is of course wrong. And as always, when homeopaths are wrong they are wrong in interesting ways that illuminate the nature of science and evidence.
The paper in question (Classical homeopathy in the treatment of cancer patients – a prospective observational study of two independent cohorts, Rostock et al.) was recently published in the open access online journal BMC Cancer. The study was conducted in Switzerland and Germany and compared two groups of patients undergoing treatment for cancer. All patients in the study were receiving conventional treatment for a range of cancers. However, 259 of them had chosen to attend a hospital that also gave them a homeopathic sugar pill. Another 380 patients had chosen to attend hospitals that gave only conventional treatments. The purpose of the study was to see whether any differences emerged between the groups, such as in their quality of life, their levels of fatigue and depression, and their satisfaction with their treatments.
It is possible with this sort of observational study to get some evidence regarding the effectiveness of treatments, but you have to be very careful. The reason is that there may be many differences between the groups apart from which hospital they attended. The gold standard of trials involves randomisation, where the patient does not get a choice as to which arm of the study they are in – a coin is tossed. With enough patients, this should smooth out differences between the groups. The other ‘gold standard’ method is to blind the participants: they are randomly assigned to an arm, but not told which arm they are in. This reduces biases where patients may prefer one treatment or another and so report their satisfaction differently.
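A toy simulation (my own numbers, nothing to do with the paper) shows why the coin toss matters: when chance, rather than patient choice, assigns people to arms, a covariate such as age ends up balanced between the groups almost automatically.

```python
import random
import statistics

random.seed(42)

# Hypothetical population: each patient has an age, a covariate we
# would like balanced between the two arms of a trial.
patients = [{"age": random.gauss(58, 10)} for _ in range(1000)]

# Randomise: a coin toss decides which arm each patient joins,
# rather than the patient choosing a hospital.
arm_a, arm_b = [], []
for p in patients:
    (arm_a if random.random() < 0.5 else arm_b).append(p)

mean_a = statistics.mean(p["age"] for p in arm_a)
mean_b = statistics.mean(p["age"] for p in arm_b)

# With enough patients, the two arms end up with very similar
# average ages, purely by chance.
print(round(mean_a, 1), round(mean_b, 1))
```

In the Rostock study, by contrast, the patients chose their own hospital, so no such balancing ever took place.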
The researchers knew this was a problem but realised that there were practical difficulties in blinding and randomising – so, they chose the next best option – a matched pair analysis.
Matching pairs involves finding people in both groups who appear to be very similar in lots of key areas, such as age, type of cancer, stage of disease, stage of treatment, occupation, income and so on. The more factors you can match up, the more confident you can be that any differences are due to the treatment and not to some unmatched factor, such as alcohol consumption.
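A minimal sketch of the idea, with made-up patients (the study's actual matching criteria and data are not reproduced here): pair each homeopathy patient with the closest conventional patient of the same cancer type within a small age window, and discard anyone who cannot be paired.

```python
# Hypothetical patients, for illustration only.
homeopathy = [
    {"id": "H1", "cancer": "breast", "age": 54},
    {"id": "H2", "cancer": "prostate", "age": 61},
    {"id": "H3", "cancer": "melanoma", "age": 48},
]
conventional = [
    {"id": "C1", "cancer": "breast", "age": 56},
    {"id": "C2", "cancer": "colorectal", "age": 60},
    {"id": "C3", "cancer": "prostate", "age": 62},
]

MAX_AGE_GAP = 5  # only accept pairs within five years of age

pairs = []
unused = list(conventional)
for h in homeopathy:
    # Candidates must match on cancer type and fall within the age window.
    candidates = [c for c in unused
                  if c["cancer"] == h["cancer"]
                  and abs(c["age"] - h["age"]) <= MAX_AGE_GAP]
    if candidates:
        best = min(candidates, key=lambda c: abs(c["age"] - h["age"]))
        pairs.append((h["id"], best["id"]))
        unused.remove(best)

# The melanoma patient H3 finds no counterpart, so only two pairs form.
print(pairs)  # prints [('H1', 'C1'), ('H2', 'C3')]
```

The more dissimilar the two groups are, the more patients drop out of the matching – which is exactly how Rostock et al. ended up with only 11 usable pairs out of hundreds of participants.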
The researchers intended to do this, as without matched pairs it is near impossible to draw conclusions from this sort of observational study.
What the researchers saw was indeed very large differences between the groups. They say,
Patients in the two groups differed in several sociodemographic and disease variables. Homeopathy patients were younger (54 vs. 60 years), had a much higher level of post-16 education (post secondary school/A-level, 54% vs. 25%), and were more likely to be white collar workers or in self-employed jobs (workers, employees 48% vs. 75%).
In both groups the most frequent tumour diagnosis was breast cancer (32% HG vs. 37% CG). In CG more patients with colorectal cancer were found (15% vs. 7%), while more patients with prostate cancer (7% vs. 3%) or melanoma (5% vs. 1%) sought the complementary homeopathic treatment. Patients from the HG were more likely to have a more severe diagnosis or progressed tumour stage (stage I-III only 30% vs. 43% in CG). Homeopathy patients also had a longer elapsed time since their first diagnosis (10 months vs. 3 months), and were more likely to have already had some previous cancer treatment (50% chemotherapy vs. 33%)
In other words, the groups were chalk and cheese.
Indeed, so different were the groups that the authors only managed to match 11 pairs out of the hundreds of participants. The authors noted that this was far too small a group to do an analysis and so none could be done.
In other words, the study failed.
But that has never stopped a group of homeopaths from drawing positive conclusions.
Although it was quite clear that the groups were very different and unmatchable, the authors go on to conclude things like,
During homeopathic care we saw a significant and stable improvement in QoL which, as measured by the FACT G, is sizeable at more than half a standard deviation. We do not see a comparable increase in QoL in the conventionally treated cohort. Such an effect size of more than half a standard deviation is by all standards a clinically relevant improvement.
It is from statements such as these that the homeopaths are able to say things like “A new study has proved [the sceptics] wrong.”
But this is a complete misunderstanding of the meaning of statistical significance.
I remember from my earliest scientific education learning how important it is to understand the difference between precision and accuracy in an experiment. Precision is a measure of how good your measurement techniques are; accuracy is a measure of how close your answer is to the truth. Both sorts of error can mislead, but understanding your accuracy is always the most difficult and most important task, because accuracy is a measure of the biases in your experiment, and those are not always clear. My decades-old wooden ruler may have shrunk over time; I may still feel its precision is good to the millimetre, but its accuracy cannot be relied upon to any degree, as I do not know how much it has distorted unless I take care to check it.
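The shrunken-ruler point can be put in numbers (invented ones, purely to illustrate): ten readings of a 100 mm rod taken with a ruler that has shrunk so every reading comes out about 3 mm too long will cluster tightly (high precision) while all being systematically wrong (poor accuracy).

```python
import statistics

# Toy figures, not from the article: the rod's true length and ten
# readings from a ruler with a systematic ~3 mm bias.
true_length = 100.0
readings = [102.9, 103.1, 103.0, 102.8, 103.2,
            103.0, 102.9, 103.1, 103.0, 103.0]

precision = statistics.stdev(readings)                    # spread: tiny
accuracy_error = statistics.mean(readings) - true_length  # bias: large

print(f"precision (spread): {precision:.2f} mm")   # about 0.12 mm
print(f"accuracy error (bias): {accuracy_error:.2f} mm")  # about 3.00 mm
```

No amount of repeated measurement shrinks the 3 mm bias; only checking the ruler against a trusted standard would reveal it. That is the position the study is in: its biases cannot be estimated from within the data itself.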
And that is what these researchers are doing. They are ignoring the fact that the trial is a tool full of difficult-to-quantify biases. There are so many differences between the groups that it is not possible to untangle what is the effect of the treatment and what is mere artefact.
And the tragedy, of course, is that the whole trial is a pantomime, as we know homeopathic sugar pills cannot have specific effects. Any effects seen in the homeopathic group may well just be a placebo effect from the extra two weeks they had in hospital. The authors do go so far as to admit this possibility.
But there are a few pieces of data that ought to alarm us: differences between the groups that should give real cause for concern.
Among the patients in the conventional hospitals, only 6.6% received no conventional treatment, whereas in the homeopathic group the corresponding figure was 25.6%. This large difference may partly be because the patients in the homeopathic hospitals were further along in their treatment regimes and so may have run out of conventional options. However, it is also noted that “10% of the [Homeopathic Group] had an indication for treatment from an oncological point of view but had refused it.”
It might be worth noting that 23% of the homeopathy group died compared to 20% in the conventional group. Of course, I cannot say whether this is a meaningful result and am reluctant to draw conclusions.
But.
And here is the big thing. The one conclusion that I can draw is that there is evidence here that a significant number of people offered homeopathy as a complementary therapy to mainstream therapy refuse genuine therapies when they might save their lives.
That is a horrible possibility that ought to send shock waves through the supporters of so-called Integrated Medicine, such as the College of Medicine. Their good intentions to ‘blend the best of mainstream and CAM’ might actually be doing measurable harm by sucking people into the intellectual black hole of superstitious and pseudoscientific treatments.