Exaggerated claims about autism study

October 26, 2016 in society

This will be the first in a series of articles on ‘fake science’. The current academic/scientific community works with the media to promote claims about ‘reducing symptoms’ in various specified conditions/diseases. Here the categories of psychiatry come into play. Using these categories, the general fact that behaviour (in anyone) can be improved by behavioural interventions can be presented as “scientific findings” about “symptoms” of a given condition such as ‘ADHD’, ‘learning difficulties’ or, as here, autism. Where would they be without all these categories? The claims produced are typically based on intensive studies which have been set up to force a result. A small statistical result is then misreported as a major finding. The misreporting often starts with the researchers themselves, who tip the reporting in a certain way; it is often not simply a case of the press misreporting the data.

Here is an example: a report on a recent paper on autism. The report is in the Guardian, though the Daily Telegraph gives it the same treatment.

The study used a sample of 152 young people. There was a control group (no intervention) and a test group which received the intervention: an intensive parent-training programme.

The Guardian reports that at the start of the study 50% of participants in the control group were assessed as ‘severely autistic’, and 55% in the test group. At a six-year follow-up, 63% in the control group were assessed as ‘severely autistic’, and 46% in the test group.

Often these studies rely on parents for the ratings. This introduces a particular kind of relativity: are we measuring something objective in the young people, or a change in their parents’ perceptions and feelings? Even when this is not the case, the raters are often not blind; they know which group received the intervention and may thus be subconsciously influenced. (We have not yet had a chance to review the source paper, so we do not know whether either of these applied in this study.) Another problem that can occur is that the control group is not representative of what really happens, and this can tilt the statistics towards the much-desired dramatic result. Often the intervention is of a “pure” form which is unlikely to be replicated in the real world.

A further problem is that the focus of measurement is on “symptoms”, which means behaviour. Do modifications in behaviour represent a real benefit to the person being treated? In this study, for example, while “symptoms of autism” improved according to the Guardian report, there was no improvement in anxiety or “depression” in the young people. Perhaps they had merely been induced to behave a bit better. Their “autism” (a physical defect?) was not affected at all.

At any event, taking the results as they stand, it would appear that some young people in the test group were moved down at least one category in their autism assessment. A small but significant percentage of the test group showed this improvement; in the control group (no intervention) the proportion assessed as ‘severely autistic’ went up. So the study shows (other potential problems aside) that this intensive intervention can improve the worst behavioural excesses of autism in a small but significant number of young people.
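To make the arithmetic behind this concrete, here is a minimal sketch in Python using only the percentages reported in the Guardian piece (no group sizes or underlying data are assumed):

    # Percentages rated 'severely autistic', as reported in the Guardian article.
    # This is just the arithmetic on the published figures, not a reanalysis.
    baseline = {"control": 50.0, "test": 55.0}
    follow_up = {"control": 63.0, "test": 46.0}

    for group in ("control", "test"):
        change = follow_up[group] - baseline[group]
        print(f"{group}: {baseline[group]:.0f}% -> {follow_up[group]:.0f}% "
              f"({change:+.0f} percentage points)")

    # Gap between the two groups at the six-year follow-up.
    gap = follow_up["control"] - follow_up["test"]
    print(f"difference at follow-up: {gap:.0f} percentage points")

On those figures the test group improves by nine percentage points while the control group worsens by thirteen, leaving a seventeen-point gap at follow-up. That is the whole of the quantitative story being reported.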

That’s great.

But any possible real-world benefit from this (which is in reality no more than a confirmation of what is already known: that intensive behavioural interventions can modify behaviour in autistic children) is lost in the exaggerated reporting. The headline of the piece in the Guardian is already a piece of drama:

“Study offers potential breakthrough in care of children with autism”

And this, the first sentence, is phantasy:

A new form of therapy has for the first time been shown to improve the symptoms and behaviour of autistic children, offering a potential breakthrough in care for millions of families

Perhaps the author should be writing adverts for ‘new’ chocolate bars. She manages to get the words “new”, “first time”, “breakthrough” and “millions of families” into one sentence. Of course, none of this is, strictly speaking, untrue. The idea of training parents in behavioural interventions for ‘behavioural disorders’ is not new, but you could argue that the programme put together for this study was a “new form of therapy” by definition. And yes, given that the study showed a small percentage benefit, you can, if you imagine a large enough population, extrapolate from the study to claim that if applied in the real world “millions” would benefit. “Breakthrough” would be more contestable, but then ‘breakthrough’ is a vague word capable of wide interpretation. Nonetheless there is nothing really new here. The improvements were in a lab study. The problems of translating this to the real world are not addressed (the cost of running the programmes, the reluctance of parents to take part, and so on).

To be fair, the authors of this study are reported as acknowledging its limits. For example:

This is not a cure, in the sense that the children who demonstrated improvements will still show remaining symptoms to a variable extent, but it does suggest that working with parents to interact with their children in this way can lead to improvements in symptoms over the long term

Nonetheless there is a problem here. Precisely because of the marketing-style hype, the actual findings, and their real applicability, are lost. In some cases of ‘behavioural disorders’, training parents in behavioural interventions can be helpful. Perhaps this particular study puts a new spin on it, but there is nothing here of the order of a “breakthrough”. The hard work would be implementing this already existing knowledge. In the current context that might mean policies and programmes set at the highest level by the Department of Health and the NHS. It would mean a big budget allocation. There is a kind of split here. On the one hand researchers produce these papers (after all, they want to get published and attract funding) and the press reports them in sensational terms (after all, they want to sell more papers); on the other hand there is the real world of limited resources in the NHS, as well as the particular problem of parental resistance, which can often hamper take-up of parent-training programmes.

A more serious approach would focus on how the already existing knowledge could be meaningfully applied in the real world. Indeed (with Illich in mind), rather than look to an implementation of costly parent-training programmes by the NHS, we could envisage the NHS playing a role in disseminating information, which would then be taken up by local ‘champions’ and voluntary associations of parents who implemented the ideas without the need for a ‘professional’ paid ‘expert’ to tell them what to do. That is, the dream of a scientifically literate and empowered democratic population.

Exaggerated, and therefore misleading, reporting aside, such studies do have value. But simply shouting about them from the rooftops (at the top of your voice) is not going to get anywhere. We need to overcome this institutional paralysis and cultivated dependence on hierarchy, and realise that if good ideas are to be taken advantage of, people themselves are going to have to self-organise and do so.
