It has been dubbed the “bible” of psychiatry, and indeed the Diagnostic and Statistical Manual of Mental Disorders (DSM) is taken by many as exactly that. Every new version of this publication, prepared by the American Psychiatric Association, is considered to contain the latest and most advanced criteria for the classification and diagnosis of mental disorders.
Acceptance has not been unanimous, though. For many of its critics, the DSM has been unreliable, far too prescriptive and yet quite vague, geared towards the compartmentalization of human behaviour, and all too conforming to the wishes of the big pharmaceutical companies; in short, very problematic.
The news, then, that after more than sixty years of near hegemony (at least in the U.S.) the DSM is being pushed aside by the US National Institute of Mental Health (NIMH) cannot but be welcome. A research framework is being introduced for collecting the data for a new understanding of mental disorders, a "new nosology", away from the DSM.
Is there, at last, room for optimism? Are we finally about to enter an era of scientific psychiatry which will (hopefully) settle all disagreements and clear out all ambiguities for good?
Last week I wrote about a scientific paper that claimed that “most published research findings are false”. I identified the three slightly different conceptions of truth that the abstract of that paper was alluding to, and suggested, as a working hypothesis, that we differentiate between “real truth” and “scientific truth”.
I ended that post rather abruptly and at a somewhat provocative point. I claimed that science does not have anything to do with reality.
I acknowledged, however, that this would need to be clarified.
This is what I shall attempt to do today: to clarify.
So science “does not have anything to do with reality”.
How did we get to this conclusion? What does it mean?
The other day I came across an intriguing research paper, bearing a very provocative title: “Why Most Published Research Findings are False”. Published in 2005, this paper was written by John P.A. Ioannidis, a medical professor specializing in epidemiology. His claim is simple (I quote from the abstract of the paper):
The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance.
Now, as you might imagine, a scientific paper with such a subject matter would be sure to attract a lot of attention, both positive and negative, and that was indeed the case. But I do not intend to participate in the debate, and this is not the reason I am bringing up this paper here.
I am more interested in the concept of truth, especially in the way it is employed in papers such as Ioannidis’, i.e. in current scientific research.
This is the blog of a psychoanalyst practising in North West London.