What Happened?: What mental health is really about
Ebook · 381 pages · 3 hours

About this ebook

If you consult a psychiatrist for assistance with a mental health problem, you will be subjected to the "What's wrong with you?" approach. You will be assessed, diagnosed and then treated, most commonly with a pill to combat your purportedly biologically based ill.


But what if this is a hazardous and flawed paradigm? What if psy

Language: English
Publisher: KMD Books
Release date: May 13, 2021
ISBN: 9780645135381


    Book preview

    What Happened? - Dr Bill Saunder

    Chapter One

    What’s Wrong With You?

    How you understand something dictates how you respond to it.

    In the world of mental health, there are various paradigms of understanding, the dominant one being the What’s Wrong With You? model. In this essentially psychiatric perspective, the key to responding to mental health problems is to assess, diagnose and then treat the sufferer for their purportedly biological mental illness. This treatment is unfailingly psychopharmacological: a pill for every mental ill.

    But there is an emerging, increasingly persuasive and well-substantiated paradigm that this book contends is now sufficiently robust to challenge the dominant discourse. It is the What Happened To You? model.

    Here, the understanding is that What Happened To You, especially as a child, is critical in the determination of your mental health. In this model, the intervention is trauma-informed psychotherapy. From this perspective the What’s Wrong With You? model is seen as fatally flawed and detrimental to the wellbeing of people with mental health issues; numbing or masking emotional pain with medication is simply not acceptable.

    Significantly, in the What Happened To You? model, there is no diagnosis. Importantly, unlike the What’s Wrong With You? model, the aetiology of mental ill health is known; the myriad of so-called ‘mental illnesses’ are manifestations of childhood neglect and/or abuse.

    This statement is driven by the experience of being a Clinical Psychologist who was trained in, and fully believed in, the What’s Wrong With You? model for decades. However, gradually, through an amalgam of clinical experience, research, reading, exposure to different interventions and a very real dissatisfaction with the limitations of biological psychiatry, I have transferred my allegiance to the What Happened To You? model.

    To start with, I want to quickly (and hopefully effectively) demolish the What’s Wrong With You? model.

    In doing so, I understand that I am challenging the pre-eminent model on which governments spend billions of dollars each year, especially in the Western world. It is also very effectively championed by the psychopharmacological industry (Big Pharma) and, of course, the profession of psychiatry.

    I also understand that one chapter is not going to change the world. However, hopefully, the following will cause you to pause and perhaps reconsider the possibilities.

    There is a passage in Lewis Carroll’s Through the Looking-Glass that I delighted in when I read it as a nine- or ten-year-old. It goes like this:

    Alice laughed. “There’s no use trying,” she said: “one can’t believe impossible things.”

    “I daresay you haven’t had much practice,” said the Queen. “When I was your age, I always did it for half an hour a day. Why, sometimes I’ve believed as many as six impossible things before breakfast.”

    So, let’s begin the demolition of the What’s Wrong With You? model, with an invitation for you to believe six impossible things before the end of the chapter. Here we go.

    The maxim of the Clinical Psychology master’s course that I completed many years ago was: get the diagnosis right and the right treatment will follow.

    There was a built-in corollary which was, if the patient failed to improve it was because the diagnosis was wrong. Such was the emphasis on diagnosis.

    Some twenty-odd years later when I ended up in charge of a Clinical Psychology course, we were mandated by our accreditors to include specific teaching on the recognition of mental health disorders and their specific management. Diagnosis was again seen as the key to appropriate intervention.

    In clinical medicine, diagnosis works like this. You present at your doctor’s, describe your symptoms, and the GP will send you off for X-rays, or scans, or blood tests, to confirm his or her hunch about what ails you. In clinical medicine there are, in this diagnostic process, hard end points and biologic markers for diagnosis.

    Let’s take an exemplar. The patient, let’s make him a fifty-seven-year-old male, turns up at his doctor’s talking about an urgency to urinate regularly, especially at night, but has difficulty in doing so. The doctor will have his or her suspicions as to what is wrong. Could it be prostate cancer?

    A rectal examination of the patient’s prostate gland may occur, but a Prostate Specific Antigen (PSA) blood test will also be undertaken. Let’s say the rectal examination is equivocal, as is, when it comes back, the PSA score. The patient has a PSA score of 6.2. This is an elevated score, but is not of itself conclusive that the patient has prostate cancer. So, the doctor may decide to run another PSA test in six to eight weeks, but in the meantime prescribes an antibiotic just in case the prostate is infected.

    The second PSA blood test comes back at 9.2. Not good news at all, but still not certain news. However, on the basis of the results so far an MRI of the prostate is warranted. This shows several cellular abnormalities, so a biopsy of the prostate is undertaken. Under pathological examination the results come back revealing that tissue from the prostate is abnormal and in the two worst areas the abnormalities are scored as being 9/10; very significant abnormality. The patient has prostate cancer. Treatment then follows.

    All the way through this process, the doctor’s initial opinion was evaluated against independent objective testing. Diagnosis was achieved through the use of a specific physiological test, scans and a biological procedure. The original doctor’s hunch that the patient had prostate cancer was independently confirmed.

    Unfortunately, in psychiatry, and in the management of mental health disorders, there are no hard endpoints at all. There are no independent biological markers whatsoever. There is no independent third-party confirmation. Despite all the breakthrough claims made for biological psychiatry, not one definitive test exists.

    In psychiatry this difficulty is, in fact, well recognised. For example, in 2013, at the time of the publication of the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (known colloquially as DSM 5.0, the psychiatric diagnostic bible), the President of the American Psychiatric Association (APA), psychiatrist Jeffrey Lieberman, noted that genetics and biomarkers were not included in DSM 5.0 as there was not yet sufficient evidence for their inclusion.

    Interestingly, on the publication of DSM 5.0, Dr. Tom Insel, the head of the National Institute for Mental Health (NIMH), the lead American mental health research agency, commented that patients deserved better than the DSM 5.0.

    Insel then advocated for a new diagnostic system based on biomarkers, neurobiology, genetics and brain functioning. He wanted a precision medicine approach to mental illness. That, in keeping with the zeitgeist of the need for independent diagnostics, was both warranted and laudable.

    However, four years later, Insel wrote that although he had spent thirteen years in his position at NIMH looking into the genetics and neuroscience behind mental disorders, with research over that period costing around $20 billion, he had to acknowledge that this work had not reduced the rates of suicide or hospitalisation.

    A startling admission, a very concerning admission, and one that confirms that there are still no biological markers for any of the mental health disorders, because the biological causes of mental illness remain unknown. This is important. In the dominant model of mental illness, the causes of all but one remain unknown. Interestingly, the one that is known is Post Traumatic Stress Disorder (PTSD), which isn’t of course biological, but rather a consequence of an adverse experience, or What Happened To You?

    The failure to know what causes the 250-plus other mental illnesses may be because the biological causes have not been found yet. Or could it be that they are not biologically caused? Unfortunately, you can never prove a negative, so the expensive (fruitless?) search goes on.

    Today the diagnosis of a mental health disorder is, just like a hundred years ago, nothing other than an opinion. I think you’ve got bipolar, I think you have schizophrenia, I think you have an anxiety disorder, I think you have depression.

    It may be an informed opinion based on training and experience, but an opinion nonetheless. The trouble is that people’s opinions differ, often markedly.

    To demonstrate the precariousness of this complete lack of hard endpoints, of biological markers, of anything resembling objective science in the management of mental health, and remembering the Queen’s conversation with Alice, here is the first impossible thing about the What’s Wrong With You? model that I’d like you to believe.

    Impossible thing to believe #1

    Consider the case of Mr. Anders Breivik. Mr. Breivik was the man who, in 2011, exploded a bomb in Oslo then drove to an island off the coast of Norway and shot sixty-nine young people dead. Mr. Breivik was arrested by the police. His justification for his slayings was that Norway was becoming contaminated by immigrants.

    As part of the preparation for Breivik’s trial, the prosecution had Mr. Breivik interviewed by two independent psychiatrists. After an extensive assessment, they concluded that Mr. Breivik was suffering from schizophrenia. He was therefore not responsible for his actions because it was deemed that on the day of the shooting he was actively psychotic (mad).

    If you refer to page ninety-nine of the current psychiatric diagnostic bible (DSM 5.0) you will find that there are five key symptoms of schizophrenia.

    These are:

    Delusions

    Hallucinations

    Disorganised speech

    Grossly disorganised or catatonic behaviour

    Negative symptoms such as avolition or diminished emotional expression

    However, as this diagnosis became known it became increasingly unpopular. Even Mr. Breivik didn’t like it. So the defence got two different psychiatrists to assess Mr. Breivik. After an extensive assessment, they concluded that the original diagnosis was incorrect.

    In their opinion, Mr. Breivik had an anti-social personality disorder. On page 659 of DSM 5.0 you will find the diagnostic criteria for anti-social personality disorder. There are seven key features and you need three or more to earn the diagnosis. They are as follows:

    A failure to conform to social norms with respect to lawful behaviour

    Deceitfulness

    Impulsivity and a failure to plan ahead

    Reckless disregard for self or others

    Irritability and aggressiveness

    Consistent irresponsibility

    Lack of remorse

    If you compare the criteria for schizophrenia with those for anti-social personality disorder, it is clear that none of the diagnostic criteria overlap. The two disorders are as different as chalk and cheese. Yet both teams of psychiatrists were convinced of the correctness of their totally different diagnoses.

    But, then again, people are usually disposed to think that their opinions are right.

    This was an important finding because it meant that Mr. Breivik could be tried for his crimes. He was bad, not mad. This second opinion stood, and Mr. Breivik was sentenced to the Norwegian equivalent of life in prison.

    So how can this be? How can schizophrenia, which is considered to be the major mental health illness, be confused with a disorder of personality? Especially as personality disorders are not, in the DSMs, conceptualised as mental health illnesses at all.

    Surely the basic, fundamental test of any diagnostic system is to be able to determine which people have an illness and which do not?

    The Breivik case demonstrates that the current psychiatric diagnosis system failed this basic test.

    Yet, the Breivik case was not really surprising. His situation was mirrored totally in a 1970s study where psychiatrists in Canada, the USA and Britain were asked to independently diagnose case studies.

    In one case, Case F, fifty-three percent of the 250 American psychiatrists in the study concluded that the patient had schizophrenia; a one in two agreement rate, which you might consider surprisingly low. After all, schizophrenia is deemed to be the major mental health illness.

    However, it got worse. Of the 115 Canadian psychiatrists involved in the study, only twenty-seven percent agreed that Case F was schizophrenic; a one in three agreement rate. If you put the American and the Canadian psychiatrists together, then the agreement rate for the 365 psychiatrists involved was forty-five percent. Thus, the inter-rater agreement rate in the diagnosis of whether Case F was schizophrenic was now below one in two.

    If you toss a coin 365 times the number of heads that come up will be, purely by chance, somewhere between forty-five and fifty-five percent of the coin tosses.
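
    The coin-toss claim is easy to check for yourself. A short simulation (illustrative only, not from the book) repeatedly tosses a fair coin 365 times and records how often the proportion of heads lands inside the 45 to 55 percent band:

```python
import random

# Illustrative check of the coin-toss claim: toss a fair coin 365 times,
# many times over, and see how often the heads fraction falls in 45%-55%.
random.seed(1)

def heads_fraction(tosses: int = 365) -> float:
    return sum(random.random() < 0.5 for _ in range(tosses)) / tosses

trials = 10_000
within = sum(0.45 <= heads_fraction() <= 0.55 for _ in range(trials))
print(within / trials)  # roughly 0.94: within the band almost every time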

    And still, the results got even worse.

    Of the 194 British psychiatrists only two percent, yes, two percent, that’s four British psychiatrists, considered Case F to have schizophrenia; a one in fifty agreement rate. The majority of British psychiatrists considered that, in their opinion, Case F had a personality disorder.

    Thus, for the entire study, the inter-rater agreement on whether Case F had schizophrenia was thirty percent. The agreement rate on whether Case F had a personality disorder was thirty-seven percent and other diagnoses were thirty-three percent.
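
    These pooled figures follow directly from the per-country numbers quoted above. A quick arithmetic check, using the rounded percentages given in the text (so the totals are approximate):

```python
# Approximate check of the pooled agreement rates on Case F, using the
# rounded per-country figures quoted in the text.
cohorts = {"USA": (250, 0.53), "Canada": (115, 0.27), "UK": (194, 0.02)}

def pooled(countries):
    n = sum(cohorts[c][0] for c in countries)
    agreeing = sum(cohorts[c][0] * cohorts[c][1] for c in countries)
    return agreeing / n

print(round(pooled(["USA", "Canada"]), 2))        # 0.45: the 45% quoted for USA plus Canada
print(round(pooled(["USA", "Canada", "UK"]), 2))  # 0.30: the 30% quoted for the whole study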

    How can this possibly be? How can the most major of mental health illnesses, schizophrenia, only be recognised by a third of over 550 practising psychiatrists?

    If you take a pack of cards and remove, say, all the diamonds, so that you have three categories (hearts, clubs and spades or metaphorically schizophrenia, personality disorder and other) and you then shuffle the cards, put them face down into three piles, the number of spades, clubs and hearts you will have in each pile will be somewhere in the thirty percents, purely by chance.
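
    The card analogy can be simulated in the same spirit (again, purely illustrative; a single shuffle will wobble around the average, but each suit makes up a third of the deck and so, on average, a third of each pile):

```python
import random

# Illustrative simulation of the card analogy: remove the diamonds, shuffle
# the remaining 39 cards, deal three piles of 13, and count suits per pile.
random.seed(7)
deck = ["spades"] * 13 + ["clubs"] * 13 + ["hearts"] * 13
random.shuffle(deck)
piles = [deck[i * 13:(i + 1) * 13] for i in range(3)]

for pile in piles:
    mix = {suit: pile.count(suit) for suit in ("spades", "clubs", "hearts")}
    print(mix)  # each suit hovers around a third of each pile, purely by chance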

    So, impossible thing to believe number one: based on the 1970s study, and the fact that nothing in the way diagnosis is achieved has subsequently changed, psychiatrists cannot reliably distinguish people with the mental illness of schizophrenia from people with conditions, such as personality disorders, that are not classed as mental illnesses. Indeed, they do no better than chance.

    Impossible thing to believe #2

    Diagnostic reliability is essentially when two people see the same thing, say, a giraffe, and they independently and without prompting agree that it is a giraffe. Then if you extend the test out to include, say, elephants, cheetahs, leopards, tigers, lions and chipmunks, and again ask two or more independent witnesses what they see, it is possible to determine what the inter-rater reliability rate is. Obviously one hundred per cent is optimum, but it is seldom, if ever, achieved. When asked, human beings cannot unanimously agree which day of the week it is; apparently ninety per cent for day of the week is very good.

    So when it comes to psychiatric diagnosis, just how good are psychiatrists at agreeing about what they see?

    Well, it depends on who you ask. Jeffrey Lieberman, former President of the APA and author of the acclaimed book Shrinks, has written: "Mental disorders are abnormal, enduring, harmful, treatable, feature a biological component and can be reliably diagnosed."

    Unfortunately, in my opinion, his assertion of reliability is itself unreliable. Similarly, the assertion of a biological component is contradicted by a pronouncement in the same book that no biological causes have been found (yet) for any of the 260-plus mental illnesses outlined in DSM 5.0.

    Here is why, I believe, his claim is unreliable.

    With the advent of DSM III in 1980 and its putatively new and improved diagnostic categories, considerable effort was made to address the diagnostic reliability problems identified above; that very troublesome Canadian, USA and UK study was undertaken in the early 1970s and based on DSM II.

    DSM III was deemed to be a considerable improvement on its 1960s predecessor, DSM II. As a way of addressing the known difficulties of opinion-based diagnostics, the principal redress was to define clear criteria for each mental illness. The man behind this work was Dr. Robert Spitzer, who was given the monumental task of creating the diagnostic criteria for all of the eventual 265 mental health illnesses that came to be in DSM III. It took him six years to do so.

    A structured clinical interview (known as the SCID) was also created, based on the same diagnostic criteria outlined in DSM III and then its 1987 revision, the DSM III R. After the introduction of the SCID, the DSM IIIR was put to the test.

    This study involved some six hundred patients, attending five sites in America and one in Germany, being interviewed by twenty-five clinicians, all of whom had been trained in the use of the SCID. Each patient was separately interviewed by two clinicians and an inter-rater agreement score was then determined.

    However, in cases of diagnostic disagreement, the two interviewers were allowed to reconsider their original opinions and were invited to arrive at consensual diagnoses.

    Interestingly, the authors did acknowledge that this little device of allowing dissenting raters to arrive at consensus by post-interview consultation may have artificially raised reliability. Now that’s a very real possibility, but they then dismissed it because they considered that the achieved inter-rater reliability values were too low for this to have occurred.

    How does that work? I have a little suspicion that if the researchers had not consulted in this way then the achieved values may have been even lower.

    From a research perspective, this is a highly dubious practice, but nonetheless the best overall reliability score the raters could achieve for their independent diagnoses was a kappa of 0.61. (A kappa score is a statistical measure of inter-rater agreement that corrects for the agreement expected by chance alone.)

    For this study, Dr. Spitzer determined that a kappa score of 0.7 and above was high, from 0.5 to 0.7 fair and below 0.5 poor.
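
    For readers who want to see what a kappa score actually measures, here is a minimal sketch of Cohen’s kappa: observed agreement minus chance agreement, divided by one minus chance agreement. The two raters and their ten patient labels below are invented purely for illustration:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    chance = sum(counts_a[label] * counts_b[label] for label in counts_a) / n ** 2
    return (observed - chance) / (1 - chance)

# Two hypothetical raters diagnosing ten patients:
# S = schizophrenia, P = personality disorder, O = other.
rater_1 = ["S", "S", "S", "P", "P", "P", "O", "O", "S", "P"]
rater_2 = ["S", "S", "P", "P", "P", "O", "O", "O", "S", "S"]

print(round(cohen_kappa(rater_1, rater_2), 2))  # 0.55: "fair" on the revised scale, poor on the earlier one
```

    Note that the two raters here agree on seven patients out of ten, yet kappa is only about 0.55, because a good share of that raw agreement would be expected by chance.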

    This categorisation is in itself of interest because in earlier work, Dr. Spitzer had declared that a kappa score of 0.7 was only satisfactory, and below 0.7 poor.

    This lowering of the bar meant that what was claimed as fair reliability in this study would, in previous research, have been labelled as poor.

    That is of itself interesting. Additionally, and also problematically, the lead researcher on this independent study, Janet Williams, later became Dr. Spitzer’s wife. Would it be outrageous to suggest that the impartiality of the research team was in doubt?

    Just to confound the results a little further, another sleight of hand occurred. The definition of what counted as a match was, well, let’s call it generous. In DSM III, there are 265 possible diagnoses, but in this research these 265 possibilities were collapsed into classes.

    It’s like testing people to name specific types of cars, such as a Volkswagen Golf GTi, a Range Rover Sport, or a Mazda 3, but all you need to claim that they both called it the same thing is for them to say oh, it’s a Mazda, it’s a Land Rover, or it’s a Volkswagen. Now that’s easier. Thus, agreement as to the same class of diagnosis was deemed a match even if the two raters disagreed, for example, on the type of personality disorder that a patient had. So, as long as they both said it’s a Mazda, despite one saying it’s a Mazda 3 and the other it’s a Mazda CX 30, then it’s a match!

    From my perspective, I wonder if that could also have artificially inflated the achieved reliability values.

    A two dollar coin and a five cent coin are both money, yet we certainly expect our banks and our accountants to do better than considering them both the same.

    If this was not damning enough, there was another awkward finding. The sample of patients was drawn from six sites. Four of these were clinical units, but two were not. These non-psychiatric sites were a community medical facility and a sample of anxious, worried or depressed people recruited by advertising in community newspapers. The inter-rater reliability kappa scores achieved here were 0.32 and 0.38 respectively.

    I think that however you jiggle it, these results would fall in the very poor or totally unsatisfactory categories.

    The thing is, as people become more distressed, they become more conspicuous.

    Diagnosing the acute, severe cases seems easier, but only to a certain point.

    Finally, and most tellingly, all the raters in this study were trained in using the Structured Clinical Interview (the SCID). This is a flawed model. No everyday, ordinarily practising psychiatrist ever uses such a thing. The very device invented to improve reliability in clinical practice is totally eschewed by the profession it was invented for.

    To be fair to the authors of the above study, they did finally admit defeat: “Prior to this study we expected higher reliability values. We are at a loss to explain why this was not the case.”

    For me, there is one very simple explanation, which is that the 265 disorders identified in the DSM III are not mutually exclusive, that is, they are not separate disorders at all. They blur into each other because they are merely variations on a theme. They are different manifestations of the same aetiology: childhood neglect and/or abuse.

    Interestingly, in the lead up to the production of the next DSM (DSM IV) the APA received funding from an independent charitable foundation to undertake a new reliability study. Tellingly, although the data collection phase of the project was completed, the findings of the study were never published. The project’s director said at the time that the APA ran out of money.

    That is surprising, because as a former researcher, I am all too aware that in any research project a budget has to be submitted at the time of application for any grant. It is always the data collection, not the analysis, that is the most expensive part, and if things go astray, then the budget can be blown. Running out of money in the analysis stage is less common, and even if you do, funding bodies are often sympathetic to valid over-runs. If this very important project had run out of money, then surely in the interests of science additional funding could have been sought from either the original funders or elsewhere.

    Indeed, given that the sales of DSM III and IIIR generated in excess of $25 million for the APA (in 1980s values), then surely any shortfall could have been met by the APA themselves. Especially as, had the results been as good as has subsequently been claimed, that would have further strengthened the marketing of later editions of the DSMs. It is reported that over one million copies of DSM IV were sold, for an impressive $80 million. (DSM IV was published; the study about its reliability wasn’t.) Surely there would have been something in the kitty to fund an analysis of the earlier obtained results?

    Then there is just the small matter of the field trials that occurred in the lead up to DSM 5.0.

    As for all previous DSM editions, the methods used to assess reliability were claimed to reflect current standards for psychiatric investigation: independent interviews by two different clinicians trained in the diagnoses, each prompted by a computerised checklist.

    Well, such methods may reflect current research practice, but as will be apparent, they do not reflect everyday clinical practice, where a single practitioner makes the diagnosis without any help from any computerised checklist or special pre-study diagnostic training. The only training in diagnosis in real-life practice is what psychiatrists were exposed to when studying for admission to psychiatry. That could, of course, have been anywhere from five to fifty years ago. So although the study may look good from a research perspective, such work clearly lacks ecological validity; that is, it fails to replicate real life.

    So how did the field trials for the next DSM go? Well according to the researchers they went very well. They even claimed that at last in psychiatric practice a rose is a rose is a rose.

    Ah, unless you have a generalised anxiety disorder, for which the inter-rater kappa score was 0.2 (remember Dr. Bob Spitzer claiming that anything below 0.7 was poor), so for anxiety, a rose is a dandelion or perhaps a cactus.

    For the illness of major depression the raters could only achieve 0.28 (a rose is a cabbage or a banana perhaps), and anti-social personality disorder came in at 0.26, (a rose is a geranium or a fir tree).

    Interestingly, the authors also claimed that their research showed “the problem in distinguishing schizophrenia, bipolar disorder, and schizoaffective disorder—a prior, vexed issue—has largely been resolved, and all three conditions have good Kappa statistics.”

    Excellent, except that the purported good kappa statistics were 0.46, 0.56 and 0.50 respectively. The researchers achieved so-called good kappas not through improvements in their diagnostic categories but by redefining good kappa down from above 0.7 to 0.45.

    Furthermore, these good results were only obtained by the artifice of having psychiatric researchers, specially trained for the study, aided by a computer checklist.

    Given the above, the following comment by Bonnie Burstow, a trenchant critic of psychiatry, seems considered and very reasonable. She noted that the claim of high reliability, indeed of improved reliability, is, in short, “not a reality, but a discursive product”.

    Two psychologists, Kutchins and Kirk, after reviewing the literature relating to diagnostic reliability, concluded that “the DSM revolution in reliability has been a revolution of rhetoric, not reality”.

    I could be blunter, in fact, I will be. The claims of improved reliability, even reliability per se, are unreliable.

    So what does Dr. Bob Spitzer, the man who set himself the task of improving the reliability of psychiatric diagnosis, really think? Well, this is
