Monday, March 31, 2014

Losing the White Coat Part 2: Residency

This is part 2 of a series on the evolution of my approach to psychiatry. Part 1 was about my medical school experience, and A Most Influential Professor described a key experience I had in college.

I went into psychiatry because I was fascinated by the variety of human emotions, behavior, and psychopathology, and I wanted to explore the plethora of influences (cultural, social, psychological, and biological) on those aspects of humanity. My medical school emphasized the biological approach, so I decided to continue my training elsewhere for residency.

At my residency program, while there was more of an emphasis on psychotherapy compared to my medical school, the biological psychiatrists still reigned supreme. The university had some well known psychotherapists, but they tended to have titles such as "emeritus professor" or "clinical professor," meaning that they were not around very much. And I doubt they would have felt welcome, since for most of their rotations the residents' main jobs were completing paperwork and adjusting medications, rather than running groups or conducting therapy.

It was easy to see who the big money-makers of the department were: the researchers who focused on the neural basis of mental disorders while providing biological treatments in their clinical practice. There was a bipolar disorder expert, who once had a patient on 10 different medications, to the point that it was impossible to tell what was the patient's "disease" and what were the side effects. There was the schizophrenia expert who headed the locked inpatient unit, who frequently gave talks to psychiatrists in the community advertising the newest antipsychotic medications. She claimed that because she was on the speakers' bureaus of all the big pharma companies, she was unbiased in her assessment of the medications. And then there was the renowned depression expert, who once told us, "Even if the medications are no more effective than placebo, it doesn't mean that you shouldn't treat the patients." Make of that what you will.

However, the experience that opened my eyes most to the flaws of a purely biological approach to psychiatry was what I saw happening with Dr. Z, one of the psychiatrists on the electroconvulsive therapy (ECT) service. He gave great lectures, drawing up pretty diagrams of the circuits in the brain believed to underlie mood and depression. Unlike most psychiatrists, he often walked around in scrubs, and he had a confident charm to go along with a cheerful disposition. Perhaps appropriately so, since he offered a treatment unparalleled in its effectiveness for patients with severe psychotic depression and bipolar disorder.

The problem, though, was that the bipolar disorder diagnosis (and its attendant "treatment resistant depression") became so loosely applied that practically anyone with mood swings was being diagnosed with "bipolar II," and Dr. Z fully embraced this trend. His evaluations for whether a patient was a good candidate for ECT were thorough, to a point. There was meticulous documentation of the medications that the patient had tried and the inadequate response to them. Mostly ignored, however, were details about what the patient's life was actually like and what factors may have been influencing their symptoms. Thus, plenty of patients who clearly had borderline personality disorder (BPD) were deemed "excellent candidates" for ECT; none of the depression medications that they had tried ever did lasting good, since their moods would turn depressed or irritable in response to interpersonal stress, regardless of what meds they were taking.

I remember hearing two stories in particular about his patients (details altered to protect anonymity). One day, a patient of Dr. Z's arrived in clinic holding a knife to her chest after her boyfriend broke up with her. She told the astounded clinic receptionist that she would stab herself if she did not see Dr. Z right away. Dr. Z was not in, and the patient ended up walking into the office of another psychiatrist, who managed to calmly talk her down while security was notified. Another time, a patient was dragged kicking and screaming into the ER after swallowing a handful of pills during an argument with her husband. She was heard yelling, "I'll only talk to Dr. Z! Where is he? I know he's coming because he loves me!" Dr. Z clearly had a profound effect on his BPD patients, even if the benefits of ECT for those patients were very temporary.

Recently, I read Dr. David Allen's post on the difference between the symptoms of major depression and the depression often seen in BPD. But even back then something felt off to me about doing ECT on patients who had "treatment resistant depression" because of a personality disorder, which brings me back to the title of this post. At the institutions where I trained, the psychiatrists who wore the white physician's coats, not surprisingly, tended to be the more biologically-oriented ones. Thus, in my mind the white coat became associated with their view of psychiatry, one that I did not share.

Thankfully, my mind was already set on being a child psychiatrist. At least in the world of child psychiatry, despite the influence of biological psychiatrists like Harvard's Biederman, many (I don't dare to claim "most," given the direction things seem to be heading) child psychiatrists still consider the influence of things like family, parenting, and developmental trauma on behavior, rather than just focusing on figuring out the black box of the brain.

Saturday, March 22, 2014

There's a Pill for That…Or Not

A few days ago, I read an article by Los Angeles-based psychiatrist Joseph Pierre titled "A Mad World" (thanks to Vaughan Bell over at Mind Hacks for the link). Right off the bat, I was suspicious of the SEO link-baiting done by the Aeon editors: the title that appears in the web browser's title bar is not "A Mad World," but rather "Do psychiatrists think everyone is crazy?", and the URL of the article ends in "have-psychiatrists-lost-perspective-on-mental-illness." Of course, Dr. Pierre likely had no control over how the publication chose to market his article; more on this later.

He does make some good points about how mainstream psychiatrists are not as bound by the DSM as some may think:
The truth is that while psychiatric diagnosis is helpful in understanding what ails a patient and formulating a treatment plan, psychiatrists don’t waste a lot of time fretting over whether a patient can be neatly categorised in DSM, or even whether or not that patient truly has a mental disorder at all. A patient comes in with a complaint of suffering, and the clinician tries to relieve that suffering independent of such exacting distinctions. If anything, such details become most important for insurance billing, where clinicians might err on the side of making a diagnosis to obtain reimbursement for a patient who might not otherwise be able to receive care.
However, reading through the piece gave me a very uneasy feeling. After thinking about it some, I think my discomfort comes from the author's uncritical acceptance of American consumerism, as illustrated by sections like this one:
Though many object to psychiatry’s perceived encroachment into normality, we rarely hear such complaints about the rest of medicine. Few lament that nearly all of us, at some point in our lives, seek care from a physician and take all manner of medications, most without need of a prescription, for one physical ailment or another.
Now, I love being a consumer as much as most other Americans. The convenience of being able to do just about anything ("There's an app for that") on a handheld device is incredible. But what happens when you mix healthcare and consumerism? You get commercials telling viewers to "ask your doctor about" lots of different conditions, whether it's chronic dry eye, low T, or whatever. We already live in a country where the costs of healthcare far outweigh the benefit in terms of life expectancy, so isn't the over-medicalization of normality worth lamenting, even if most people don't? And certainly when it comes to healthcare, the consumer is not always right, though doctors often give in and end up doing things like prescribing antibiotics for viral illnesses.

And then there was this passage, straight out of a pharma executive's wet-dream:
Pharmacotherapy for healthier individuals is likely to increase in the future as safer medications are developed, just as happened after selective serotonin re-uptake inhibitors (SSRIs) supplanted tricyclic antidepressants (TCAs) during the 1990s. In turn, the shift to medicating the healthier end of the continuum paves a path towards not only maximising wellness but enhancing normal functioning through ‘cosmetic’ intervention. Ultimately, availability of medications that enhance brain function or make us feel better than normal will be driven by consumer demand, not the Machiavellian plans of psychiatrists.
SSRIs have been around for over 25 years, and where are the safer medications? Has he not been reading the headlines? He continues with:
The legal use of drugs to alter our moods is already nearly ubiquitous. We take Ritalin, modafinil (Provigil), or just our daily cup of caffeine to help us focus, stay awake, and make that deadline at work; then we reach for our diazepam (Valium), alcohol, or marijuana to unwind at the end of the day. If a kind of anabolic steroid for the brain were created, say a pill that could increase IQ by an average of 10 points with a minimum of side effects, is there any question that the public would clamour for it?
Suppose we have a pill that could take away pain but has no side effects whatsoever, including constipation, sedation, deadliness in overdose, etc. Isn't it obvious that an emergent effect (whether or not you call it a "side effect") is that people would get hooked on this pill, and that it would have enormous street value? Perhaps I'm just old-fashioned, but I get the sense that side effects (e.g. passing out drunk, getting jittery from caffeine, becoming duller from chronic cannabis use) are nature's way of keeping us from going overboard with too much of a good thing.

Dr. Pierre states toward the end:
In the final analysis, psychiatrists don’t think that everyone is crazy, nor are we necessarily guilty of pathologising normal existence and foisting medications upon the populace as pawns of the drug companies. Instead, we are just doing what we can to relieve the suffering of those coming for help, rather than turning those people away.
Drug dealers can make the same argument, that they're just there to meet a consumer demand, relieve suffering, and increase happiness. Of course, with drug dealers it's easy to see that their products have very short-term benefits but cause long-term harm. While not nearly as extreme, I believe that if all we're doing is relieving suffering without helping patients recognize and change what led to the suffering, we are not truly helping them in the long run; we're just turning them into repeat customers. And suffering itself is not always a bad thing. In the field of child psychiatry & psychology, for example, we are increasingly recognizing how parents' efforts to protect their children from any suffering can lead to those kids having trouble being on their own or dealing with stress later in life.

I don't advocate turning people away, but I often emphasize to patients that feeling better is going to take significant time and effort on their part, and is not something I can just give them, which may not be what a good consumer wants to hear. Which brings me back to my first paragraph. I not only pointed out the methods Aeon used to market the article to consumers, but I also mentioned that Dr. Pierre works in L.A. This is not a coincidence, as L.A. is one of the epicenters of American consumerism. This article is emblematic of how consumerism operates and is a defense of it, by someone who may be too immersed in a consumerist culture to see its follies.

Sunday, March 16, 2014

Insel's Techno-Utopia

In a recent post, 1BOM likened the key opinion leaders (KOLs) in psychiatry who gathered at two glitzy meetings to fashion commentators at the Oscars:
It was Dr. Schatzberg’s comment at the end, "More needs to be done now if we are to have new treatments in the next decade..." It sounded to me like those dress designers thinking about what new innovations they could muster for the coming year. It has to be something new – novel, innovative, a new look, a new material, a new something to drape the pretty ladies with on the red carpet next year. […]

The psychopharmacology era from 1987 to the recent past has been like that. The next wonderful new antidepressant has to be more desirable than the last one. The novel way of accessorizing [sequencing, combining, augmenting] has to be value added. Without a pipeline producing something new periodically, the shine wears off of the old drugs and their foibles begin to show. So we need a new design, a new line [as they say in the garment industry], something to keep the momentum flowing. But our KOLs haven’t been like the dress designers, they’ve been more like the Red Carpet commentators who talk about other peoples’ designs – giving talks like recent advances in …, or neurobiology of ..., or writing review articles, or signing on to the industrial clinical trials – more groupies than stars. That must be why they’re so frantic. Commentators with nothing to comment about.
I think that's an interesting analogy, but for me it doesn't quite hit the nail on the head. To me, a more apt analogy is along ideological lines. Lately I've been reading Steven Pinker's The Better Angels of Our Nature, a very detailed and data-driven book about why violence has declined over time. I was struck by the following passages about the roles of utopian ideologies in promoting violence:
Institutionalized torture in Christendom was not just an unthinking habit; it had a moral rationale. If you really believe that failing to accept Jesus as one's savior is a ticket to fiery damnation, then torturing a person until he acknowledges this truth is doing him the biggest favor of his life; better a few hours now than an eternity later. And silencing a person before he can corrupt others, or making an example of him to deter the rest, is a responsible public health measure. [p. 16-17]
Revolutionary and Napoleonic France, Bell has shown, were consumed by a combination of French nationalism and utopian ideology. The ideology, like the versions of Christianity that came before it and the fascism and communism that would follow it, was messianic, apocalyptic, expansionist, and certain of its own rectitude. And it viewed its opponents as irredeemably evil: as existential threats that had to be eliminated in pursuit of a holy cause. [p. 239]
Of course, none of these KOLs are like Napoleon, Stalin, Mao, etc. Thank the deity of your choice that they do not have that much power, and that we live in a much more peaceful time. These days, scathing editorials questioning a person's integrity are as dirty as the KOLs get.

But just like those utopian movements, much of academic psychiatry (especially the KOL segment) is driven by an ideology: the tenets of biopsychiatry, which the 1 Boring Old Man blog has described in detail. This ideology is assumed to be correct, and because its believers think that this system will result in a huge amount of good (i.e. "NIMH envisions a world in which mental illnesses are prevented and cured"), then people who seem to oppose this ideology are at best deeply misguided, at worst causing irrevocable harm. Furthermore, anything short of this grand vision is deemed not worth pursuing.

In my mind, this is the best explanation for why things like Paxil study 329, or the Markingson case, or problematic conflicts of interest, or millions of mentally ill being locked up in jails and prisons, get ignored by the leaders of academic psychiatry. They're seen as relatively insignificant bumps in the road in the grand utopian scheme. Last year, I wrote a somewhat tongue-in-cheek post about what if the NIMH succeeds in its utopian vision. With the recent news that the NIMH will only fund treatment studies that also examine biological etiology, things are much more serious than I thought.

Friday, March 14, 2014

Why Aren't Mental Conditions Like Physical Ones?

Earlier this week, Dr. David Rettew tweeted me his article on Psychology Today, "What If We All Got Mentally Ill Sometimes?" I thought it was well reasoned and thought-provoking. Here's the last paragraph, which summarizes his argument:
What I am really trying to say here, I think, is that scientifically there is very little to go on to help us figure out where the lower thresholds of psychiatric disorders actually exist. To deal with this reality, we can either reserve the term mental illness for those with the most extreme levels of pathology or admit that the brain, the most complicated thing that has ever existed on this planet, gets a little off track once in a while for most of us and needs a little maintenance.  This maintenance does not and should not be confused with prescription medication or five times per week psychotherapy for all.  There needs to be some productive middle ground between a response of, for example, “It’s ADHD and you need this medication” and “It’s not ADHD (or there is no ADHD) so go home and fend for yourself.” 
My quick response was:

Here, I'd like to expand on my thoughts a bit more.

In an ideal world, everyone would recognize that the brain is the most complex organ that we have, indeed the most complex object that we know of in the universe. Thus, it would not be surprising that it does not function perfectly all of the time, and putting a label on what (we think) is going on would be as unremarkable as calling an upper respiratory viral infection "a cold." Just as most people get sick with colds twice a year, or suffer multiple orthopedic and soft tissue injuries throughout their lives, shouldn't mental conditions be as commonplace and non-stigmatized?

The problems, though, are many-fold. The first is what happens when someone shows up at the doctor's office looking tearful, depressed, or anxious. The majority of psychiatric medications are already being prescribed by primary care providers. As a medical student, I got to see many general internists rushing from one patient to another, spending 10-15 minutes on each encounter. These doctors had very little time to spend delving deeply into their patients' lives (this answers 1boringoldman's question, "what's the hurry?"). The more patient docs would listen empathetically for a few minutes, and then offer an SSRI or benzodiazepine, letting the patients make the choice as to whether they would like to try medication. So if more people thought that they had some kind of mental condition, more would probably opt for the meds. Psychiatrists, of course, should know better, but we are often just as culpable in taking a "medication-first" approach to treatment, especially when working in settings where we function as little more than medication prescribers/consultants. As the saying goes, if all you have is a hammer…

Dr. Rettew tweeted the following response:

I think it's wonderful that his clinic doesn't do 15 minute med checks. Most child psychiatry clinics at least acknowledge the importance of parents in a child's life, so the minimum is usually 30 minutes to allow time for a psychiatrist to work with the whole family. Unfortunately though, that method of practice is not the default for most mental health care that takes place in this country, and I worry about what will happen with the increased adoption of collaborative care models (nice anecdote here).

What Dr. Rettew proposes requires an adequate framework to provide checks and balances, i.e. using a true biopsychosocial model so that what is happening in the brain is not the only point of emphasis. A (very) rough analogy would be that using needles to inject medications directly into the body can be a good thing, but it requires a framework of basic sanitary practices, like sterilizing or disposing of used needles, without which the technology could actually cause more harm than good. In The Hot Zone, for example, Richard Preston described how re-use of dirty needles led to ebola outbreaks in Africa. Another example, detailed by Steven Pinker in The Better Angels of Our Nature, is that when democracy replaces autocracy in places that don't have a culture that values pluralism and human rights, the result is often even more chaos and bloodshed.

Additionally, the effect on people of telling them that they have a brain condition is vastly different from telling them that they are suffering from indigestion or hypertension because of the stigma associated with mental illness. I don't think that labeling a significant proportion of the population with a mental illness is a good way of overcoming this stigma, especially when most doctors do not have time to explain the nuances behind psychiatric diagnosis to their patients. To the average patient, a diagnostic label implies a level of knowledge that we do not have. I have seen multiple cases where after receiving a diagnosis with inadequate explanation, patients have felt that there was something wrong with their brains, which for many resulted in more harm than good.

Overall, I agree with Dr. Rettew's message that there should be some middle ground between "diagnose + medicate" and "no diagnosis + no treatment." However, I don't think that the medical model is well suited to much of mental health, since what contributes to both mental anguish and well-being is so multifactorial. That is why I love NOS/NEC, and I often find myself using "no specific diagnosis + multimodal treatment."