
Saturday, September 5, 2015

Who Controls the Future of Medical Knowledge? Part I

The recent discontent amongst physicians regarding the process of maintaining board certification in various specialties got me thinking about a broader question: how do doctors acquire new medical knowledge, especially after medical school? Which brings me to an even more critical question: who controls said knowledge?

I would argue that, next to our ability to listen to and empathize with patients, the most valuable asset of the medical profession is our knowledge. Ever since the days of Hippocrates, medical knowledge has been transmitted from one doctor to another in essentially the same way. In medical school and residency, we attend lectures, read textbooks, study cases, answer Socratic questions posed by more experienced clinicians, and most importantly, learn by seeing numerous patients and accumulating experience. After graduating from medical school, it seems that most doctors learn by conferring with one another, reading journals, and attending conferences.

But the more information there is, the more time it takes to access and acquire new knowledge, and the harder it becomes for individual physicians to keep up.

You can be sure that corporations are well aware of this. On the patient side, of course, Dr. Google already provides incredible ease of access to knowledge and profits handsomely from selling ads to consumers. Pharmaceutical companies know more about my prescribing practices than I do, which fuels their targeted marketing efforts. More ambitiously, IBM's Watson Health Cloud promises to "bring together clinical, research and social data from a diverse range of health sources, creating a secure, cloud-based data sharing hub, powered by the most advanced cognitive and analytic technologies." And as much as I panned athenahealth's advertising in an earlier post, the electronic medical record companies will certainly find clever ways of profiting from the vast troves of health care data that they accumulate. And doctors are paying for the privilege of providing that information to them!

At least SERMO ("the most trusted and preferred social network for doctors") pays doctors for completing surveys, but you can be sure that they're in the same game. They keep their service free by monetizing the attention and knowledge of doctors: "Organizations seeking physician expertise, such as pharmaceutical companies, medical device firms, and biotechs, underwrite the market research and sponsorship opportunities within our site."

So what options are available for doctors who want to share their knowledge with each other free from the confines of a data mining operation? Of course, we can still consult with colleagues the old fashioned way, either in person or by phone. But after having these conversations, the knowledge still resides in the brains of people, not easily accessible to future doctors who may run into similar situations. Our professional associations post practice guidelines that hardly anyone reads, and at annual meetings, there are opportunities to meet with expert clinicians to discuss cases, which seems terribly inefficient. What about higher-tech options? There are numerous subscription services that provide summaries of research studies, but I believe that the patients doctors see do not necessarily resemble those who sign up for clinical trials. There are electronic mailing lists in which doctors can discuss cases, and which allow members to search through previous conversations. And there's wikidoc, a free wikipedia for doctors. However, these options are used by very few doctors and are paltry efforts next to the commercial ambitions of Big Data.

With all these business interests aiming to aggregate and profit from the knowledge of doctors, is there anything that the medical profession can do to avoid having our knowledge become some company's proprietary intellectual property?

I don't claim to have the answers, but I will explore some ideas in Part II. Stay tuned…

Sunday, April 5, 2015

Psychiatry as a Clinical Neuroscience, Why Not?

I first heard the term "clinical neuroscience" used in relation to psychiatry as a resident in 2009, when my associate program director handed out a paper to us trainees titled: "The Future of Psychiatry as Clinical Neuroscience." She presented this as a ground-breaking document that would greatly influence the rest of our careers. Shockingly, the authors of that paper did not cite NIMH Director Thomas Insel, who had an earlier article in 2005 titled: "Psychiatry as a Clinical Neuroscience Discipline." Since then, Dr. Insel has posted an updated version of the article on his blog (publication date: unknown) and wrote other blog posts championing the notion that in order for psychiatry to advance, we must focus on basic neuroscience research. And now, a recent article asks, "The Future of Psychiatry as Clinical Neuroscience: Why Not Now?"

The authors, who are program (or associate program) directors of residency training at Yale, Pitt, and Columbia, bemoan the fact that advances in understanding mental illness based on neuroscience research have not made their way into clinical practice. As barriers, they cite "the pervasive belief that neuroscience is not relevant to patient care," as well as the complexity of the research. They argue that the best place to start enacting this paradigm shift is in psychiatry residency programs right now. They also write:
The diseases that we treat are diseases of the brain. The question that we need to address is not whether we integrate neuroscience alongside our other rich traditions but how we work as a field to overcome the barriers that currently limit us. Ultimately, the most powerful force will be the improved translation of research into more refined explanatory models of psychiatric pathology and into novel therapeutics. To ensure that our field is ready to embrace new findings as they emerge, we need to begin the process of culture change today by enhancing communication and collaboration between researchers and practitioners.
I think 1BOM hit the nail on the head when he wrote: "Rather than being 'ready to embrace new findings as they emerge', tomorrow’s psychiatrist needs to know how to critically evaluate new findings as they emerge [italics in original]." I remember being taught as a resident about Brodmann Area 25 being critical in the pathogenesis of depression, based on exciting initial deep brain stimulation results from Dr. Helen Mayberg. This was almost treated as an established fact, despite the very preliminary nature of the research. Well, what happened when they tried to do a larger clinical trial? Neurocritic reported that the trial was halted before its planned endpoint in December 2013, and last month it was revealed that the medical device company conducting the trial (St. Jude) stopped it due to perceived study futility.

Do the clinical neuroscience curriculums for psychiatry residents teach the importance of humility and emphasize just how much we don't know? One of my favorite articles in the past year has been Tom Stafford's BBC Neurohacks column from December 2014 in which he discussed the importance of redundancy in the brain. He described the case of a woman who, despite missing her entire cerebellum, was able to live a fairly normal life:
This case points to a sad fact about brain science. We don't often shout about it, but there are large gaps in even our basic understanding of the brain. We can't agree on the function of even some of the most important brain regions, such as the cerebellum. Rare cases such as this show up that ignorance. Every so often someone walks into a hospital and their brain scan reveals the startling differences we can have inside our heads. Startling differences which may have only small observable effects on our behaviour.

Part of the problem may be our way of thinking. It is natural to see the brain as a piece of naturally selected technology, and in human technology there is often a one-to-one mapping between structure and function. If I have a toaster, the heat is provided by the heating element, the time is controlled by the timer and the popping up is driven by a spring. The case of the missing cerebellum reveals there is no such simple scheme for the brain. Although we love to talk about the brain region for vision, for hunger or for love, there are no such brain regions, because the brain isn't technology where any function is governed by just one part.
Neuroskeptic made a similar point in a recent tweet.
This is a point that needs to be made and repeatedly emphasized to those who write things like "the diseases we treat are diseases of the brain." The irony for me is that I do appreciate the importance of neuroscience in psychiatry and agree with the authors when they wrote: "The more sophisticated and nuanced our science becomes, the more critical it is to have individuals who can translate this work to make it accessible to students at all levels." It reminded me of one of my favorite college classes, Principles of Neuroscience. The professor, who studied ion channels in different animals, was an amazingly good teacher, and the first lecture started something like this:
"Ernest Hemmingway once boasted that he had a six-word story—complete with beginning, middle, and end—that would bring tears to anyone who heard it. Here it is [he lowered the lights in the room and said the following words softly and slowly]: 'For sale…baby shoes…never used.' [dramatic pause] While not all of you are tearing up, very few people could have heard those words without thinking of or feeling something. Any images in your mind (did you see the shoes, what color were they?), any thoughts or emotions you may have experienced after hearing those six words, formed as signals in your nervous system. Without the nervous system, we cannot see, hear, feel, taste, or smell—in short, our five senses would produce no corresponding thoughts, and life as we experience it does not exist."
The tour de force lecture progressed to descriptions of single neurons and how our nervous system is composed of approximately 100 billion of them, each of which can have tens of thousands of synaptic connections to other neurons. The quote I remember most clearly: "All of the neurons together in one brain form more connections with each other than there are stars and planets in the galaxy." The professor ended his lecture by giving us some practical tips based on his knowledge of neuroscience. Time and repetition, he told us, are what will help us succeed in the class, because that is how neuronal circuits are programmed and how processes in the brain ranging from retrieving facts from memory to riding a bicycle become automatic. I use the same advice almost daily with my patients when I emphasize to them the importance of practicing new behaviors or ways of dealing with difficult thoughts and emotions. Similarly, based on my reading of research on the effects of sleep, exercise, and social interactions on the brain, I share with my patients the importance of getting enough of each.
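Out of curiosity, I later ran a rough back-of-envelope check on that claim. The numbers below are my own ballpark assumptions (a generous star count for the Milky Way and an arbitrary ten planets per star), not figures from the lecture, but even so the arithmetic comes out heavily in the brain's favor:

```python
# Back-of-envelope check of the "more connections than stars and planets" claim.
# All values are rough assumptions for illustration, not measured data.
neurons = 1e11               # ~100 billion neurons, as cited in the lecture
synapses_per_neuron = 1e4    # "tens of thousands" of connections per neuron
total_connections = neurons * synapses_per_neuron   # ~1e15 synaptic connections

stars_in_galaxy = 4e11       # upper-end estimate for the Milky Way (assumption)
planets_per_star = 10        # generous assumption
stars_and_planets = stars_in_galaxy * (1 + planets_per_star)  # ~4.4e12 objects

print(f"Synaptic connections: ~{total_connections:.0e}")
print(f"Stars and planets:    ~{stars_and_planets:.0e}")
print(f"The connections win by roughly {total_connections / stars_and_planets:.0f}x")
```

Even doubling or tripling the celestial estimates leaves the synapses ahead by a couple of orders of magnitude.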

I learned more neuroscience of clinical relevance in one semester from this PhD Biology professor than I have from years of attending lectures and reading papers from psychiatry researchers who are considered world experts in areas like the neurobiology of OCD, pediatric bipolar disorder neuroimaging, or how transcranial magnetic stimulation affects neural circuits in depression. For me, the most important distinction when we talk about clinical neuroscience is whether we take a broad view of neuroscience or a narrow view. The broad view would emphasize the huge effect of all of the different inputs on the brain (e.g. that six words can bring a person to tears), whereas the narrow view tends to emphasize things like genetics, neurotransmitters, biomarkers, and circuits.

Monday, May 5, 2014

What It Will Take to Decrease ADHD Rates, Part 2

In my last post, I enumerated some of the reasons why I thought the high rates of ADHD diagnosis and treatment were not about to fade. Here, I will discuss several steps that I think would need to take place in order to quell the ADHD "epidemic" in America.

Education Reform

Back in October, the New York Times published a very interesting article examining possible causes behind the rising rates of ADHD diagnosis:
Hinshaw, as well as sociologists like Rafalovich and Peter Conrad of Brandeis University, argues that such numbers are evidence of sociological influences on the rise in A.D.H.D. diagnoses. In trying to narrow down what those influences might be, Hinshaw evaluated differences between diagnostic tools, types of health insurance, cultural values and public perceptions of mental illness. Nothing seemed to explain the difference — until he looked at educational policies.

The No Child Left Behind Act, signed into law by President George W. Bush, was the first federal effort to link school financing to standardized-test performance. But various states had been slowly rolling out similar policies for the last three decades. North Carolina was one of the first to adopt such a program; California was one of the last. The correlations between the implementation of these laws and the rates of A.D.H.D. diagnosis matched on a regional scale as well. When Hinshaw compared the rollout of these school policies with incidences of A.D.H.D., he found that when a state passed laws punishing or rewarding schools for their standardized-test scores, A.D.H.D. diagnoses in that state would increase not long afterward. Nationwide, the rates of A.D.H.D. diagnosis increased by 22 percent in the first four years after No Child Left Behind was implemented.
And now, with the implementation of the Common Core, things may get even worse; even the philosopher-comedian Louis C.K. took to Twitter to vent about it.

I am no education specialist, but it's fairly obvious that our education system is not working, and things like NCLB and the Common Core do not address the most pressing need, which is better teachers. This problem parallels the emphasis in my profession on useless things like "quality of care", maintenance of certification, and patient satisfaction surveys, rather than on increasing professionalism and training better clinicians.

Changes in Professional Training/Culture

Speaking of training better clinicians, blindly applying diagnostic criteria without regard to context is one of the biggest problems in American psychiatry. It leads to situations like one that Dr. Allen Frances noted on Twitter, referring to a study that assessed children for ADHD.
In that study, parents completed a structured interview designed to cover all of the ADHD diagnostic criteria. This approach is the gold standard in research, yet if you look at the document I linked to, there is no mention of the word "sleep" anywhere in it. So if a teenager is up all night playing video games and then struggles with focusing at school and doing homework, this approach would flag that teen as having ADHD. Sadly, many doctors, especially those with limited time to spend with patients, use a similar approach when they give parents a questionnaire like the Vanderbilt and then diagnose the child with ADHD if enough 2's and 3's are circled.

If a child does get diagnosed with ADHD, both the American Academy of Pediatrics and the American Academy of Child and Adolescent Psychiatry have published guidelines that say first-line treatment is medication "and/or" behavior therapy. Yet it is very hard for most families to actually find someone who offers this type of therapy; plus, behavioral therapy is much more demanding of parents' time and effort than simply medicating a child, a point I'll address in the next section. Still, increasing access to behavioral therapy can potentially help reduce the reliance on meds. I'm not sure how this is going to happen, but obviously we as a society would have to make it a priority to increase the numbers and the quality of training of those therapists.

As another example of the lack of holistic thinking, there are multiple studies showing a link between certain artificial food colorings and hyperactive behavior in school-aged children. During my child psychiatry training, I heard about these studies from a lecturer who emphasized that the effect size was small. However, a small average effect can mask large effects in individuals who are susceptible to certain insults. Or it may slowly lead to larger effects over long periods of time. In Europe, foods containing those dyes are required to carry a warning label that they "may have an adverse effect on activity and attention in children," so most manufacturers there have switched to natural colors to avoid the label. Not surprisingly, the FDA decided not to act, citing the need for more research.

Societal/Demographic Changes

One of my previous posts examined the geographical differences in the distribution of ADHD in the U.S. Clearly, societal factors like higher rates of single parenthood, lower social mobility, etc. have an impact on which children get diagnosed with ADHD.

One of the biggest issues I come across is how everyone is super-busy all the time, especially parents with young children who have to juggle their jobs and child-raising responsibilities. Not surprisingly, given how exhausted many American parents are, it is easy to give in to the temptation of having television or an iPad be a babysitter/pacifier. This of course comes at a huge cost to the relationship between parents and their children. Child in Mind is an excellent blog that has many posts on how parent-child interactions are critically important for the development of self-regulation skills in children, which significantly impact emotions, behaviors, and the ability to concentrate. There is also evidence that parents and schools can effectively teach self-control to children in ways that do not require harsh treatment or bribery.

Thus, our society needs to invest in measures that take stress off parents and increase the time they can spend with their children. Universal daycare/preschool is just one example. According to this article, "the U.S. ranks third to last among OECD countries on public spending on family benefits." If you don't think that has anything to do with why we lead the world in ADHD, then I'd love to hear your explanation.

In conclusion, there are no easy fixes to the problem of ADHD over-diagnosis/treatment in America, because it is in large part a reflection of some thorny societal/cultural problems. But that doesn't mean there are no solutions. The problem does require addressing issues on multiple levels, and not simply prescribing more pills.

Monday, March 31, 2014

Losing the White Coat Part 2: Residency

This is part 2 of a series on the evolution of my approach to psychiatry. Part 1 was about my medical school experience, and A Most Influential Professor described a key experience I had in college.

I went into psychiatry because I was fascinated by the variety of human emotions, behavior, and psychopathology, and I wanted to explore the plethora of influences (cultural, social, psychological, and biological) on those aspects of humanity. My medical school emphasized the biological approach, so I decided to continue my training elsewhere for residency.

At my residency program, while there was more of an emphasis on psychotherapy compared to my medical school, the biological psychiatrists still reigned supreme. The university had some well-known psychotherapists, but they tended to have titles such as "emeritus professor" or "clinical professor," meaning that they were not around very much. And I doubt they would have felt welcome: for the majority of their rotations, the residents' main jobs were completing paperwork and adjusting medications rather than running groups or conducting therapy.

It was easy to see who the big money-makers of the department were: the researchers who focused on the neural basis of mental disorders while providing biological treatments in their clinical practice. There was a bipolar disorder expert, who once had a patient on 10 different medications, to the point that it was impossible to tell what was the patient's "disease" and what were the side effects. There was the schizophrenia expert who headed the locked inpatient unit, who frequently gave talks to psychiatrists in the community advertising the newest antipsychotic medications. She claimed that because she was on the speakers' bureaus for all the big pharma companies, she was unbiased in her assessment of the medications. And then there was the renowned depression expert, who once told us, "Even if the medications are no more effective than placebo, it doesn't mean that you shouldn't treat the patients." Make of that what you will.

However, the experience that opened my eyes most to the flaws of a purely biological approach to psychiatry was what I saw happening with Dr. Z, one of the psychiatrists on the electroconvulsive therapy (ECT) service. He gave great lectures, drawing up pretty diagrams of the circuits in the brain believed to underlie mood and depression. Unlike most psychiatrists, he often walked around in scrubs, and he had a confident charm to go along with a cheerful disposition. Perhaps appropriately so, since he offered a treatment unparalleled in its effectiveness for patients with severe psychotic depression and bipolar disorder.

The problem, though, was that the bipolar disorder diagnosis (and its attendant "treatment resistant depression") became so loosely applied that practically anyone with mood swings was being diagnosed with "bipolar II," and Dr. Z fully embraced this trend. His evaluations for whether a patient was a good candidate for ECT were thorough, to a point. There was meticulous documentation of the medications that the patient had tried and the inadequate response to them. Mostly ignored, however, were details about what the patient's life was actually like and what factors may have been influencing their symptoms. Thus, plenty of patients who clearly had borderline personality disorder (BPD) were deemed "excellent candidates" for ECT; none of the depression medications that they had tried ever did lasting good, since their moods would turn depressed or irritable in response to interpersonal stress, regardless of what meds they were taking.

I remember hearing two stories in particular about his patients (details altered to protect anonymity). One day, a patient of Dr. Z's arrived in clinic holding a knife to her chest after her boyfriend broke up with her. She told the astounded clinic receptionist that she would stab herself if she did not see Dr. Z right away. Dr. Z was not in, and the patient ended up walking into the office of another psychiatrist, who managed to calmly talk her down while security was notified. Another time, a patient was dragged kicking and screaming into the ER after swallowing a handful of pills during an argument with her husband. She was heard yelling, "I'll only talk to Dr. Z! Where is he? I know he's coming because he loves me!" Dr. Z clearly had a profound effect on his BPD patients, even if the benefits of ECT for those patients were very temporary.

Recently, I read Dr. David Allen's post on the difference between the symptoms of major depression and the depression often seen in BPD. But even back then something felt off to me about doing ECT on patients who had "treatment resistant depression" because of a personality disorder, which brings me back to the title of this post. At the institutions where I trained, the psychiatrists who wore the white physician's coats, not surprisingly, tended to be the more biologically-oriented ones. Thus, in my mind the white coat became associated with their view of psychiatry, one that I did not share.

Thankfully, my mind was already set on being a child psychiatrist. At least in the world of child psychiatry, despite the influence of biological psychiatrists like Harvard's Biederman, many (I don't dare to claim "most," given the direction things seem to be heading) child psychiatrists still consider the influence of things like family, parenting, and developmental trauma on behavior, rather than just focusing on figuring out the black box of the brain.

Sunday, November 17, 2013

The Wizard: Psychopharmacology Magic?

One of the most memorable psychiatrists that I worked with as a trainee is someone I think of as The Wizard. He specialized in treating some of the most difficult behavioral manifestations of autism and other genetic conditions like Fragile X syndrome. He had a magical ability to calm even the most agitated children and adolescents and seemed to inspire reverence and awe in their parents, who kept voting him to the top of various "Best Doctor" lists.

What most amazed me about The Wizard was his Zen-like serenity. Regardless of how much noise the patient was making or how many toys went flying around the room, he would be like the calm eye of the storm, holding still while everything else moved around him. His gaze was remarkable, intense yet warm and soft, like a bright candle. He would focus intently on whoever he was talking to, making that person feel important and special. His voice was smooth and soothing, almost soporific; perfect for those in emotional distress.

He took no notes during the appointments. His dictated progress notes were usually just a couple of paragraphs long, without pesky details like what medications the patient was taking and what medication changes were made during the visit. However, he did not have to remember those things. During the visit, he would shine his bright gaze upon the parents and say, "So tell me, what did we decide to do with the medications last time?" And the parents always provided the details. Maybe they knew that they would be quizzed this way, so they prepared so as not to be embarrassed. More likely, I think the parents were pleased that this eminent psychiatrist trusted them enough to empower them in this way.

The Wizard was an expert psychopharmacologist, often prescribing medications that I've seen no other psychiatrist prescribe: things that may have had success in case studies, but no positive clinical trials (and maybe even some negative ones). Yet he was able to get results with those medications. Perhaps he was lucky, or perhaps with his experience he was able to intuit the right medication for a certain patient. However, I firmly believe that just being in his presence was one of the major therapeutic interventions that he provided for his patients and their parents.

I attempt to channel him during every patient encounter. But try as I may, I can't help but continue taking notes while talking to patients and then writing overly detailed progress notes.

Monday, October 14, 2013

Losing the White Coat Part 1: Medical School

This is part 1 of a series on the evolution of my approach to psychiatry. For background, I recommend reading A Most Influential Professor, which is essentially part 0 of this series.

Just about every medical school has a traditional white coat ceremony, during which incoming medical students receive the shiny new white coats that they will wear during clinical experiences throughout the rest of medical school. At my school, the ceremony came with a recitation of a modified Hippocratic oath, adding to the gravitas of the day and helping us reflect on our future roles as healers and doctors.

The psychiatry faculty and residents at my school made it a point to insist that they were “doctors first." As medical students, we were repeatedly told about the contributions our institution made to modern biological psychiatry, and how it was a bastion of biologically-minded psychiatrists even during the era when psychoanalysts dominated psychiatry.

It was not surprising, then, to see psychiatry attendings walking around the hospital and lecture halls wearing their long white coats. Even the lone psychologist who taught some of the medical student lectures wore a white coat when he was in the hospital.

However, something always felt amiss with this biomedical aura. The psychiatry attendings were very quick – too quick – to defend the medical-ness of their specialty. I was told on multiple occasions that the arbitrary diagnostic criteria used in the DSM-IV were no different from the cutoffs used to define blood pressure in hypertension or glucose levels in diabetes. Yet despite the prominent role the school's psychiatry department played in establishing biological psychiatry, physicians in other specialties there did not seem to respect the psychiatrists very much. The psychiatry interns took care of fewer patients on their Internal Medicine rotations than the medical interns, yet the psychiatry program director always insisted that the psychiatry interns performed just as well as the medicine ones.

As a third year medical student, I did my psychiatry rotation in a publicly-funded mental hospital, wearing my white coat just like the residents and attendings. There certainly were cases in which something clearly biochemical was going on in the brains of my patients, such as when a young man came into the ER hearing voices and feeling very paranoid after using a large amount of cocaine. I got to see antipsychotic medications help some patients with schizophrenia, but only so much, and with obvious side effects. There was clearly a vast gulf in understanding between the psychiatrists and patients, with the psychiatric residents spending minimal amounts of time with their patients and going home by 3pm each day. I was not sure how much wearing a white coat contributed to this distance or if it was mostly due to the culture of the place, but it certainly did not help foster empathy.

There were many other cases that left me feeling uneasy. As a fourth year medical student on the consult service, I accompanied a psychiatry resident when he evaluated a patient for suicidal thoughts. Afterwards, he told the primary team, "Don't worry, he's just a boy borderline." The attitude seemed to be that this patient would not actually harm himself because he was just "being manipulative," or that personality disorders somehow were not real, perhaps because there was nothing "biological" that could be done.

I did have a great experience working with the child psychiatrists at my medical school, who because of their specialty necessarily had to take a more holistic view of things. But even so, they tended to focus on the children as individual entities, without deep thought given to how interactions with parents influenced the children's behaviors.

When I asked the program director about learning psychotherapy as a resident there, I was told that they don't really teach psychotherapy, because that was not going to be part of the job of a psychiatrist going forward. I would learn enough to know what kind of psychotherapy to refer a patient for, if it were necessary. Talking to the psychiatry residents, though, some of them clearly wished that they had more psychotherapy training, so they could be more complete and competent clinicians.

I knew as a medical student that this approach to psychiatry was not for me. I would go elsewhere to continue my training.

Tuesday, July 30, 2013

Is Psychiatry Residency Training Backwards?

For decades, the process of turning a medical school graduate into a psychiatrist has remained essentially the same: A post-graduate year 1 (PGY 1) internship that includes rotations in medicine and neurology in addition to psychiatry, followed by 3 additional years of residency training focused on psychiatry. Even though psychiatry residency programs are famously diverse, they almost always follow the pattern of mostly inpatient psychiatry for PGY 1-2 and mostly outpatient psychiatry for PGY 3-4. Child psychiatry exposure typically occurs for only a few months during PGY 2 or 3.

Earlier this year, 1Boring Old Man had an excellent series of posts that included a look back at his experience as a residency training program director in the 1970's, when he pulled his residents from a large state hospital because the experience was no longer educational. Yet most psychiatry programs across the country still have their psychiatry residents staffing inpatient units during their first two years of training, even as the length of stay at acute inpatient psychiatry units continues to decline. What does this do? I think it puts an emphasis on "medication-first" thinking, because changing some meds around (usually by adding more rather than taking any away) is really all one can do for a patient who is just going to be in the hospital for a few days.

Additionally, I believe that being exposed to the most severe mental illnesses during PGY 1-2 primes young clinicians to over-pathologize when they interact with less ill patients later on. Ordinary sadness or grieving may be called depression. "Hearing voices" (which is how many people describe their intrusive thoughts or internal monologues) starts to sound like schizophrenia. Mood swings or anger outbursts often get diagnosed as bipolar disorder. Of course, there are certainly other forces driving the pathologizing of normal behavior, but I do think the way training is structured facilitates this type of thinking.

Lastly, the focus on treating individual adult psychopathology deprives trainees of the chance to develop a crucial developmental and social perspective. Family therapy is something that is usually taught briefly, if at all, during the PGY 3 or 4 years. During my years of general psychiatry residency, I had the vague sense that a patient's interactions with family or her experiences growing up may have influenced her symptoms over the course of her life, but the attitude of my attendings seemed to be: since those things can't really be changed, why focus on them? It wasn't until my two years of child psychiatry training that I finally started to understand the roles that early childhood adversity and interactions amongst family members play in an individual's patterns of behavior.

I think that psychiatry residency programs would be improved immensely by earlier clinical exposure to assessing children (both "normally-developing" and ones with behavioral problems) and their families, as a counterpoint to the biomedical, neurotransmitter-based framework that residents are most familiar with. This is not an original idea. Other psychiatrists have suggested the same thing, including Dr. Carl Feinstein, head of child and adolescent psychiatry at Stanford (which is somewhat ironic given Stanford psychiatry's overall biological orientation). Daniel Carlat's book Unhinged proposes some more fundamental changes in the process of training psychiatrists.

Sadly, as psychiatry becomes increasingly driven by managed care, it looks like residency training will continue to languish as psychiatry departments come under pressure to increase patient volumes so they can operate in the black.