For religious people—in Job’s time as well as in ours—the solution to the problem he represents is to relinquish the expectation that human sensibilities can grasp the sense of life and to replace it with a conviction that there is a divine, if inscrutable, plan behind our suffering. Job’s pessimism and outrage, in this view, dissolve when he gives up that expectation. His suffering over the unfairness of his life is transformed into faith in a God whose justice surpasses understanding and whose mercy can soothe his grief. (Although the restoration of his wealth that Yahweh finally grants to Job, in an ending the rabbis tacked onto a book otherwise considered too bleak, seems to miss this point; for why would Job think that it will not be taken away again? And what about his children?)
For those of us who look to science for revelation, however, suffering has a very different fate and its cure rests on a different transformation. We place our faith in doctors and their science. Founded on the idea that knowledge moves us forward, that ignorance is all that stands between us and the best of all possible worlds, scientific medicine embodies the faith that we can figure our way out of our troubles. This belief rests on some optimistic assumptions: not only that the world will yield its secrets, but also that it has secrets to yield, that life is lawful in a way that will make sense to us. It’s no wonder then that depression has fallen into the hands of the doctors: science is the natural enemy of pessimism.
To say that a particular form of suffering is a disease is always to go beyond the observation that the suffering exists. It is also to say—as Kramer does when he looks forward to the elimination of depression—that the suffering doesn’t belong in our world, that we would live better lives without it, and that we ought to do so. When doctors turn suffering into symptom, symptom into disease, and disease into a condition to be cured, they are acting not only as scientists, but also as moral philosophers. To claim that an affliction ought to be eradicated is also to claim that it is inimical to the life we ought to be leading.
With some diseases, it barely matters that there is a philosophical dimension to a diagnosis. It is hard to imagine a world in which cancer and diabetes are not best understood as illnesses. But when the pathology is an attitude, a “fixed tragic view of the human condition,” and when the treatment is touted as the restoration of the true human selfhood, then we really should consider whether that attitude is best understood as an illness to be eliminated. We should wonder whether doctors who urge us to come out against depression aren’t, wittingly or otherwise, also urging us to adjust ourselves to a world that our pessimism shows to be deeply flawed.
And above all, we should recognize that to talk about why we suffer and what we should do about it is also to talk about how things ought to be. To say that a young man is sick when he is lying on the floor of his office, reeling from personal disaster, is to make a moral statement, and then to cloak it in the language of science, which plays the same role in our world that religion did in Job’s. And just as Eliphaz and his colleagues overstepped with Job, so too the depression doctors, and their drug company sponsors, have overstepped with us. They don’t know any better than you and I what life is for or how we are supposed to feel about it.
CHAPTER 3
MAUVE MEASLES
I became an officially depressed person in 2006. I received my diagnosis from a highly respected Harvard psychiatrist working at the Mood Disorders Unit at Massachusetts General Hospital who said I had major depressive disorder, recurrent, mild, with melancholic features. I wasn’t entirely surprised that I turned out to be mentally ill. I had shown up at his office to enroll in a clinical trial of an antidepressant medication, and a diagnosis is a requirement for entry.
But I was expecting a different diagnosis—minor depressive disorder. This isn’t an official psychiatric disease, at least not yet. It is listed in the DSM-IV, but in an appendix of “Diagnoses in Need of Further Study.” Much depends on this further study, notably whether the diagnosis will get the official four-or-five-digit code that compels insurance companies to pay for its treatment, and whether the Food and Drug Administration will then be able to give a drug company an indication—that is, the right to claim that its drug treats the new disease.
Not all research diagnoses turn out to be winners. Take premenstrual dysphoric disorder (PMDD) for instance. The idea that PMDD is a psychiatric illness must have looked like a good idea to someone. My money is on Eli Lilly, which wanted to squeeze a few more dollars out of Prozac, or Sarafem, as the company had relabeled it for treatment of PMDD. But despite (or perhaps because of) its corporate sponsorship, the diagnosis ran into stiff opposition from feminists who objected to the way that it pathologized what they considered to be a normal variant of human behavior. PMDD has turned out to be a bust. It’s still languishing in the back of the book and may even disappear completely when the DSM-V comes out in 2012.
Minor depressive disorder, on the other hand, seems like a shoo-in for advancement. To qualify, you only have to report three of the nine depression criteria, one of which has to be either a sad mood or loss of interest or pleasure in all, or almost all, activities. (Major depression requires five.) You could have trouble, for instance, eating and concentrating and, so long as you were also unhappy most of the time for most of the days in a two-week period in the last six months, you would qualify. By the time I got diagnosed, about twenty years after my bout on the floor of my study, I’d had enough disappointment and setback, not to mention nearly six years of the Bush administration, to ensure plenty of periods like that—none quite so bad as the first, but all of them unpleasant enough. I hadn’t exactly come to value this experience, but I’d learned to tolerate it the same way that you tolerate a difficult friend or watch a disturbing movie, and for the same reason: that you get something out of the bargain, some insight into the world, some glimpse of the way things are.
I was, in other words, at risk of exactly what Peter Kramer, in Against Depression, warns about: mistaking mental illness for clarity. Of course, I had good company. Not only Job, with his conviction that a life in which everything you’ve lived for can be taken away so easily is not what it’s cracked up to be, but a whole pantheon of unacknowledged legislators—like William James, who, in his Varieties of Religious Experience, put it this way:
The normal process of life contains moments as bad as any of those which insane melancholy is filled with, moments in which radical evil gets its innings and takes its solid turn. The lunatic’s visions of horror are all drawn from the material of daily fact.
Psychiatrists like Kramer cut through the philosophical question of when the “normal process of life” becomes insanity with their assertion that two weeks of those moments is quite enough. As unsatisfying as this answer is—give me Yahweh in a thunderstorm any day; at least then the fact that the line is arbitrarily drawn by those with the power to do so is obvious—I thought it could work in my favor. When I showed up at Mass General, veteran of the dissatisfactions of a middle-class, middle-aged American life—nothing spectacular, but enough to keep me up nights and make me blue for a couple of weeks at a time on a regular basis—I figured that a diagnosis of minor depression was a sure thing.
I’ll tell you all about how wrong I was, and why, and what happened, in due time. First, though, let me get the credibility problem out of the way. Yes, I went to Mass General with an agenda. I figured my temperament and the American Psychiatric Association’s zeal to create a new disease were made for each other—and added up to a good opportunity to write about a little-explored region of the medical-industrial complex. Clinical trials are the pivot of the depression industry, the venue in which the drug companies get the government to guarantee the public that an antidepressant works and won’t hurt you. But in the bargain, they also confer legitimacy on the disease that the drug purports to treat. Every approval of an antidepressant also ratifies the claim that the disease it treats really exists.
Many diseases don’t need this kind of advertising. Whatever disease means—and this question is far from settled—no one will deny that cancer or malaria deserve the label. But sometimes the public has to be convinced. That’s why GlaxoSmithKline (GSK) once paid a doctor to say that an “uncontrollable urge to move [the] legs, or ‘creepy-crawly’ sensations in the legs…that often leads to sleep disruption” was actually a disease called restless legs syndrome. RLS, said GSK, causes insomnia, marital discord, and poor job performance. This campaign was a transparent attempt to persuade people to think of their suffering as a disease, and the linchpin of the argument (and, of course, the reason GSK was bothering to make it) was that the drug Requip, which as a treatment for Parkinson’s disease had reaped disappointing profits, relieved RLS. If a medicine makes a problem better, this logic goes, then the problem must have been a disease to begin with, and its sufferers are entitled to all the benefits we bestow upon the sick: sympathy, research money, insurance coverage, and so on.
So a clinical trial of a treatment for a research diagnosis like minor depression is also a trial of the diagnosis itself. That’s why I went to Mass General: to see if I could catch a glimpse of the depression machinery as it cranked up to turn out a new model.
As motives go, mine was less straightforward than, say, wanting to benefit humankind or get myself cured. But neither was my visit to Mass General a repeat of one of the greatest pranks ever perpetrated on psychiatry: a study by David Rosenhan, a psychologist who in 1972 sent a cadre of his graduate students into various emergency rooms complaining, dishonestly, that they were hearing the word “thud” in their heads. The students were hospitalized, most of them for schizophrenia. Once there, they behaved normally, or what passes for normally among graduate psychology students. They read, asked questions, and took extensive notes, all of which was duly noted in their charts as more symptoms. When they were released, it was with the diagnosis of paranoid schizophrenia, in remission.
Rosenhan called the 1973 paper he published in Science about his caper “On Being Sane in Insane Places.” You can imagine how embarrassed psychiatrists were when the story hit the press that the One Flew over the Cuckoo’s Nest nightmare was true. The profession was already under siege, thanks to a society that for many reasons was beginning to suspect that mental illnesses weren’t real, but merely ways of pathologizing nonconformity. A cottage industry sprang up to rebut, denounce, and generally scream at Rosenhan. But no one took issue with his finding that it’s easy to get diagnosed with a mental illness and then much, much harder, if not impossible, to get undiagnosed. That part, as opposed to his allegedly shoddy ethics and research methods, was unassailable.
This was the best part of my going to Mass General: I didn’t have to lie. Not that I hadn’t thought about it. I do know the DSM-IV pretty well, certainly well enough to fake just about any psychiatric illness. But I couldn’t get myself to do it just for the sake of my writing career. So when I found out there was a study I thought I could get into by telling the truth, I jumped at the opportunity.
It wasn’t all ambition, however. The possibility that the trial drug, which was Celexa, might make me feel better—well, I can’t deny this was intriguing. I’ve got nothing against better living through chemistry. I’ve practiced my own amateur version of it for many years, in fact. And I’ve spent a couple of decades listening to patients (and friends) sing the praises of antidepressants and seeing the results up close and personal. It was enough to make me curious, and sometimes envious, especially when I was depressed. Sometimes I’d wonder if it wasn’t just stubbornness that stopped me from visiting a psychiatrist, some point of pride or a fear that I’d be sucked into the Prozac cult or forced to abandon some of my deepest convictions if the drug worked—reluctance that, whatever the explanation, felt insuperable, and, at times, depressing. The clinical trial gave me perfect cover from myself, a way to check out the drugs while maintaining that I was only doing research. Call it the Kinsey approach.
So when I got diagnosed with major depression instead of minor depression, I suppose I was only getting my comeuppance for trying to exploit the system, which was in turn glad to bestow a disease upon me, but not necessarily the one I wanted.
I didn’t get kicked out of Mass General. To the contrary, my doctor immediately gave me five major depression studies to choose from. But it was impossible to ignore the fact that after listening to my answers to his questions, a capable and compassionate doctor told me I had a serious mental illness—something wrong with my brain that was causing the trouble in my mind—and a much worse one than I had thought to begin with. I was the last person in the world that I would have expected to believe this. But as I’ll describe, the idea that my difficulties were an illness caused by biochemical imbalances grew on me during the trial—especially the part about the possibility that I could be cured of what I had long ago come to think of as myself.
But even before that happened, even as I walked to his office for the first time, it had dawned on me that this whole vast apparatus with its towers and pavilions arrayed like the castles of the Magic Kingdom, its maze of bustling streets—the doctors checking their watches, the patients, some wheeling IV stands down the sidewalk (one of them even sneaking a smoke as the liquid dripped into his veins), the family sitting crying on a bench—was a monument to one brilliant and magnificent idea: that our suffering is caused by diseases that can be cured by medicine. Well, actually, those are two ideas—that diseases exist in nature and that we can improve nature by finding the culprit and getting rid of it—and they seem, like all common sense, to be unassailable and timeless. They may even seem not to be ideas, but simple facts.
But they are ideas, invented by people rather than discovered in nature, and much newer ones than you might think. Indeed, the belief that we can turn what ails us into the target for a drug first appeared about 150 years ago and was not widely accepted until the early part of the twentieth century. Neither have illness and cure always been related in the way we usually think they are: that we identify diseases and then look for the remedy. Drug-driven diagnoses are not original to the depression industry, or for that matter to the restless legs syndrome industry. In fact, when it comes to the modern understanding of disease, the drugs have often come first.
Betty Twarog’s mussels weren’t the first mollusks to figure in the history of depression. Murex trunculus and Murex brandaris, two species of sea snails that litter the coasts of Italy and Asia Minor, beat her critters to the punch by a good two thousand years. Both varieties are four or five inches long, with pastel-colored bands spiraling up their shells. M. trunculus looks like an elephant’s head, its spiny shaft like a trunk, while M. brandaris features spikes that stick up like the points of a child’s jack. You might stoop to pick up a few of these specimens on a beachcombing walk, but they’re a little too subtle to be your prize find. Unless, that is, you happen to be the unnamed dog belonging to Heracles that, according to legend, took a mouthful of snails while on a walk with his master. The legend doesn’t say why Heracles was at the beach in Tyre (I suppose that even Greek half-god heroes need a vacation occasionally), but it does tell us that after the dog spit them out, its mouth had turned an extraordinary shade of purple.
It was left to the Phoenicians to figure out how to exploit this discovery by crushing, salting, and boiling the snails until they had extracted a dye that, according to Pliny the Elder, was “exactly the colour of clotted blood, and…of a blackish hue to the sight, but of a shining appearance when held up to the light.” It was laborious to produce—the mucus of thousands of snails was needed to color a single robe—but so glorious that it eventually fetched its weight in silver at ancient markets. Tyrian purple (also known as royal purple) became the color of kings and generals and nabobs.
And, in the late 1850s, of the ladies of Paris. Inspired, some would say inflamed, by the Empress Eugénie, wife of Napoleon III, whose haute couture graced the pages of the fashion magazines just then coming into production, Parisians couldn’t get enough of the scarlet-purple hue known to them as mauve. A little less red than the Phoenician original, mauve was nonetheless a gorgeous color and demand was high. “Mauve Measles,” as Punch called it, spread quickly across the Channel, leaving Englishwomen with a “measly rash of ribbons.”
The Parisian mauve came not from snail mucus but from bat guano and from certain lichens that also could stain fabric purple. These sources were plentiful and easier to refine than the snails, but supplies still had to be found and secured, harvested and processed. Variations in sunlight and soil conditions and other vagaries of nature could affect hue and quantity. None of this would necessarily have been a problem worth solving—after all, wasn’t this kind of inconsistency the way of the natural world?—if it weren’t for the Industrial Revolution, which was imposing a new expectation: that commodities, particularly consumer commodities, should be uniform and easily available and certainly not made out of bat poop or snail snot if it could at all be avoided. It was left to a kid to figure out how to meet this emerging market.
By the time William Perkin entered the City of London School in 1851, at the age of thirteen, he had already considered a number of careers: carpenter (his father’s trade), engineer, painter, musician. At his new school he took a shine to science and sought entry to Michael Faraday’s Saturday lectures about electricity, a request that was granted by the great man himself. But nothing caught his fancy like the twice-weekly chemistry lectures taught by Thomas Hall, his writing master. Soon young Perkin prevailed upon his father to allow him to set up a lab at home where he could explore the principles he was learning from Hall.
Chemistry barely existed as a scientific discipline in mid-nineteenth-century England, where it was associated with apothecaries and other charlatans. But Faraday, along with Prince Albert and other prominent Britons, saw the advances being made by chemists on the Continent, especially in Germany, and rounded up the money for the Royal College of Chemistry, which opened in 1845 with twenty-six students. Hall had attended the first classes there, and in 1853 he urged Perkin to enroll.