Gary Greenberg, Manufacturing Depression (page 6)
Perkin’s mentor at the Royal College was August Wilhelm von Hofmann, who had been recruited from Germany by Prince Albert himself. Hofmann had an interest that was perfect for demonstrating to a skeptical educational establishment the practical value of studying chemistry. He thought that natural substances could be synthesized in the lab so long as you started with materials that contained the elements that chemists were just identifying as the building blocks of the natural world: carbon, oxygen, hydrogen, sulfur, and nitrogen. Nature, according to Hofmann, assembled these atoms into molecules, and then into the substances of daily life, in much the same way that industrialists were assembling raw materials into finished products. Figure out how nature did this, and you could conceivably make anything it could make—and without all the bother of, say, gathering and boiling snails.
Hofmann was lucky to have this insight at a time when a rich supply of those basic elements was just becoming available, and on the cheap, as an unwanted by-product of industrialization: coal tar, the stinky residue of the process by which coal was refined into the gas that fueled London’s lamps. An enterprising Scotsman, Charles Macintosh, figured out how to smear coal tar on textiles to make a rubbery waterproof cloth, and soon people were wearing macintoshes in the Glasgow rain. But no one knew what else to do with the stuff, so mostly it got dumped into streams, where it killed fish and made washday a nightmare. Hofmann, however, thought he could extract value from the hydrocarbons, and he set Perkin to work on one of his pet projects: making quinine.
Malaria was not only a scourge of people living in the swamps and fens of England and the rest of Europe, it was also a problem for the armies of imperialism, which found the warmer climates overrun with the disease—a bad thing not only for the natives but, more important, for the men who would contract malaria in the course of conquering them. Cinchona bark had long been used by indigenous people as a remedy for fevers, and at the end of the seventeenth century, a British physician, in one of the earliest controlled studies of a drug, proved that its effect was unique to what was then known as tertian fever.
In the nineteenth century, a pair of Frenchmen isolated quinine as the active ingredient in the bark and developed a way to extract it. Soon, the medicine was in wide use. The trees still grew mostly in South America, however, so quinine was expensive: the East India Company’s annual budget for it at midcentury was one hundred thousand pounds—enough to hinder the business of empire and to delay the invention of the preferred way to end a long day of bearing the white man’s burden: the gin and tonic.
Hofmann thought he had a solution to the problem. Thanks to advances in microscopy, he knew that naphthalidine, a derivative of coal tar, differed from quinine by only a couple of hydrogen and oxygen atoms—water, in other words. It wouldn’t be as simple as adding water to the naphthalidine, he said, but with enough time at the bench, he was sure he could make medicine out of coal tar. Hofmann set Perkin to the “happy experiment” of figuring out how to make oil and water mix.
Alas, the experiment was not so happy. Perkin soon determined that all the chemistry in the world wasn’t going to convince naphthalidine to turn into quinine. One of his failures was intriguing, however: it had yielded a powder with a reddish tint. Perkin decided to see what color would come from another coal tar component, aniline. “Perfectly black,” he described it, and when he dried it and added alcohol (wine spirits, in this case), it turned a beautiful shade of purple. Perkin, with the encouragement of some older entrepreneurs, soon proved that the new color, mauveine, was fast in wool and silk. He quickly forgot about malaria. He had achieved something stupendous—an alchemy that actually worked. He could transmute the dross of the Industrial Revolution into gold—or mauve, as the case may be.
Perkin was eighteen years old when he got his patent on mauveine, and he was soon fabulously wealthy. Dressmakers, textile manufacturers, fashionable (and now, due to the lower price, not-so-rich) women and the men who liked to look at them all benefited. Unless you count the factory workers and neighbors (and occasional customers) who suffered from the poisonous by-products of aniline dye manufacturing and the producers of natural colors, whose madder and indigo and other vegetable-based dyes plummeted in price, mauve made winners out of everyone.
But the biggest beneficiaries of all were a group of German companies whose names are still familiar: Bayer, Hoechst, Geigy, BASF (originally the Baden Aniline and Soda Factory), and Agfa. Perkin’s invention proved that natural substances could be chemically synthesized out of cheap chemicals, and these companies exploited the obvious opportunity. Uncovering the structure and properties of hydrocarbons—the field we know today as organic chemistry—they eventually figured out how to manufacture plastics, textiles, pesticides, and all the other synthetic products that we’ve become accustomed to and dependent upon. Not least among those products were drugs, and the German companies that turned out synthetic dyes eventually became the backbone of the pharmaceutical industry.
The journey from Perkin’s mauve to Prozac is not as long or winding as you might think. From the time that Hofmann tried to synthesize quinine, the medical industry and the dye industry have been thick as thieves, and not only because some dyes, as you’ll see in a moment, turned out to cure diseases. The linking of haberdashery and therapeutics also helped to establish an idea that revolutionized medicine, the idea that lay underneath the clinical trial where I got my diagnosis: that drugs can be magic bullets, aimed directly at the chemical causes of our suffering.
Paul Ehrlich, the German doctor who came up with this theory, had just been born when Perkin invented mauveine, but by the time he was a teenager, he had been captivated by—some said obsessed with—synthetic dyes of all colors. “When I felt…miserable and forsaken,” Ehrlich once told his secretary, “I often stood before the cupboard in which my collection of dyes was stored and said to myself, ‘These are my friends which will not desert me.’”
Ehrlich had been introduced to his friends by his cousin, a biologist who was exploiting dyes in a way that Perkin didn’t foresee: as staining agents for microscope slides. The synthetic dyes, far superior to vegetable-based colors, illuminated cellular structures as never before. Ehrlich was enchanted by his cousin’s slides, but it wasn’t this new glimpse into the invisible world of cells that caught his interest. Rather, he wondered, why did the same dye make different parts of the same cell show up in different shades? How did dyes perform their revelatory magic in the first place?
Ehrlich’s biologist friends were uninterested in this question. “They cared so little for the theory of it,” he complained to his secretary. But Ehrlich thought he knew where the answer lay: in chemistry. Like Perkin, he had been a basement chemist, and he claimed to have unique abilities—not only to make concoctions, but also to understand what he was doing. “I can see the structural formula…with my mind’s eye,” he once wrote with a characteristic lack of modesty, “and my chemical imagination has developed so rapidly that sometimes I have been able to foresee things that were recognized only much later by the disciples of systematic chemistry.” What Ehrlich saw under the microscope with his mind’s eye was not nuclei and mitochondria lit up in color, but the whirling dance of molecules that was spinning out the colors in the first place. “Substances act only when they are linked,” he proclaimed. When a dye latched onto a cell, the linkage created a new molecule that had the color as one of its properties. Stained tissue samples thus opened a window into not one but two unseen realms: the biology of the cell and the chemistry of the interaction between living matter and hydrocarbons.
Even among his mad-scientist peers at medical school in the 1870s, Ehrlich was known as a “brilliant eccentric…wandering around the laboratory with hands that looked as though they had been thrust into innumerable paint pots up to the wrists.” Upon graduation in 1878, he continued to hang out with his dyes but also took a job as a physician at Berlin’s largest hospital, the Charité. Hospitals, even good hospitals, were grim places at the time, with little but comfort to offer patients with typhus, tuberculosis, syphilis, cholera, and all the other diseases (not to mention the opportunistic infections that could turn any wound into a death sentence) contributing to the forty-year life expectancy of the average Johann. But scientists had just begun to explore a new and controversial theory about illness: that tiny organisms—germs—were responsible for most of the scourges that brought people to the hospital or the family deathbed in the first place.
This wasn’t a new idea. As far back as the first century B.C., a Roman doctor, Marcus Varro, was warning his countrymen to avoid marshland lest they encounter the “minute creatures that live [there] and that cannot be discerned with the eye and…enter the body through the mouth and nostrils and cause serious diseases.” More recently, in the 1790s Edward Jenner had discovered that smallpox could be prevented by inoculation with cowpox, and in 1854 John Snow had demonstrated that cholera was transmitted through the water supply.
You would think that doctors would have connected the dots by the time Ehrlich was trying to puzzle out how to use the knowledge he was generating with dyes in his medical practice. But although scientists had by then cracked open enough bodies, living and dead, to gain at least a rudimentary understanding of how blood flowed, how some of our organs worked, how we were put together, and even, in a very small way, how the brain gave us speech and thought and behavior, doctors remained firmly rooted in their Hippocratic past. Like those early physicians, they gathered their knowledge empirically. They tried something and if it seemed to work they tried it again on a similar sickness without spending much time on the whys of the result. That’s why the remedies at their disposal—techniques like bloodletting and trepanning, and the herbal potions in the pharmacopoeia—were largely those of their ancient masters. Medicine in 1850 was largely as it had been in 350 B.C.—guided by whim, by accident, and by tradition.
Some of these traditions were harmless enough. Theriac, for instance, a concoction dating back at least to the second century A.D. healer Galen that remained in the pharmacopoeia until the mid–twentieth century, probably never hurt anyone. In fact, since it contained—in addition to pulverized viper flesh, which is to say snake oil—generous quantities of opium, it probably made nearly everyone feel better. But other traditional cures probably hastened many deaths, including that of George Washington, whose fatal ague was treated with mercury, another popular ancient remedy and the same metal that doctors today tell you to avoid at all costs. To the extent that physicians were successful they relied not on knowledge of how their remedies acted biochemically to cure a disease but on luck, on trial and error, and, perhaps above all else, on the placebo effect. At least one prominent doctor of the nineteenth century—Oliver Wendell Holmes—knew this. “If the whole materia medica…could be sunk to the bottom of the sea,” he told his brethren in the Massachusetts Medical Society in 1860, “it would be all the better for mankind and all the worse for the fishes.”
Naturally, most doctors didn’t see it that way. They even had a theory for why their medicines should work. Disease, they thought, was the result of an imbalance among the four humors—blood, phlegm, and yellow and black bile—that coursed through the body. This idea had been in force since the Hippocratic era and had been updated over the centuries, although the relationship between, say, blood the humor and blood the substance had never been fully clarified. Nor had the reason that a miasma—bad air—could throw the humors out of balance and thus be the source of a contagious illness. So even by the time Holmes was worrying about the fishes, the treatment of disease remained a matter of using time-proven, if poorly understood, remedies to restore the proper ratio of the humors.
To accept the germ theory, doctors would have to abandon not only all this tradition, but a worldview about balance and harmony that was more than two thousand years old. As one doctor put it, “If we were once to admit that the pox was produced by little animals swimming in the blood, then we would have as much reason to think likewise not only of the plague…but also of smallpox, hydrophobia, scabies, sores…and this would overturn the whole of medical theory.”
But some scientists were beginning to doubt that what had worked for Plato and Aristotle was really the best medicine. Among them was Louis Pasteur, who in 1862 showed that heating a liquid would kill the microorganisms that fermented it, and who strongly suspected that doing so would make the milk supply safe. But it wasn’t the great Frenchman who convinced the reluctant medical establishment of the truth of the germ theory. That honor went to a German country doctor, Robert Koch.
Koch’s patients were farmers, and their herds were succumbing to a disease that turned their blood black and thus was known as anthrax, from the Greek for “coal.” Koch had heard that a couple of scientists, using the new stains and improved microscopes, had spotted some rod-shaped structures in the blood of animals that had been killed by anthrax. He bought himself a state-of-the-art microscope and used it to elucidate the life cycle of the rods, which he called “bacilli.” He then conducted a series of experiments in which he injected mice, rabbits, and frogs with the bacillus. The animals did him the favor of contracting the disease and dying, and in 1876, Koch announced that he had discovered the cause of anthrax and offered the undeniable proof under his microscope.
Paul Ehrlich and Robert Koch met two years later, but they were only distant acquaintances until 1882. That was the year that Robert Koch announced that using the same techniques he had used with anthrax, he had found the tuberculosis bacillus and killed four guinea pigs with it. He lamented, however, that he had not yet found a dye that would make the bacillus easy to spot. Within the year, Ehrlich had found that methylene violet would light it up unmistakably, and the two men began a long collaboration.
But identifying a pathogen and doing something about it were two different matters. Pasteur came to focus on vaccination, while Koch thought that sterilization and other public health measures were the best way to kill the bugs. Ehrlich, however, had a different idea. Why not exploit the chemical affinities between synthetics and organisms for purposes beyond diagnosis? He explained his idea this way:
It should be possible to find artificial substances which are really and specifically curative for certain diseases, not merely palliatives acting favorably on one or another symptom…Such curative substances a priori must directly destroy the microbes provoking the disease; not by an “action from distance” but only when the chemical compound is fixed by the parasites.
“Magic bullets,” as Ehrlich called these substances, would fly “straight onward, without deviation, upon the parasites.” Once scientists “learn how to take aim, in a chemical sense,” they could have a “marvelous effect”: they would be able to draw a bead on disease and kill it.
Ehrlich initially conceived of magic bullets as the body’s own weapons, and of drugs as a means to unleash them, a process he called “chemotherapy.” His investigations into immunology earned him the Nobel Prize in medicine in 1908, but by then his efforts had turned to a more direct use of chemicals: as the bullets themselves.
Ehrlich had also by then gained a corporate patron: Hoechst, to which the magic-bullet model was greatly appealing and which bankrolled Ehrlich starting in 1906. Ehrlich’s a priori approach promised not only huge reward, but also lower research costs. It took some of the guesswork out of drug development by using knowledge about the molecular structure of both drug and pathogen to narrow the field of candidates. “There must be a planned chemical synthesis,” Ehrlich said, and he had an idea of where to start.
The ongoing attempt to synthesize quinine had led researchers to look at other tropical diseases besides malaria, including sleeping sickness, the coma-inducing result of a tsetse fly bite. In 1903, one of Ehrlich’s dyes had proven toxic to sleep-sick animals. Trypan red—named after the trypanosomes, the spiral-shaped organisms that caused the disease—proved disappointing as a remedy; whatever it was doing to cure mice was not working in humans. In the course of his work, however, Ehrlich had discovered that trypan red was much more effective in the mice when it was mixed with a form of arsenic. He also heard that a couple of British doctors had used Atoxyl, a compound derived from arsenic, to treat animals infected with sleeping sickness. Atoxyl turned out not to work in humans; even worse, it destroyed their optic nerves, making them blind before they slept themselves to death.
But Ehrlich was intrigued by arsenic, which, like mercury, was an ancient remedy. And he had discovered something significant about Atoxyl and other arsenic derivatives: at the end of their chemical chain was a reactive chemical group, an open link of nitrogen and hydrogen that could easily be joined to other molecules. Ehrlich concluded that Atoxyl was effective in animals for exactly this reason: its reactive chemical group latched onto the “parasites” in the same way that dye latched onto tissues. It was perfect, in other words, for the planned syntheses that could both prove out his magic-bullet method and lead to drugs that would make Hoechst happy. And he determined that the test animals were doing a little synthesizing themselves: their metabolism converted the arsenic in Atoxyl into a more useful and less toxic compound. Ehrlich decided to follow suit, using the reactive chemical group to make a version of the arsenic metabolite. He combined this molecule with other chemicals, searching for a drug that would work in people. The first three years and 417 syntheses were failures, but the 418th attempt found its target: arsenophenylglycin, as the compound came to be known, reliably killed trypanosomes in animals and humans but otherwise left the host’s body alone.