Posts Tagged ‘U.S’
Posted on January 30, 2011 - by David
The latest episode of ‘Office Hours,’ a social-science podcast produced by several grad students at the University of Minnesota, features my recent interview with Francesco Duina, chair of the sociology department at Bates College, and author of ‘Winning: Reflections on an American Obsession.’
Since reading Duina’s book, I’ve noticed the language and mindset of competition popping up in some questionable contexts, and thanks to his insightful analysis, I’m less likely to accept without reflection that this winning/losing dynamic always makes sense. But:
For many of us, it is a simple matter of fact that, in our schools, workplaces, businesses, and everywhere else, there are winners and losers. We can either win or lose our war against fat, the peace in Iraq, recognition as best employee of the month, custody of our children, our lover’s heart, and in the words of Newt Gingrich in his recent book, even “the future” (Gingrich 2005). (Duina 182)
It’s particularly interesting that Duina’s brief list of confusing competitions includes the Gingrich book (titled ‘Winning the Future: A 21st Century Contract with America’), since “winning the future” turned out to be the catchphrase from Obama’s State of the Union Address last week. I counted 10 uses of this phrase or some variation of it, and the heading above the video on the White House website makes it abundantly clear that this was indeed the official theme of the night.
So what did people have to say about the President’s new slogan? Gingrich obviously agrees that the term fits the topic, stating that “Winning implies a real contest. Winning implies losing is possible.” (Duina would indeed have us recognize the same thing about this kind of language, though Gingrich doesn’t demonstrate how exactly it makes sense here; the point for him, I guess, is that America’s victory isn’t a sure thing.) Of course he disagrees with Obama entirely about how the future is to be won. Bill O’Reilly opens his article by poking some fun at the phrase, but ultimately buys into the concept, disagreeing in much the same fashion as Gingrich about what will make us future-winners. Sarah Palin commented on her Facebook page that the “acronym [wtf] seemed more accurate than much of the content.”
Others challenged the language itself a bit more directly. An AP article called it an “upbeat but amorphous phrase.” NPR’s Ari Shapiro noted that, “for Obama, ‘Win the Future’ has the advantage of being vague. At the end of ‘recovery summer,’ people asked where the recovery was. The future, on the other hand, is always just around the corner.” Still others got closer to the heart of the matter, questioning Obama’s use of this “amorphous phrase” to talk about competition with nations like India and China. Tim Redmond of the San Francisco Bay Guardian asked:
…since when was the future a war, something to be fought with an enemy? To “win” the space race we had to “beat” the Soviets, which we did (ha ha, we got to the moon first). To “win” the future, do we have to beat someone else? The Russians aren’t up for winning much of anything these days, but Obama seems concerned about competing with China; do the Chinese have to “lose” the future for us to “win?”
Art Carden, on his Forbes blog, The Economic Imagination, wrote:
… while a group of White House speechwriters apparently thought that “win the future” would have the same rhetorical resonance as “yes we can,” the Address conveyed an incorrect zero-sum worldview in which what others gain comes at our expense. As economics has shown over and over and over and over again, trade creates wealth. Voluntary exchange is a positive-sum game. If China gets richer, it doesn’t imperil our ability to get richer, too.
You can find similar thoughts at the Economist’s Free Exchange blog. The point is, it’s not clear why we Americans “need to out-innovate, out-educate, and out-build the rest of the world” in order to be content. I for one am not particularly upset by the fact that South Korea has better wireless access than we do, though the intonation of Obama’s voice as he tells us this suggests we all should be. It’s also odd that the first half of his speech sets up other nations as opponents, but he goes on to cite major trade agreements we have reached or are working on in Asia as evidence of the progress we’re making. Agreements imply cooperation, not competition, but I guess it’s just harder to get Americans fired up about working together than it is to construct a global economic showdown.
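To see why economists call trade positive-sum, it helps to run the numbers. Here is a minimal comparative-advantage sketch; the countries, goods, labor endowments, and productivity figures are all invented for illustration:

# Hypothetical two-country, two-good world (Ricardo's comparative advantage).
# All numbers are invented for illustration.
hours_per_unit = {"US": {"computers": 1, "shirts": 1},
                  "China": {"computers": 4, "shirts": 2}}
LABOR = 100  # hours of labor available to each country

def output(country, share_computers):
    """Units produced when a country puts this share of its hours into computers."""
    h = hours_per_unit[country]
    computers = LABOR * share_computers / h["computers"]
    shirts = LABOR * (1 - share_computers) / h["shirts"]
    return computers, shirts

# No trade: each country splits its labor evenly between the two goods.
autarky = [output("US", 0.5), output("China", 0.5)]
# Trade: the US (lower opportunity cost in computers) shifts toward computers,
# while China (lower opportunity cost in shirts) specializes entirely in shirts.
specialized = [output("US", 0.7), output("China", 0.0)]

for label, world in [("no trade", autarky), ("specialization", specialized)]:
    computers, shirts = (sum(g) for g in zip(*world))
    print(f"{label}: {computers:.1f} computers, {shirts:.1f} shirts")
# Prints:
#   no trade: 62.5 computers, 75.0 shirts
#   specialization: 70.0 computers, 80.0 shirts

The world ends up with more of both goods, and a mutually agreeable price lets each country consume more than it could alone; in this toy example, China getting better at shirt-making leaves the US better off, not worse. Nobody had to lose the future.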
I found the “education race” rhetoric to be especially troubling. “Of course,” Obama said, “the education race doesn’t end with a high school diploma. To compete, higher education must be within the reach of every American.” What are our nation’s students supposed to be racing towards? Is a student’s desire to go directly to work after high school legitimate, or does this signal that he has lost the race? Obama is determined that “by the end of the decade, America will once again have the highest proportion of college graduates in the world,” but if we take this statement apart, we see it is a relative goal, contingent upon the proportion of college grads in the nations we imagine ourselves to be competing against not rising enough to keep us from reclaiming the top spot. (To highlight the nebulous quality of this aspiration, I should mention that it could in fact be achieved without any improvement on our part, if the rates of college graduation in these competitor nations were to drop for whatever reason.)
And just what kind of education should young Americans be racing to get so that we can “win the future”? Obama emphasized the importance of math and science, and mentioned how we’re falling behind in these areas, but of course neither he nor the two men who sat behind him (Biden and Boehner) received this type of education (nor, I would guess, did the majority of those seated in the House chamber that night). What’s implied is that if we can “out-educate” in math and science, our young people will be able to “out-innovate and out-build,” and thus we will “win the future.” Certainly, improving material conditions is a worthwhile aim, but in order to navigate our increasingly important and complex relations with other nations, won’t we need students of history, language, psychology, and culture? Will better wireless coverage and faster trains help us understand the way that people in India see the world? And even more fundamentally, do we have any good reason to think that a stronger economy will erase problems like mental illness, substance abuse, suicide, and violence of the type we saw in Tucson a few weeks ago?
This isn’t intended to be a critique of Obama, but rather a reevaluation of the imprecise rhetoric which I’m sure was meant to inspire and uplift us. I think we would all do well to follow Duina’s advice about “conceptual hygiene,” and commit to using the language of winning and losing only where it actually fits. If we can do this, our words and thoughts may begin to better match reality, we might find ourselves able to articulate what it is we actually want, and perhaps we’ll start to feel a bit more at peace with ourselves and those around us. (And that includes India and China.)
Posted on January 24, 2011 - by David
Of the various writings, videos, and recollections of teachers, classmates, and former friends with which we are left to piece together a picture of Jared Loughner’s mind before the shooting, the most intelligible is a poem titled ‘Meat Head’ that he wrote for one of his classes at Pima Community College last fall.
Awaking on the first day of school
Pain of a morning hang over
Attending a weight lifting class for college credit
Attempting to exercise since freshmen year of high school
Crawling out of bed and walking to the shower
Warm water hitting my back
Thoughts of being promiscuous with a female again
Putting on a old medium red tee shirt, light brown cameo shorts, and black Adidas
For breakfast a glass of water, cold pepperoni pizza, and two Advil
Bringing my Nano Ipod with heavy metal music
Taking the local bus on a overcast morning
Waiting with crack heads after their nightly binge
Bus is cheap, two dollars for a ride anywhere in the city
Sitting in back against a hard plastic seat
Staring at stop lights, brand new cars, and graffiti
Coming to a slow halt in front of the school
Entering the gym as the glowing florescent lights are humming
Next to the treadmills, putting a green foam mat on the ground
Stretching for fifteen minutes, loosening the muscles in my legs, back and arms
Cleaning the mat with anti-bacterial spray and a paper towel
Jogging for ten minutes, my heart beating, beating, beating
Pain in my right side of the last minute of twenty
Looking around, the cute women are catching my eye
Probably waiting for their hot boyfriends wandering in the locker room
All the men are in shape with their new white tank shirts, basketball short, and Nike shoes
Confusing look on my face of no idea what to do
Deciding to copy other men’s routines of
Arm Curls, Leg presses, Rows, Squats, Military something’s, and Isolated whatever’s
Leaving the gym thinking
Waiting for the bus with alcoholics that are going to the bars early
Coming home for another shower
While grabbing the white towel, the eureka moment is lingering
Quick nap and lunch is on my mind
Setting the alarm one hour before getting ready for my next class
Getting into bed
The title already suggests a problematic identity: for any who might not know, ‘meathead’ is a derogatory term for jocks, muscular men, athletes, etc., but the poem focuses on Loughner’s own time in the gym, so is he criticizing himself? He has been “attempting to exercise since freshmen year of high school” (this “attempting” implies a lack of success), and yet the title and tone suggest he sees himself as different from, and possibly somehow superior to, those working out around him. He has trouble looking the part, as his gym clothes are mismatched and worn out (“old medium red tee shirt, light brown cameo [probably meant camo] shorts, and black Adidas”) while the others “are in shape with their new white tank shirts, basketball short, and Nike shoes.” Not only does he feel he looks out of place, he’s not sure how to act (“confusing look on my face of no idea what to do”), so he ends up “deciding to copy” what others do. It seems Loughner relates his lack of romantic or sexual involvement with the opposite sex to his inability to be like these other men, and while there is clearly an element of sexual tension in the poem, I don’t think it is as simple as Loughner being some kind of sex-crazed pervert. In fact, a MySpace posting from November 17th (“It hurts to have been never sexually active at 22!”) reveals that it is not so much the lack of sexual activity itself that is the problem as his consciousness that in this society, a 22-year-old virgin would probably be deemed abnormal. The people Loughner connects himself to most closely are the “crack heads” and “alcoholics” with whom he waits for the bus, and even this mention of public transportation seems to be another occasion to reflect on his inadequacy: he notes the contrast as he views “brand new cars, and graffiti” from his “hard plastic seat” on the “cheap” bus.
When read, ‘Meat Head’ is more depressing than disturbing, especially if one forgets for a moment who wrote it. But apparently the style of his presentation in class didn’t quite match the overall flatness of the poem. According to Don Coorough, a classmate who provided copies of two of his poems to the media, Jared “had the poem memorized, and he stood up in class and performed it with great drama — at one point, grabbing his crotch.” This performance, along with his inappropriate emotional responses to others’ poems (he laughed and joked as a tearful female student read a poem about abortion), contributed to the complaints which resulted in his suspension from Pima.
Another poem, ‘Dead as a dodo,’ may be an attempt to paint an allegorical scene, though it’s anyone’s guess who the dodo is (is it Loughner? Giffords?) or what the other objects, creatures, and movements might symbolize.
Dead as a dodo
On the island of Mauritius a heavy storm is leaving.
In the fields of the ancient wild forest a wild field of mushrooms is growing.
Snails and grasshoppers are ready for the warmth.
The old grass growing with lizards are jolting for crickets while snakes looking for lonely mice.
Falcons are flying for pray.
Shallow light Blue Ocean shimmering at each wave as the black clouds are rolling.
Waves are lapping.
Fisherman on the reefs are casting their poles.
In warm water a pack of clown fish are floating.
Tiger sharks are swimming free.
Steel drums beating in the distance.
The full moon slowly setting for the sun is rising.
At the local cemetery there is weeping.
The dodo is finally dieing.
But one wonders: why was this kid taking a poetry class when the unanswered question which proved nearly fatal for Rep. Giffords was, “what is government if words have no meaning?” His friends, at 4:06 in the video below, describe Jared’s obsession with what he perceived as the meaninglessness of language:
“He was obsessed with how words were meaningless, you know; you could say, ‘oh, this is a cup,’ and hold a cup, and he’d be like, ‘oh, is it a cup? Or is it a pool? Is it a shark? Is it an airplane?’”
While his friends, and others since the shooting, have interpreted these statements as nonsensical, he is on to something very real here, despite his difficulty in expressing it: Jared realized that words, as symbols, are arbitrary, given their meaning by the history of their (socially agreed upon) use. There is nothing in the physical composition of the object we call a “cup” that makes us use that sound and those letters to refer to it, and for Jared this arbitrariness was equal to unreality. This fact of culture, overlooked or taken for granted by most, seems to have been both exhilarating and terrifying for Loughner; exhilarating because it meant there was no good reason why he should be constrained by social conventions, and terrifying because he was, in fact, constrained – someone or something else was “controlling the grammar.”
In addition to discovering the arbitrary nature of symbols, Jared senses the importance of logic in our culture, and his attempts to make sense of his reality rest largely on a series of if-then syllogisms like those in the video above. He seems to think that by formulating his delusional beliefs (which he takes as facts) into logical statements, he has proven these beliefs true to his (at the time he made the videos, probably imagined, but now very real) “listener.”
Of course, since the premises themselves are faulty, nearly all of Jared’s syllogisms fail (a valid argument built on delusional premises proves nothing), except perhaps the following:
All humans are in need of sleep
Jared Loughner is a human
Hence, Jared Loughner is in need of sleep
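As an aside, this one syllogism really is airtight: in standard first-order notation (a sketch, with an obvious choice of predicates), it is just universal instantiation followed by modus ponens:

\[
\forall x\,\big(\mathrm{Human}(x) \rightarrow \mathrm{NeedsSleep}(x)\big),\quad \mathrm{Human}(\mathit{jared}) \;\vdash\; \mathrm{NeedsSleep}(\mathit{jared})
\]

A logician would say Jared’s other arguments fail on soundness rather than validity: the inferential machinery works, but it can only pass along whatever truth (or delusion) is fed into the premises.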
If we consider what Loughner does (or tries to do) when he sleeps, our image of him becomes even more interesting: according to his friends, Jared was an enthusiastic practitioner of lucid dreaming. His own writings refer to “conscience dreaming,” by which he presumably meant “conscious dreaming” (another term for lucid dreaming). Apparently, he preferred the dream world to the waking world, feeling a greater sense of freedom and control while asleep.
Examined in the light of Liah Greenfeld’s hypothesized mental processes, Jared Loughner’s struggle to determine his own reality demonstrates fundamental problems with his Identity which manifested in problems with the Will. But one of the most important questions, from a legal standpoint at least, will be whether or not Loughner fully understood and was in control of his actions when he opened fire on January 8th. The evidence indeed suggests this was a willful act, planned ahead of time and executed according to plan. How, then, do we reconcile this with the image of a deranged mind? In my next post on the subject, I’ll look at how Loughner’s delusional beliefs and other psychotic symptoms fit into existing definitions of mental illness, and consider what this might tell us about Jared’s mindset the moment he pulled the trigger.
Posted on October 1, 2010 - by David
I am working directly from the unpublished text of Liah Greenfeld’s forthcoming book, Mind, Madness, and Modernity: The Impact of Culture on Human Experience. All the original ideas, and all interpretations and analysis of primary and secondary source materials used to support the ideas are attributable to Liah Greenfeld. Read the introduction to the exposition here.
part 3 – Madness: A Modern Phenomenon
With all that has been written about schizophrenia and manic-depressive illness, the countless studies that have been conducted, and the growing list of medications used in treatment, it would be easy to mistakenly assume that we now understand the nature and cause of these ailments. The history of the separation of psychoses of unknown cause into these two categories leads us to Emil Kraepelin (1856-1926). This German psychiatrist believed that these were heritable brain diseases, and he led a revolution in classification in German-language psychiatry around the turn of the twentieth century, trying to discover just what kind of brain diseases he was dealing with. Kraepelin used a Latin version (dementia praecox) of the French term démence précoce (coined in 1852 by Bénédict Morel) to distinguish a form of insanity with an early onset and rapid development from the common geriatric dementia. Kraepelin then separated dementia praecox from manic-depressive insanity (called by the French folie circulaire or folie à double forme). Up until that point, the two conditions were believed to constitute one general category of insanity.
Kraepelin’s use of the term dementia praecox, which suggested a progressive slowing of mental processes, to refer to a condition characterized largely by delusions and hallucinations (which imply not mental lethargy but imaginative hyperactivity), may have contributed to the misinterpretation of schizophrenia (still common today) as a degeneration of cognitive/reasoning capacities. The evidence suggests that it is rather the strange character of thought, the inability to think in normal, commonly accepted ways, which distinguishes schizophrenia from geriatric dementia. The name “schizophrenia” (meaning “splitting of the mind”) was introduced to replace dementia praecox in 1908 by the Swiss psychiatrist Eugen Bleuler. Bleuler saw the disease mainly in terms of four features: abnormal thought associations, autism (self-centeredness), affective abnormality, and ambivalence (inability to make decisions). Then in the 1930s, another German psychiatrist, Kurt Schneider, contributed greatly to the diagnosis of schizophrenia by identifying “first-rank symptoms,” primarily related to hallucinations and delusions. Hearing voices speak one’s thoughts aloud, discuss one in the third person, and describe one’s actions; feeling that an outside force is controlling one’s bodily sensations or actions and extracting, inserting, or stopping thoughts; believing that one’s thoughts are “broadcast” into the outside world – these are some of the experiences which Schneider found to be characteristic of the illness which Bleuler had recently renamed.
It should be noted that although Schneider’s first-rank symptoms are essentially psychotic symptoms (and schizophrenia is by definition a psychotic illness), very often those diagnosed with schizophrenia do not experience these symptoms. Diagnostic standards today distinguish between positive symptoms (symptoms like hallucinations and delusions which are not present in healthy individuals) and negative symptoms (e.g., blunted affect, lack of fluent speech, inability to experience pleasure, lack of motivation). Anti-psychotic medications are often effective in treating some of the positive (i.e., psychotic) symptoms of schizophrenia, but attempts to alleviate negative symptoms with medication have been largely unsuccessful, and the prognosis tends to be worse for sufferers who experience primarily negative symptoms.
By far the most authoritative and extensive work (over 1,200 pages long) on that other half of madness is Manic-Depressive Illness: Bipolar Disorders and Recurrent Depression, written by Drs. Frederick Goodwin and Kay Redfield Jamison. The subtitle (Bipolar Disorders and Recurrent Depression), added for the 2nd edition (published in 2007), emphasizes the essential unity of all the major affective illnesses. In the introduction, the authors stress their reliance on Kraepelin’s model for their own conceptualization of mdi. (They, like Kraepelin, see it as a brain disease with genetics playing a significant causal role.) But because Kraepelin’s major act of classification was to divide psychotic illness into two distinct disorders, any definition of mdi based on his work depends on having a clear definition of schizophrenia, which is clearly lacking. Kraepelin’s distinction between the two was based primarily not on differences in symptoms, but on course of illness and outcome, with schizophrenia (or in his terminology, dementia praecox) being much more malignant and causing significant deterioration over time. It was in fact Eugen Bleuler who first called mdi an “affective illness,” not because schizophrenia occurred without major mood disturbance, but because in mdi he saw it as “the predominant feature.” This characterization has proven to be extremely important for the current conception of major mental illness; the original distinction between two psychotic illnesses has largely been obscured, and mdi is now viewed essentially as a mood disorder, with schizophrenia, by contrast, appearing to be essentially a thought disorder.
Though manic-depressive illness includes a variety of mood disorder diagnoses, the main distinction is between major depression and bipolar disorder (alternating episodes of depression and mania). A few decades ago, the bipolar label was split into bipolar-I and bipolar-II. Bipolar-I is the severe form of the disease in which both depressive and manic episodes are serious enough to require treatment. A diagnosis of bipolar-II may be given when a patient suffers from major depressive episodes and also experiences “hypomanic” episodes (meaning basically “mildly manic” and therefore lacking psychotic features). Even Goodwin and Jamison seem skeptical of the value of this and other divisions in classification.
In order to compare manic-depressive illness with schizophrenia, then, we should concentrate on descriptions of (go figure) depression and mania. According to the DSM-IV, typical symptoms of depression include “loss of interest or pleasure in nearly all activity,” irritability, “changes in appetite and weight, sleep, and psychomotor activity; decreased energy; feelings of worthlessness or guilt; difficulty thinking, concentrating, or making decisions; [and] recurrent thoughts of death or suicidal ideation, plans, or attempts.” The description given by Goodwin and Jamison is along the same lines, though much more vivid:
Mood in all of the depressive states is bleak, pessimistic, and despairing. A deep sense of futility is often accompanied, if not preceded, by the belief that the ability to experience pleasure is permanently gone. The physical and mental worlds are experienced as monochromatic, as shades of gray and black. Heightened irritability, anger, paranoia, emotional turbulence, and anxiety are common. (MDI 66)
Further descriptions from patients and clinical observers add more layers to this general body of symptoms; among the most interesting are a lack of facial expression and a sometimes frightening sense of unreality. It is quite clear that depression is something altogether different from normal sadness, and even from “abnormally low mood.” These descriptions show a huge variation in the level of emotion experienced, from almost no feeling at all to unbearably acute anxiety. A depressed person’s thinking may be slowed almost to the point of paralysis, or he may alternately be unable to control an unending torrent of painful thoughts. All that seems consistent across descriptions and definitions of depressive episodes is that the experience is extremely unpleasant.
There is such a diagnosis as psychotic depression (featuring obvious delusions and hallucinations, in which case it is not clear how it can be diagnosed differently from schizophrenia), but even in its more ordinary form, many of the symptoms of depression cannot be easily distinguished from the negative symptoms of schizophrenia, which include flat affect and paralyzed thought. And what good reason is there not to consider the firm belief in one’s utter worthlessness, the obsession with death, and the sense of the absolute necessity of ending one’s life as instances of delusion or thought disorder?
Just as depression is not just extreme sadness, mania is not an exaggerated form of joy. According to the DSM-IV, a manic episode is a period of “abnormally and persistently elevated, expansive, or irritable mood,” with typical symptoms being “inflated self-esteem or grandiosity, decreased need for sleep, pressure of speech, flight of ideas, distractibility, increased involvement in goal-directed activities or psychomotor agitation, and excessive involvement in pleasurable activities with a high potential for painful consequences.” To be considered a manic (rather than merely “hypomanic”) episode, “the disturbance must be sufficiently severe to cause marked impairment in social or occupational functioning or to require hospitalization, or it is characterized by the presence of psychotic features.” Mood within a manic episode may be highly variable, and the frequent alternation between euphoria and irritability is noted.
Grandiose delusions are common – the extreme expression of the inflated sense of self-importance so typical in mania. (Again, one wonders why the beliefs which spring from the typical sense of worthlessness in depression – the polar opposite of the grandiose beliefs in mania – should not be considered delusions as well.) Grandiosity often manifests in compulsive writing which the sufferer may believe has special significance but which is usually characterized by “flight of ideas” and “distractibility.” This behavior is not unique to mania, and has been well documented in patients diagnosed with schizophrenia.
Delusions may be not only grandiose but (as in schizophrenia) paranoid as well. In some severe cases, the sufferer may reach the stage of delirious mania, which the authors of MDI describe by quoting Kraepelin:
At the beginning the patients frequently display the signs of senseless raving mania, dance about, perform peculiar movements, shake their head, throw the bedclothes pell-mell, are destructive, pass their motions under them, smear everything, make impulsive attempts at suicide, take off their clothes. A patient was found completely naked in a public park. Another ran half-clothed into the corridor and then into the street, in one hand a revolver in the other a crucifix….Their linguistic utterances alternate between inarticulate sounds, praying, abusing, entreating, stammering, disconnected talk, in which clang-associations, senseless rhyming, diversion by external impressions, persistence of individual phrases are recognized. …Waxy flexibility, echolalia, or echopraxis can be demonstrated frequently. (36)
The descriptions of delirious mania provided by recent clinicians are similar to Kraepelin’s. Quite obviously, a patient in the condition described above is suffering from some of the most characteristic symptoms of schizophrenia. Of course, for those following in Kraepelin’s footsteps, this similarity should come as no surprise, since (as was mentioned earlier) his distinction between the two psychotic disorders was not based on differences in symptoms. Indeed, the need to clarify the blurry boundary between psychotic mania and schizophrenia has resulted not in further distinction, but in the creation of hybrid diagnostic categories like schizoaffective and schizo-bipolar. In summarizing the findings of a number of studies over a thirty-year span comparing thought disorder in schizophrenia and mania, Goodwin and Jamison are forced to conclude that there is no quantitative difference in thought disorder between the two conditions. Nevertheless (needing to preserve the distinction between their area of expertise and the even more mysterious realm of schizophrenia), they maintain there are qualitative differences in thought disorder, though the studies used to support this claim point in a number of different directions. Of course, these studies were done only after patients received a particular diagnosis, so differences in thought disorder may also have been related to the effects of different medications. After considering the huge overlap between these two diagnoses, and the fact that differences seem to be more of degree than of kind, it seems possible that they are not two distinct diseases after all.
While the technological advancements of recent decades allow us to map the human genome and look at the brain on the molecular level, the enormous amount of data that has been amassed is virtually useless for psychiatrists trying to diagnose their sick patients because the assumed biological causes of schizophrenia and manic-depressive illness have not been found. No brain abnormalities that are specific to either illness or present in all cases have been identified. Nevertheless, the experts who study and treat schizophrenia and mdi keep the faith (quite literally) that a breakthrough is just around the corner.
For years, genetic research has appeared to be the most promising of the recently opened avenues, but the excitement seems unwarranted by the findings. The relatively large number of chromosomal regions which may be implicated in susceptibility for bipolar means that hope of finding a specific bipolar gene or even a small number of genes must be given up. Some researchers think the way to go is to narrow the search by looking for genes associated with specific aspects of the disease. Of course, this further refinement is only possible because of the huge variation in symptoms and experiences of those who fall under the mdi/bipolar umbrella, and we are once again reminded of the difficulty of defining what this illness or group of illnesses even is. Furthermore, even the distinction between schizophrenia and mdi seems to collapse in light of the genetic linkage data. Goodwin and Jamison write:
While the search for predisposing genes had traditionally tended to proceed under the assumption that schizophrenia and bipolar disorder are separate disease entities with different underlying etiologies, emerging findings from many fields of psychiatric research do not fit well with this model. Most notably, the pattern of findings emerging from genetic studies shows increasing evidence for an overlap in genetic susceptibility across the traditional classification categories. (49)
Genetic studies in the schizophrenia research community lead to pretty much the same hypothesis as with bipolar: genetic susceptibility is most likely polygenic, meaning dependent on the total number of certain genes which may contribute to vulnerability. It must be noted that genetic vulnerability is a condition, not a cause of schizophrenia and bipolar – something else must be acting on this vulnerability. In one way or another, this fact is usually noted in the literature that deals with genetic data, but it is often obscured by a tone of confidence which suggests the information may be more meaningful and explanatory than it truly is.
Even when a specific gene has been well studied across illnesses, its usefulness in understanding genetic susceptibility may be extremely limited. Some studies in both schizophrenia and mdi have found an increased risk of illness for those who possess the short form of the serotonin transporter promoter gene 5-HTT. The thing is, each of us has two copies of this gene, and over two-thirds of us have one long and one short form, meaning that having the normal variant of the gene is the risk factor! If most of us possess a gene which puts us at risk for an illness which only a small minority of people have, then this particular trait is obviously not much of a causal explanation.
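A little arithmetic makes the point vivid. In the sketch below, the 1% lifetime prevalence is the conventional figure discussed later in this post, the two-thirds carrier rate comes from the paragraph above, and the relative risk of 1.5 is an assumed, illustrative value (of the modest size such association studies tend to report):

# How much does a "risk gene" carried by most of the population actually tell us?
prevalence = 0.01   # conventional ~1% lifetime risk
carriers = 2 / 3    # fraction of people with at least one short allele
rel_risk = 1.5      # assumed risk ratio for carriers vs. non-carriers (illustrative)

# Overall prevalence is a weighted average of the two groups' risks:
#   prevalence = carriers * p_carrier + (1 - carriers) * p_noncarrier,
# with p_carrier = rel_risk * p_noncarrier. Solving:
p_noncarrier = prevalence / (carriers * rel_risk + 1 - carriers)
p_carrier = rel_risk * p_noncarrier

print(f"lifetime risk without the short allele: {p_noncarrier:.2%}")  # 0.75%
print(f"lifetime risk with the short allele:    {p_carrier:.2%}")     # 1.12%

Under these assumptions, carrying the supposed risk variant moves lifetime risk from about 0.75% to about 1.1%; more than 98% of carriers never develop the illness, which is exactly why such a trait cannot serve as a causal explanation.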
Still today, the most important evidence for the heritability of schizophrenia and bipolar comes from traditional genetic-epidemiological studies – “genetic” research only in the sense that we know that relatives share genes. There is a significantly greater lifetime risk of illness for people with a first-degree relative who suffers from schizophrenia, and studies of bipolar and major depression (i.e., manic-depressive illness) have had parallel findings. However, the overwhelming majority of schizophrenics do not have parents or first-degree relatives with schizophrenia, and most of them do not have children themselves, making it difficult to establish the genetic component by looking at family history in a large percentage of cases.
Studies of twins are particularly important for the heritability argument. Calculations from these studies find a 63% risk of having bipolar disorder if an identical (monozygotic) twin has it. The risk for major depression is significantly lower, and in schizophrenia the risk is under 50%. The ideal study design for attempting to separate the contributions of biology and environment involves identical twins, separated at birth, adopted, and raised apart, with at least one of them suffering from mental illness. As can be imagined, these cases are hard to come by (4 in mdi and 14 in schizophrenia), and the small number of cases makes generalization suspect (though generalizations are often still made). Another method, for which there is significantly more data, is to compare the risks of identical (monozygotic) and fraternal (dizygotic) twins. Because both kinds of twins are assumed to share the same environment, but fraternal twins share only 50% of their genes on average, the difference in risk between fraternal and identical twins is attributed to genetics. But this method depends on an extremely limited understanding of environment, reducing it to simply having the same parents. It’s likely that identical twins will be treated in very similar ways by their parents and society at large, but fraternal twins, being biologically different (perhaps even in gender), will likely be treated in very different ways. Therefore, it is highly doubtful that twin studies are able to separate the contributions of biology and environment to lifetime risk of mental illness to anywhere near the degree that is suggested. The fact that over one-third of identical twins are not affected by the disease from which their twin suffers reveals again that genetic susceptibility is at most a condition, and not a cause, of schizophrenia and mdi.
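It is worth seeing how such twin figures get converted into the “heritability” estimates one reads about, because the calculation builds in exactly the equal-environments assumption criticized above. A common rough estimator (Falconer’s formula) simply doubles the gap between identical and fraternal twin concordance. Plugging in the 63% identical-twin figure for bipolar cited above and a hypothetical fraternal concordance of 14% (a figure of the size such studies report; it is not given in the text):

\[
h^2 \approx 2\,(c_{MZ} - c_{DZ}) = 2\,(0.63 - 0.14) = 0.98
\]

Nearly all of the variance gets booked to genes, but only because the formula credits every difference between the two kinds of twin pairs, including how differently they are treated, to genetics by construction.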
The prevailing assumption that schizophrenia and mdi have biological causes naturally leads to the expectation of finding them distributed uniformly across cultures and throughout history. In the case of schizophrenia, this belief justifies the adoption of the standard worldwide lifetime risk of 1% (a nice round number), extrapolated from an embarrassingly small number of studies – one from Germany in 1928, and two from the 1940s in rural Scandinavian communities. However, there is a serious lack of evidence of the existence of these illnesses before the early modern period, and studies have consistently found significant differences in the rates of mental illness across cultures and between social classes within cultures. Nevertheless (perhaps because the idea that serious mental illness may affect different populations at different rates does not sit well with us), variations are often explained away with charges of inaccurate reporting and under- or over-diagnosis. But epidemiological studies sponsored by the World Health Organization carried out over several decades have found that the illness identified as schizophrenia in poorer, “developing” countries tends to be less chronic (fewer psychotic episodes), causes less disability, and has a better prognosis than schizophrenia in more affluent, “developed” societies. Some of the data from Western nations suggests a lifetime risk of schizophrenia greater than 1%, while in poorer societies the number often appears lower. Multiple studies have found the rate of schizophrenia among Afro-Caribbeans born in the UK to be higher than the prevalence in the islands from which their families immigrated. Both schizophrenia and mdi have been found to be less prevalent in Asian countries.
Overall, cross-cultural data supports the hypothesis that schizophrenia and mdi are diseases caused by modern culture, and more specifically, that the more anomic a society becomes (i.e., the more identity becomes a matter of individual choice and the less guidance is given by culture), the more mental illness will be found. Research in the U.S. has shown a lower age of onset and higher rates of prevalence for manic-depressive illness in those born after 1944 compared to those born before, though this increase has been attributed to the inadequacy of earlier data-collection techniques, which systematically underestimated the true prevalence of affective disorders. Usually, when environment is allowed a causal role in mental illness, poverty and the stress of the urban environment are the safest targets to blame, with studies as early as 1939 finding a higher incidence of schizophrenia in lower-class, urban areas. However, when studies began to consider social class of origin rather than merely the status of the patient when the illness was first recognized, the picture changed significantly. The social mobility of schizophrenic patients displays a “downward drift,” suggesting that their greater proportion among the lower class is due to the disability of the disease rather than the stress of this environment. Furthermore, it appears that the upper class supplies more schizophrenics than could be predicted by the total upper-class share in the population. The majority of studies of manic-depressive illness show significantly lower rates in blacks compared to whites, but this, like so many other findings which make no sense within the biological framework, is dismissed for a variety of reasons as a mistake.
Finally, Goodwin and Jamison tell us that “the majority of studies report an association between manic-depressive illness and one or more measures reflecting upper social class.” (169) To explain this finding, they consider the possibility that certain personality traits associated with affective illness may contribute to a rise in social position. (One assumes they mean the occasionally “positive” aspects of mild mania, since it is unclear how crippling depression or delusional mania would aid in social climbing). A second hypothesis, that manic-depressive illness could be related to the particular “stresses of being in or moving into the upper social classes,” is deemed simply “implausible, because it assumes that, compared with lower classes, there is a special kind of stress associated with being in the upper social classes, one capable of precipitating major psychotic episodes.” Furthermore, they accuse such a hypothesis of ignoring genetic factors, though discounting genetic vulnerability as a condition for mdi is quite obviously not implied by this idea.
By now it should be quite clear that the belief that major mental illness is caused biologically has made it virtually impossible to reconsider what the empirical evidence actually tells us. Each time the research that is supposed to support this belief comes up short, it is another occasion for the reaffirmation of faith in a soon-to-come breakthrough. Where the data appear to blatantly contradict the hypothesis, researchers often simply discount the data’s reliability. While many of the most important experts will freely admit how little we actually understand about mental illness, despite all efforts, it is hard to imagine the direction of these efforts will change much anytime soon. This is not a recipe for scientific progress.
The final post of this series will bring Greenfeld’s theory of the mind together with what we know about schizophrenia and manic-depressive illness, considering the two as one disease existing on a continuum of complexity of will-impairment.
Posted on May 19, 2010 - by David
Ethan Watters, author of Crazy Like Us (see my commentary), was on PRI the other day (listen to the audio here). I was pleased to get a prompt response to my comments, which I’ve copied below. Go here to follow the entire discussion in the PRI science forum.
Posted on April 8, 2010 - by David
Since Barack Obama’s election in November of 2008, health care reform has been at the center of our nation’s attention. This debate has been almost inseparable from discussion of the ongoing economic crisis, both because of the question of what effect reform measures might have on the economy, and because many individuals are unemployed or making less money and cannot afford health insurance. Naturally, because the most visible players in this game are politicians, much of the discourse surrounding the health care debate has been boiled down to Democrats vs. Republicans, or (to essentially say the same thing twice) liberals vs. conservatives. Obama and the Democrats have been demonized by their opposition as socialists instituting a totalitarian regime, and the responses of those pushing for reform have not been much kinder.
As we all know, the new legislation has been passed, and with the sounds of celebration on one side and apocalyptic lament on the other, it still seems easiest to conceive of this issue in terms of a conflict between two political parties or groups with opposed economic interests. It strikes me, though, that there is a more basic and more powerful force driving this debate, and it goes straight to the heart of what it means to be an American. This is the contradiction between our two supreme national values: liberty and equality.
The chapter on America in Liah Greenfeld’s Nationalism: Five Roads to Modernity provides an excellent analysis of how conflicting ideals of liberty and equality have shaped the nation’s history and identity.
“It must be realized that individualistic-libertarian nationalism sets itself an impossible task. A nation, ideally, is a society composed of individuals equal in their human worth. But in fact such perfect equality cannot be achieved. The reality of an individualistic nation and its ideals are necessarily inconsistent, and this inconsistency breeds discontent and frustration.” (449-450)
Before America was truly a “nation” (according to Greenfeld’s careful account, this was not until after the Civil War), and in fact even before independence from England, liberty and equality existed here to a greater degree than in any country across the Atlantic, conferring on the individuals who experienced these values an unparalleled sense of dignity. Of course, despite rhetoric about the natural birthrights of all mankind, initially only white male land-owners could enjoy these rights. The inconsistency between social reality and the professed ideals would inevitably have to be confronted. Greenfeld writes of the decades preceding the Civil War:
“That equality in American society had advanced beyond anything imaginable elsewhere at the time cannot be disputed. But the American society was also committed to equality to an extent that was unimaginable elsewhere. Thus, while the reality in America in this regard was incomparably better than in any other society, the gap between it and its brilliant ideal was nonetheless wider.” (452)
“Inequality inherent in social reality was blatantly inconsistent with American national commitment. In a society which believed that “all men are created equal,” the denial of equality meant that one was not human, was less of a human than others.” (453)
Of course, slavery, the most blatant contradiction of the national principles, was eventually abolished, and, though it took almost another 60 years, women were given the right to vote in 1920. In many ways, the course of American history can be characterized as an attempt to close the gap between social reality and the national ideals. But there is an important distinction between equality of rights under the law and equality of conditions. Equal rights can be, to a large extent, provided by government, but they cannot guarantee that conditions will be equal. On the contrary, in a free society where individuals are not only able, but expected to achieve for themselves what they can, inequality of conditions will necessarily result. Even if we admit that “all men are created equal” does not mean that everyone is born with the same natural ability, Americans still desire equality of opportunity – the sense that we all start on a level playing field, that achievement will not be dictated by inherited wealth, geography, or social connections. But common sense and observation tell us that equality of opportunity is not achieved through equality of rights. The federal government, then, becomes a means of creating equality of conditions:
“In a society which sets great store by equality, economic inequality acquires a significance which goes beyond the effects of differences in material well-being. It is necessarily seen as unjust by the “have-nots” and is perceived as an affront to their dignity, because it belies the proposition that all men are created equal and have equal rights to life and happiness. Equality in liberty (that is, self-government) becomes less important in such situations. In fact, rather than being regarded as an absolute good, it is likely to be seen as a tool for the perpetuation and concealment of existing inequalities. Liberty is infinitely divisible; other goods are not. An increase in the liberty of another does not imply a proportional decrease in one’s own; increase in another’s share of a finite quantity of something, whether power or wealth, does. When these resources become scarce, the demand for equality of opportunity, dignity, and respect commensurate with one’s abilities gives way to the demand for equality of result. It is clear that equality of opportunity, which does not provide for the equality of result, would appeal more strongly to those who have the qualifications necessary to realize the opportunities open to them. It is also clear that in the early American society, actually characterized by equality of conditions, equality of opportunity would be generally acceptable without special provisions for the equality of result because it would appear that the latter was implied, inherent in the former. But when actual equality of conditions no longer obtains, the provisions for equality of opportunity only (the legal equality of rights) must appear unsatisfactory. The transformation in the nature of desired equality began to be evident in America in the 1830s. It initiated the transformation in the perception of the functions of the government: government as essentially a protective agency (guarding against encroachments on the people’s rights by others) no longer appeared sufficient; there was a feeling that it should act as a distributive agency. This, in turn, affected the attitudes towards centralization, making it acceptable and even necessary.” (439)
We know that equality of health care does not exist, and this bothers us. It is difficult to accept that some individuals might be held back from achieving their goals or providing for their families, that children might not grow up to enjoy life on earth to the fullest, because they could not afford a treatment that millions of others receive. At the same time, we understand that health care is not an infinite resource – many opponents of the new legislation who are more or less satisfied with their current health insurance fear there will be “rationing” of care. This idea is offensive to the American mind, because it limits the individual’s ability to obtain the level of care he has worked hard for. Those wealthier individuals who currently enjoy a high level of care also fear having to pay a greater portion than others in order to fund a system which threatens to limit their choices. Can coverage be expanded without also being limited? It seems some must give more so others can receive more. While the goals may be liberty and equality, this legislation may actually send a contradictory message. If some people experience a reduction in their level of care, does that tell them that hard work doesn’t pay off after all? Does providing coverage to the uninsured send the message that they are unable to provide for themselves?
Of course there are also serious concerns about how this health care bill will impact the national debt. If the nation’s commitment to equality results in a worsening of the economic situation, liberty and equality will be put at further risk. But the supreme value we place on the individual life, and the belief in equal access to all forms of treatment as a fundamental right, may prove to be dangerously expensive. In a March 31st article from the New England Journal of Medicine, Dr. Molly Cooke says it’s time to give up the old lie that doctors give 100% to each and every patient, and advises that considerations of cost must be taught in med school:
“…we must abandon the myth of the physician as single-minded advocate for any amount of benefit for every patient. We make all kinds of choices in caring for patients; some involve denying care that patients perceive as — and that might actually be — beneficial. Given that we make value-based decisions about the deployment of other finite resources, such as our time and the use of beds in the intensive care unit, why not about costly treatments? In fact, numerous studies in the United States and Europe confirm that bedside rationing of care is common practice. Problematically, it is done in an occult and unpredictable manner.”
Practical as this sounds, I think these are tough words to hear for most Americans. When it comes to health and life, the idea of dollar-value calculations is extremely distasteful to us. It is not that we are fundamentally opposed to the idea of discontinuing care or deciding against a potentially beneficial treatment. It makes perfect sense to us when a family decides it’s time to “pull the plug” on a relative whose chance of recovery is virtually non-existent. We don’t question the cancer patient who finally elects to move home and receive hospice care rather than undergo another risky and painful surgery, even though it could buy him some extra time. We accept these decisions because they are made on the basis of individual dignity and liberty. Of course, with advances in medicine and technology our options are constantly multiplying. That any of these options might be denied to us on the basis of cost almost amounts to cultural blasphemy, but this may be the reality.
A New York Times article published online yesterday also acknowledged the impending cost crisis in health care. Author David Leonhardt identifies some of the same cultural values I mentioned above as obstacles, but is hopeful that reform measures which require that patients be provided with more information may actually help to keep spending down:
“The health act requires Medicare and other agencies to help hospitals and doctors give patients more information — which is practically a no-lose proposition. In the course of receiving more control and more choice, two distinctly American values, patients will probably help hold down costs.”
Whether or not this proves to be true, it’s interesting that this potential solution still relies on the sense of individual liberty and dignity.
My intention is not to guess at whether or not health care reform will work, but to suggest that the principles driving this national debate are older than the nation itself. In an introduction to President Obama’s speech following the historic signing, Vice-President Joe Biden told us that, “For much too long, for much too long, Americans have been denied what every human being is entitled to — decent, affordable health care.” I am sure that 200 years ago, no one in the nation could have imagined health care as a basic human right, but the identification of the American value of equality with basic human rights would have made perfect historical sense. For better or worse, it seems America will keep striving to create a world that matches our impossible ideals.
Posted on March 27, 2010 - by David
Over the last several weeks, the preliminary approval of new social studies curriculum standards by the Texas State Board of Education on March 12th has turned into major national news. As the story goes, because Texas is one of the nation’s largest textbook purchasers, the standards it sets will impact the content of textbooks across the country as publishers try to meet the Lone Star State’s requirements. And why is this such a problem? Because a group of conservative board members pushed through a number of controversial revisions, and rejected many of the changes proposed by liberals in a 10-5 vote split down party lines.
These changes include:
- An emphasis on the Christian identity and values of the founding fathers and a shift away from teaching about the separation of church and state. (As a result, Thomas Jefferson gets scratched off the list of thinkers who inspired revolutions in the 18th and 19th centuries, replaced, according to the New York Times, by St. Thomas Aquinas, John Calvin, and William Blackstone.)
- Referring to the U.S government as a “constitutional republic” rather than calling it “democratic.”
- Using the term “free-enterprise system” in place of “capitalism” to avoid its negative connotations.
- Including in discussions of McCarthyism that “the later release of the Venona papers confirmed suspicions of communist infiltration in U.S. government.”
- A greater focus on the conservative movement of the ’70s and ’80s
If you’re really interested in finding out about the revisions, I’d suggest you skip the major news outlets and check out this annotated version of the Board’s standards that was put together by writers at www.texastribune.org.
For your viewing pleasure, here’s a clip from ABC’s Nightline, highly critical of lame-duck board member Don McLeroy who seems to be the driving force in this “conservative bloc.”
And I couldn’t resist including the less reverent but more entertaining perspectives of Comedy Central’s Jon Stewart and Stephen Colbert.
[Video: The Daily Show with Jon Stewart – “Don’t Mess With Textbooks”]
[Video: The Colbert Report – “I’s on Edjukashun – Texas School Board”]
If you watched the clips above or read any of the news articles out there, you probably picked up on the less than subtle jabs at some of the prominent conservative board members who have little or no background in education or history. The New York Times refers to Don McLeroy as “a dentist by training.” David Bradley is characterized as “a conservative from Beaumont who works in real estate.” Whether or not these are valid criticisms, they’re definitely easy shots, and it’s hard to blame journalists for taking them.
So what happens next? The Texas Education Agency website will post a document containing the revisions by mid-April, at which time an official 30-day public comment period will begin. But everyone seems to expect that when the Board reconvenes in May, the ratification of the new curriculum will occur without much further discussion.
When I started to think about what these changes might actually mean to students, a thought occurred to me which wasn’t mentioned in any of the media coverage. I’m not sure how else to put this, so I’ll just say it… Most high school students will not read these textbooks. They will be able to pass U.S history with a minimal amount of reading if they pay a little bit of attention in class and maybe take some notes when the teacher reviews the material. All this hype is based on the assumption that students are actually reading what’s printed, but what if that’s not the case?
I’m pretty sure my experience with American history was not typical. I attended a large public high school in the suburbs of Philadelphia. There were around 750 students in my graduating class. In 11th grade, I took AP (Advanced Placement) U.S. History. I believe there were only two sections of the class, so if each class had approximately 25 kids, that’s about 50 total for the year. That means over 90% of my classmates got some other, less rigorous education in our nation’s history, split up between classes designated as honors, college prep, career prep, and basic instruction. Besides the fact that we were (supposed to be) the best and the brightest of our class, we had real incentive to learn, because we were all hoping to score a 4 or a 5 on the AP exam and receive college credit for our work in the class (BU actually gave me credit for two U.S. history classes).
This class was no joke. Our main text, The American Pageant, was the fattest book in my locker, over a thousand pages long, and we were expected to have read a good chunk of it over the summer before we showed up to school. Its companion was The American Spirit, a book of primary source materials compiled by the authors of The American Pageant. Add to this occasional readings from After the Fact: The Art of Historical Detection, a collection of case studies from American history designed to teach students to think critically about context and how “history” and “the facts” come into being. Of course there were also novels, biographies, and other historical texts which the ambitious or desperate-to-pull-his-grade-up-at-the-last-second student could read and write about for extra credit, and the various relevant news and magazine stories that our teacher brought in from time to time.
There was a lot to read, and I didn’t come close to reading it all. I don’t think any of us did (except maybe our valedictorian, who Google tells me is currently doing graduate work in quantum physics at Stanford – my head hurts just looking at it). I’m sure a lot of students were like me and tried to cram as much of the textbook as possible into their minds in the few nights before the AP test in May. Throughout most of the year, I skimmed the textbook on some of the nights we had assigned reading (which was most nights), and neglected to even carry the massive thing home on others. I think I probably absorbed the majority of the information through the instruction and discussion that took place in class each day.

I don’t think I’m generalizing too much from my own experience when I say that regardless of the level of the history class, the teacher’s particular methods and emphases have more of an impact on what students will learn than what is written in the textbook. A good teacher will acknowledge when there is controversy on a particular topic, present the various positions with as little bias as possible, and encourage students to think critically about the information before they jump to conclusions. Obviously, this is an ideal, and there’s no doubt that the political views of a history teacher are likely to become visible in the classroom, at least occasionally. In my class, we spent a good deal of time talking about current events (which tend to be the most charged with emotion), and considering that this was the year the Towers fell and the war in Afghanistan started, there was plenty to talk about. Interestingly, because we had so much to cover, we barely even got up to 1980 in our textbook, so the years of this “conservative resurgence,” which seem central to the Texas Board’s amendments, were passed over fairly quickly.
The point is, regardless of how influential the Texas Board of Education may be in the composition of new American History textbooks, the claims that they are determining what the rest of the nation’s kids will learn are exaggerated. As journalist Brian Thevenot of The Texas Tribune points out in ‘The Textbook Myth’ (by FAR the best article I’ve read on this subject), technology has made it much easier for publishers to customize their content to meet the standards of different states, lessening the impact of Texas’s large market share. Even within Texas, new laws regarding digital materials may undermine the power of the conservatively-crafted textbook. Thevenot writes:
Because of their sheer buying power, large states with statewide textbook adoption processes did once indeed influence what went into the books, which used to be printed almost exclusively in national editions, Diskey and other industry experts said. But since the mid-1990s and the rise of the state curriculum standards and testing movement, publishers have increasingly been forced to customize their books for different states, as well as for larger school districts in the roughly 30 states without statewide adoptions. Simultaneously, advances in publishing and printing technologies allow far more customization at lower cost, much like large newspapers that issue several geographically customized editions every day.
What’s more, rapidly shifting politics and the digital revolution in instructional materials promises to dilute the power of state school boards even further — both here in Texas and nationally. Texas remains one of only two states that has shunned the national standards movement being pushed out of Washington, which, if it progresses as expected, would no doubt dwarf the market influence of even giant states. And here in Texas, new legislation that impinges on the board’s previously well-guarded curricular turf allows Commissioner of Education Robert Scott, who does not report to the board, to create a separate list of approved digital materials over which the board has no say. The new law only requires that schools buy one “classroom set” of board-approved textbooks, rather than one for every student.
As Thevenot’s article suggests, even the idea that the new curriculum standards will drastically alter what students in Texas learn seems suspect. I understand that there are several layers of administration from the state to the district to the individual school which prescribe and monitor what kids should be learning. I also understand that for a teacher, going against the grain or trying to squeeze in extra lessons on excluded or controversial subjects can be risky and complicated. But if a teacher wants to spend 10 minutes talking about a little-known Latino figure like Oscar Romero, or allow an interested student to write a report about him, does the Texas Board of Education’s vote against including him in the curriculum do anything to prevent that?
Though I’m skeptical about the impact of these changes, I’m not saying that what’s going on in Austin doesn’t matter. Certainly, I believe the attempt to balance out the perceived liberal bias by unabashedly injecting a conservative slant into the new standards demonstrates a serious misunderstanding of what it means to teach history. And the idea of a governmental mandate that praise for America’s “limited form of government” be included in the history books strikes me as just a bit ironic. But there is one important domain that all of the conservative muscle of the Texas Board of Education can’t do much to reshape: the internet. I suspect that as time goes on, despite what teachers and administrators might hope, kids who have grown up online will rely more heavily on Google and Wikipedia for the answers to their history questions than on the textbooks that get handed out at school. Simply put, it takes more effort to flip through a thick book and scan for key information than it does to type the name of an important historical figure into a search engine and find that information already neatly packaged in hyperlinked, outlined form. Will this make lazy students even lazier? Perhaps, but I think it also opens doors for those students who are curious about what really happened. If there is controversy over a certain subject, they won’t have to look that hard to find it, and after informing themselves, they can draw their own conclusions. The message to conservatives on the Texas State Board of Education: don’t be surprised if these historical conclusions aren’t the same ones you’re about to vote into law.
Posted on March 22, 2010 - by David
81 Words, a 2002 episode of NPR’s This American Life that was recently rebroadcast, tells the story behind the removal of the homosexuality diagnosis from the DSM-II in 1973. You can download the audio or read a transcript of the show here: part 1, part 2.
The report is given by Alix Spiegel, whose grandfather, Dr. John P. Spiegel, was president-elect of the American Psychiatric Association in 1973 when this historic change took place. Alix describes the family myth – that grandpa had single-handedly changed the APA’s position on homosexuality and removed one of the major barriers to equal rights for homosexuals in America. The truth, she says, is actually much more complicated. Though he did play a role in this historic change, ‘grandpa’ was not the driving force his family believed him to be, nor were his motives simply those of a dedicated psychiatrist and champion of human rights. In Alix Spiegel’s words:
… this version of events was discarded anyway. Discarded after the family went on vacation to the Bahamas to celebrate my grandfather’s 70th birthday. I remember it well. I also remember my grandfather stepping out from his beach front bungalow on that first day followed by a small well-built man, a man that later during dinner my grandfather introduced to a shocked family as his lover, David. David was the first of a long line of very young men that my grandfather took up with after my grandmother’s death. It turned out that my grandfather had had gay lovers throughout his life, had even told his wife-to-be that he was homosexual, two weeks before their wedding. And so in 1981 the story that my family told about the definition in the DSM changed dramatically.
According to Alix Spiegel, from the 40’s through the early 60’s, the APA was a very conservative organization, largely uninterested in “weighing in on the issues of the day.” In her interviews with psychiatrists who were members of the APA in 1970, when the forces behind the definition change began to take shape, she was told that the overwhelming majority of the APA believed that homosexuality was indeed a mental illness – “even the ones of us who were gay,” added Dr. John Fryer.
Fryer was not alone in the APA. Because homosexuals were not allowed to practice psychiatry, Fryer and others like him had to hide their sexual preference, but they began to meet informally at APA conventions, calling themselves the Gay PA. There may have been a sense of solidarity among them, but they were not questioning the official psychiatric stance on homosexuality. Fryer told Spiegel, “because of our own internalized homophobia, most of us probably agreed that it was OK to be a disease.”
The idea that homosexuality was a form of insanity rather than a ‘moral abomination’ was first put forth in the 19th century, and Spiegel notes that many homosexuals actually saw this as a step forward. In the early 70’s, psychoanalysis, Freud’s great gift to psychiatry, was still the dominant form of therapy and mode of theoretical understanding in the profession. The two psychoanalytic authorities on homosexuality were Dr. Irving Bieber and Dr. Charles Socarides. Bieber, who was later demonized by gay activists, actually became interested in the subject of homosexuality after working as an army psychiatrist during WWII, when soldiers who were found to be homosexual were dishonorably discharged. Bieber believed they should receive treatment instead of being discharged, and because of this position, he was never promoted from his rank of Captain during his four years of service. Returning home, he began to research and write about this topic, which culminated in the 1962 publication of Homosexuality: A Psychoanalytic Study. As Spiegel says, this book, which analyzes the work of 77 doctors and over 100 of their gay patients, “concluded that the cause of homosexuality was a combination of what they termed close-binding mothers – which is overprotective women who made their children weak and feminine – and detached, rejecting fathers.”
Of course, there was other data used to argue against the idea of homosexuality as a mental illness. Alfred Kinsey’s famous and highly controversial report on male sexuality, published in 1948, found that 37% of American men had had physical contact to the point of orgasm with another man. Some opponents of the diagnosis used Kinsey’s work to claim that an experience so common could not be reasonably considered pathological.
The work of Evelyn Hooker, a psychologist from UCLA, was first made public in 1956, and addressed one of the main criticisms leveled against psychiatrists like Dr. Irving Bieber, whose study subjects consisted only of homosexuals who had been imprisoned, confined to mental hospitals, discharged from the military, or had otherwise sought treatment on their own. Hooker’s aim was to examine gay men who weren’t troubled by their own sexuality. She administered psychological tests to 30 homosexuals who had never sought therapy, as well as 30 heterosexuals matched for age, IQ, and education. The disguised results were then given to three experienced psychiatrists who were asked to identify the homosexuals. They were unable to distinguish between the two groups, and categorized two-thirds of both groups as “perfectly well-adjusted, normally functioning human beings.”
In 1970, the APA held their convention in San Francisco, probably an ill-advised choice of location. Gay rights activists showed up, some of whom had apparently obtained press passes from people within the APA, and made their feelings known. Bieber was a particular target, and they effectively broke up the meeting where he was trying to give a talk. The ’71 convention was much the same story.
While there was obvious pressure coming from the gay community to change the DSM, there was also something happening inside the APA. It seems from Spiegel’s story that the psychiatrists of the Gay PA were for the most part content to gather in secret and accept the traditional designation of homosexuals as sick, but others had begun to mobilize. In Dr. John P. Spiegel’s Cambridge, MA home, a small group of psychiatrists, ‘the young turks,’ began to meet:
The young turks were all psychiatrists, all members of the APA and all liberal-minded easterners who had decided to reform the American Psychiatric Association from the inside. Specifically they had decided to replace all the grey-haired conservatives who ran the organization with a new breed of psychiatrist; more sensitive to the social issues of the day with liberal opinions on Kent State, Vietnam, feminism. They figured that once they got this new breed into office they could fundamentally transform American psychiatry. And one of the things this group was keen to transform was American psychiatry’s approach to homosexuality.
Spiegel is quick to clarify that this group and others like it by no means constituted a “homosexual cabal,” but “several of the key players were gay,” and the young turks were able to use their influential positions as members of the Committee for Concerned Psychiatry to propose candidates for office. Despite all the visible and colorful protests of the APA by gay activists, Spiegel maintains that if it weren’t for the internal changes set into motion by these psychiatrists, the DSM diagnosis would have gone untouched.
At the 1972 convention, the efforts of those working for change both inside and outside the APA were joined for the first time. Gay psychiatrist Dr. John Fryer, recently ousted from his job at UPenn and apparently unemployable due to the rumors of his homosexuality, was recruited by activists to give a speech about the damaging effects of the DSM diagnosis. Though he initially refused the offer, after being rejected by one university after another as he looked for a new job, Fryer accepted the second request on the condition that his identity remain a secret. He appeared as ‘Dr. Anonymous,’ wearing a loud suit several sizes too big, his face hidden behind a distorted Nixon mask, hair covered by a wig, speaking into a special microphone to alter his voice. “He explained to his fellow psychiatrists how these words had harmed him, and others like him,” and when he was through, received a standing ovation.
Independent of the changes already underway on the inside, there was another chance encounter involving an APA psychiatrist and a gay activist which proved to be instrumental in this process. During a behavioral therapy conference in New York City in ‘72, Dr. Robert Spitzer, a member of the APA’s committee on nomenclature and a subscriber to the standard psychiatric view of homosexuality, was sitting in a meeting when Ron Gold stood up and spoke out against psychiatry’s oppression of gays. Spitzer made a point of speaking to Gold after the meeting; he wanted to express his annoyance at the inappropriateness of the interruption. But when Gold discovered that Spitzer was on the nomenclature committee – the group that first decides what should and shouldn’t end up in the DSM – the conversation went in a different direction. The two men parted ways with Spitzer agreeing to set up a meeting for Gold with the committee as well as a panel discussion at the next convention where gay activists could participate.
At the 1973 APA convention in Honolulu, a few months after the requested audience with the nomenclature committee left the psychiatrists at a loss as to what should be done about the diagnosis, “The old guard, Charles Socarides and Irving Bieber, publicly met the new school, Ronald Gold, Judd Marmor [a future president of the APA] and several other psychiatrists in front of a room filled to capacity.” The showdown was a resounding victory for the gay activists. Even Socarides admits that the reception to his speech, (which Gold referred to as “his ‘they’re betraying their mammalian heritage’ number”), hardly qualified as warm. “A lot of people booed,” he told Spiegel, “some people clapped.”
Perhaps the most surprising part of this story, the last shove leading to the change, came later that night in a Honolulu bar. Gold, as the hero of the day, was invited to a covert Gay PA celebration, and decided to bring Spitzer, who still didn’t personally know of any gay psychiatrists, along with him. Spitzer was supposed to be playing the role of a closeted gay man, but when he recognized some of the big names who had been part of this underground group for years, he was shocked, and started asking questions that gave his true identity away. A psychiatrist Gold described as “the grand dragon of the Gay PA” wanted Spitzer out of there, but Gold refused on the grounds that Spitzer was actually doing something to help homosexuals, while the Gay PA had done nothing. In the middle of this encounter, a man in full army uniform walked into the bar, looked around, and fell weeping into Gold’s arms. As Gold tells Spiegel:
Well I had no idea who he was. It turned out he was a psychiatrist, an army psychiatrist based in Hawaii who was so moved by my speech, he told me, that he decided he had to go to a gay bar for the first time in his life. And somehow or other he got directed to this particular bar and saw me and all the gay psychiatrists and it was too much for him, he just cracked up. And it was a very moving event, I mean this man was awash in tears. And I believe that that was what decided Spitzer, right then and there, let’s go. Because it was right after that that he said, ‘Let’s go write the resolution.’ And so we went back to Spitzer’s hotel room and wrote the resolution.
While obviously we don’t have the original text composed by Gold and Spitzer in Honolulu – perhaps scrawled on some long-lost sheets of hotel stationery – I’m guessing that much of what was written that night ended up here, in this position statement proposing a change in diagnosis from homosexuality to ‘Sexual Orientation Disturbance’ with homosexuality bracketed. This change was to be put into effect for the 6th printing of the DSM-II and read as follows:
302.0 Sexual orientation disturbance (Homosexuality)
This category is for individuals whose sexual interests are directed primarily toward people of the same sex and who are either disturbed by, in conflict with, or wish to change their sexual orientation. This diagnostic category is distinguished from homosexuality, which by itself does not constitute a psychiatric disorder. Homosexuality per se is one form of sexual behavior and, like other forms of sexual behavior which are not by themselves psychiatric disorders, is not listed in this nomenclature of mental disorders.
In this paper, Spitzer basically states that homosexuality is a normal variant of human sexuality. He writes that “for a mental or psychiatric condition to be considered a psychiatric disorder, it must either regularly cause subjective distress, or regularly be associated with some generalized impairment in social effectiveness or functioning,” and because many homosexuals do not meet these criteria, homosexuality should not be considered an illness. Spitzer clearly understood that this change was in part a political action, stating that “we will be removing one of the justifications for the denial of civil rights to individuals whose only crime is that their sexual orientation is to members of the same sex.” However, he writes that the removal of the homosexuality diagnosis does not amount to “saying that it is ‘normal’ or as valuable as heterosexuality,” and maintains that “this change should in no way interfere with or embarrass those dedicated psychiatrists and psychoanalysts who have devoted themselves to understanding and treating those homosexuals who have been unhappy with their lot.” The idea, in the end, was that if someone was bothered by their own homosexual thoughts, impulses, or behavior, the DSM still had them covered.
This initial change, officially announced by Dr. Alfred Friedman, president of the APA, on December 15, 1973, may have allowed psychoanalysts to continue treating gay patients for a time, but in less than 15 years, the DSM would be wiped clean of the last traces of the idea that homosexuality could be a mental illness. Spitzer’s original change had been rewritten as ‘ego-dystonic homosexuality’ for the DSM-III, but was removed altogether in 1987.
Dr. Charles Socarides, the most prominent player on the losing team, responded to the change in a 1978 article titled ‘The Sexual Deviations and the Diagnostic Manual,’ published in the American Journal of Psychotherapy. In protest of further proposed revisions for the soon to be published DSM-III, Socarides wrote, “these changes would remove from psychoanalysis and psychiatry entire areas of scientific progress, rendering chaotic fundamental truths about unconscious psychodynamics, as well as the interrelationship between anatomy and psychosexual identity.” In particular, Socarides objected to the fact that the heading ‘Sexual Deviations,’ under which the homosexuality diagnosis had once fallen, was going to be entirely removed from the DSM-III. Proponents of this change pointed to reports like Kinsey’s, arguing that a phenomenon as common as homosexuality shouldn’t be understood as a deviation, but Socarides believed this was faulty reasoning:
To form conclusions as to the specific meaning of an event simply because of its frequency of occurrence is to the psychoanalyst scientific folly. Only in the consultation room, using the techniques of introspective reporting and free association, protected by the laws of medicine and professional ethics, will an individual, pressed by his suffering and pain, reveal the hidden (even from himself) meaning and reasons behind his acts.
When I read Socarides’ paper, I noticed that he repeatedly summons the name of science, even while his argument betrays a dogmatic faith in psychoanalysis – an approach that has been waning in popularity for decades, suffering from the criticism that it lacks scientific validity. Regardless of who is right or wrong in this argument (or any similar argument, for that matter), what I find most interesting is how imperative it is for each party to claim the support of science. One of the last people Spiegel speaks to in her report is Ronald Bayer, a public health historian from Columbia who wrote a history of this change titled Homosexuality and American Psychiatry. Bayer tells Spiegel that “the nature of these controversies” is that “both sides wrap themselves in the mantle of science and both sides charge that the other side is being unscientific.”
While developments in medicine and advances in genetic study and brain imaging technologies have no doubt increased the importance of being aligned with “science” when it comes to psychiatric debate, this is not a new phenomenon, nor was it new in the 70’s. At the same time, stories like this one make it plain that the progress of certain disciplines may be driven just as much by personal and political factors as by actual scientific progress. I wonder if the removal of the homosexuality diagnosis in 1973 wasn’t the beginning of the end for psychoanalysis, as well as the first move towards the more standardized, symptom-based diagnoses of the 1980 DSM-III. This seems reasonable, considering that Robert Spitzer was chairman of the task force responsible for creating the new edition and directed the development of the revised edition published in 1987 (DSM-III-R).
As the APA prepares for the publication of the DSM-V in 2013, I believe it’s worthwhile to keep this story in mind. Some of the proposed changes seem to have more to do with a desire to remove a stigmatizing label than with real “scientific” evidence. And like homosexuality, the pathology of which was for many years assumed but never proven, the scientific understanding of some of the older DSM diagnoses is not particularly strong. Studying the history of psychiatry can’t necessarily prove or disprove the validity of a diagnosis, but it may help us to remain cautious as we go forward.
Posted on March 4, 2010 - by David
Apparently, the “decade-long decline” in teen drug use has come to an end. The results of the 2009 Partnership Attitude Tracking Study (an annual survey administered by the Partnership for a Drug-Free America and the MetLife Foundation), published this Tuesday, show significant increases in the use of alcohol, marijuana, and ecstasy among 9-12 graders in the U.S. You can read a reprint of the AP article here.
I was a little skeptical of these results, so I decided to look at another well-known annual study. One of the best sources of information on drug use in the United States has been the Monitoring the Future Survey, carried out by researchers from the University of Michigan since 1975. While the early summary of the 2009 MTF also found an increase in marijuana use, there are some pretty big differences between these two studies.
The PATS finds that 25% of 9-12 graders used marijuana in the past month (up from 19% in 2008), but the MTF shows a past-month rate of 20.6% for 12th graders, and with the previous year’s rate at 19.4%, this is a much smaller increase. Keep in mind also that this is only the rate for 12th graders, the group of students with the highest prevalence rates across the board. MTF also collects data for 8th and 10th graders, and their rates of past-month marijuana use were 6.5% and 15.9%, respectively. Considering this, the results look drastically different.
While the 2009 PATS shows ecstasy (MDMA) use on the rise, the MTF shows virtually no change in annual prevalence of ecstasy use. While the PATS estimates that 10% of 9-12 graders have used ecstasy in the past year, MTF found that only 4.3% of 12th graders had used ecstasy in the past year, a figure that has been very consistent for the past 7 years. The PATS 30 day prevalence rate for ecstasy use was 6%, while MTF found that only 1.8% of high school seniors had used ecstasy in the past month.
As for past-month use of alcohol, the results appear closer than in the other categories (PATS: 39% among 9-12 graders; MTF: 14.9% for 8th graders, 34% for 10th graders, and 43.5% for 12th graders), but for the MTF, the 2009 and 2008 stats are practically identical, whereas the 2009 PATS shows a significant increase.
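To make the discrepancies easier to scan, here are the past-month figures from the two surveys side by side (PATS covers 9-12 graders combined; the MTF numbers are for 12th graders, the highest-prevalence group):

- Marijuana: PATS 25% vs. MTF 20.6%
- Ecstasy: PATS 6% vs. MTF 1.8%
- Alcohol: PATS 39% vs. MTF 43.5%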
So, discrepancies between studies aside, if there really has been an upswing in teen drug use, what can be done? The main message accompanying the PATS report seems to be that parents need to do more if they want to prevent their kids from developing a serious problem. Steve Pasierb, president and CEO of the Partnership, believes that “these new PATS data should put all parents on notice that they have to pay closer attention to their kids’ behavior – especially their social interactions – and they must take action just as soon as they think their child may be using drugs or drinking.” The report links to resources for concerned parents who need guidance in confronting this issue. I found this snooping checklist kind of amusing.
They might as well have written “look anywhere and everywhere!”
They even have a section titled ‘Prepare to Be Called a Hypocrite’ with tips on how to avoid letting your kids use your own past experimentation against you. But even before I read this, a question occurred to me which I don’t think is addressed in the Partnership resources – what about the fact that plenty of parents are still regular users of marijuana and alcohol? I think that by their teens, most kids are aware of their parents’ substance use, and this can present a difficult contradiction to the message that groups like the Partnership for a Drug-Free America want to send.
The Monitoring the Future Survey has collected follow-up data on high school graduates, allowing the researchers to build a valuable body of data on adult drug use. The following comes from the 2008 MTF, as the full 2009 report has not been released yet:
The adjusted lifetime prevalence figures are most striking for today’s 50-year-olds (the class of 1976), who were passing through adolescence near the peak of the drug epidemic. Some 86% reported trying an illicit drug (lifetime prevalence, adjusted), leaving only 14% or about one in every seven who reported never having done so (see Figure 4-1). Some 79% of 50-year-olds said they had tried marijuana, and almost three quarters (73%) said they had tried some other illicit drug, including 46% who have tried cocaine specifically. The adjusted lifetime prevalences for 45-year-olds (the class of 1981) are similar to 50-year-olds. Clearly, the parents of today’s teenagers and young adults are themselves a very drug-experienced generation.
The data does suggest that parents confronting their teens about drug use should indeed be prepared to be called hypocrites, but I think it’s more important to look at what adults are actively using today:
- 30-day prevalence of any illicit drug use: age 35 (11%), age 40 (9%), age 45 (10%), age 50 (12%)
- 30-day prevalence of marijuana use: age 35 (8%), age 40 (7%), age 45 (6%), age 50 (7%)
- Daily use of alcohol: age 35 (24%), age 40 (22%), age 45 (21%), age 50 (20%)
- 2-week prevalence of 5 or more drinks in a row: age 35 (5%), age 40 (7%), age 45 (10%), age 50 (11%)
Actions speak louder than words, and the statistics seem to show that for plenty of kids, looking in the refrigerator, the liquor cabinet, or mom and dad’s own top dresser drawer is enough to justify not only experimentation, but regular use. The MTF literature would suggest that many of today’s parents are members of a generation that used even more heavily than today’s teens do, but overall, I see the trends as fairly consistent, especially when you consider daily users, the group that would likely experience the most problems. When the survey began in 1975, 6% of 12th graders were daily marijuana users, compared to 5.2% in 2009.
In closing, I can’t resist including this quote from the PATS report, where teen drug use gets couched in terms of an unacceptable national expense. Is it just me or does this sound a bit cold and impersonal?
“We’re very troubled by this upswing that has implications not just for parents, who are the main focus of the Partnership’s efforts, but for the country as a whole,” said Partnership Chairman Patricia Russo. “The United States simply can’t afford to let millions of kids struggle through their academic and professional lives hindered by substance abuse. Parents and caregivers need to play a more active role in protecting their families, trust their instincts and take immediate action as soon as they sense a problem.”
I think it’s important to note that even leaving all of pop culture aside, the nation’s official stance on this issue is not a clear one. Under federal law, marijuana is still a Schedule I controlled substance (the designation given to drugs with high potential for abuse and no approved medical use), but in the last ten years, 20 states have passed laws either decriminalizing possession of marijuana or legalizing medical use. This presents an interesting contradiction. I’m not suggesting that changes in legality have led to this supposed “upswing” in use, but I do believe that convincing kids that marijuana is a dangerous drug will become increasingly difficult as the trend towards legalization continues.
From what I’ve observed, each study that shows a decrease becomes an occasion for celebration, while every apparent increase is a cause for major concern. And while people talk and write about these results, kids across the nation keep getting high at basically the same rate.
Posted on February 6, 2010 - by David
Winning is not a sometime thing; it’s an all the time thing. You don’t win once in a while; you don’t do things right once in a while; you do them right all the time. Winning is a habit. Unfortunately, so is losing.
There is no room for second place. There is only one place in my game, and that’s first place. I have finished second twice in my time at Green Bay, and I don’t ever want to finish second again. There is a second place bowl game, but it is a game for losers played by losers. It is and always has been an American zeal to be first in anything we do, and to win, and to win, and to win.
Every time a football player goes to play his trade he’s got to play from the ground up — from the soles of his feet right up to his head. Every inch of him has to play. Some guys play with their heads. That’s O.K. You’ve got to be smart to be number one in any business. But more importantly, you’ve got to play with your heart, with every fiber of your body. If you’re lucky enough to find a guy with a lot of head and a lot of heart, he’s never going to come off the field second.
Running a football team is no different than running any other kind of organization — an army, a political party or a business. The principles are the same. The object is to win — to beat the other guy. Maybe that sounds hard or cruel. I don’t think it is.
It is a reality of life that men are competitive and the most competitive games draw the most competitive men. That’s why they are there — to compete. To know the rules and objectives when they get in the game. The object is to win fairly, squarely, by the rules — but to win.
And in truth, I’ve never known a man worth his salt who in the long run, deep down in his heart, didn’t appreciate the grind, the discipline. There is something in good men that really yearns for discipline and the harsh reality of head to head combat.
I don’t say these things because I believe in the “brute” nature of man or that men must be brutalized to be combative. I believe in God, and I believe in human decency. But I firmly believe that any man’s finest hour, the greatest fulfillment of all that he holds dear, is that moment when he has worked his heart out in a good cause and lies exhausted on the field of battle — victorious.
The trophy that bears the legendary coach’s name is up for grabs again this Sunday, when the New Orleans Saints and the Indianapolis Colts meet in Miami for Super Bowl XLIV. As often seems to happen with championship games, much of the media hype has reduced the match-up between the two teams to a match-up between two quarterbacks, Peyton Manning and Drew Brees.
Do a quick Google search of “Peyton Manning” and you’ll probably notice two major topics: 1. the record-setting contract he’s expected to sign after the season comes to an end, and 2. where he ranks among the all-time great quarterbacks. Many believe that winning his second Super Bowl this Sunday would solidify his status as the best to ever play his position. Listing all of his achievements here would be a waste of space, but the quick rundown looks something like this:
- Started 192 consecutive games since he was drafted #1 in 1998
- 10 pro bowl appearances in 12 seasons
- 10 seasons with over 4,000 yards passing (NFL record)
- 4 time Most Valuable Player (NFL record)
- 9-8 postseason record
- MVP of Super Bowl XLI
So if there’s one man in the game who, to use Lombardi’s words, seems to “do things right all the time,” it’s Peyton Manning. But with all the talk about Manning’s career, it’s easy to lose sight of the season Drew Brees just had. Despite the fact that Manning took the MVP award, Brees arguably had a better year, throwing more touchdowns with fewer interceptions and setting an NFL record with his 70.6 completion percentage.
If there is one thing America loves more than the consistent greatness that Manning represents, it’s an underdog like Drew Brees and the New Orleans Saints. We’re talking about a team that took 20 years to finish a season with a winning record and 33 years to win a playoff game. They played the entire 2005 season away from home due to the damage Hurricane Katrina did to the Superdome. Now, 42 years after they entered the league as an expansion team, the Saints are playing in their first Super Bowl. Interestingly, before Brees came to town, Peyton Manning’s father Archie was New Orleans’ most memorable quarterback, giving the team 10 hard-fought but losing seasons as a starter from 1971-1981. And who happens to be the quarterbacks coach helping Drew Brees with his gameday decisions? Vince Lombardi’s grandson Joe, who at 28 is younger than Brees and many of the team’s veteran players.
But back to Brees. Despite a stellar college career at Purdue where he set a number of Big 10 conference records, he wasn’t picked until the second round of the 2001 draft due to concerns about his size and arm strength, and he played in only one game in that first season with the San Diego Chargers. He beat out Doug Flutie for the starting job in 2002, only to have the Chargers take the football from him and hand it back to a forty-year-old Flutie late in 2003. In 2004, the Chargers picked up quarterback Philip Rivers on draft day, casting serious doubt on Brees’ future in San Diego. Rivers’ reluctance to sign a contract before the season started gave Brees another shot, and he made it count, posting his best numbers to that point in his career and making it to the Pro Bowl. But in the last game of the 2005 season, Brees was left lying on the ground with a shredded shoulder, and his time with the Chargers was up. In 2006, he stepped onto the field with the Saints in the newly repaired Superdome and led them all the way to their first NFC championship game, but their season ended one game before the big one due largely to mistakes he made. Since then, Brees has continued to put up very impressive numbers, making the Pro Bowl in 3 of his 4 seasons with the Saints, and in 2008 he became only the second quarterback to throw for over 5,000 yards in a single season. If he’s able to lead the trophy-less Saints to Super Bowl victory tomorrow, Drew Brees could potentially add his name to that list of great QBs which currently has Peyton Manning hovering near the top.
Of course, as a Philadelphia fan who has for the past 10 years watched quarterback Donovan McNabb and coach Andy Reid lead the Eagles to 8 winning seasons and 5 NFC championship games, only to come up empty-handed each time, I know well that winning the Super Bowl is the only thing that matters. McNabb holds almost every possible Eagles quarterback record, he has the second-best TD-interception ratio of all time (behind Tom Brady), and among active quarterbacks, only Brady and Peyton Manning have a higher win percentage. Statistically, McNabb is unquestionably in elite company, but he can’t seem to shake the accusations that he’s a guy who can’t get the job done when it really counts. Just as it seems that a team’s successes are often reduced to the performance of one individual, McNabb is a perfect example of how a team’s failures often fall squarely on the shoulders of their on-field leader.
I’ve rambled on about football for a while now, and it feels a bit unnatural to shift back to thoughts about national identity, but I’m wondering: given the individualistic nature of American identity, is it even possible for our obsession with greatness and victory to be expressed in any way other than through an obsession with individuals? Do we ever think of whole teams as our heroes? How often do we really remember the guy who came in 2nd place? And does the American dream of rags-to-riches explain our love of underdogs?
All I have to say is, Go Saints.
Posted on January 29, 2010 - by David
[Video: The Daily Show With Jon Stewart]
The other night, Ethan Watters appeared on the Daily Show with Jon Stewart to talk about his new book, Crazy Like Us: The Globalization of the American Psyche.
At a little past 4 minutes into the clip, Stewart says:
We could make the argument that when we went over into parts of the undeveloped world with vaccines, and they thought we were poisoning them, you know, we weren’t, we were just trying to cure some diseases, why should we necessarily give deference to something that might be a superstition, only because it has the value of “well, it’s their culture”?
Watters’ response echoes one idea that is central to the book: In our attempts to share medical knowledge and treatment with the world, “we often bring cultural ideas that may be replacing ideas that actually are helpful in those other places…”
This answer glosses over the main difference between sharing vaccines and sharing treatments for mental illness: Our vaccines actually worked! And they are used to prevent illnesses which we actually understand! Unfortunately, Watters misses his chance to make a bigger point about differences in the nature of mental illness from culture to culture. Still, I think he’s doing a lot to shift the focus of the discussion towards cultural factors.