Archive for the ‘Mind of Modernity’ Category
Posted on January 30, 2011 - by David
The latest episode of ‘Office Hours,’ a social-science podcast produced by several grad students at the University of Minnesota, features my recent interview with Francesco Duina, chair of the sociology department at Bates College, and author of ‘Winning: Reflections on an American Obsession.’
Since reading Duina’s book, I’ve noticed the language and mindset of competition popping up in some questionable contexts, and thanks to his insightful analysis, I’m less likely to accept without reflection that this winning/losing dynamic always makes sense. But:
For many of us, it is a simple matter of fact that, in our schools, workplaces, businesses, and everywhere else, there are winners and losers. We can either win or lose our war against fat, the peace in Iraq, recognition as best employee of the month, custody of our children, our lover’s heart, and in the words of Newt Gingrich in his recent book, even “the future” (Gingrich 2005). (Duina 182)
It’s particularly interesting that Duina’s brief list of confusing competitions includes the Gingrich book (titled, ‘Winning the Future: A 21st Century Contract with America’), since “winning the future” turned out to be the catchphrase of Obama’s State of the Union Address last week. I counted 10 uses of this phrase or some variation of it, and the heading above the video on the White House website makes it abundantly clear that this was indeed the official theme of the night.
So what did people have to say about the President’s new slogan? Gingrich obviously agrees that the term fits the topic, and states that “Winning implies a real contest. Winning implies losing is possible.” (Duina would indeed have us recognize the same thing about this kind of language, but Gingrich doesn’t demonstrate how exactly this makes sense, the point for him being, I guess, that America’s victory isn’t a sure thing). Of course he disagrees with Obama entirely about how the future is to be won. Bill O’Reilly opens his article by poking some fun at the phrase, but then ultimately buys into the concept, just disagreeing in pretty much the same fashion as Gingrich about what will make us future-winners. Sarah Palin commented on her Facebook page that the “acronym [wtf] seemed more accurate than much of the content.”
Others challenged the language itself a bit more directly. An AP article called it an “upbeat but amorphous phrase.” NPR’s Ari Shapiro noted that, “for Obama, ‘Win the Future’ has the advantage of being vague. At the end of ‘recovery summer,’ people asked where the recovery was. The future, on the other hand, is always just around the corner.” Still others got closer to the heart of the matter, questioning Obama’s use of this “amorphous phrase” to talk about competition with nations like India and China. Tim Redmond of the San Francisco Bay Guardian asked:
…since when was the future a war, something to be fought with an enemy? To “win” the space race we had to “beat” the Soviets, which we did (ha ha, we got to the moon first). To “win” the future, do we have to beat someone else? The Russians aren’t up for winning much of anything these days, but Obama seems concerned about competing with China; do the Chinese have to “lose” the future for us to “win?”
Art Carden, on his Forbes blog, The Economic Imagination, wrote:
… while a group of White House speechwriters apparently thought that “win the future” would have the same rhetorical resonance as “yes we can,” the Address conveyed an incorrect zero-sum worldview in which what others gain comes at our expense. As economics has shown over and over and over and over again, trade creates wealth. Voluntary exchange is a positive-sum game. If China gets richer, it doesn’t imperil our ability to get richer, too.
You can find similar thoughts at the Economist’s Free Exchange blog. The point is, it’s not clear why we Americans “need to out-innovate, out-educate, and out-build the rest of the world” in order to be content. I for one am not particularly upset by the fact that South Korea has better wireless access than we do, though the intonation of Obama’s voice as he tells us this suggests we all should be. It’s also odd that the first half of his speech sets up other nations as opponents, but he goes on to cite major trade agreements we have reached or are working on in Asia as evidence of the progress we’re making. Agreements imply cooperation, not competition, but I guess it’s just harder to get Americans fired up about working together than it is to construct a global economic showdown.
I found the “education race” rhetoric to be especially troubling. “Of course,” Obama said, “the education race doesn’t end with a high school diploma. To compete, higher education must be within the reach of every American.” What are our nation’s students supposed to be racing towards? Is a student’s desire to go directly to work after high school legitimate, or does this signal that he has lost the race? Obama is determined that “by the end of the decade, America will once again have the highest proportion of college graduates in the world,” but if we take this statement apart, we see it is a relative goal, contingent upon the proportion of college grads in the nations we imagine ourselves to be competing against not rising enough to keep us from reclaiming the top spot. (To highlight the nebulous quality of this aspiration, I should mention that it could in fact be achieved without any improvement on our part, if the rates of college graduation in these competitor nations were to drop for whatever reason.)
And just what kind of education should young Americans be racing to get so that we can “win the future”? Obama emphasized the importance of math and science, and mentioned how we’re falling behind in these areas, but of course neither he nor the two men who sat behind him (Biden and Boehner) received this type of education (nor, I would guess, did the majority of those seated in the House chamber that night). What’s implied is that if we can “out-educate” in math and science, our young people will be able to “out-innovate and out-build,” and thus we will “win the future.” Certainly, improving material conditions is a worthwhile aim, but in order to navigate our increasingly important and complex relations with other nations, won’t we need students of history, language, psychology, and culture? Will better wireless coverage and faster trains help us understand the way that people in India see the world? And even more fundamentally, do we have any good reason to think that a stronger economy will erase problems like mental illness, substance abuse, suicide, and violence of the type we saw in Tucson a few weeks ago?
This isn’t intended to be a critique of Obama, but rather a reevaluation of the imprecise rhetoric which I’m sure was meant to inspire and uplift us. I think we would all do well to follow Duina’s advice about “conceptual hygiene,” and commit to using the language of winning and losing only where it actually fits. If we can do this, our words and thoughts may begin to better match reality, we might find ourselves able to articulate what it is we actually want, and perhaps we’ll start to feel a bit more at peace with ourselves and those around us. (And that includes India and China.)
Posted on January 24, 2011 - by David
Of the various writings, videos, and recollections of teachers, classmates, and former friends with which we are left to piece together a picture of Jared Loughner’s mind before the shooting, the most intelligible is a poem titled ‘Meat Head’ that he wrote for one of his classes at Pima Community College last fall.
Awaking on the first day of school
Pain of a morning hang over
Attending a weight lifting class for college credit
Attempting to exercise since freshmen year of high school
Crawling out of bed and walking to the shower
Warm water hitting my back
Thoughts of being promiscuous with a female again
Putting on a old medium red tee shirt, light brown cameo shorts, and black Adidas
For breakfast a glass of water, cold pepperoni pizza, and two Advil
Bringing my Nano Ipod with heavy metal music
Taking the local bus on a overcast morning
Waiting with crack heads after their nightly binge
Bus is cheap, two dollars for a ride anywhere in the city
Sitting in back against a hard plastic seat
Staring at stop lights, brand new cars, and graffiti
Coming to a slow halt in front of the school
Entering the gym as the glowing florescent lights are humming
Next to the treadmills, putting a green foam mat on the ground
Stretching for fifteen minutes, loosening the muscles in my legs, back and arms
Cleaning the mat with anti-bacterial spray and a paper towel
Jogging for ten minutes, my heart beating, beating, beating
Pain in my right side of the last minute of twenty
Looking around, the cute women are catching my eye
Probably waiting for their hot boyfriends wandering in the locker room
All the men are in shape with their new white tank shirts, basketball short, and Nike shoes
Confusing look on my face of no idea what to do
Deciding to copy other men’s routines of
Arm Curls, Leg presses, Rows, Squats, Military something’s, and Isolated whatever’s
Leaving the gym thinking
Waiting for the bus with alcoholics that are going to the bars early
Coming home for another shower
While grabbing the white towel, the eureka moment is lingering
Quick nap and lunch is on my mind
Setting the alarm one hour before getting ready for my next class
Getting into bed
The title already suggests a problematic identity: (for any who might not know) ‘meathead’ is a derogatory term for jocks, muscular men, athletes, etc. But the poem focuses on Loughner’s own time in the gym, so is he criticizing himself? He has been “attempting to exercise since freshmen year of high school” (this “attempting” implies a lack of success), and yet the title and tone suggest he sees himself as different from, and possibly somehow superior to, those working out around him. He has trouble looking the part, as his gym clothes are mismatched and worn out (“old medium red tee shirt, light brown cameo [probably meant camo] shorts, and black Adidas”) while the others “are in shape with their new white tank shirts, basketball short, and Nike shoes.” Not only does he feel like he looks out of place, he’s not sure how to act (“confusing look on my face of no idea what to do”), so he ends up “deciding to copy” what others do. It seems Loughner relates his lack of romantic or sexual involvement with the opposite sex to his inability to be like these other men, and while there is clearly an element of sexual tension in the poem, I don’t think it is as simple as Loughner being some kind of sex-crazed pervert. In fact, a MySpace posting from November 17th (“It hurts to have been never sexually active at 22!”) reveals that it is not so much a lack of sexual activity that is the problem, but his consciousness that in this society, a 22-year-old virgin would probably be deemed abnormal. The people Loughner connects himself to most closely are the “crack heads” and “alcoholics” with whom he waits for the bus, and even this mention of public transportation seems to be another occasion to reflect upon his inadequacy – he notes the contrast as he views “brand new cars, and graffiti” from his “hard plastic seat” on the “cheap” bus.
When read, ‘Meat Head’ is more depressing than disturbing, especially if one forgets for a moment who wrote it. But apparently the style of his presentation in class didn’t quite match the overall flatness of the poem. According to Don Coorough, a classmate who provided copies of two of his poems to the media, Jared “had the poem memorized, and he stood up in class and performed it with great drama — at one point, grabbing his crotch.” This performance, along with his inappropriate emotional responses to others’ poems (he laughed and joked as a tearful female student read a poem about abortion), contributed to the complaints which resulted in his suspension from Pima.
Another poem, ‘Dead as a dodo,’ may be an attempt to paint an allegorical scene, though it’s anyone’s guess who the dodo is (is it Loughner? Giffords?) or what the other objects, creatures, and movements might symbolize.
Dead as a dodo
On the island of Mauritius a heavy storm is leaving.
In the fields of the ancient wild forest a wild field of mushrooms is growing.
Snails and grasshoppers are ready for the warmth.
The old grass growing with lizards are jolting for crickets while snakes looking for lonely mice.
Falcons are flying for pray.
Shallow light Blue Ocean shimmering at each wave as the black clouds are rolling.
Waves are lapping.
Fisherman on the reefs are casting their poles.
In warm water a pack of clown fish are floating.
Tiger sharks are swimming free.
Steel drums beating in the distance.
The full moon slowly setting for the sun is rising.
At the local cemetery there is weeping.
The dodo is finally dieing.
But one wonders, why was this kid taking a poetry class when the unanswered question which proved nearly fatal for Rep. Giffords was, “what is government if words have no meaning?” His friends, at 4:06 in the video below, describe Jared’s obsession with what he perceived as the meaninglessness of language:
“He was obsessed with how words were meaningless, you know, you could say, ‘oh, this is a cup,’ and hold a cup, and he’d be like, ‘oh, is it a cup? or is it a pool? is it a shark? is it an airplane?’”
While his friends, and others since the shooting, have interpreted these statements as nonsensical, he is on to something very real here, despite his difficulty in expressing it: Jared realized that words, as symbols, are arbitrary, given their meaning by the history of their (socially agreed upon) use. There is nothing in the physical composition of the object we call a “cup” that makes us use that sound and those letters to refer to it, and for Jared this arbitrariness was equal to unreality. This fact of culture, overlooked or taken for granted by most, seems to have been both exhilarating and terrifying for Loughner; exhilarating because it meant there was no good reason why he should be constrained by social conventions, and terrifying because he was, in fact, constrained – someone or something else was “controlling the grammar.”
In addition to discovering the arbitrary nature of symbols, Jared senses the importance of logic in our culture, and his attempts to make sense of his reality rest largely on a series of if-then syllogisms like those in the video above. He seems to think that by formulating his delusional beliefs (which he takes as facts) into logical statements, he has proven these beliefs true to his (at the time he made the videos, probably imagined, but now very real) “listener.”
Of course, since the premises themselves are faulty, nearly all of Jared’s syllogisms fail, except perhaps the following:
All humans are in need of sleep
Jared Loughner is a human
Hence, Jared Loughner is in need of sleep
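For what it’s worth, this one really is valid, and valid by form alone – it’s the classic ‘Barbara’ syllogism – which a proof assistant will accept mechanically. A minimal sketch in Lean, purely illustrative (the predicate and variable names are mine):

```lean
-- Barbara: all M are P; s is M; therefore s is P.
-- Validity is a matter of form; soundness also requires true premises.
example {Person : Type} (Human NeedsSleep : Person → Prop)
    (jared : Person)
    (h1 : ∀ x, Human x → NeedsSleep x)  -- All humans are in need of sleep
    (h2 : Human jared)                  -- Jared Loughner is a human
    : NeedsSleep jared :=
  h1 jared h2
```

This is also why the rest of his syllogisms fail even when their form is fine: logic transmits truth from premises to conclusion, it doesn’t create it.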
If we consider what Loughner does (or tries to do) when he sleeps, our image of him becomes even more interesting: according to his friends, Jared was an enthusiastic practitioner of lucid dreaming. His own writings refer to “conscience dreaming,” by which he presumably meant “conscious dreaming” (another term for lucid dreaming). Apparently, he preferred the dream world to the waking world, feeling a greater sense of freedom and control while asleep.
Examined in the light of Liah Greenfeld’s hypothesized mental processes, Jared Loughner’s struggle to determine his own reality demonstrates fundamental problems with his Identity which manifested in problems with the Will. But one of the most important questions – from a legal standpoint at least – will be whether or not Loughner fully understood and was in control of his actions when he opened fire on January 8th. The evidence indeed suggests this was a willful act – planned ahead of time, and executed according to plan. So how do we reconcile this with the image of a deranged mind? In my next post on the subject, I’ll look at how Loughner’s delusional beliefs and other psychotic symptoms fit into existing definitions of mental illness, and consider what this might tell us about Jared’s mindset the moment he pulled the trigger.
Posted on January 19, 2011 - by David
By now, the search for political or ideological motivations in the January 8th shooting in Tucson has given way almost entirely to a search for signs of mental illness in Jared Loughner’s past, and while debates over gun control, inflammatory political rhetoric, and the responsibility of colleges when it comes to dealing with troubled students will certainly continue in the wake of this tragedy, agreement is pretty much universal that this was the work of a madman.
I’m a pretty big fan of Jon Stewart, and wasn’t surprised that in his first show after this all happened, he took a characteristically sensible view, drawing the focus away from the much discussed “vitriol” even before the overall tone of reporting had shifted. But his acknowledgment of the role of insanity contains a subtle, unquestioned assumption that may need to be challenged, as controversial as such a challenge may be; this is the idea that mental illness, or at least the kind of mental illness that plays into an attack like this, has always existed. At 3:33 into the opening, Stewart said:
“We live in a complex ecosystem of influences and motivations, and I wouldn’t blame our political rhetoric any more than I would blame heavy metal music for Columbine. And by the way, that is coming from somebody who truly hates our political environment – it is toxic, it is unproductive, but to say that that is what has caused this, or that the people in that are responsible for this, I just don’t think you can do it. Boy would that be nice. Boy would it be nice to be able to draw a straight line of causation from this horror to something tangible, because then we could convince ourselves that if we just stop this, the horrors will end. You know, to have the feeling, however fleeting, that this type of event can be prevented, forever. But it’s hard not to feel like it can’t. You know, you cannot outsmart crazy, you don’t know what a troubled mind will get caught on – crazy always seems to find a way, it always has…”
But has it always? And how would we know? We’ve become increasingly convinced that serious mental illnesses – especially psychoses usually classified as bipolar or schizophrenia – are caused genetically, even though what we actually know about these illnesses doesn’t justify this faith in the biological model. The assumption that mental illness has existed in generally the same form, at generally the same rate throughout history and across cultures deserves more scrutiny than it is normally given today. Liah Greenfeld has hypothesized that madness is a modern phenomenon, emerging in 16th century England simultaneous with the emergence of nationalism. Consider the parallels between Jared Loughner and the case of Peter Berchet, a “lunatic” and a “deranged Puritan,” as described in Greenfeld’s forthcoming book:
In 1573, Berchet, a law student, stabbed Sir John Hawkins, a very firm Protestant, whom he mistook for Sir Christopher Hatton, an advisor to the Queen and also a Protestant, accused by Berchet of being “a wylfull Papyst [who] hindereth the glory of God.” The incident taking place at the time of increasing Puritan agitation, Elizabeth wished Berchet to be questioned under torture to reveal the names of co-conspirators she suspected. On the testimony of two of his fellow students, however, Berchet’s examiners became convinced that he was not a political/religious extremist, but, rather, suffered from “nawghtye mallenchollye,” i.e., was stark mad…
The distemper expressed itself in “very strange behavior” at the Middle Temple which his friends attributed to overmuch study and which, shortly before the attack on Hawkins reached a stage we would consider psychotic. “He rarely slept and would pace up and down in his room, striking himself upon his breast, throwing his hands in the air, filliping with [snapping] his fingers and speaking softly to himself… while alone in his chamber, [he] would walk up and down reciting biblical verses and rhymes to himself, then suddenly he would race to the window. With a pointed diamond that he wore in a ring on his little finger, he would scrawl one of his own compositions upon the glass,” when asked by a friend whether he was all right, he responded that “there was ‘a thing at his hart wich noe man in the world showld knowe’ and … would throw his hands in the air and use other ‘frantic gestures’.” To distract him, his friends took Berchet to a wedding in the country, where he proceeded to inform the bride that “she was another man’s daughter, and that she had been born in London. 
Staring into her eyes while pounding his hands upon the table, Berchet declared that he had ‘seene the verrye same eyes but not the same face,’” punctuating his “outrageous monologue… with unspecified but insulting gestures.” Before his departure from the house of the friend with whom Berchet and his fellow students stayed in the country, he “for no apparent reason beat a young boy … sent to his room to build a fire” and then “Berchet came out of his room, filipping his fingers and talking very strangely, saying in a loud voice, ‘watche, shall I watche hark, the wynd bloweth, but there is neither rayne, wynd, nor hayle, nor the Deuyll hym self that can feare me, for my trust is in thee Lord.’” On the way back to London his companions thought that his “head was verrye muche troubled,” among other things, he “galloped away from the party, dagger in hand, determined to kill some crows that had offended him.” In London, one of Berchet’s friends warned him that, if he continued behaving so, “his position at the Temple would be jeopardized. Berchet reproached [the friend] and maintained that he had ‘a thing at my hart which them nor anye man alyue shall knowe.’ The day that Berchet performed the fateful act, he and a fellow student… had attended a lecture given by Puritan zealot Thomas Sampson. The lecture seemed to provide Berchet with a necessary inspiration to attack Hawkins, for later the same day [another friend] observed Berchet by peering at him through the keyhole of his room door and heard him, as he filliped with his fingers, remark, ‘shall I doe it and what shall I doe it? Why? Then I will doe it.’ Running quickly toward the Temple gate, Berchet hesitated for a brief moment, repeated the same words, then dashed into the Strand where he confronted Hawkins.”
The outraged Queen, as mentioned above, wished Berchet to be both questioned under torture and executed immediately. Instead, following the testimony of his friends, he was committed to the Lollards Tower for his heretical beliefs, where the Bishop of London promised him that, if he recanted, he would live. Berchet recanted and was transferred to the Tower, apparently for an indefinite term of imprisonment under relatively humane conditions, to judge by the fact that the room was kept warm and had light enough, allowing his personal keeper to stand comfortably and read his Bible by the window. At this, however, Berchet took umbrage, promptly killing this innocent with a piece of firewood supplied by the charitable state. Thus, in the end, he was executed – not because his original, and, from the viewpoint of the authorities, graver, crime was attributed to madness (which, in fact, could save him), but because his madness could not be contained.
(The description of this case is based on Cynthia Chermely’s “’Nawghtye Mallenchollye’: Some Faces of Madness in Tudor England,” The Historian, v.49:3 (1987), pp. 309-328.)
Of course, this historical comparison is not meant to somehow explain Loughner’s actions, but if we consider for a moment the possibility that mental illness serious enough to drive someone to murder might have a cultural cause, then we must also consider that this cause is not rooted in the specific content of any particular cultural conflict – neither Puritans vs. Catholics nor Tea Party vs. Progressives – but in the general conditions of modernity which make identity formation so problematic. In my next post, I’ll look at some of Loughner’s preoccupations, including logic, language, and lucid dreaming, and consider how they might make sense within Greenfeld’s cultural model of mental illness.
Posted on January 9, 2011 - by David
I’ve probably spent an inordinate amount of time over the last year thinking about “memes.” (Perhaps this is evidence that these parasitic mind viruses do in fact exist). Unsatisfied with my first critique, I hope to offer something more valuable here.
I may be wrong, but I get the feeling that not many social science types take the memetic view of culture seriously enough to respond to it – they smirk, or shrug it off, and go about their business. But with the amount of public attention “memes” have received, I think this dismissiveness is a mistake. Students of culture who believe they have something better to offer ought to speak up.
Over the last 20 years, Daniel Dennett has probably been the strongest advocate of the memetic perspective, which grew out of Richard Dawkins’ book, The Selfish Gene (1976). Some of Dennett’s more recent thoughts are found in a 2009 article called ‘The Cultural Evolution of Words and Other Thinking Tools.’ I’ll focus my attention here, but refer to his other work as well.
What is culture?
(see Liah Greenfeld’s view here)
Dennett offers no explicit definition of culture here, but two that can be extracted from the article are “behavioral-perceptual transmission” and “transmission by replication of non-genetic information.” Obviously, “behavioral-perceptual transmission” plays an important role in the survival of many individual organisms and the continuation of many different species. So what, according to Dennett, distinguishes the human, “hyperpotent variety of cultural evolution” from transmission of learned behaviors in other species?
As Richerson and Boyd (2006) show, just as the standard information highway, the vertical transmission of genes, was optimized during billions of years, the second information highway from parents to offspring had to evolve under rather demanding conditions; however, once this path of vertical cultural transmission had been established and optimized, it could be invaded by “rogue cultural variants,” horizontally or obliquely transmitted cultural items that do not have the same probability of being benign. (The comparison to spam on the internet is hard to avoid.) These rogue cultural variants are what Richard Dawkins (1976) calls memes, and although some of them are bound to be pernicious—parasites, not mutualists— others are profound enhancers of the native competences of the hosts they infect. One can acquire huge amounts of valuable information of which one’s parents had no inkling, along with the junk and the scams.
This passage raises the question: in what respect are these “rogue” bits of culture “variants” from the “behavioral-perceptual transmission” we see in other species? The answer is found in his comparison of a termite castle and Gaudi’s ‘La Sagrada Familia,’ where he writes that “the design and construction [of Gaudi’s church] could not have proceeded without elaborate systems of symbolic communication.”
What are “memes”?
If Dennett is saying that humans are distinguished from other animals by their dependence on symbolic transmission, then we are in agreement. But this still leaves the question: what, exactly, are “memes”?
In his 1991 book, Consciousness Explained, Dennett quotes Dawkins’ definition of the meme as a “unit of cultural transmission, or a unit of imitation.” (202) This article calls memes “cultural items that replicate with varying amounts of input from intelligent vectors.” If we try to synthesize a definition of “meme” by combining these statements with the implicit definition of culture I refer to above, (“transmission by replication of non-genetic information”), we can say that a meme is a replicating or replicable unit of non-genetic information. But this differs in two important ways from the “elaborate systems of symbolic communication” upon which Dennett correctly states that the construction of Gaudi’s church depends. First, there is no requirement that the “meme” be symbolic in nature, and second, culture is assumed to be fundamentally composed of discrete, self-replicating units or entities. To grant any one symbol an independent existence as a self-replicating unit is to remove it from the context in which it has significance and by which it renders its effects. So if the memetic perspective obscures the distinction that Dennett initially draws out, what, if anything, does it clarify or contribute?
Words as “memes”
What are words? They are not just sounds, or marks, or even symbols. They are memes (Dawkins 1976; Dennett 1991, 1995, 2006). Words are that subset of memes that can be pronounced.
Dennett calls words “our paradigmatic memes” and tells us that they “have an identity that is to a considerable extent language-independent”:
Like lateral or horizontal gene transfer, lateral word transfer is a ubiquitous feature, and it complicates the efforts of those who try to identify languages and place them unequivocally in glossogenetic trees. English and French, for instance, share no ancestor later than proto-Indo-European (see Fig. 2) but have many words in common that have migrated back and forth since their divergence (cul-de-sac and baton, le rosbif and le football, among thousands of others). Just as gene lineages prove to be more susceptible to analysis than organism lineages, especially when we try to extend the tree of life image back before the origin of eukaryotes (W.F. Doolittle, this volume), so word lineages are more tractable and nonarbitrary than language lineages.
It seems like all “lateral word transfer” really means is that throughout history, individuals and societies which speak different languages have come into contact with each other and shared words. And the fact that a word is found in more than one language does not mean its identity is “language-independent” either, it just means there is an even wider range of linguistic contexts in which it can be used and understood.
After this less than compelling argument for words as “memes,” Dennett goes in a somewhat different direction:
Words have one feature that has a key role in the accumulation of human culture: They are digitized. That is, norms for their pronunciation permit automatic—indeed involuntary—proofreading, preventing transmission errors from accumulating in much the way the molecular machines that accomplish gene replication do.
But these norms and the automatic correcting Dennett is talking about are not features of individual words; they come from the symbolic system of a language. Of course, he knows this, and writes:
… when you acquire language, you install, without realizing it, a Virtual Machine that enables others to send you not just data, but other virtual machines, without their needing to know anything about how your brain works.
Dennett’s computer analogy, the “Virtual Machine,” is the symbolic system of a particular language. Again we come back to context; the meaning of a word changes with the context in which it appears, with time, and from place to place. This may seem like a trivial observation, but I make it repeatedly because it is the fact which most obviously challenges the idea of discrete, self-replicating units of culture.
One of the goals of the meme concept is to unify culture and biology by attempting to demonstrate that natural selection governs not only biological evolution, but the cultural process as well. Dennett writes in Consciousness Explained:
Meme evolution is not just analogous to biological or genetic evolution, not just a process that can be metaphorically described in these evolutionary idioms, but a phenomenon that obeys the laws of natural selection exactly. The theory of evolution by natural selection is neutral regarding the differences between memes and genes; these are just different kinds of replicators evolving in different media at different rates. (202)
This of course depends on whether “memes” exist (in the kind of concrete, material sense in which the language used to talk about them suggests they exist). But Dennett cleverly dodges the demand to prove this existence by instead suggesting that genes might not be as concrete as we tend to think:
Genes, according to George Williams (1966, p. 25) are best seen as the information carried by the nucleotide sequences, not the nucleotide sequences themselves, a point that is nicely echoed by such observations as these: A promise or a libel or a poem is identified by the words that compose it, not by the trails of ink or bursts of sound that secure the occurrence of those words. Words themselves have physical “tokens” (composed of uttered or heard phonemes, seen in trails of ink or glass tubes of excited neon or grooves carved in marble), and so do genes, but these tokens are a relatively superficial part or aspect of these remarkable information structures, capable of being replicated, combined into elaborate semantic complexes known as sentences, and capable in turn of provoking cognitive, emotional, and behavioral responses of tremendous power and subtlety.
I’m no geneticist, but I’m fairly certain that a nucleotide sequence is not merely a superficial token, arbitrarily related to the information it carries, the way that a word (as a symbol) is arbitrarily related to its referent. The information is literally embodied in the nucleotide sequence. Nonetheless, this sleight of hand on Dennett’s part is critical to advancing his argument past questions of the meme’s existence.
Now, taking the existence of memes for granted, the next step is to argue that their “selection” is due to their own fitness – they may or may not enhance the reproductive fitness of their hosts. He contrasts this with the “traditional wisdom – ‘common sense’ – according to which culture is composed of various valuable practices and artifacts, inherited treasures, in effect, that are recognized as such (for the most part) and transmitted deliberately (and for good reasons) from generation to generation.” He writes:
The key improvements, then, of the memetic perspective are its recognition that:
1. Excellently designed cultural entities may, like highly efficient viruses, have no intelligent design at all in their ancestry.
2. Memes, like viruses and other symbionts, have their own fitness. Those that flourish will be those that better secure their own reproduction, whether or not they do this by enhancing the reproductive success of their hosts by mutualist means.
“Memes” and Dennett’s ‘Intentional Stance’
If the tautology in number 2 above, (“those that flourish will be those that better secure their own reproduction”), makes “memes” sound an awful lot like intentional agents, a look at Dennett’s philosophy should explain why. He wrote a book called The Intentional Stance in 1987, but he has been working with the idea it contains for the last four decades. Here’s how he describes it in this summary of ‘Intentional Systems Theory’ from the Oxford Handbook of the Philosophy of Mind:
The intentional stance is the strategy of interpreting the behavior of an entity (person, animal, artifact, whatever) by treating it as if it were a rational agent who governed its ‘choice’ of ‘action’ by a ‘consideration’ of its ‘beliefs’ and ‘desires.’(1)
But Dennett is pretty clear that this is more than just a strategy; he tells us that “anything that is usefully and voluminously predictable from the intentional stance is, by definition, an intentional system.”
Where on the downward slope to insensate thinghood does ‘real’ believing and desiring stop and mere ‘as if’ believing and desiring take over? According to intentional systems theory, this demand for a bright line is ill-motivated.(7)
Seeking one’s own good is a fundamental feature of any rational agent, but are these simple organisms seeking or just ‘seeking’? We don’t need to answer that question. The organism is a predictable intentional system in either case. (9)
It’s one thing to argue that taking the intentional stance might help us describe certain aspects of the cultural process; when we consider how the internal logic of a system of symbols may predict the behavior of individuals and groups for whom that system is important, this is in some sense what we are doing. But Dennett’s claim that cultural evolution is governed by natural selection is dependent on a much more generous application of this kind of thinking – chopping the cultural process into little pieces and treating them as intentional agents (their intention being simply to replicate themselves).
The memetic view contains two intentional systems: the intention of the meme is to replicate itself at any cost, while the intention of the host remains to replicate its genetic material. There are memes that help this process and memes that hurt it, and a multitude of more or less neutral cultural trappings in which we are dressed along the way toward death. A meme is classified as a parasite, mutualist, or commensal, based on its effects on the reproductive fitness of its host. But why should we be committed to such an impoverished view of our existence? By Dennett’s own account, culture transformed our species, just as it transforms each new member that acquires it. He writes in Darwin’s Dangerous Idea:
… it cannot be “memes versus us,” because earlier infestations of memes have already played a major role in determining who or what we are. The “independent” mind struggling to protect itself from alien and dangerous memes is a myth.
It is no accident that the memes that replicate tend to be good for us, not for our biological fitness…, but for whatever it is we hold dear. And never forget the crucial point: the facts about whatever we hold dear – our highest values – are themselves very much a product of the memes that have spread most successfully. ( 364-365)
We truly are cultural beings; our own intentional states are inseparable from the historical, symbolic process that happens inside our brains. To be fair, Dennett knows how much context matters. He wrote in this piece from 1998 that “the environments that embody the selective pressures that determine [memes’] fitness are composed in large measure of other memes.” But trying to explain culture by asking, as he suggests, “the cui bono question” (who benefits?), and answering, “our memes,” means ignoring a critical, if easy to miss, fact revealed in the quote above: we are a species that has values and holds things dear. Dennett clearly values “mutualistic memes” – the culturally driven development of “technology and intelligence” which has made surviving so much easier for our species – but seems not to fully appreciate the observation anthropologist Clifford Geertz makes in his essay, ‘Ethos, Worldview, and the Analysis of Sacred Symbols,’ (1957):
The drive to make sense out of experience, to give it form and order, is evidently as real and as pressing as the more familiar biological needs. And, this being so, it seems unnecessary to continue to interpret symbolic activities-religion, art, ideology-as nothing but thinly disguised expressions of something other than what they seem to be: attempts to provide orientation for an organism which cannot live in a world it is unable to understand.
‘Traditionalists’ made of straw
Dennett makes a point of claiming that what he considers a naive, “economic model” of culture, “where possessions, both individual and communal, are preserved, repaired, and handed down,” “is for the most part uncritically adopted by cultural historians, anthropologists, and other theorists.” He believes the fact that “many of our most valuable cultural treasures have no identifiable author and almost certainly were cobbled together by many largely unwitting minds over long periods of time” presents the “traditionalist” with a serious problem:
Nobody invented words or arithmetic or music or maps or money. These apparent exceptions to the traditional model are typically not seen as a serious problem. The requirement of intelligent authorship can be maintained by distributing it over indefinitely many not-so-intelligent designers whose identities are lost to us only because of gaps in the “fossil record” of culture. We can acknowledge that many of the improvements accumulated over time were “dumb luck” accidents that nevertheless got appreciated and preserved. With these concessions, the traditionalist can avoid acknowledging what ought to seem obvious: These excellent things acquired their effective designs the same way plants and animals and viruses acquired theirs—they evolved by natural selection, but not genetic natural selection.
First, (whether or not anyone cares about maintaining “intelligent authorship”), what Dennett labels here as “concessions” actually describe the cultural process better than just saying “they evolved by natural selection”; how, after all, can he claim that this evolution happens “the same way,” when the crucial set of facts is of a radically different (symbolic) nature? Second, it’s possible for an individual to have an impact on the collective cultural process without intending to and without being aware of it – I don’t see how this scenario supports Dennett’s claims. But most importantly, I return to Geertz, because his view of culture is conspicuously not like the “traditionalist” straw man Dennett sets up. He wrote this in ‘Religion as a Cultural System,’ (published 10 years before Dawkins invented the “meme”):
So far as culture patterns, that is, systems or complexes of symbols, are concerned, the generic trait which is of first importance for us here is that they are extrinsic sources of information. By “extrinsic,” I mean only that–unlike genes, for example–they lie outside the boundaries of the individual organism as such in that intersubjective world of common understandings into which all human individuals are born, in which they pursue their separate careers, and which they leave persisting behind them after they die. By “sources of information,” I mean only that–like genes–they provide a blueprint or template in terms of which processes external to themselves can be given a definite form. As the order of bases in a strand of DNA forms a coded program, a set of instructions, or a recipe, for the synthesis of the structurally complex proteins which shape organic functioning, so culture patterns provide such programs for the institution of the social and psychological processes which shape public behavior.
Notice that “to make sense out of experience” and “provide orientation,” it is not necessary for “systems or complexes of symbols” to be some carefully curated set of goods. Nor does Geertz assume that culture patterns will necessarily be positive, or work to enhance genetic or reproductive fitness – it’s possible for an order-creating system to be reprehensible and disastrous, (take Nazi ideology for example). It’s a shame that Dennett pretty much discounts the work of all previous cultural theorists for the sake of a rhetorical device, rather than at least attempting to use someone like Geertz as a jumping-off point.
But Geertz’s work gets at the heart of the problem with memetics; it may function as an explanation for the phenomenon of culture, but I think any attempt to use it in a robust analysis of empirical events must involve dropping its most characteristic feature – the idea of individual units of culture attempting to replicate at any cost and governed by natural selection. The following comes from ‘Person, Time, and Conduct in Bali’(1966):
One cannot run symbolic forms through some sort of cultural assay to discover their harmony content, their stability ratio, or their index of incongruity; one can only look and see if the forms in question are in fact coexisting, changing, or interfering with one another in some way or other, which is like tasting sugar to see if it is sweet or dropping a glass to see if it is brittle, not like investigating the chemical composition of sugar or the physical structure of glass. The reason for this is, of course, that meaning is not intrinsic in the objects, acts, processes, and so on, which bear it, but–as Durkheim, Weber, and so many others have emphasized–imposed upon them; and the explanation of its properties must therefore be sought in that which does the imposing–men living in society. The study of thought is, to borrow a phrase from Joseph Levenson, the study of men thinking; and as they think not in some special place of their own, but in the same place–the social world–that they do everything else, the nature of cultural integration, cultural change, or cultural conflict is to be probed for there: in the experiences of individuals and groups of individuals as, under the guidance of symbols, they perceive, feel, reason, judge, and act.
Symbols, if they are to be understood, cannot be divorced from their function/use in creating order and meaning for individuals and groups. This is why, I believe, Dennett admits in the summary of chapter 12 of Darwin’s Dangerous Idea, that “the prospects for elaborating a rigorous science of memetics are doubtful,” though he maintains that “the concept provides a valuable perspective from which to investigate the complex relationship between cultural and genetic heritage.” From what I’ve read, none of Dennett’s claims which hold water are original, or require the memetic perspective.
I’ve tried my best to be thorough here, but it’s impossible to cover everything. As always, comments are welcome.
Posted on October 8, 2010 - by David
I am working directly from the unpublished text of Liah Greenfeld’s forthcoming book, Mind, Madness, and Modernity: The Impact of Culture on Human Experience. All the original ideas, and all interpretations and analysis of primary and secondary source materials used to support the ideas are attributable to Liah Greenfeld. Read the introduction to the exposition here.
part 3 – Madness: A Modern Phenomenon
In this last installment, we consider how Greenfeld’s theory of the mind makes it possible to see schizophrenia and manic-depressive illness (that is, major depression and bipolar disorder), which are usually considered distinct disorders, as diseases of the will, existing on a continuum of complexity of the will-impairment experienced.
Culture – the symbolic transmission of human ways of life – is an emergent phenomenon, a new reality with its own rules, that nonetheless operates within the boundary conditions of life. This symbolic reality is only alive, (the process can only occur), in individual brains, hence the understanding of the mind as “culture in the brain,” or “individualized culture.” As described in part 2, three important “structures” of the mind – (patterned and systematic symbolic processes which must be supported by corresponding patterned and systematic processes in the brain) – are identity, will, and the thinking self.
Identity – the relationally constituted self – is always a reflection of a particular cultural environment. Greenfeld hypothesizes that the lack of direction given by modern culture makes the once relatively simple process of identity formation much more complicated. A well-formed identity is able to subjectively rank the choices present at any moment, giving the will, (or acting self), a basis for decision-making. It follows then that problems with identity formation lead to problems with the will. Malformation of identity and impairment of the will necessarily affect the functioning of the thinking self (the “I of self-consciousness”) – the part of the mind which is explicitly symbolic in the sense that it operates with formal symbols – above all, language. The thinking self may become fixed on questions of identity; it may have to stand in for the will, when a person has to talk him/herself into acting in situations which normally wouldn’t require self-conscious reflection (e.g. going to the bathroom, eating, getting out of bed); or in the most severe cases, the thinking self may become completely disconnected from individualized culture, in which case all the cultural resources of the mind range free, without direction from identity and will.
The experiences of those who suffer from mental illness begin to make sense within this framework. In major depression, the will is impaired in its motivating function – the ability to force oneself to act or think as one would like to, or as would seem appropriate, is severely lessened. The mind at this stage remains individualized and one has a definite, though distorted and painful, sense of self. The thinking self becomes negatively obsessed with identity, and an incredible dialogue of self-loathing thoughts takes hold. It is insufferable to be oneself, and death naturally suggests itself as the only possibility of escape. Tragically, as we all know, many depressed people do take their own lives; yet for many, even the will to take this action is not present. In bipolar disorder, the impairment of the motivating function of the will in depression mixes with the impairment of its restraining capacity in mania. One can neither move oneself in the desired direction nor restrain one’s thoughts and actions from running in every direction. The negative self-obsession of depression (which can still be justifiably considered delusional) alternates with (the often more noticeable to the outside observer) grandiose and exalted self-image and beliefs. Mania can either cycle back to depression or, through delusional tension, develop into acute psychosis.
The most characteristic symptoms of schizophrenia – hallucinations and elaborate delusions – are usually preceded by a prodrome which bears significant resemblance to certain aspects of depression and mania. This is often a period of social withdrawal, when the experience of the outside world seems to move from a sense of unreality to a sense of the profound yet ambiguous meaningfulness of all things. In healthy minds, identity provides a relatively stable image of the cultural world and the individual’s place in it, and thus the will directs thought and action towards relevant goals. Naturally, at each moment much of the environment is overlooked so that attention can be focused where it should be. In the prodrome, however, the thinking self becomes fixated on mundane aspects of reality, and things in the environment which are usually taken for granted become alternately senseless or imbued with special significance. This experience of the world as incomprehensible and inconsistent suggests a serious problem with identity. The will, (which in healthy cases is a largely unconscious process directed by identity), gets put on the shelf, so to speak, and the thinking self takes on the task of trying to piece together this unreal or hyperreal outside world.
The prodrome is usually only identified after the fact, since it is the appearance of hallucinations and delusions which allows the illness to be diagnosed as schizophrenia. Delusions, (often also present in patients diagnosed with bipolar disorder), are the best known feature of schizophrenia. We can understand delusion as the inability to separate subjective and objective realities, or, put another way, the inability to distinguish between the cultural process on the individual level (the mind) and culture on the collective level. Thus internally-generated experiences are mistakenly thought to have originated outside. The elaborate delusions described by schizophrenic patients can be seen as a kind of rationalization of the experience of acute psychosis. It is important to distinguish between delusional accounts of the acutely psychotic phase, given after the fact in moments of relative self-possession, and the experience itself. In the midst of acute psychosis, a person is almost always incommunicative. Descriptions of this stage often mention the loss of the sense of self, as well as the sense of being watched by an external observer. The mental process, no longer individualized, is beyond willed control. Schneider’s first-rank symptoms, such as the belief that thoughts are extracted or implanted and that physical sensations and actions are controlled by an external force, clearly point to the experienced loss of will which runs underneath so many schizophrenic delusions. The sense of an alien presence is explained by the continued processing of the thinking self even after identity and will have (if only temporarily) disintegrated. Lacking this individualized direction, the “I of self-consciousness” becomes the “eye of unwilled self-consciousness” – the defenseless sufferer necessarily experiences this free-ranging cultural process as foreign, and quite possibly terrifying, because it is beyond his control.
The formal abnormalities of thought which were so important to Eugen Bleuler’s diagnosis of schizophrenia also fit into the cultural framework. Schizophrenics are often unable to privilege conventional, socially-accepted associations in thought. Most of the time in our modern societies, normal associations follow the rules of logic, (in the strict sense of Aristotelian logic based on the principle of non-contradiction). (However, it must be noted that logic is an historical, thus cultural phenomenon, so the inability to think logically should not be taken as evidence of brain malfunction). Of course, depending on the context, some other logic may be culturally appropriate, and arbitrating between contextual logics is one of the primary ways that the will directs thought. In schizophrenia, though, with the will impaired, thought is unanchored to any of these logics, and seems to jump from one to another at random. This becomes most evident in the use of language, which seems to speak itself, flowing without direction and often tied together by the sonic qualities of words or connections in meaning which would usually be overlooked as irrelevant. While the use of language will necessarily depend on the particular cultural resources present in the individual’s mind, it is impersonal in the sense that it draws its life from the associations inherent in language itself, rather than associations pertinent to individual identity or the objective cultural context.
Not only does Greenfeld’s continuum model better account for the huge overlap between the illnesses as currently defined, it also allows us to pay closer attention to movement along this continuum throughout the course of an individual’s illness. While anomie is presumed to be the initial cause of mental illness early in life through interference with identity formation, the various swings on the spectrum may become more comprehensible when we consider what is happening to the individual at the time when the change in symptoms occurs. It is possible that specifically anomic situations may lead to shifts in the already existing illness. (These considerations are explored in Greenfeld’s analyses of the well-publicized cases of John Nash, (Nobel Prize winner in economics), and Kay Redfield Jamison, co-author of the authoritative book on manic-depressive illness.)
The focus on the symbolic, mental processes at work in these “diseases of the will” should not be misunderstood as in any way taking away from the biological reality of major mental illness. Just as the activity of healthy minds corresponds to certain brain activity, so the abnormal processes of a sick mind would be expected to correspond to atypical patterns of brain function. Neither does the hypothesis that mental illness has a cultural rather than biological cause ignore potential genetic conditions that might make certain individuals more vulnerable than others. In fact, it is possible that mechanisms of interaction between culture and genes may become known with continued research in epigenetics – the study of changes in gene expression not caused by changes to the underlying DNA sequence. Some have already hypothesized that gene-environment interaction may lead to epigenetic changes that are central to the expression of mental illness. Of course, unless epigenetic research is specifically designed to take the symbolic nature of the environment into account, it will probably do little to help us to better understand mental disease and the mental process in general.
Part 1 of the exposition looks at the mind/body problem which has stood at the center of Western philosophy for over 2000 years, and considers Greenfeld’s proposed resolution – a three-layer view of reality (matter, life, and culture/mind) in which the top two are emergent phenomena. Greenfeld credits Charles Darwin with making it possible to view the world in terms of emergent phenomena, which in turn makes possible her theory of culture and the mind which can put the mind/body question to rest. At the same time, she exposes the historical roots of the dogmatic bias of science (as it is normally practiced) towards materialism, and dismisses the notion that science has (or can) in any way empirically prove this position, thereby maintaining that there is no inherent conflict between faith and rigorous empirical study.
In part 2, the proposed solution to the dualist problem is developed – culture is a symbolic process emergent from biological phenomena and operating within the boundary conditions of life, yet fundamentally autonomous and governed by a different set of rules. As life organizes the matter out of which it is composed into unlikely patterns, so the symbolic process of culture organizes the brain, (which at all times both supports and provides the boundary conditions for the process), to suit its own needs. Greenfeld logically deduces that the point of emergence for culture and the mind must have been the moment vocal signs were first intentionally articulated, and became symbols. The internalization of this intention creates the mental structure of the will. Yes, this means that in a single moment, culture, the mind, and “free will” as we know it appear together, forever separating homo sapiens from all other animal species and making humanity a reality of its own kind. This view of culture, as a symbolic process which structures not only social life but individual minds, has radical implications for the many disciplines which study the various aspects of humanity. This view also demands the attention of neuroscience, which will remain purely descriptive and not gain any ground in the attempt to understand and explain “consciousness” until it takes into account the symbolic reality – by far the most important aspect of the human environment.
Part 3 reiterates the ideas about nationalism developed in Greenfeld’s first two books and takes things a step further. She identifies nationalism, a fundamentally secular consciousness based on the principles of popular sovereignty and egalitarianism, as the defining element of modernity, responsible for massive changes in the nature of human experience. More specifically here, she claims that love, ambition, and madness as we know them today emerged out of this new consciousness in 16th century England and spread from there to other societies that adopted and adapted the nationalist culture.
Part 4 challenges the current psychiatric dogma that manic-depressive illness and schizophrenia are distinct illnesses with biological causes. The need to rethink this distinction is evidenced by the high degree of overlap in symptoms between the two conditions and the failure to find consistent functional or structural brain abnormalities which would allow for accurate differential diagnosis. Not only have genetic researchers been unable to find individual genes that cause schizophrenia or mdi, their best work suggests a shared vulnerability to both illnesses. Epidemiological data seems to show that mental illness occurs at greater rates in modern nations with Western-derived culture, and studies within these nations suggest that the upper classes (i.e. those individuals who fully experience the openness of society and have the greatest number of choices) are particularly affected. Both of these findings are consistent with Greenfeld’s hypothesis that anomie causes mental illness. Nevertheless, this data is consistently ignored or rejected as flawed, since it flies in the face of the currently accepted notion of mental illness as biologically caused and uniformly spread across cultures and throughout history. Likewise, the fact that no genetic cause of mdi or schizophrenia has been found has done little to shake the faith that such a cause will one day be found. Unfortunately, this systemic materialist bias can only continue to impede progress in the understanding of these fatal conditions.
The theoretical view of mental illness as ultimately stemming from problems with the formation of identity is a new one, and thus it does not come packaged with some ingenious cure. However, the clear implication is that something must be done to help individuals in anomic modern societies to create well-formed identities. Since this process begins very early in childhood, the intervention must begin then as well. Educating children about the multitude of choices they will face in their extremely open environment, and alerting them to the presence of the many competing and often contradictory cultural voices vying for their attention, would become priorities. We should also be cautious (as the recent work of people like Ethan Watters suggests) of the potential side effects of exporting our culture to other societies.
While this exposition is in some sense finished, there is much more to say, and I will continue exploring these ideas and comparing them with other perspectives in my future posts. I realize this work is controversial, and can be difficult to take in all at once. Please, if any part (of the whole) of this seems unclear, unsupported, or simply outrageous, ask a question or give your critique. I’m eager to hear what others have to say.
Posted on October 1, 2010 - by David
I am working directly from the unpublished text of Liah Greenfeld’s forthcoming book, Mind, Madness, and Modernity: The Impact of Culture on Human Experience. All the original ideas, and all interpretations and analysis of primary and secondary source materials used to support the ideas are attributable to Liah Greenfeld. Read the introduction to the exposition here.
part 3 – Madness: A Modern Phenomenon
With all that has been written about schizophrenia and manic-depressive illness, the countless studies that have been conducted, and the growing list of medications used in treatment, it would be easy to mistakenly assume that we now understand the nature and cause of these ailments. The history of the separation of psychoses of unknown cause into these two categories leads us to Emil Kraepelin (1856-1926). This German psychiatrist believed that these were heritable brain diseases, and he led a revolution in classification in German-language psychiatry around the turn of the twentieth century, trying to discover just what kind of brain diseases he was dealing with. Kraepelin used a Latin version (dementia praecox) of the French term démence précoce (coined in 1852 by Bénédict Morel), to distinguish a form of insanity with an early onset and rapid development from the common geriatric dementia. Kraepelin then separated dementia praecox from manic-depressive insanity (called by the French folie circulaire or folie à double forme). Up until that point, the two conditions were believed to constitute one general category of insanity.
Kraepelin’s use of the term dementia praecox, which suggested a progressive slowing of mental processes, to refer to a condition characterized largely by delusions and hallucinations (which imply not mental lethargy but imaginative hyperactivity) may have contributed to the misinterpretation of schizophrenia, still common today, as a degeneration of cognitive/reasoning capacities. The evidence suggests that it is rather the strange character of thought, the inability to think in normal, commonly accepted ways, which distinguishes schizophrenia from geriatric dementia. The name “schizophrenia” (meaning “splitting of the mind”) was introduced to replace dementia praecox in 1908 by the Swiss psychiatrist Eugen Bleuler. Bleuler saw the disease mainly in terms of four features: abnormal thought associations, autism (self-centeredness), affective abnormality, and ambivalence (inability to make decisions). Then in the 1930s, another German psychiatrist, Kurt Schneider, contributed greatly to the diagnosis of schizophrenia by identifying “first-rank symptoms,” primarily related to hallucinations and delusions. Hearing voices speak one’s thoughts aloud, discuss one in the third person, and describe one’s actions; feeling that an outside force is controlling one’s bodily sensations or actions and extracting, inserting, or stopping thoughts; believing that one’s thoughts are “broadcast” into the outside world – these are some of the experiences which Schneider found to be characteristic of the illness which Bleuler had recently renamed.
It should be noted that although Schneider’s first-rank symptoms are essentially psychotic symptoms (and schizophrenia is by definition a psychotic illness), very often those diagnosed with schizophrenia do not experience these symptoms. Diagnostic standards today distinguish between positive symptoms (symptoms like hallucinations and delusions, which are not present in healthy individuals) and negative symptoms (e.g., blunted affect, lack of fluent speech, inability to experience pleasure, lack of motivation). Anti-psychotic medications are often effective in treating some of the positive (i.e., psychotic) symptoms of schizophrenia, but attempts to alleviate negative symptoms with medication have been largely unsuccessful, and the prognosis tends to be worse for sufferers who experience primarily negative symptoms.
By far the most authoritative and extensive work (over 1200 pages long) on that other half of madness is Manic-Depressive Illness: Bipolar Disorders and Recurrent Depression, written by Drs. Frederick Goodwin and Kay Redfield Jamison. The subtitle (Bipolar Disorders and Recurrent Depression), added for the second edition (published in 2007), emphasizes the essential unity of all the major affective illnesses. In the introduction, the authors stress their reliance on Kraepelin’s model for their own conceptualization of mdi. (They, like Kraepelin, see it as a brain disease with genetics playing a significant causal role.) But because Kraepelin’s major act of classification was to divide psychotic illness into two distinct disorders, any definition of mdi based on his work depends on having a clear definition of schizophrenia, which is clearly lacking. Kraepelin’s distinction between the two was based primarily not on differences in symptoms, but on course of illness and outcome, with schizophrenia (or in his terminology, dementia praecox) being much more malignant and causing significant deterioration over time. It was in fact Eugen Bleuler who first called mdi an “affective illness,” not because schizophrenia occurred without major mood disturbance, but because in mdi he saw mood disturbance as “the predominant feature.” This characterization has proven to be extremely important for the current conception of major mental illness; the original distinction between two psychotic illnesses has largely been obscured, and mdi is now viewed essentially as a mood disorder, with schizophrenia, by contrast, appearing to be essentially a thought disorder.
Though manic-depressive illness includes a variety of mood disorder diagnoses, the main distinction is between major depression and bipolar disorder (alternating episodes of depression and mania). A few decades ago, the bipolar label was split into bipolar-I and bipolar-II. Bipolar-I is the severe form of the disease in which both depressive and manic episodes are serious enough to require treatment. A diagnosis of bipolar-II may be given when a patient suffers from major depressive episodes and also experiences “hypomanic” episodes (meaning basically “mildly manic” and therefore lacking psychotic features). Even Goodwin and Jamison seem skeptical of the value of this and other divisions in classification.
In order to compare manic-depressive illness with schizophrenia, then, we should concentrate on descriptions of (go figure) depression and mania. According to the DSM-IV, typical symptoms of depression include “loss of interest or pleasure in nearly all activity,” irritability, “changes in appetite and weight, sleep, and psychomotor activity; decreased energy; feelings of worthlessness or guilt; difficulty thinking, concentrating, or making decisions; [and] recurrent thoughts of death or suicidal ideation, plans, or attempts.” The description given by Goodwin and Jamison is along the same lines, though much more vivid:
Mood in all of the depressive states is bleak, pessimistic, and despairing. A deep sense of futility is often accompanied, if not preceded, by the belief that the ability to experience pleasure is permanently gone. The physical and mental worlds are experienced as monochromatic, as shades of gray and black. Heightened irritability, anger, paranoia, emotional turbulence, and anxiety are common. (MDI 66)
Further descriptions from patients and clinical observers add more layers to this general body of symptoms; among the most interesting are a lack of facial expression and a sometimes frightening sense of unreality. It is quite clear that depression is something altogether different from normal sadness, and even from “abnormally low mood.” These descriptions show a huge variation in the level of emotion experienced, from almost no feeling at all to unbearably acute anxiety. A depressed person’s thinking may be slowed almost to the point of paralysis, or he may instead be unable to control an unending torrent of painful thoughts. All that seems consistent across descriptions and definitions of depressive episodes is that the experience is extremely unpleasant.
There is such a diagnosis as psychotic depression (featuring obvious delusions and hallucinations, in which case it is not clear how it can be diagnosed differently from schizophrenia), but even in its more ordinary form, many of the symptoms of depression cannot be easily distinguished from the negative symptoms of schizophrenia, which include flat affect and paralyzed thought. And what good reason is there not to consider the firm belief in one’s utter worthlessness, the obsession with death, and the sense of the absolute necessity of ending one’s life as instances of delusion or thought disorder?
Just as depression is not just extreme sadness, mania is not an exaggerated form of joy. According to the DSM-IV, a manic episode is a period of “abnormally and persistently elevated, expansive, or irritable mood,” with typical symptoms being “inflated self-esteem or grandiosity, decreased need for sleep, pressure of speech, flight of ideas, distractibility, increased involvement in goal-directed activities or psychomotor agitation, and excessive involvement in pleasurable activities with a high potential for painful consequences.” To be considered a manic (rather than merely “hypomanic”) episode, “the disturbance must be sufficiently severe to cause marked impairment in social or occupational functioning or to require hospitalization, or it is characterized by the presence of psychotic features.” Mood within a manic episode may be highly variable, and the frequent alternation between euphoria and irritability is often noted.
Grandiose delusions are common – the extreme expression of the inflated sense of self-importance so typical of mania. (Again, one wonders why the beliefs which spring from the typical sense of worthlessness in depression – the polar opposite of the grandiose beliefs in mania – should not be considered delusions as well.) Grandiosity often manifests in compulsive writing, which the sufferer may believe has special significance but which is usually characterized by “flight of ideas” and “distractibility.” This behavior is not unique to mania, and has been well documented in patients diagnosed with schizophrenia.
Delusions may be not only grandiose but (as in schizophrenia) paranoid as well. In some severe cases, the sufferer may reach the stage of delirious mania, which the authors of MDI describe by quoting Kraepelin:
At the beginning the patients frequently display the signs of senseless raving mania, dance about, perform peculiar movements, shake their head, throw the bedclothes pell-mell, are destructive, pass their motions under them, smear everything, make impulsive attempts at suicide, take off their clothes. A patient was found completely naked in a public park. Another ran half-clothed into the corridor and then into the street, in one hand a revolver in the other a crucifix….Their linguistic utterances alternate between inarticulate sounds, praying, abusing, entreating, stammering, disconnected talk, in which clang-associations, senseless rhyming, diversion by external impressions, persistence of individual phrases are recognized. …Waxy flexibility, echolalia, or echopraxis can be demonstrated frequently. (36)
The descriptions of delirious mania provided by recent clinicians are similar to Kraepelin’s. Quite obviously, a patient in the condition described above is suffering from some of the most characteristic symptoms of schizophrenia. Of course, for those following in Kraepelin’s footsteps, this similarity should come as no surprise, since (as was mentioned earlier) his distinction between the two psychotic disorders was not based on differences in symptoms. Indeed, the need to clarify the blurry boundary between psychotic mania and schizophrenia has resulted not in further distinction, but in the creation of hybrid diagnostic categories like schizoaffective disorder, bipolar type. In summarizing the findings of a number of studies over a thirty-year span comparing thought disorder in schizophrenia and mania, Goodwin and Jamison are forced to conclude that there is no quantitative difference in thought disorder between the two conditions. Nevertheless (needing to maintain the distinction between their area of expertise and the even more mysterious realm of schizophrenia), they maintain that there are qualitative differences in thought disorder, though the studies used to support this claim point in a number of different directions. Of course, these studies were done only after patients received a particular diagnosis, so differences in thought disorder may also have been related to the effects of different medications. After considering the huge overlap between these two diagnoses, and the fact that the differences seem to be of degree rather than kind, it seems possible that they are not two distinct diseases after all.
While the technological advancements of recent decades allow us to map the human genome and look at the brain on the molecular level, the enormous amount of data that has been amassed is virtually useless for psychiatrists trying to diagnose their sick patients because the assumed biological causes of schizophrenia and manic-depressive illness have not been found. No brain abnormalities that are specific to either illness or present in all cases have been identified. Nevertheless, the experts who study and treat schizophrenia and mdi keep the faith (quite literally) that a breakthrough is just around the corner.
For years, genetic research has appeared to be the most promising of the recently opened avenues, but the excitement seems unwarranted by the findings. The relatively large number of chromosomal regions which may be implicated in susceptibility to bipolar disorder means that the hope of finding a specific bipolar gene, or even a small number of genes, must be given up. Some researchers think the way forward is to narrow the search by looking for genes associated with specific aspects of the disease. Of course, this further refinement is only possible because of the huge variation in symptoms and experiences of those who fall under the mdi/bipolar umbrella, and we are once again reminded of the difficulty of defining what this illness or group of illnesses even is. Furthermore, even the distinction between schizophrenia and mdi seems to collapse in light of the genetic linkage data. Goodwin and Jamison write:
While the search for predisposing genes had traditionally tended to proceed under the assumption that schizophrenia and bipolar disorder are separate disease entities with different underlying etiologies, emerging findings from many fields of psychiatric research do not fit well with this model. Most notably, the pattern of findings emerging from genetic studies shows increasing evidence for an overlap in genetic susceptibility across the traditional classification categories. (49)
Genetic studies in the schizophrenia research community lead to pretty much the same hypothesis as with bipolar disorder: genetic susceptibility is most likely polygenic, meaning that it depends on the combined effect of many genes, each of which may contribute some vulnerability. It must be noted that genetic vulnerability is a condition, not a cause, of schizophrenia and bipolar disorder – something else must be acting on this vulnerability. In one way or another, this fact is usually noted in the literature that deals with genetic data, but it is often obscured by a tone of confidence which suggests the information may be more meaningful and explanatory than it truly is.
Even when a specific gene has been well studied across illnesses, its usefulness in understanding genetic susceptibility may be extremely limited. Some studies in both schizophrenia and mdi have found an increased risk of illness for those who possess the short form of the promoter region of the serotonin transporter gene (5-HTT). The thing is, each of us has two copies of this gene, and over two-thirds of us have one long and one short form, meaning that the most common genotype is itself the “risk” genotype! If most of us possess a gene which puts us at risk for an illness which only a small minority of people have, then this particular trait is obviously not much of a causal explanation.
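The force of this objection is easy to check with back-of-the-envelope arithmetic. The sketch below uses purely illustrative numbers – an assumed ~1% lifetime prevalence, a ~67% carrier rate, and a modest hypothetical relative risk – not figures from the studies under discussion:

```python
# Illustrative calculation (all numbers are assumptions, not data from the
# studies discussed): if a "risk" variant is carried by most of the
# population, it tells us almost nothing about who will fall ill.

prevalence = 0.01        # assumed lifetime prevalence of the illness (~1%)
carrier_fraction = 0.67  # assumed fraction carrying the "risk" genotype
relative_risk = 1.2      # assumed (modest) elevated risk for carriers

# Solve for absolute risks, given:
#   carrier_fraction * r_carrier + (1 - carrier_fraction) * r_noncarrier = prevalence
#   r_carrier = relative_risk * r_noncarrier
r_noncarrier = prevalence / (carrier_fraction * relative_risk + (1 - carrier_fraction))
r_carrier = relative_risk * r_noncarrier

print(f"risk for carriers:     {r_carrier:.4f}")
print(f"risk for non-carriers: {r_noncarrier:.4f}")
print(f"carriers who never develop the illness: {1 - r_carrier:.1%}")
```

Even granting carriers a 20% elevated risk under these assumed numbers, roughly 99% of carriers never develop the illness – the variant sorts almost nobody.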
Still today, the most important evidence for the heritability of schizophrenia and bipolar disorder comes from traditional genetic-epidemiological studies – “genetic” research only in the sense that we know that relatives share genes. There is a significantly greater lifetime risk of illness for people with a first-degree relative who suffers from schizophrenia, and studies of bipolar disorder and major depression (i.e., manic-depressive illness) have had parallel findings. However, the overwhelming majority of schizophrenics do not have parents or first-degree relatives with schizophrenia, and most of them do not have children themselves, making it difficult to establish the genetic component by looking at family history in a large percentage of cases.
Studies of twins are particularly important for the heritability argument. Calculations from these studies find a 63% risk of having bipolar disorder if an identical (monozygotic) twin has it. The risk for major depression is significantly lower. In schizophrenia the risk is under 50%. The ideal study design for attempting to separate the contributions of biology and environment involves identical twins, separated at birth, adopted, and raised apart, with at least one of them suffering from mental illness. As can be imagined, these cases are hard to come by (4 in mdi and 14 in schizophrenia), and the small number of cases makes generalization suspect (though generalizations are often still made). Another method, for which there is significantly more data, is to compare the risks of identical (monozygotic) and fraternal (dizygotic) twins. Because both kinds of twins are assumed to share the same environment, but fraternal twins only share 50% of their genes, the difference in risk between fraternal and identical twins is attributed to genetics. But this method depends on an extremely limited understanding of environment, reducing it to simply having the same parents. It’s likely that identical twins would be treated in very similar ways by their parents and society at large, but fraternal twins, being biologically different (perhaps even in gender) will likely be treated in very different ways. Therefore, it is highly doubtful that twin studies are able to separate the contributions of biology and environment to lifetime risk of mental illness to anywhere near the degree that is suggested. The fact that over one-third of identical twins are not affected by the disease from which their twin suffers reveals again that genetic susceptibility is at most a condition, and not a cause of schizophrenia and mdi.
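The logic of the twin comparison criticized above can be stated compactly. What follows is the classical Falconer approximation – a standard textbook formula, not one taken from Goodwin and Jamison – and writing it out makes the point visible: the estimate is only as good as the equal-environments assumption built into it.

```latex
% Classical (Falconer) estimate of heritability from twin data.
% r_MZ, r_DZ: correlations (or concordances) for identical and fraternal twins.
% MZ twins share ~100% of segregating genes, DZ twins ~50%;
% both kinds are ASSUMED to share their environment to the same degree.
h^2 \approx 2\,\left(r_{MZ} - r_{DZ}\right)
% If identical twins in fact receive more similar treatment than fraternal
% twins (as argued above), part of r_MZ - r_DZ is environmental in origin,
% and h^2 is inflated accordingly.
```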
The prevailing assumption that schizophrenia and mdi have biological causes naturally leads to the expectation of finding them distributed uniformly across cultures and throughout history. In the case of schizophrenia, this belief justifies the adoption of the standard worldwide lifetime risk of 1% (a nice round number), extrapolated from an embarrassingly small number of studies – one from Germany in 1928, and two from the 1940s in rural Scandinavian communities. However, there is a serious lack of evidence of the existence of these illnesses before the early modern period, and studies have consistently found significant differences in the rates of mental illness across cultures and between social classes within cultures. Nevertheless (perhaps because the idea that serious mental illness may affect different populations at different rates does not sit well with us), variations are often explained away with charges of inaccurate reporting and under- or over-diagnosis. But epidemiological studies sponsored by the World Health Organization carried out over several decades have found that the illness identified as schizophrenia in poorer, “developing” countries tends to be less chronic (fewer psychotic episodes), causes less disability, and has a better prognosis than schizophrenia in more affluent, “developed” societies. Some of the data from Western nations suggest a lifetime risk of schizophrenia greater than 1%, while in poorer societies the number often appears lower. Multiple studies have found the rate of schizophrenia among Afro-Caribbeans born in the UK to be higher than the prevalence in the islands from which their families immigrated. Both schizophrenia and mdi have been found to be less prevalent in Asian countries.
Overall, cross-cultural data supports the hypothesis that schizophrenia and mdi are diseases caused by modern culture – more specifically, that the more anomic a society becomes (i.e., the more identity becomes a matter of individual choice and the less guidance is given by culture), the more mental illness will be found. Research in the U.S. has shown a lower age of onset and higher rates of prevalence for manic-depressive illness in those born after 1944 compared to those born before, though this increase has been attributed to the inadequacy of earlier data-collection techniques, which systematically underestimated the true prevalence of affective disorders. Usually, when environment is allowed a causal role in mental illness, poverty and the stress of the urban environment are the safest targets to blame, with studies as early as 1939 finding a higher incidence of schizophrenia in lower-class, urban areas. However, when studies began to consider social class of origin rather than merely the status of the patient when the illness was first recognized, the picture changed significantly. The social mobility of schizophrenic patients displays a “downward drift,” suggesting that their greater proportion among the lower class is due to the disability of the disease rather than to the stress of that environment. Furthermore, it appears that the upper class supplies more schizophrenics than its share of the total population would predict. The majority of studies of manic-depressive illness show significantly lower rates in blacks compared to whites, but this, like so many other findings which make no sense within the biological framework, is dismissed for a variety of reasons as a mistake.
Finally, Goodwin and Jamison tell us that “the majority of studies report an association between manic-depressive illness and one or more measures reflecting upper social class.” (169) To explain this finding, they consider the possibility that certain personality traits associated with affective illness may contribute to a rise in social position. (One assumes they mean the occasionally “positive” aspects of mild mania, since it is unclear how crippling depression or delusional mania would aid in social climbing.) A second hypothesis, that manic-depressive illness could be related to the particular “stresses of being in or moving into the upper social classes,” is deemed simply “implausible, because it assumes that, compared with lower classes, there is a special kind of stress associated with being in the upper social classes, one capable of precipitating major psychotic episodes.” Furthermore, they accuse such a hypothesis of ignoring genetic factors, though this idea quite obviously does not require discounting genetic vulnerability as a condition for mdi.
By now it should be quite clear that the belief that major mental illness is caused biologically has made it virtually impossible to reconsider what the empirical evidence actually tells us. Each time the research that is supposed to support this belief comes up short, it becomes another occasion for the reaffirmation of faith in a soon-to-come breakthrough. Where the data appear to blatantly contradict the hypothesis, researchers often simply discount the data’s reliability. While many of the most important experts will freely admit how little we actually understand about mental illness despite all efforts, it is hard to imagine the direction of those efforts changing much anytime soon. This is not a recipe for scientific progress.
The final post of this series will bring Greenfeld’s theory of the mind together with what we know about schizophrenia and manic-depressive illness, considering the two as one disease existing on a continuum of complexity of will-impairment.
Posted on September 24, 2010 - by David
I am working directly from the unpublished text of Liah Greenfeld’s forthcoming book, Mind, Madness, and Modernity: The Impact of Culture on Human Experience. All the original ideas, and all interpretations and analysis of primary and secondary source materials used to support the ideas are attributable to Liah Greenfeld. Read the introduction to the exposition here.
“Identity-formation is likely to be faster and more successful the simpler is the (always very complex) cultural environment in which it is formed – i.e., the fewer and the more clearly defined are the relations that must be taken into account in the relationally-constituted self.”
- from Mind, Madness, and Modernity: The Impact of Culture on Human Experience
For most of human history, in most societies, identity was not something one had to go searching for – it was given at birth. For most individuals, the socio-cultural space relevant to their lives was easy to map out, and directions for proper navigation were well understood from a young age. Life may have been extremely difficult in the physical sense, but at least it was not confusing – people knew their proper place.
As Greenfeld has demonstrated since her first major work, this changed in 16th century England following the Wars of the Roses, which wrecked the nobility and left the rigidly stratified society of orders in disarray. In its place, a new consciousness emerged – nationalism – the modern consciousness, which redefined the possibilities for life in England and in the other societies to which it soon spread. We call this new consciousness nationalism simply because “nation” was the name given to the society in which it emerged by those 16th century Englishmen who first experienced its dignifying effects.
Nationalism is a fundamentally secular and humanistic consciousness based on the principles of popular sovereignty and egalitarianism. (Three distinctive features which most often take shape along with this consciousness are an open class structure, the state form of government, and an economy oriented towards sustained growth). At the beginning of the 16th century, someone among the newly elevated English aristocracy began equating the word “nation,” which had formerly referred to a political and cultural elite, with the word “people,” which referred originally to the lower classes. This equation of “nation” and “people” both reflected and reinforced the new reality of English society, where the principles of popular sovereignty and egalitarianism made the nation and all its members an elite. No longer confined to a particular station in life by a closed societal structure ordained by Divine Providence, man became his own ruler, the maker of his own destiny. This elevation in dignity for every member of the nation meant that life in the here and now gained much greater importance – eternity was no longer the realm of the meaningful. This is the source of the secularism of modern society – God was not consciously abolished, but was essentially replaced by man.
For the first time in history, identity-formation became the responsibility of each individual, and this has proven to be a mixed blessing. With the opportunity to rise above the position of one’s birth comes the possibility of failing to successfully make the climb, or falling suddenly and senselessly from whatever height one is able to reach. The abundance of options in every aspect of life lets in a nagging suspicion that one has not made the best choice. The presence of circumstantial, or worse, socially imposed, obstacles to one’s advancement clashes with the belief in one’s equality and right to self-governance. Belief in equality becomes the idea of equality with the best, making it difficult to tolerate the sense that another person is better, or better-off. This inability of culture to provide the individuals within it with consistent guidance is called anomie – recognized by the great French sociologist Emile Durkheim over 100 years ago as the most dangerous problem of modernity.
With the changes in the nature of existential experience brought on by the mixed blessing of modernity came changes to the English language. A new vocabulary was needed to express and reinforce the ideas behind this new reality. By following linguistic changes in 16th century English, we can actually observe the emergence of several aspects of this new experience – at once, so elevating and devastating for the individual mind. That is to say, we can see several specific sources of anomie, which, it is hypothesized, makes identity formation difficult and complicated, leading in some of the worst cases to the development of mental illness.
Of course, there are many who would react strongly to the idea that changes in language may reflect fundamental changes in the nature of human experience. The same materialist tendencies which have us assuming the universality of schizophrenia lead us to assume the universality of a whole range of human emotions and experiences. If human nature is reduced to a set of biological capacities, the idea that various emotions, (which we must feel are a very important part of being human), have emerged in different places and at different times in history seems outrageous.
But as was demonstrated in the previous post, it is culture – the symbolic transmission of human ways of life – which distinguishes our species from all others and makes humanity a reality of its own kind. Culture is a fundamentally historical process, which means that the possibilities for thought and emotion for an individual at any one place and time are dependent on context – the context of what has gone on before and what is going on around the individual, the infinitely complex history of connections and intersections of the variety of symbolic systems which collectively make up that individual’s cultural resources.
This doesn’t mean that emotions are purely cultural phenomena. At the most basic level, emotions are physical sensations – we feel them. We know we must share certain primary emotions with animals – pleasure and pain, fear, positive and negative excitement – which we assume they experience through neurobiological processes similar to ours. We can see also that certain animals experience secondary emotions like affection and sorrow, which are once removed from their physical expression, and usually serve to strengthen social ties within a group. Considering this, it is obvious that human emotional functioning depends to a large extent on biological capacities that we share with other species. But inevitably, culture interacts with these biological capacities, creating more complex emotions which are tied to particular beliefs and ideas – specific experiences of a symbolic nature – which cannot be reduced to a combination of physical sensations.
When a new idea/emotion/experience emerges, it is usually closely linked to a particular word or set of words – either new words, new derivatives of old words, or old words with new or increased significance, evidenced by changes in usage and context. Ignoring these cultural developments under the mistaken assumption that all human experience is essentially the same results in a few common problems of backwards translation:
- the new meaning of a word is attributed to earlier uses of the same word, obscuring the historical shift in definition (and lived experience)
- other words which denote phenomena which may be related or similar to, but are nonetheless distinct from, the phenomenon to which the new word refers, are taken to be synonyms of the new term.
In both of these cases, differences between cultures and over time are blurred, making it nearly impossible to use the history of a language as empirical evidence. But if language, the most important of the various symbolic systems which collectively make up culture, is off limits as evidence, then no meaningful argument about culture can ever be made.
The two new great passions of modernity – the ultimate expressions of the sovereignty of the self – were ambition and love. The opening of these two realms of choice, and their importance in the formation of individual identity is reflected in the growth of the vocabulary of related terms soon after the birth of the English nation.
While ambition was not a new word, before modernity it usually carried a negative connotation, meaning basically an overgrown (and therefore sinful) desire for honor. Over time, though, it became more neutral, meaning something like “a strong desire.” Thanks to the principles of nationalism, such a desire for attainment of an earthly goal was legitimized and even encouraged, making ambition in many instances a virtue rather than a sin. A positive or negative qualifier would then be attached to the word to indicate which type of ambition was meant.
The language of ambition was bolstered by other shifts in meaning and new derivative words. The OED finds only one instance of the use of the word “aspire” in the 15th century, with all its derivatives – aspiration, aspiring, and aspirer – appearing mostly in the late 16th century. The verb to achieve acquired a new meaning of gaining dignity by effort (as in Shakespeare’s “Some are born great, some atchieue greatnesse”), and from this were derived achievement, achiever, and achievance. The use of the verb to better, referring to improvement by human action (e.g., “bettering oneself”), was another permanent addition to the language. Success, which was originally a neutral term meaning any outcome of an attempt, came to refer only to a positive or desired outcome, and its derivatives, successful and successfully, obviously carried this new meaning as well.
Love (first defined as a passion by Shakespeare) became a calling, a means of defining – or perhaps more accurately, discovering – who one was. While the word love had been commonly used with a variety of meanings – from the ideal of Christian, brotherly love, to the divine love of God, to essentially sinful sexual lust – the 16th-century English concept of love – which is our concept – was dramatically different. "Romantic" love, as it is sometimes called, occurred between a man and a woman – it therefore retained clear sexual connotations – but it was above all a union of two minds (or souls, for by this point the words mind and soul were nearly synonyms). In love, one recognized one's true self through identification with another, giving meaning and purpose to life in this strangely open world where God, formerly the source of meaning, was conspicuously absent.
The ultimate end of ambition and love was another modern concept: happiness. This word refers to a phenomenon distinct from many of the historically earlier ideas with which it is sometimes identified. It is not luck, which could be either bad or good and was beyond one’s control; not eudemonia, freedom from fear of death which depended in large measure on avoiding excessive enjoyment of life; not the Christian felicity of certitude of salvation, requiring denial of bodily pleasures up to the point of martyrdom. Happiness was rather conceived of as a living experience, a pleasant one, which was purely good and could be pursued. The OED shows the first instance of this general meaning for the word happy in 1525; the same meaning of the noun form – happiness – doesn’t appear until 1591.
Happiness was knowing who and what one was, being content with one’s place in the world – in other words, successfully creating a satisfactory identity. But what was to become of those whose ambitions were left frustrated and unfulfilled? Of those who lost, or failed to find true love, or were kept, either by circumstance or society, from experiencing true love once it was found? What happened to those sensitive minds for whom the responsibility of building an identity proved too great a struggle?
In the same 16th century England which brought the world ambition and love, a new form of mental disease – Madness – appeared. While previously known forms of mental illness were temporary, related perhaps to an infection, an accident damaging the brain, a pregnancy, a bodily illness like “pox” (syphilis), or old age, madness was chronic – usually appearing at a fairly young age (without evidence of an organic cause) and lasting till death. Another of its names, lunacy, reflected the suspicion of a physical cause – specifically implicating the waxing and waning of the moon in the periodic alterations in the character and symptoms of the sufferers. The word insanity entered English at that time too, apparently referring to the same phenomenon as madness and lunacy.
The chronic nature of madness made it a legal issue from the very beginning; the first provision in English law for mentally disturbed individuals — referred to, specifically, as “madmen and lunatics” — dates back only to 1541. Also in the middle of the 16th century, Bethlehem Hospital – more commonly known as Bedlam, the world’s first mental asylum – became a public institution, transferred to the city of London in 1547. While there was probably little to be praised in terms of humane treatment and comfortable accommodations, Bedlam continued to expand into the 17th century to meet what seemed to be a growing need to house the severely mentally ill.
Important for this argument is the fact that folly was separated from madness. Though it sometimes referred to a moral deficiency, folly was generally a synonym for idiocy – a mental dysfunction or deficiency, but not a disease. In 'An Essay Concerning Human Understanding' (1689), John Locke summarized the difference between madness and folly as follows:
In fine the defect in naturals [fools], seems to proceed from want of quickness, activity, and motion, in the intellectual faculties, whereby they are deprived of reason, whereas madmen, on the other side, seem to suffer by the other extreme. For they do not appear to me to have lost the faculty of reasoning, but having joined together some ideas very wrongly they mistake them for truths… as though incoherent ideas have been cemented together so powerfully as to remain united. But there are degrees of madness, as of folly; the disorderly jumbling ideas together in some more in some less. In short herein seems to lie the difference between idiots and madmen. That madmen put wrong ideas together, and so make wrong propositions but argue and reason right from them. But idiots make very few or no propositions, but argue and reason scarce at all.
Physicians of the day sought to describe and understand this new phenomenon, but their methods, sources, and interpretations were thoroughly mixed. Their reliance on classical Greek and Latin terms of mental disturbance resulted in a liberal blend of (their interpretation of) the old ideas with the new reality, and though they attempted to draw distinctions between conditions, these distinctions were far from clear. The cause was usually assumed to be organic. The common attribution of madness to an imbalance of the four humors shows the strong influence of the classical medical understanding. (The use of the term melancholy as a name for mental illness in general, or for a particular variety of it, is a prime example.) Insanity might also be explained by the stars under which one was born. Some authors distinguished between organic madness and spiritual madness caused by demonic influence. Still others focused on mental states that could in turn affect the body.
Obviously, early observers of madness were far from a uniform hypothesis as to its nature and cause. Nevertheless, these sources do contain some revealing descriptions and suggestions. Andrew Boorde recommended that the patient be kept from "musynge and studieng" (implying, very obviously, a literate madman), and likewise Thomas Cogan, a physician and headmaster of a grammar school, advised against "studying in the night," deeming "wearinesse of the minde" worse than "wearinesse of the bodie." Sir Thomas Elyot noted a "sorowe," or "hevynesse of mynde," which affected the memory and the ability to reason properly, relating it to such experiences as the death of a child and even disappointed ambition. Christopher Langton saw "sorrow" as a chronic condition, the most serious of four "affections of the mynde" that could "make great alteration in all the body." Philip Barrough's description of melancholy (which he calls "an alienation of the mind troubling reason") mentions mood swings, suicidal thinking, hallucinations, and paranoid delusions – in short, some of the most characteristic features of major psychosis, which might be diagnosed today as bipolar disorder or schizophrenia. Timothy Bright's 'Treatise of Melancholie' contains the idea that being "over-passionate" put one at risk for mental disease.
By far the longest and most famous book on the topic in the early modern period was The Anatomy of Melancholy by Robert Burton, first published in 1621. It was essentially a collection of all the information he could find on mental disease – both past and present – and therefore (unfortunately) contributed greatly to the confusion of terms, translating as "madness" a whole variety of words from Latin and Greek sources. Despite this mistake, which allowed him to find English madness scattered throughout history, madness seemed to him a particularly pressing problem in his day. He noted among his "chief motives" for writing the book "the generality of the disease, the necessity of the cure, and the commodity or common good that will arise to all men by the knowledge of it." Burton's description of his society as a "world turned upside downward" is loaded with colorful yet tragic examples of apparent inconsistency and injustice – in a word, sources of anomie common to modern life. One can hypothesize that the inclusion of such a description of the contradictions within culture, in a work dedicated to the understanding of what is deemed a medical illness with an essentially organic cause, reflects Burton's sense that the two phenomena – anomie and mental illness – are related. Indeed, some of the mental symptoms of melancholy "common to all or most" – "fear and sorrow without a just cause, suspicion, jealousy, discontent, solitariness, irksomeness, continual cogitations, restless thoughts, vain imaginations" – begin to make sense if mental illness is seen as stemming from fundamental problems with identity caused by anomie.
Some of these symptoms appear identical to the causes of melancholy which fall under Burton's general category of "passions and perturbations of the mind." Ambition and related passions like envy and emulation figure prominently here, but most striking of all is the inclusion of love – the cause, apparently, of a special madness called "love-melancholy," which afflicted primarily men of the upper classes.
But perhaps the greatest early chronicler of madness was William Shakespeare. Dr. Amariah Brigham and Dr. Isaac Ray (two of the most important figures in 19th-century American psychiatry) each devoted an extensive article in the early years of the American Journal of Insanity (today the American Journal of Psychiatry) to the consideration of his work. They saw in his plays (in particular King Lear and Hamlet) such accurate portrayals of insanity that they were certain he must have drawn his inspiration at least partly from first-hand observation. Whatever might be said today in criticism of the method of these doctors, who had no qualms about using literary study to supplement clinical observation, it is significant that the mental illness they observed in their asylums was the same as that which Shakespeare brought to life in his tragedies more than two and a half centuries earlier.
Apparently, the medical understanding of madness, lunacy, insanity, melancholy – whichever name one chooses – had not advanced very far from the time of Shakespeare to the middle of the 1800s. "But," most of us would confidently assume and assert, "since then we have come a long way; we know so much more now." But do we? Certainly, by the time Brigham and Ray were writing about Shakespeare, serious psychiatric establishments were already taking shape in a number of modern nations. The growth that has taken place since the 19th century within this medical specialization – in terms of publications, practitioners, institutions, associations, research, and treatments – would have been difficult to imagine. But are we really any closer to identifying a cause, or having a cure to offer to those who suffer from mental illness?
The next post will look at what we know about schizophrenia and the range of diagnoses which fall under the category of manic-depressive illness.
Posted on September 16, 2010 - by David
From part 1
…With the recognition of the autonomous new world of life, Darwin's breakthrough not only opened the door to advances in biology; it also made possible our escape from the dualist cage. In place of two mutually inconsistent realms, reality may be imagined as consisting of three autonomous but related layers, with the two upper layers being emergent phenomena — the layer of matter, the layer of life, and the layer of the mind.
The mind emerges from three organic elements – the brain, the human larynx, and perception and communication by signs. Two of these, (the brain and the larynx), are specific organs, while the third – the use of signs – is a certain evolutionary stage of the process of perception and communication of perception within a biological group.
For animals, adaptation to the physical environment means developing the ability to perceive a stimulus (e.g., food, a predator) and communicate its presence to other members of the group. The more complex the environment, the more stimuli there are that signify to an organism, and thus the more signs to which the organism must learn to respond appropriately. We can describe a sign as an aspect of a stimulus, or of the encoded reaction to it, signifying the stimulus, respectively, to the perceiving organism and to members of the organism's group.
To reiterate, an emergent phenomenon is a complex phenomenon that cannot be reduced to the sum of its elements. Therefore, the mind’s emergence was not the result of a simple combination of the brain, the larynx, and the use of signs, since these elements were in place long before the transformation occurred. While it is impossible to reconstruct the moment of emergence, we can deduce logically the general nature of this most improbable event – the discovery that sound signs could be intentionally articulated.
Intentionally articulated signs are symbols. A sign corresponds directly to the phenomenon it signifies – it is not open to interpretation, for in the animal world, reading signs correctly is a matter of life and death. Unlike signs, whose meanings are fully contained in their referents in the environment, symbols are arbitrary, deriving their meaning from the context in which they appear. While the number of signs was essentially limited by the number of potential referents in the environment, symbols, being arbitrary, are not bound by the material environment, instead drawing their life and meaning mainly from the context of other symbols.
Until now, we have referred to this emergent phenomenon as the mind, but this symbolic reality that emerged with the transformation of sign to symbol is a process occurring simultaneously on the individual and collective levels. On the individual level, we call this phenomenon "the mind"; on the collective level, we call it "culture." Of course, the individual mind doesn't generate its own symbolic process. It is dependent upon the symbolic process on the collective level – culture. For this reason, the mind can be conceptualized as "culture in the brain," or "individualized culture." Make no mistake, though – these two terms denote one and the same process occurring on two different levels. The mind constantly borrows symbols from culture, but culture can only be processed – i.e., symbols can only have significance and be symbols – in the mind.
In distinction to all other animals, humans transmit their social ways of life symbolically, rather than genetically. This means that culture – the symbolic process of transmission of human ways of life – is what distinguishes us from the rest of the living world, and in fact, makes humanity itself an emergent phenomenon. The mistaken notion that society is what makes humanity unique is quite pervasive, but society – structured cooperation and collective organization for the purpose of collective survival and transmission of a form of life — is a corollary of life in numerous species. It is essentially a biological phenomenon, a product of evolution. What makes human society unique is that it is structured not genetically, but symbolically, on the basis of culture. Culture being a dynamic, historical process, not governed biologically, the social arrangements of humans are much less rigid than those of other animal species and are subject to change.
Just as animals adapt to the physical environment in which they live, so we too must adapt to the cultural environment in which we find ourselves. If we consider this process in animals, we see that it depends not only on the ability to perceive and remember information supplied by the environment, but also on the ability to create supplementary information to complete the picture. In humans, we call this imagination. There is ample evidence that animals possess this ability as well – the success of rodents in tests of transitive inference, and the countless creative solutions animals devise to problems posed by the physical environment, make it hard to deny. This must be an unconscious process – the imaginer is not aware of the steps that lead from perceived and stored information to new information but, so to speak, "jumps to conclusions" over these steps. Humans must adapt primarily to the cultural (symbolic) environment, and so the largely unconscious process by which we create new information out of information already stored in memory can be called symbolic imagination.
Symbolic imagination, probably, is the central faculty of the human mind, the means by which we “discover” the operative logic of each of the many autonomous yet interdependent symbolic systems which make up the cultural environment. Most symbolic systems – language, fashion, class structure, etc. – are historical, and therefore changeable, with governing principles that have little to do with logic proper – that is, logic based on the principle of no contradiction. While this makes symbolic imagination almost infinitely more complex than imagination in other animal species, we are nevertheless able to find the organizing principles of culture with remarkable success, for the most part without thinking about them explicitly.
Culture, the symbolic process on the collective level, is organized on the individual level (the mind) by symbolic imagination through the creation of three mental "structures." It is useful to think of these mental processes as structures, since they are patterned and systematic, and so, we can deduce logically, they must be supported by corresponding patterned and systematic processes in the brain. These structures are compartments of the self and include: 1) identity, the relationally-constituted self; 2) the will, or acting self; and 3) the thinking self, or the I of self-consciousness.
Identity refers to symbolic self-definition. It is the image of one's position in the socio-cultural "space," within the larger image of the relevant socio-cultural terrain itself. This "cognitive map" displays the possibilities for adaptation to the particular cultural environment, allowing them to be ranked subjectively. As soon as a child begins to (unconsciously) figure out the organizing principles of various symbolic systems, he begins to form an identity, figuring out where he belongs in a symbolic environment which is itself still in the process of being constructed. It is reasonable to suppose that identity-formation is strongly influenced by the emotional charge with which certain stimuli are delivered. Identity is likely to solidify more quickly the simpler the (always very complex) cultural environment in which it is formed. This is a largely unconscious process – questions about identity are usually made explicit only if the identity proves problematic. In other words, the question "who am I?" would most likely occur only to someone who would have difficulty answering it.
The will is, simply put, the part of our mind that makes decisions. While identity is the product of a particular cultural environment at a specific time in history, the will is a product of culture in general – a function of symbols. To operate with symbols – intentional, thus arbitrary, signs – we internalize the principle of their intentionality. The will takes its direction from identity, choosing the appropriate "operative logic" to follow in a given context. Usually, this is an unconscious process – the will decides without our having to reflect on the decision – but sometimes the process becomes explicit: we become aware that we are faced with options, must exercise our will, and think about our decisions. Because the will operates on the basis of identity, problems with identity may translate into impairment of the will – the person may become indecisive and unmotivated, or decision-making may become completely haphazard and unrestrained.
Finally, there is the thinking self, or the "I of self-consciousness." This is consciousness turned upon itself, the phenomenon to which Descartes referred in the oft-quoted "I think, therefore I am." The other mental processes described above remain hypotheses, but the existence of the thinking self cannot be doubted – it is the only certain knowledge we have. While identity and will are processes informed and directed by the symbolic environment, they are mostly unconscious. The thinking self, though, is explicitly symbolic, meaning that it actually operates with formal symbols – above all, language. This explicit, self-conscious symbolic process does not seem to be a requirement for individual adaptation in the same sense as identity and will, and there is no reason to assume it exists to the same degree in all people. Its most important function seems rather to be the continuation of the cultural process on the collective level. By thinking things through – talking to oneself using symbolic systems like language, math, and music – the mental process can be reconstructed and made explicit, packaged in formal symbolic media for delivery to other minds.
In the exceptionally rare cases when the thinking self is perfectly integrated with identity and will, true genius can appear and usher in dramatic cultural change. It seems much more common, though, that a very active thinking self is implicated in mental disease. As was mentioned earlier, problems with identity lead to impairment of the will, and without these mental “structures” working properly, the “I of self-consciousness” may become deindividualized, experienced as the explicit processing of the undirected resources of culture in general, and felt as a disturbing, alien presence within the self. This is essentially the new theory of mental illness that Greenfeld is offering. It will be developed in much greater detail over the next three posts.
Next, we’ll look at the historical development of this new form of mental disease.
9/24 – Madness: A Modern Phenomenon
Posted on September 14, 2010 - by David
Before the hypothesis that modern culture can cause biologically real mental disease can be given serious consideration, one major conceptual obstacle must be removed: the dualist vision of reality. In the dualist conception, central to Western thought for well over two thousand years, reality (which is presumed to be consistent) expresses itself in two fundamentally distinct, mutually inconsistent ways: the material and the spiritual. This dichotomy has been formulated in a number of ways – the physical and the psychical, the real and the ideal, the mind/body split – but the idea remains basically the same.
Obviously, the concept of two mutually inconsistent realms existing in a world that is presumed to be consistent presents us with a logical problem. Until now, the only way to resolve this problem has been to take one or another monistic position, seeing only one of these expressions of reality as real in the sense of being causally active, the other being merely a secondary expression of the first. For a long time, the dominant belief was that the spiritual element was causally active, with the material brought into being by some divine creative intelligence. But for several hundred years now, matter has been seen as the causal factor, and the spiritual element (whichever specific name we give it) has gradually been reduced to the status of only apparent reality.
It is important to realize, though, that the materialist view has come to reign supreme for reasons that are primarily historical. The secular focus of nationalism increased the importance of life here on earth, resulting in the emotional weakening of religious faith, while increasing the value placed on scientific inquiry into the empirical world. Likewise, science as an institution, rationally organized in its efforts toward increasing knowledge of empirical (material) reality, first came into being in England with the rise of nationalism. Science being the only epistemological system which has consistently led to humanity’s greater understanding, and control, of certain aspects of empirical reality, it is no surprise that its prestige is so great, and that beliefs associated with it quickly gain authority.
The dominance of the materialist position can be seen clearly in the history of psychiatry. While one approach aimed at addressing the "psychical" – Freudian psychoanalysis – was extremely influential for about fifty years during the 20th century, the biological approach was destined to prevail. Psychiatry is, after all, a medical specialization, and medicine, with the body as its subject, is a decidedly scientific endeavor. After the publication of Darwin's Origin of Species, the prestige of biology was solidified. To question the biological paradigm was to effectively exclude oneself from the medical field.
Around the turn of the 20th century, German-language psychiatrists, (above all Emil Kraepelin), worked hard to improve the scientific status of the profession. They carefully described and classified those mental diseases of unknown cause which remained for psychiatry after treatment of organic mental diseases like paresis, epilepsy, and puerperal insanity had shifted to their proper medical specializations. The main division of major psychoses into the broad classes of schizophrenia and manic-depressive illness dates to this time. While the etiology of these crippling illnesses remained a mystery, psychiatrists like Kraepelin were confident that they were brain diseases with organic causes which would one day be discovered.
In the United States, the founding of the National Institute of Mental Health in 1947 strengthened the biological position, and with the discovery and development of several waves of antidepressant, mood-stabilizing, and antipsychotic drugs from the 1950s on, the interests of large pharmaceutical companies have further supported this view.
There is, no doubt, a constantly growing body of information about the brain and the various abnormalities in anatomy and neurochemistry which have been observed in psychiatric patients, and genetic researchers have made tentative progress in identifying certain genes which may increase vulnerability to schizophrenia and manic-depressive illness. But the much celebrated technological advances in this field of study have not led to any new, precise methods of diagnosis – there is no brain scan or genetic test psychiatrists can use to determine whether someone “has” schizophrenia. The data is descriptive, not explanatory, and any genetic vulnerability only represents, at most, a condition for mental illness (and so far we cannot even say a necessary condition). And of course, we must remember not to confuse conditions with causes. Finally, none of the drugs used to treat mental illness can be said to constitute a cure.
Despite the failure of these tools to transform our understanding of mental illness (which remains essentially unchanged since Emil Kraepelin’s classifications), the experts in the field have placed their faith in science, believing wholeheartedly (and without evidence) that schizophrenia and manic-depressive illness have a biological cause, and that its discovery is just around the corner.
The problem is, science is not supposed to be a set of beliefs but a method, that method being the logical formulation of hypotheses, followed by attempts to refute them with the help of empirical evidence. Science is therefore, as a matter of principle (though obviously not always as a matter of practice), skeptical of belief. Science is especially skeptical about the immaterial, because of the close association between the immaterial (or the spiritual, the ideal – call it what you will) and religion, since religion is always a matter of belief. Unfortunately, science has transformed this skepticism into a dogma – that there is nothing more to empirical reality than the material. This dogma is evident in the faithful expectation of the discovery of a biological cause for mental illness, and in the belief that human consciousness is reducible to (i.e., caused by) the organ of the brain.
The materialist answer to the mind/body problem can only be that the mind is nothing more than the subjective experience of the functioning of the brain, which is in effect to say there is no such thing as the mind – that the brain is all that is real. But any amount of self-reflection reveals that the reality we experience (i.e., that for which we have empirical proof) is always mental. A majority of our experiences involve words and images, which are symbolic and therefore part of a non-material reality. We see things in our "mind's eye" that are not really there; we hear songs play in our heads, though no corresponding sound waves move through the air. Our emotions certainly have physical aspects – a quickened pulse, an upset stomach, a flushed face – but these physical reactions cannot be said to cause the specific thoughts that follow our change in mood. Ultimately, we are enclosed in the subjectivity of our mental experience. To insist on the material nature of empirically knowable reality is to deny reality as we actually experience it.
Nevertheless, we believe that there is more than this subjectivity. We believe that we have our experience through our bodies, which constitute part of an objective reality. We ignore the irrefutable solipsistic proposition – that reality is merely a product of my imagination – and go on feeding and clothing ourselves, because this fundamental belief in the objective world is literally necessary for our survival.
This belief in the objective world is obviously fundamental for science as well. But science also depends on the belief that this objective world is consistently ordered – that is to say, most scientists believe that empirical reality is actually a logical reality, and can therefore only accept reality to the extent that it fits this belief. But the belief that the objective world is consistently ordered is not, in fact, a fundamental belief – there are, or have been, societies in which chaos was assumed to be the condition of reality. Aristotelian logic, based on the principle of no contradiction, is a historical, thus cultural, phenomenon. (Ironically for science, a case – based on logic and circumstantial evidence – can be made that it was through exposure to monotheistic culture that Thales the Milesian arrived at the idea of an unchanging organizing principle, which he introduced to Greek philosophy in the 6th century B.C., helping to bring about the transition from mythos to logos.)
So while the twin pillars of science are supposed to be logic and empirical evidence, we see that there is a great deal of belief mixed in. As stated before, we reject solipsism, believing in the objectivity of what we perceive physically through the senses, but the meaning we give to these physical perceptions is affected by our beliefs, beliefs which usually lack empirical proof. This is why some of the most important scientific beliefs remain theories. Where empirical evidence is lacking, we draw inferences using logic, whatever empirical evidence we do have, and our beliefs about the world. This is what circumstantial evidence is – the substitution of logical consistency for the information of sense perception. So, it turns out that science ultimately rests on logic.
But despite the shortcomings of science – that it is sometimes even more dogmatic than religion, and that the evidence it relies on is not strictly empirical, but circumstantial – it remains our only option for attaining objective knowledge about the subjective empirical reality of the mind. Without attaining such knowledge, no new theory of mental illness is possible. But in order to use science (which means to use logic), we still must deal with the logical contradiction of the dualist conception of reality.
It is in fact Darwin who helps us resolve this problem. Though many have mistakenly understood his theory of evolution by natural selection as proving the triumph of materialism in the dualist debate, it actually moved beyond this debate altogether. In distinction to the philosophical materialists of his day, Darwin proved that life was a reality of its own kind, irreducible to the inanimate matter of which each cell is composed, but, in distinction to philosophical idealists, or vitalists, who claimed that life was independent from the material reality studied by physics, he proved that laws of life could only operate within the conditions provided by physical laws.
Thanks to Darwin, we can conceptualize the objective world in terms of emergent phenomena. An emergent phenomenon is a complex phenomenon that cannot be reduced to the sum of its elements: a specific combination of elements – one that no individual element, and no law in accordance with which the elements function, renders likely – produces a certain new quality (in the most important instances, a certain law or tendency) which in large measure determines the nature and existence of the phenomenon, as well as of its elements.
The fact that the emergent phenomenon cannot be reduced to its elements means that at the moment of emergence there is a break in continuity, a leap from one layer of reality into another, essentially distinct and yet fundamentally consistent with the initial layer. By definition, this transformation cannot be traced exclusively to the first reality, and is, at least in part, extraneous to it.
With the recognition of the autonomous new world of life, Darwin’s breakthrough not only opened the door to advances in biology; it also made possible our escape from the dualist cage. In place of two mutually inconsistent realms, reality may be imagined as consisting of three autonomous but related layers, with the two upper layers being emergent phenomena: the layer of matter, the layer of life, and the layer of the mind.
This top layer of the mind – the layer of symbolic reality – will be the subject of the next post in the series.
9/24 – Madness: A Modern Phenomenon
Posted on September 12, 2010 - by David
In her forthcoming book, Mind, Madness and Modernity: The Impact of Culture on Human Experience, Liah Greenfeld presents a new framework for understanding mental illness. Readers of this blog may be familiar with some of these ideas from earlier posts, but her position is so distinct from all other theoretical approaches to mental illness that the central claim should be clearly stated from the outset:
Schizophrenia and Manic-Depressive Illness are biologically real diseases caused by modern culture.
Until now, most of what has been written from a “social science” perspective has focused on attitudes toward mental illness or the history of the psychiatric establishment, rather than on the phenomenon of mental illness itself. This is because the theoretical approach usually involves one of the following:
- A tacit acceptance of the dominant model, which holds that mental diseases are caused biologically and therefore occur at equal rates across cultures and throughout history
- A denial of the biological reality of these illnesses, which comes with the view that mental illness is a social construction (derived from the likes of Michel Foucault and Thomas Szasz)
- Some in-between version of the first two (e.g., the recent work of Allan Horwitz), emphasizing the medicalization of some normal human conditions while leaving severe psychosis in the realm of the universal/biological
In light of these views, Greenfeld’s hypothesis – that culture, a symbolic (and therefore non-material) reality, is capable of disrupting the normal functioning of the brain – appears unique, possibly to the point of seeming outrageous.
Precisely because this idea must seem unbelievable to so many people, I am happy to announce that it will no longer go unsupported. Over the next two weeks, I will be presenting an exposition of the new book through a series of posts, outlining the major elements of the argument and summarizing the logical, empirical, and historical evidence that supports its claims.
To be clear, I am working directly from the unpublished text of the book. All the original ideas, and all interpretations and analysis of primary and secondary source materials used to support the ideas are attributable to Liah Greenfeld.
Here’s the schedule:
9/24 – Madness: A Modern Phenomenon