Posts Tagged ‘Greenfeld’
Posted on January 24, 2011 - by David
Of the various writings, videos, and recollections of teachers, classmates, and former friends with which we are left to piece together a picture of Jared Loughner’s mind before the shooting, the most intelligible is a poem titled ‘Meat Head’ that he wrote for one of his classes at Pima Community College last fall.
Awaking on the first day of school
Pain of a morning hang over
Attending a weight lifting class for college credit
Attempting to exercise since freshmen year of high school
Crawling out of bed and walking to the shower
Warm water hitting my back
Thoughts of being promiscuous with a female again
Putting on a old medium red tee shirt, light brown cameo shorts, and black Adidas
For breakfast a glass of water, cold pepperoni pizza, and two Advil
Bringing my Nano Ipod with heavy metal music
Taking the local bus on a overcast morning
Waiting with crack heads after their nightly binge
Bus is cheap, two dollars for a ride anywhere in the city
Sitting in back against a hard plastic seat
Staring at stop lights, brand new cars, and graffiti
Coming to a slow halt in front of the school
Entering the gym as the glowing florescent lights are humming
Next to the treadmills, putting a green foam mat on the ground
Stretching for fifteen minutes, loosening the muscles in my legs, back and arms
Cleaning the mat with anti-bacterial spray and a paper towel
Jogging for ten minutes, my heart beating, beating, beating
Pain in my right side of the last minute of twenty
Looking around, the cute women are catching my eye
Probably waiting for their hot boyfriends wandering in the locker room
All the men are in shape with their new white tank shirts, basketball short, and Nike shoes
Confusing look on my face of no idea what to do
Deciding to copy other men’s routines of
Arm Curls, Leg presses, Rows, Squats, Military something’s, and Isolated whatever’s
Leaving the gym thinking
Waiting for the bus with alcoholics that are going to the bars early
Coming home for another shower
While grabbing the white towel, the eureka moment is lingering
Quick nap and lunch is on my mind
Setting the alarm one hour before getting ready for my next class
Getting into bed
The title already suggests a problematic identity: for any who might not know, ‘meathead’ is a derogatory term for jocks, muscular men, athletes, and the like. But the poem focuses on Loughner’s own time in the gym, so is he criticizing himself? He has been “attempting to exercise since freshmen year of high school” (the “attempting” implies a lack of success), and yet the title and tone suggest he sees himself as different from, and possibly somehow superior to, those working out around him. He has trouble looking the part, as his gym clothes are mismatched and worn out (“old medium red tee shirt, light brown cameo [probably meant camo] shorts, and black Adidas”) while the others “are in shape with their new white tank shirts, basketball short, and Nike shoes.” Not only does he feel he looks out of place, he’s not sure how to act (“confusing look on my face of no idea what to do”), so he ends up “deciding to copy” what others do. Loughner seems to relate his lack of romantic or sexual involvement with the opposite sex to his inability to be like these other men, and while there is clearly an element of sexual tension in the poem, I don’t think it is as simple as Loughner being some kind of sex-crazed pervert. In fact, a MySpace posting from November 17th (“It hurts to have been never sexually active at 22!”) reveals that it is not so much the lack of sexual activity itself that is the problem as his consciousness that, in this society, a 22-year-old virgin would probably be deemed abnormal. The people Loughner connects himself to most closely are the “crack heads” and “alcoholics” with whom he waits for the bus, and even this mention of public transportation seems another occasion to reflect upon his inadequacy – he notes the contrast as he views “brand new cars, and graffiti” from his “hard plastic seat” on the “cheap” bus.
When read, ‘Meat Head’ is more depressing than disturbing, especially if one forgets for a moment who wrote it. But apparently the style of his presentation in class didn’t quite match the overall flatness of the poem. According to Don Coorough, a classmate who provided copies of two of his poems to the media, Jared “had the poem memorized, and he stood up in class and performed it with great drama — at one point, grabbing his crotch.” This performance, along with his inappropriate emotional responses to others’ poems (he laughed and joked as a tearful female student read a poem about abortion), contributed to the complaints which resulted in his suspension from Pima.
Another poem, ‘Dead as a dodo,’ may be an attempt to paint an allegorical scene, though it’s anyone’s guess who the dodo is (is it Loughner? Giffords?) or what the other objects, creatures, and movements might symbolize.
Dead as a dodo
On the island of Mauritius a heavy storm is leaving.
In the fields of the ancient wild forest a wild field of mushrooms is growing.
Snails and grasshoppers are ready for the warmth.
The old grass growing with lizards are jolting for crickets while snakes looking for lonely mice.
Falcons are flying for pray.
Shallow light Blue Ocean shimmering at each wave as the black clouds are rolling.
Waves are lapping.
Fisherman on the reefs are casting their poles.
In warm water a pack of clown fish are floating.
Tiger sharks are swimming free.
Steel drums beating in the distance.
The full moon slowly setting for the sun is rising.
At the local cemetery there is weeping.
The dodo is finally dieing.
But one wonders, why was this kid taking a poetry class when the unanswered question which proved nearly fatal for Rep. Giffords was, “what is government if words have no meaning?” His friends, at 4:06 in the video below, describe Jared’s obsession with what he perceived as the meaninglessness of language:
“He was obsessed with how words were meaningless. You know, you could say, ‘oh, this is a cup,’ and hold a cup, and he’d be like, ‘oh, is it a cup? Or is it a pool? Is it a shark? Is it an airplane?’”
While his friends, and others since the shooting, have interpreted these statements as nonsensical, he is on to something very real here, despite his difficulty in expressing it: Jared realized that words, as symbols, are arbitrary, given their meaning by the history of their (socially agreed upon) use. There is nothing in the physical composition of the object we call a “cup” that makes us use that sound and those letters to refer to it, and for Jared this arbitrariness was equal to unreality. This fact of culture, overlooked or taken for granted by most, seems to have been both exhilarating and terrifying for Loughner; exhilarating because it meant there was no good reason why he should be constrained by social conventions, and terrifying because he was, in fact, constrained – someone or something else was “controlling the grammar.”
In addition to discovering the arbitrary nature of symbols, Jared senses the importance of logic in our culture, and his attempts to make sense of his reality rest largely on a series of if-then syllogisms like those in the video above. He seems to think that by formulating his delusional beliefs (which he takes as facts) into logical statements, he has proven these beliefs true to his (at the time he made the videos, probably imagined, but now very real) “listener.”
Of course, since the premises themselves are faulty, nearly all of Jared’s syllogisms fail, except perhaps the following:
All humans are in need of sleep
Jared Loughner is a human
Hence, Jared Loughner is in need of sleep
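In formal terms, this one sound argument of Jared’s is an instance of the classic “Barbara” syllogism; as a gloss (my notation, not his), it can be written:

```latex
% Barbara (AAA-1): All M are P; S is M; therefore S is P.
\forall x\,\big(\mathrm{Human}(x) \rightarrow \mathrm{NeedsSleep}(x)\big) \quad \text{(major premise)}
\mathrm{Human}(\mathrm{Jared}) \quad \text{(minor premise)}
\therefore\ \mathrm{NeedsSleep}(\mathrm{Jared}) \quad \text{(conclusion)}
```

The form guarantees only validity – that the conclusion follows if the premises hold. Soundness additionally requires true premises, which is exactly where the rest of Jared’s syllogisms break down.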
If we consider what Loughner does (or tries to do) when he sleeps, our image of him becomes even more interesting: according to his friends, Jared was an enthusiastic practitioner of lucid dreaming. His own writings refer to “conscience dreaming,” by which he presumably meant “conscious dreaming” (another term for lucid dreaming). Apparently, he preferred the dream world to the waking world, feeling a greater sense of freedom and control while asleep.
Examined in the light of Liah Greenfeld’s hypothesized mental processes, Jared Loughner’s struggle to determine his own reality demonstrates fundamental problems with his identity which manifested in problems with the will. But one of the most important questions – from a legal standpoint at least – will be whether or not Loughner fully understood and was in control of his actions when he opened fire on January 8th. The evidence indeed suggests this was a willful act – planned ahead of time and executed according to plan – so how do we reconcile this with the image of a deranged mind? In my next post on the subject, I’ll look at how Loughner’s delusional beliefs and other psychotic symptoms fit into existing definitions of mental illness, and consider what this might tell us about Jared’s mindset the moment he pulled the trigger.
Posted on January 19, 2011 - by David
By now, the search for political or ideological motivations in the January 8th shooting in Tucson has given way almost entirely to a search for signs of mental illness in Jared Loughner’s past. While debates over gun control, inflammatory political rhetoric, and the responsibility of colleges in dealing with troubled students will certainly continue in the wake of this tragedy, agreement is pretty much universal that this was the work of a madman.
I’m a pretty big fan of Jon Stewart, and wasn’t surprised that in his first show after this all happened, he took a characteristically sensible view, drawing the focus away from the much discussed “vitriol” even before the overall tone of reporting had shifted. But his acknowledgment of the role of insanity contains a subtle, unquestioned assumption that may need to be challenged, controversial as such a challenge may be: the idea that mental illness, or at least the kind of mental illness that plays into an attack like this, has always existed. At 3:33 into the opening, Stewart said:
“We live in a complex ecosystem of influences and motivations, and I wouldn’t blame our political rhetoric any more than I would blame heavy metal music for Columbine. And by the way, that is coming from somebody who truly hates our political environment – it is toxic, it is unproductive, but to say that that is what has caused this, or that the people in that are responsible for this, I just don’t think you can do it. Boy would that be nice. Boy would it be nice to be able to draw a straight line of causation from this horror to something tangible, because then we could convince ourselves that if we just stop this, the horrors will end. You know, to have the feeling, however fleeting, that this type of event can be prevented, forever. But it’s hard not to feel like it can’t. You know, you cannot outsmart crazy, you don’t know what a troubled mind will get caught on – crazy always seems to find a way, it always has…”
But has it always? And how would we know? We’ve become increasingly convinced that serious mental illnesses – especially psychoses usually classified as bipolar disorder or schizophrenia – are caused genetically, even though what we actually know about these illnesses doesn’t justify this faith in the biological model. The assumption that mental illness has existed in generally the same form, at generally the same rate, throughout history and across cultures deserves more scrutiny than it is normally given today. Liah Greenfeld has hypothesized that madness is a modern phenomenon, emerging in 16th century England simultaneously with the emergence of nationalism. Consider the parallels between Jared Loughner and the case of Peter Berchet, a “lunatic” and a “deranged Puritan,” as described in Greenfeld’s forthcoming book:
In 1573, Berchet, a law student, stabbed Sir John Hawkins, a very firm Protestant, whom he mistook for Sir Christopher Hatton, an advisor to the Queen and also a Protestant, accused by Berchet of being “a wylfull Papyst [who] hindereth the glory of God.” The incident taking place at the time of increasing Puritan agitation, Elizabeth wished Berchet to be questioned under torture to reveal the names of co-conspirators she suspected. On the testimony of two of his fellow students, however, Berchet’s examiners became convinced that he was not a political/religious extremist, but, rather, suffered from “nawghtye mallenchollye,” i.e., was stark mad…
The distemper expressed itself in “very strange behavior” at the Middle Temple which his friends attributed to overmuch study and which, shortly before the attack on Hawkins reached a stage we would consider psychotic. “He rarely slept and would pace up and down in his room, striking himself upon his breast, throwing his hands in the air, filliping with [snapping] his fingers and speaking softly to himself… while alone in his chamber, [he] would walk up and down reciting biblical verses and rhymes to himself, then suddenly he would race to the window. With a pointed diamond that he wore in a ring on his little finger, he would scrawl one of his own compositions upon the glass,” when asked by a friend whether he was all right, he responded that “there was ‘a thing at his hart wich noe man in the world showld knowe’ and … would throw his hands in the air and use other ‘frantic gestures’.” To distract him, his friends took Berchet to a wedding in the country, where he proceeded to inform the bride that “she was another man’s daughter, and that she had been born in London. 
Staring into her eyes while pounding his hands upon the table, Berchet declared that he had ‘seene the verrye same eyes but not the same face,’” punctuating his “outrageous monologue… with unspecified but insulting gestures.” Before his departure from the house of the friend with whom Berchet and his fellow students stayed in the country, he “for no apparent reason beat a young boy … sent to his room to build a fire” and then “Berchet came out of his room, filipping his fingers and talking very strangely, saying in a loud voice, ‘watche, shall I watche hark, the wynd bloweth, but there is neither rayne, wynd, nor hayle, nor the Deuyll hym self that can feare me, for my trust is in thee Lord.’” On the way back to London his companions thought that his “head was verrye muche troubled,” among other things, he “galloped away from the party, dagger in hand, determined to kill some crows that had offended him.” In London, one of Berchet’s friends warned him that, if he continued behaving so, “his position at the Temple would be jeopardized. Berchet reproached [the friend] and maintained that he had ‘a thing at my hart which them nor anye man alyue shall knowe.’ The day that Berchet performed the fateful act, he and a fellow student… had attended a lecture given by Puritan zealot Thomas Sampson. The lecture seemed to provide Berchet with a necessary inspiration to attack Hawkins, for later the same day [another friend] observed Berchet by peering at him through the keyhole of his room door and heard him, as he filliped with his fingers, remark, ‘shall I doe it and what shall I doe it? Why? Then I will doe it.’ Running quickly toward the Temple gate, Berchet hesitated for a brief moment, repeated the same words, then dashed into the Strand where he confronted Hawkins.”
The outraged Queen, as mentioned above, wished Berchet to be both questioned under torture and executed immediately. Instead, following the testimony of his friends, he was committed to the Lollards Tower for his heretical beliefs, where the Bishop of London promised him that, if he recanted, he would live. Berchet recanted and was transferred to the Tower, apparently for an indefinite term of imprisonment under relatively humane conditions, to judge by the fact that the room was kept warm and had light enough, allowing his personal keeper to stand comfortably and read his Bible by the window. At this, however, Berchet took umbrage, promptly killing this innocent with a piece of firewood supplied by the charitable state. Thus, in the end, he was executed – not because his original, and, from the viewpoint of the authorities, graver, crime was attributed to madness (which, in fact, could save him), but because his madness could not be contained.
(The description of this case is based on Cynthia Chermely’s “’Nawghtye Mallenchollye’: Some Faces of Madness in Tudor England,” The Historian, v.49:3 (1987), pp. 309-328.)
Of course, this historical comparison is not meant to somehow explain Loughner’s actions. But if we consider for a moment the possibility that mental illness serious enough to drive someone to murder might have a cultural cause, then we must also consider that this cause is not rooted in the specific content of any particular cultural conflict – neither Puritans vs. Catholics nor Tea Partiers vs. Progressives – but in the general conditions of modernity which make identity formation so problematic. In my next post, I’ll look at some of Loughner’s preoccupations, including logic, language, and lucid dreaming, and consider how they might make sense within Greenfeld’s cultural model of mental illness.
Posted on October 8, 2010 - by David
I am working directly from the unpublished text of Liah Greenfeld’s forthcoming book, Mind, Madness, and Modernity: The Impact of Culture on Human Experience. All the original ideas, and all interpretations and analysis of primary and secondary source materials used to support the ideas are attributable to Liah Greenfeld. Read the introduction to the exposition here.
part 3 – Madness: A Modern Phenomenon
In this last installment, we consider how Greenfeld’s theory of the mind makes it possible to see schizophrenia and manic-depressive illness (that is, major depression and bipolar disorder), which are usually considered distinct disorders, as diseases of the will, existing on a continuum of complexity of the will-impairment experienced.
Culture – the symbolic transmission of human ways of life – is an emergent phenomenon, a new reality with its own rules that nonetheless operates within the boundary conditions of life. This symbolic reality is only alive (the process can only occur) in individual brains; hence the understanding of the mind as “culture in the brain,” or “individualized culture.” As described in part 2, three important “structures” of the mind – patterned and systematic symbolic processes which must be supported by corresponding patterned and systematic processes in the brain – are identity, will, and the thinking self.
Identity – the relationally constituted self – is always a reflection of a particular cultural environment. Greenfeld hypothesizes that the lack of direction given by modern culture makes the once relatively simple process of identity formation much more complicated. A well-formed identity is able to subjectively rank the choices present at any moment, giving the will (or acting self) a basis for decision-making. It follows, then, that problems with identity formation lead to problems with the will. Malformation of identity and impairment of the will necessarily affect the functioning of the thinking self (the “I of self-consciousness”) – the part of the mind which is explicitly symbolic in the sense that it operates with formal symbols, above all language. The thinking self may become fixed on questions of identity; it may have to stand in for the will, when a person has to talk him/herself into acting in situations which normally wouldn’t require self-conscious reflection (e.g. going to the bathroom, eating, getting out of bed); or, in the most severe cases, the thinking self may become completely disconnected from individualized culture, in which case all the cultural resources of the mind range free, without direction from identity and will.
The experiences of those who suffer from mental illness begin to make sense within this framework. In major depression, the will is impaired in its motivating function – the ability to force oneself to act or think as one would like to, or as would seem appropriate, is severely lessened. The mind at this stage remains individualized, and one has a definite, though distorted and painful, sense of self. The thinking self becomes negatively obsessed with identity, and an incredible dialogue of self-loathing thoughts takes hold. It is insufferable to be oneself, and death naturally suggests itself as the only possibility of escape. Tragically, as we all know, many depressed people do take their lives; yet for many, even the will to take this action is not present. In bipolar disorder, the impairment of the motivating function of the will in depression mixes with the impairment of its restraining capacity in mania. One can neither move oneself in the desired direction nor restrain one’s thoughts and actions from running in every direction. The negative self-obsession of depression (which can still be justifiably considered delusional) alternates with the grandiose and exalted self-image and beliefs of mania, often more noticeable to the outside observer. Mania can either cycle back to depression or, through delusional tension, develop into acute psychosis.
The most characteristic symptoms of schizophrenia – hallucinations and elaborate delusions – are usually preceded by a prodrome which bears significant resemblance to certain aspects of depression and mania. This is often a period of social withdrawal, when the experience of the outside world seems to move from a sense of unreality to a sense of the profound yet ambiguous meaningfulness of all things. In healthy minds, identity provides a relatively stable image of the cultural world and the individual’s place in it, and thus the will directs thought and action towards relevant goals. Naturally, at each moment much of the environment is overlooked so that attention can be focused where it should be. In the prodrome, however, the thinking self becomes fixated on mundane aspects of reality, and things in the environment which are usually taken for granted become alternately senseless or imbued with special significance. This experience of the world as incomprehensible and inconsistent suggests a serious problem with identity. The will (which in healthy cases is a largely unconscious process directed by identity) gets put on the shelf, so to speak, and the thinking self takes on the task of trying to piece together this unreal or hyperreal outside world.
The prodrome is usually only identified after the fact, since it is the appearance of hallucinations and delusions which allows the illness to be diagnosed as schizophrenia. Delusions (often also present in patients diagnosed with bipolar disorder) are the best-known feature of schizophrenia. We can understand delusion as the inability to separate subjective from objective realities – put another way, the inability to distinguish between the cultural process on the individual level (the mind) and culture on the collective level. Thus internally generated experiences are mistakenly thought to have originated outside. The elaborate delusions described by schizophrenic patients can be seen as a kind of rationalization of the experience of acute psychosis. It is important to distinguish between delusional accounts of the acutely psychotic phase, given after the fact in moments of relative self-possession, and the experience itself. In the midst of acute psychosis, a person is almost always incommunicative. Descriptions of this stage often mention the loss of the sense of self, as well as the sense of being watched by an external observer. The mental process, no longer individualized, is beyond willed control. Schneider’s first-rank symptoms, such as the belief that thoughts are extracted or implanted and that physical sensations and actions are controlled by an external force, clearly point to the experienced loss of will which runs underneath so many schizophrenic delusions. The sense of an alien presence is explained by the continued processing of the thinking self even after identity and will have (if only temporarily) disintegrated. Lacking this individualized direction, the “I of self-consciousness” becomes the “eye of unwilled self-consciousness” – the defenseless sufferer necessarily experiences this free-ranging cultural process as foreign, and quite possibly terrifying, because it is beyond his control.
The formal abnormalities of thought which were so important to Eugen Bleuler’s diagnosis of schizophrenia also fit into the cultural framework. Schizophrenics are often unable to privilege conventional, socially accepted associations in thought. Most of the time in our modern societies, normal associations follow the rules of logic (in the strict sense of Aristotelian logic based on the principle of non-contradiction). However, it must be noted that logic is a historical, thus cultural, phenomenon, so the inability to think logically should not be taken as evidence of brain malfunction. Of course, depending on the context, some other logic may be culturally appropriate, and arbitrating between contextual logics is one of the primary ways that the will directs thought. In schizophrenia, though, with the will impaired, thought is unanchored to any of these logics, and seems to jump from one to another at random. This becomes most evident in the use of language, which seems to speak itself, flowing without direction and often tied together by the sonic qualities of words or connections in meaning which would usually be overlooked as irrelevant. While the use of language will necessarily depend on the particular cultural resources present in the individual’s mind, it is impersonal in the sense that it draws its life from the associations inherent in language itself, rather than associations pertinent to individual identity or the objective cultural context.
Not only does Greenfeld’s continuum model better account for the huge overlap between the illnesses as currently defined, it also allows us to pay closer attention to movement along this continuum throughout the course of an individual’s illness. While anomie is presumed to be the initial cause of mental illness early in life, through interference with identity formation, the various swings on the spectrum may become more comprehensible when we consider what is happening to the individual at the time the change in symptoms occurs. It is possible that specifically anomic situations may lead to shifts in the already existing illness. (These considerations are explored in Greenfeld’s analyses of the well-publicized cases of John Nash, winner of the Nobel prize in economics, and Kay Redfield Jamison, co-author of the authoritative book on manic-depressive illness.)
The focus on the symbolic, mental processes at work in these “diseases of the will” should not be misunderstood as in any way taking away from the biological reality of major mental illness. Just as the activity of healthy minds corresponds to certain brain activity, so the abnormal processes of a sick mind would be expected to correspond to atypical patterns of brain function. Neither does the hypothesis that mental illness has a cultural rather than biological cause ignore potential genetic conditions that might make certain individuals more vulnerable than others. In fact, it is possible that mechanisms of interaction between culture and genes may become known with continued research in epigenetics – the study of changes in gene expression not caused by changes to the underlying DNA sequence. Some have already hypothesized that gene-environment interaction may lead to epigenetic changes that are central to the expression of mental illness. Of course, unless epigenetic research is specifically designed to take the symbolic nature of the environment into account, it will probably do little to help us to better understand mental disease and the mental process in general.
Part 1 of the exposition looks at the mind/body problem which has stood at the center of Western philosophy for over 2000 years, and considers Greenfeld’s proposed resolution – a three-layer view of reality (matter, life, and culture/mind) in which the top two layers are emergent phenomena. Greenfeld credits Charles Darwin with making it possible to view the world in terms of emergent phenomena, which in turn makes possible her theory of culture and the mind, a theory which can put the mind/body question to rest. At the same time, she exposes the historical roots of the dogmatic bias of science (as it is normally practiced) towards materialism, and dismisses the notion that science has proven (or can in any way empirically prove) this position, thereby maintaining that there is no inherent conflict between faith and rigorous empirical study.
In part 2, the proposed solution to the dualist problem is developed – culture is a symbolic process emergent from biological phenomena and operating within the boundary conditions of life, yet fundamentally autonomous and governed by a different set of rules. As life organizes the matter out of which it is composed into unlikely patterns, so the symbolic process of culture organizes the brain (which at all times both supports and provides the boundary conditions for the process) to suit its own needs. Greenfeld logically deduces that the point of emergence for culture and the mind must have been the moment vocal signs were first intentionally articulated and became symbols. The internalization of this intention creates the mental structure of the will. Yes, this means that in a single moment, culture, the mind, and “free will” as we know it appeared together, forever separating Homo sapiens from all other animal species and making humanity a reality of its own kind. This view of culture, as a symbolic process which structures not only social life but also individual minds, has radical implications for the many disciplines which study the various aspects of humanity. It also demands the attention of neuroscience, which will remain purely descriptive, and will not gain any ground in the attempt to understand and explain “consciousness,” until it takes into account the symbolic reality – by far the most important aspect of the human environment.
Part 3 reiterates the ideas about nationalism developed in Greenfeld’s first two books and takes things a step further. She identifies nationalism – a fundamentally secular consciousness based on the principles of popular sovereignty and egalitarianism – as the defining element of modernity, responsible for massive changes in the nature of human experience. More specifically, she claims that love, ambition, and madness as we know them today emerged out of this new consciousness in 16th century England and spread from there to other societies that adopted and adapted the nationalist culture.
Part 4 challenges the current psychiatric dogma that manic-depressive illness and schizophrenia are distinct illnesses with biological causes. The need to rethink this distinction is evidenced by the high degree of overlap in symptoms between the two conditions and the failure to find consistent functional or structural brain abnormalities which would allow for accurate differential diagnosis. Not only have genetic researchers been unable to find individual genes that cause schizophrenia or mdi, their best work suggests a shared vulnerability to both illnesses. Epidemiological data seems to show that mental illness occurs at greater rates in modern nations with Western-derived culture, and studies within these nations suggest that the upper classes (i.e. those individuals who fully experience the openness of society and have the greatest number of choices) are particularly affected. Both of these findings are consistent with Greenfeld’s hypothesis that anomie causes mental illness. Nevertheless, this data is consistently ignored or rejected as flawed, since it flies in the face of the currently accepted notion of mental illness as biologically caused and uniformly spread across cultures and throughout history. Likewise, the fact that no genetic cause of mdi or schizophrenia has been found has done little to shake the faith that such a cause will one day be found. Unfortunately, this systemic materialist bias can only continue to impede progress in the understanding of these fatal conditions.
The theoretical view of mental illness as ultimately stemming from problems with the formation of identity is a new one, and thus it does not come packaged with some ingenious cure. However, the clear implication is that something must be done to help individuals in anomic modern societies create well-formed identities. Since this process begins very early in childhood, the intervention must begin then as well. Educating children about the multitude of choices they will face in their extremely open environment, and alerting them to the presence of the many competing and often contradictory cultural voices vying for their attention, would become priorities. We should also be cautious (as the recent work of people like Ethan Watters suggests) of the potential side effects of exporting our culture to other societies.
While this exposition is in some sense finished, there is much more to say, and I will continue exploring these ideas and comparing them with other perspectives in my future posts. I realize this work is controversial, and can be difficult to take in all at once. Please, if any part (or the whole) of this seems unclear, unsupported, or simply outrageous, ask a question or give your critique. I’m eager to hear what others have to say.
Posted on October 1, 2010 - by David
I am working directly from the unpublished text of Liah Greenfeld’s forthcoming book, Mind, Madness, and Modernity: The Impact of Culture on Human Experience. All the original ideas, and all interpretations and analysis of primary and secondary source materials used to support the ideas are attributable to Liah Greenfeld. Read the introduction to the exposition here.
Part 3 – Madness: A Modern Phenomenon
With all that has been written about schizophrenia and manic-depressive illness, the countless studies that have been conducted, and the growing list of medications used in treatment, it would be easy to mistakenly assume that we now understand the nature and cause of these ailments. The history of the separation of psychoses of unknown cause into these two categories leads us to Emil Kraepelin (1856-1926). This German psychiatrist believed that these were heritable brain diseases, and he led a revolution in classification in German-language psychiatry around the turn of the twentieth century, trying to discover just what kind of brain diseases he was dealing with. Kraepelin used a Latin version (dementia praecox) of the French term démence précoce (coined in 1852 by Bénédict Morel) to distinguish a form of insanity with an early onset and rapid development from the common geriatric dementia. Kraepelin then separated dementia praecox from manic-depressive insanity (called by the French folie circulaire or folie à double forme). Up until that point, the two conditions were believed to constitute one general category of insanity.
Kraepelin’s use of the term dementia praecox, which suggested a progressive slowing of mental processes, to refer to a condition characterized largely by delusions and hallucinations (which imply not mental lethargy but imaginative hyperactivity) may have contributed to the misinterpretation of schizophrenia (still common today) as a degeneration of cognitive/reasoning capacities. The evidence suggests that it is rather the strange character of thought, the inability to think in normal, commonly accepted ways, which distinguishes schizophrenia from geriatric dementia. The name “schizophrenia” (meaning “splitting of the mind”) was introduced to replace dementia praecox in 1908 by the Swiss psychiatrist Eugen Bleuler. Bleuler saw the disease mainly in terms of four features: abnormal thought associations, autism (self-centeredness), affective abnormality, and ambivalence (inability to make decisions). Then in the 1930s, another German psychiatrist, Kurt Schneider, contributed greatly to the diagnosis of schizophrenia by identifying “first-rank symptoms,” primarily related to hallucinations and delusions. Hearing voices speak one’s thoughts aloud, discuss one in the third person, and describe one’s actions; feeling that an outside force is controlling one’s bodily sensations or actions and extracting, inserting, or stopping thoughts; believing that one’s thoughts are “broadcast” into the outside world – these are some of the experiences which Schneider found to be characteristic of the illness which Bleuler had renamed.
It should be noted that although Schneider’s first-rank symptoms are essentially psychotic symptoms (and schizophrenia is by definition a psychotic illness), very often those diagnosed with schizophrenia do not experience these symptoms. Diagnostic standards today distinguish between positive symptoms (symptoms like hallucinations and delusions, which are not present in healthy individuals) and negative symptoms (e.g., blunted affect, lack of fluent speech, inability to experience pleasure, lack of motivation). Anti-psychotic medications are often effective in treating some of the positive (i.e., psychotic) symptoms of schizophrenia, but attempts to alleviate negative symptoms with medication have been largely unsuccessful, and the prognosis tends to be worse for sufferers who experience primarily negative symptoms.
By far the most authoritative and extensive work (over 1,200 pages long) on that other half of madness is Manic-Depressive Illness: Bipolar Disorders and Recurrent Depression, written by Drs. Frederick Goodwin and Kay Redfield Jamison. The subtitle (Bipolar Disorders and Recurrent Depression), added for the 2nd edition (published in 2007), emphasizes the essential unity of all the major affective illnesses. In the introduction, the authors stress their reliance on Kraepelin’s model for their own conceptualization of mdi. (They, like Kraepelin, see it as a brain disease with genetics playing a significant causal role.) But because Kraepelin’s major act of classification was to divide psychotic illness into two distinct disorders, any definition of mdi based on his work depends on having a clear definition of schizophrenia, which is clearly lacking. Kraepelin’s distinction between the two was based primarily not on differences in symptoms, but on course of illness and outcome, with schizophrenia (or in his terminology, dementia praecox) being much more malignant and causing significant deterioration over time. It was in fact Eugen Bleuler who first called mdi an “affective illness,” not because schizophrenia occurred without major mood disturbance, but because in mdi he saw mood disturbance as “the predominant feature.” This characterization has proven to be extremely important for the current conception of major mental illness; the original distinction between two psychotic illnesses has largely been obscured, and mdi is now viewed essentially as a mood disorder, with schizophrenia, by contrast, appearing to be essentially a thought disorder.
Though manic-depressive illness includes a variety of mood disorder diagnoses, the main distinction is between major depression and bipolar disorder (alternating episodes of depression and mania). A few decades ago, the bipolar label was split into bipolar-I and bipolar-II. Bipolar-I is the severe form of the disease in which both depressive and manic episodes are serious enough to require treatment. A diagnosis of bipolar-II may be given when a patient suffers from major depressive episodes and also experiences “hypomanic” episodes (meaning basically “mildly manic” and therefore lacking psychotic features). Even Goodwin and Jamison seem skeptical of the value of this and other divisions in classification.
In order to compare manic-depressive illness with schizophrenia, then, we should concentrate on descriptions of (go figure) depression and mania. According to the DSM-IV, typical symptoms of depression include “loss of interest or pleasure in nearly all activity,” irritability, “changes in appetite and weight, sleep, and psychomotor activity; decreased energy; feelings of worthlessness or guilt; difficulty thinking, concentrating, or making decisions; [and] recurrent thoughts of death or suicidal ideation, plans, or attempts.” The description given by Goodwin and Jamison is along the same lines, though much more vivid:
Mood in all of the depressive states is bleak, pessimistic, and despairing. A deep sense of futility is often accompanied, if not preceded, by the belief that the ability to experience pleasure is permanently gone. The physical and mental worlds are experienced as monochromatic, as shades of gray and black. Heightened irritability, anger, paranoia, emotional turbulence, and anxiety are common. (MDI 66)
Further descriptions from patients and clinical observers add more layers to this general body of symptoms; among the most interesting are a lack of facial expression and a sometimes frightening sense of unreality. It is quite clear that depression is something altogether different from normal sadness, and even from “abnormally low mood.” These descriptions show a huge variation in the level of emotion experienced, from almost no feeling at all to unbearably acute anxiety. A depressed person’s thinking may be slowed almost to the point of paralysis, or he may alternately be unable to control an unending torrent of painful thoughts. All that seems consistent across descriptions and definitions of depressive episodes is that they are extremely unpleasant experiences.
There is such a diagnosis as psychotic depression (featuring obvious delusions and hallucinations, in which case it is not clear how it can be diagnosed differently from schizophrenia), but even in its more ordinary form, many of the symptoms of depression cannot be easily distinguished from the negative symptoms of schizophrenia, which include flat affect and paralyzed thought. And what good reason is there not to consider the firm belief in one’s utter worthlessness, the obsession with death, and the sense of the absolute necessity of ending one’s life as instances of delusion or thought disorder?
Just as depression is not just extreme sadness, mania is not an exaggerated form of joy. According to the DSM-IV, a manic episode is a period of “abnormally and persistently elevated, expansive, or irritable mood,” with typical symptoms being “inflated self-esteem or grandiosity, decreased need for sleep, pressure of speech, flight of ideas, distractibility, increased involvement in goal-directed activities or psychomotor agitation, and excessive involvement in pleasurable activities with a high potential for painful consequences.” To be considered a manic (rather than merely “hypomanic”) episode, “the disturbance must be sufficiently severe to cause marked impairment in social or occupational functioning or to require hospitalization, or it is characterized by the presence of psychotic features.” Mood within a manic episode may be highly variable, and the frequent alternation between euphoria and irritability is noted.
Grandiose delusions are common – the extreme expression of the inflated sense of self-importance so typical in mania. (Again, one wonders why the beliefs which spring from the typical sense of worthlessness in depression – the polar opposite of the grandiose beliefs in mania – should not be considered delusions as well.) Grandiosity often manifests in compulsive writing, which the sufferer may believe has special significance but which is usually characterized by “flight of ideas” and “distractibility.” This behavior is not unique to mania, and has been well documented in patients diagnosed with schizophrenia.
Delusions may be not only grandiose but (as in schizophrenia) paranoid as well. In some severe cases, the sufferer may reach the stage of delirious mania, which the authors of MDI describe by quoting Kraepelin:
At the beginning the patients frequently display the signs of senseless raving mania, dance about, perform peculiar movements, shake their head, throw the bedclothes pell-mell, are destructive, pass their motions under them, smear everything, make impulsive attempts at suicide, take off their clothes. A patient was found completely naked in a public park. Another ran half-clothed into the corridor and then into the street, in one hand a revolver in the other a crucifix….Their linguistic utterances alternate between inarticulate sounds, praying, abusing, entreating, stammering, disconnected talk, in which clang-associations, senseless rhyming, diversion by external impressions, persistence of individual phrases are recognized. …Waxy flexibility, echolalia, or echopraxis can be demonstrated frequently. (36)
The descriptions of delirious mania provided by recent clinicians are similar to Kraepelin’s. Quite obviously, a patient in the condition described above is suffering from some of the most characteristic symptoms of schizophrenia. Of course, for those following in Kraepelin’s footsteps, this similarity should come as no surprise, since (as was mentioned earlier) his distinction between the two psychotic disorders was not based on differences in symptoms. Indeed, the need to clarify the blurry boundary between psychotic mania and schizophrenia has resulted not in further distinction, but in the creation of hybrid diagnostic categories like schizoaffective disorder (including its bipolar type). In summarizing the findings of a number of studies over a thirty-year span comparing thought disorder in schizophrenia and mania, Goodwin and Jamison are forced to conclude that there is no quantitative difference in thought disorder between the two conditions. Nevertheless (needing to maintain the distinction between their area of expertise and the even more mysterious realm of schizophrenia), they maintain there are qualitative differences in thought disorder, though the studies used to support this claim point in a number of different directions. Of course, these studies were done only after patients received a particular diagnosis, so differences in thought disorder may also have been related to the effects of different medications. After considering the huge overlap between these two diagnoses, and the fact that the differences seem to be of degree rather than kind, it seems possible that they are not two distinct diseases after all.
While the technological advancements of recent decades allow us to map the human genome and look at the brain on the molecular level, the enormous amount of data that has been amassed is virtually useless for psychiatrists trying to diagnose their sick patients because the assumed biological causes of schizophrenia and manic-depressive illness have not been found. No brain abnormalities that are specific to either illness or present in all cases have been identified. Nevertheless, the experts who study and treat schizophrenia and mdi keep the faith (quite literally) that a breakthrough is just around the corner.
For years, genetic research has appeared to be the most promising of the recently opened avenues, but the excitement seems unwarranted by the findings. The relatively large number of chromosomal regions which may be implicated in susceptibility for bipolar means that hope of finding a specific bipolar gene or even a small number of genes must be given up. Some researchers think the way to go is to narrow the search by looking for genes associated with specific aspects of the disease. Of course, this further refinement is only possible because of the huge variation in symptoms and experiences of those who fall under the mdi/bipolar umbrella, and we are once again reminded of the difficulty of defining what this illness or group of illnesses even is. Furthermore, even the distinction between schizophrenia and mdi seems to collapse in light of the genetic linkage data. Goodwin and Jamison write:
While the search for predisposing genes had traditionally tended to proceed under the assumption that schizophrenia and bipolar disorder are separate disease entities with different underlying etiologies, emerging findings from many fields of psychiatric research do not fit well with this model. Most notably, the pattern of findings emerging from genetic studies shows increasing evidence for an overlap in genetic susceptibility across the traditional classification categories. (49)
Genetic studies in the schizophrenia research community lead to pretty much the same hypothesis as with bipolar: genetic susceptibility is most likely polygenic, meaning dependent on the total number of certain genes which may contribute to vulnerability. It must be noted that genetic vulnerability is a condition, not a cause of schizophrenia and bipolar – something else must be acting on this vulnerability. In one way or another, this fact is usually noted in the literature that deals with genetic data, but it is often obscured by a tone of confidence which suggests the information may be more meaningful and explanatory than it truly is.
Even when a specific gene has been well studied across illnesses, its usefulness in understanding genetic susceptibility may be extremely limited. Some studies in both schizophrenia and mdi have found an increased risk of illness for those who possess the short form of the promoter region of the serotonin transporter gene 5-HTT. The thing is, each of us has two copies of this gene, and over two-thirds of us carry at least one short form, meaning that having the common variant of the gene is the risk factor! If most of us possess a gene which puts us at risk for an illness which only a small minority of people have, then this particular trait is obviously not much of a causal explanation.
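The arithmetic behind this objection can be made concrete with a rough sketch. The numbers below are illustrative assumptions, not figures from Greenfeld or from MDI: a lifetime prevalence of 1% (the round figure discussed later in this post) and a modest relative risk for carriers of the sort such association studies typically report.

```python
# Illustrative back-of-the-envelope calculation; all numbers are assumed.
prevalence = 0.01      # assumed lifetime risk in the whole population (~1%)
carrier_frac = 0.67    # fraction carrying the "risk" variant (over two-thirds)
relative_risk = 1.2    # assumed modest increase in risk for carriers

# Overall prevalence is a weighted average of carrier and non-carrier risk:
# carrier_frac * p_carrier + (1 - carrier_frac) * p_noncarrier = prevalence,
# with p_carrier = relative_risk * p_noncarrier. Solve for both risks.
p_noncarrier = prevalence / (carrier_frac * relative_risk + (1 - carrier_frac))
p_carrier = relative_risk * p_noncarrier

print(f"Risk for carriers:     {p_carrier:.4f}")
print(f"Risk for non-carriers: {p_noncarrier:.4f}")
print(f"Carriers who never develop the illness: {1 - p_carrier:.1%}")
```

Under these assumptions, roughly 99% of the people carrying the "risk" variant never develop the illness, which is the sense in which a common variant cannot serve as much of a causal explanation.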
Still today, the most important evidence for the heritability of schizophrenia and bipolar comes from traditional genetic-epidemiological studies – “genetic” research only in the sense that we know that relatives share genes. There is a significantly greater lifetime risk of illness for people with a first-degree relative who suffers from schizophrenia, and studies of bipolar and major depression (i.e., manic-depressive illness) have had parallel findings. However, the overwhelming majority of schizophrenics do not have parents or first-degree relatives with schizophrenia, and most of them do not have children themselves, making it difficult to establish the genetic component by looking at family history in a large percentage of cases.
Studies of twins are particularly important for the heritability argument. Calculations from these studies find a 63% risk of having bipolar disorder if an identical (monozygotic) twin has it. The risk for major depression is significantly lower. In schizophrenia the risk is under 50%. The ideal study design for attempting to separate the contributions of biology and environment involves identical twins, separated at birth, adopted, and raised apart, with at least one of them suffering from mental illness. As can be imagined, these cases are hard to come by (4 in mdi and 14 in schizophrenia), and the small number of cases makes generalization suspect (though generalizations are often still made). Another method, for which there is significantly more data, is to compare the risks of identical (monozygotic) and fraternal (dizygotic) twins. Because both kinds of twins are assumed to share the same environment, but fraternal twins only share 50% of their genes, the difference in risk between fraternal and identical twins is attributed to genetics. But this method depends on an extremely limited understanding of environment, reducing it to simply having the same parents. It’s likely that identical twins would be treated in very similar ways by their parents and society at large, but fraternal twins, being biologically different (perhaps even in gender) will likely be treated in very different ways. Therefore, it is highly doubtful that twin studies are able to separate the contributions of biology and environment to lifetime risk of mental illness to anywhere near the degree that is suggested. The fact that over one-third of identical twins are not affected by the disease from which their twin suffers reveals again that genetic susceptibility is at most a condition, and not a cause of schizophrenia and mdi.
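The attribution step the passage questions can be made explicit with Falconer's classic approximation, which reads the doubled gap between identical-twin and fraternal-twin concordance as the share of risk attributable to genes. The 63% monozygotic figure is from the text; the dizygotic figure below is assumed purely for illustration.

```python
# Sketch of the twin-comparison logic; the DZ figure is an assumption.
mz_concordance = 0.63   # bipolar risk for a monozygotic co-twin (from the text)
dz_concordance = 0.20   # dizygotic concordance, assumed for illustration

# Falconer's approximation: h^2 = 2 * (r_MZ - r_DZ). Twins of both types
# are assumed to share environments to the same degree, so the entire
# MZ/DZ gap is credited to the extra 50% of shared genes.
h2 = 2 * (mz_concordance - dz_concordance)
print(f"Naive heritability estimate: {h2:.2f}")
```

The calculation makes plain where the "equal environments" assumption enters: if identical twins in fact share more of their environment than fraternal twins, as argued above, part of the gap credited to genes belongs to environment, and the estimate is inflated.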
The prevailing assumption that schizophrenia and mdi have biological causes naturally leads to the expectation of finding them distributed uniformly across cultures and throughout history. In the case of schizophrenia, this belief justifies the adoption of the standard worldwide lifetime risk of 1% (a nice round number), extrapolated from an embarrassingly small number of studies – one from Germany in 1928, and two from the 1940s in rural Scandinavian communities. However, there is a serious lack of evidence of the existence of these illnesses before the early modern period, and studies have consistently found significant differences in the rates of mental illness across cultures and between social classes within cultures. Nevertheless (perhaps because the idea that serious mental illness may affect different populations at different rates does not sit well with us), variations are often explained away with charges of inaccurate reporting and under- or over-diagnosis. But epidemiological studies sponsored by the World Health Organization carried out over several decades have found that the illness identified as schizophrenia in poorer, “developing” countries tends to be less chronic (fewer psychotic episodes), causes less disability, and has a better prognosis than schizophrenia in more affluent, “developed” societies. Some of the data from Western nations suggests a lifetime risk of schizophrenia greater than 1%, while in poorer societies the number often appears lower. Multiple studies have found the rate of schizophrenia among Afro-Caribbeans born in the UK to be higher than the prevalence in the islands from which their families immigrated. Both schizophrenia and mdi have been found to be less prevalent in Asian countries.
Overall, cross-cultural data supports the hypothesis that schizophrenia and mdi are diseases caused by modern culture, and more specifically, that the more anomic a society becomes (i.e., the more identity becomes a matter of individual choice and the less guidance is given by culture), the more mental illness will be found. Research in the U.S. has shown a lower age of onset and higher rates of prevalence for manic-depressive illness in those born after 1944 compared to those born before, though this increase has been attributed to the inadequacy of earlier data-collection techniques, which systematically underestimated the true prevalence of affective disorders. Usually, when environment is allowed a causal role in mental illness, poverty and the stress of the urban environment are the safest targets to blame, with studies as early as 1939 finding a higher incidence of schizophrenia in lower-class, urban areas. However, when studies began to consider social class of origin rather than merely the status of the patient when the illness was first recognized, the picture changed significantly. The social mobility of schizophrenic patients displays a “downward drift,” suggesting that their greater proportion among the lower class is due to the disability of the disease rather than the stress of this environment. Furthermore, it appears that the upper class supplies more schizophrenics than could be predicted by the total upper-class share in the population. The majority of studies of manic-depressive illness show significantly lower rates in blacks compared to whites, but this, like so many other findings which make no sense within the biological framework, is dismissed for a variety of reasons as a mistake.
Finally, Goodwin and Jamison tell us that “the majority of studies report an association between manic-depressive illness and one or more measures reflecting upper social class.” (169) To explain this finding, they consider the possibility that certain personality traits associated with affective illness may contribute to a rise in social position. (One assumes they mean the occasionally “positive” aspects of mild mania, since it is unclear how crippling depression or delusional mania would aid in social climbing). A second hypothesis, that manic-depressive illness could be related to the particular “stresses of being in or moving into the upper social classes,” is deemed simply “implausible, because it assumes that, compared with lower classes, there is a special kind of stress associated with being in the upper social classes, one capable of precipitating major psychotic episodes.” Furthermore, they accuse such a hypothesis of ignoring genetic factors, though discounting genetic vulnerability as a condition for mdi is quite obviously not implied by this idea.
By now it should be quite clear that the belief that major mental illness is caused biologically has made it virtually impossible to reconsider what the empirical evidence actually tells us. Each time the research that is supposed to support this belief comes up short, it becomes another occasion for the reaffirmation of faith in a soon-to-come breakthrough. Where the data appears to blatantly contradict the biological hypothesis, researchers often simply discount its reliability. While many of the most important experts will freely admit how little we actually understand about mental illness despite all efforts, it is hard to imagine the direction of these efforts changing much anytime soon. This is not a recipe for scientific progress.
The final post of this series will bring Greenfeld’s theory of the mind together with what we know about schizophrenia and manic-depressive illness, considering the two as one disease existing on a continuum of complexity of will-impairment.
Posted on September 24, 2010 - by David
I am working directly from the unpublished text of Liah Greenfeld’s forthcoming book, Mind, Madness, and Modernity: The Impact of Culture on Human Experience. All the original ideas, and all interpretations and analysis of primary and secondary source materials used to support the ideas are attributable to Liah Greenfeld. Read the introduction to the exposition here.
“Identity-formation is likely to be faster and more successful the simpler is the (always very complex) cultural environment in which it is formed – i.e., the fewer and the more clearly defined are the relations that must be taken into account in the relationally-constituted self.”
- from Mind, Madness, and Modernity: The Impact of Culture on Human Experience
For most of human history, in most societies, identity was not something one had to go searching for – it was given at birth. For most individuals, the socio-cultural space relevant to their lives was easy to map out, and directions for proper navigation were well understood from a young age. Life may have been extremely difficult in the physical sense, but at least it was not confusing – people knew their proper place.
As Greenfeld has demonstrated since her first major work, this changed in 16th century England following the Wars of the Roses, which wrecked the nobility and left the rigidly stratified society of orders in disarray. In its place, a new consciousness emerged – nationalism – the modern consciousness, which redefined the possibilities for life in England and in the other societies to which it soon spread. We call this new consciousness nationalism simply because “nation” was the name given to the society in which it emerged by those 16th century Englishmen who first experienced its dignifying effects.
Nationalism is a fundamentally secular and humanistic consciousness based on the principles of popular sovereignty and egalitarianism. (Three distinctive features which most often take shape along with this consciousness are an open class structure, the state form of government, and an economy oriented towards sustained growth). At the beginning of the 16th century, someone among the newly elevated English aristocracy began equating the word “nation,” which had formerly referred to a political and cultural elite, with the word “people,” which referred originally to the lower classes. This equation of “nation” and “people” both reflected and reinforced the new reality of English society, where the principles of popular sovereignty and egalitarianism made the nation and all its members an elite. No longer confined to a particular station in life by a closed societal structure ordained by Divine Providence, man became his own ruler, the maker of his own destiny. This elevation in dignity for every member of the nation meant that life in the here and now gained much greater importance – eternity was no longer the realm of the meaningful. This is the source of the secularism of modern society – God was not consciously abolished, but was essentially replaced by man.
For the first time in history, identity-formation became the responsibility of each individual, and this has proven to be a mixed blessing. With the opportunity to rise above the position of one’s birth comes the possibility of failing to successfully make the climb, or falling suddenly and senselessly from whatever height one is able to reach. The abundance of options in every aspect of life lets in a nagging suspicion that one has not made the best choice. The presence of circumstantial, or worse, socially imposed, obstacles to one’s advancement clashes with the belief in one’s equality and right to self-governance. Belief in equality becomes the idea of equality with the best, making it difficult to tolerate the sense that another person is better, or better-off. This inability of culture to provide the individuals within it with consistent guidance is called anomie – recognized by the great French sociologist Emile Durkheim over 100 years ago as the most dangerous problem of modernity.
With the changes in the nature of existential experience brought on by the mixed blessing of modernity came changes to the English language. A new vocabulary was needed to express and reinforce the ideas behind this new reality. By following linguistic changes in 16th century English, we can actually observe the emergence of several aspects of this new experience – at once, so elevating and devastating for the individual mind. That is to say, we can see several specific sources of anomie, which, it is hypothesized, makes identity formation difficult and complicated, leading in some of the worst cases to the development of mental illness.
Of course, there are many who would react strongly to the idea that changes in language may reflect fundamental changes in the nature of human experience. The same materialist tendencies which have us assuming the universality of schizophrenia lead us to assume the universality of a whole range of human emotions and experiences. If human nature is reduced to a set of biological capacities, the idea that various emotions, (which we must feel are a very important part of being human), have emerged in different places and at different times in history seems outrageous.
But as was demonstrated in the previous post, it is culture – the symbolic transmission of human ways of life – which distinguishes our species from all others and makes humanity a reality of its own kind. Culture is a fundamentally historical process, which means that the possibilities for thought and emotion for an individual at any one place and time are dependent on context – the context of what has gone on before and what is going on around the individual, the infinitely complex history of connections and intersections of the variety of symbolic systems which collectively make up that individual’s cultural resources.
This doesn’t mean that emotions are purely cultural phenomena. At the most basic level, emotions are physical sensations – we feel them. We know we must share certain primary emotions with animals – pleasure and pain, fear, positive and negative excitement – which we assume they experience through neurobiological processes similar to ours. We can see also that certain animals experience secondary emotions like affection and sorrow, which are once removed from their physical expression, and usually serve to strengthen social ties within a group. Considering this, it is obvious that human emotional functioning depends to a large extent on biological capacities that we share with other species. But inevitably, culture interacts with these biological capacities, creating more complex emotions which are tied to particular beliefs and ideas – specific experiences of a symbolic nature – which cannot be reduced to a combination of physical sensations.
When a new idea/emotion/experience emerges, it is usually closely linked to a particular word or set of words – either new words, new derivatives of old words, or old words with new or increased significance, evidenced by changes in usage and context. Ignoring these cultural developments under the mistaken assumption that all human experience is essentially the same results in a few common problems of backwards translation:
- the new meaning of a word is attributed to earlier uses of the same word, obscuring the historical shift in definition (and lived experience)
- other words which denote phenomena which may be related or similar to, but are nonetheless distinct from, the phenomenon to which the new word refers, are taken to be synonyms of the new term.
In both of these cases, differences between cultures and over time are blurred, making it nearly impossible to use the history of a language as empirical evidence. But if language, the most important of the various symbolic systems which collectively make up culture, is off limits as evidence, then no meaningful argument about culture can ever be made.
The two new great passions of modernity – the ultimate expressions of the sovereignty of the self – were ambition and love. The opening of these two realms of choice, and their importance in the formation of individual identity is reflected in the growth of the vocabulary of related terms soon after the birth of the English nation.
While ambition was not a new word, before modernity it usually carried a negative connotation, meaning basically an overgrown (and therefore sinful) desire for honor. Over time, though, it became more neutral, meaning something like “a strong desire”. Thanks to the principles of nationalism, such a desire for attainment of an earthly goal was legitimized and even encouraged, making ambition in many instances a virtue rather than a sin. A positive or negative qualifier could then be attached to the word to indicate which type of ambition was meant.
The language of ambition was bolstered by other shifts in meaning and new derivative words. The OED finds only one instance of the use of the word “aspire” in the 15th century, with all its derivatives – aspiration, aspiring, and aspirer – appearing mostly in the late 16th century. The verb to achieve acquired a new meaning of gaining dignity by effort, (as in Shakespeare’s: “Some are born great, some atchieue greatnesse”), and from this were derived achievement, achiever, and achievance. The use of the verb to better, referring to improvement by human action, (e.g., “bettering oneself”), was another permanent addition to the language. Success, which was originally a neutral term meaning any outcome of an attempt, came to refer only to a positive or desired outcome, and its derivatives, successful and successfully, obviously carried this new meaning as well.
Love, (first defined as a passion by Shakespeare), became a calling, a means of defining, or perhaps more accurately, discovering, who one was. While the word love had been commonly used with a variety of meanings – from the ideal of Christian, brotherly love, to the divine love of God, to the essentially sinful sexual lust – the 16th century English concept of love – which is our concept – was dramatically different. “Romantic” love, as it is sometimes called, occurred between a man and woman – therefore it retained clear sexual connotations – but it was above all a union of two minds (or souls, for by this point, the words mind and soul were nearly synonyms). In love, one recognized one’s true self through identification with another, giving meaning and purpose to life in this strangely open world where God, formerly the source of meaning, was conspicuously absent.
The ultimate end of ambition and love was another modern concept: happiness. This word refers to a phenomenon distinct from many of the historically earlier ideas with which it is sometimes identified. It is not luck, which could be either bad or good and was beyond one’s control; not eudemonia, freedom from fear of death which depended in large measure on avoiding excessive enjoyment of life; not the Christian felicity of certitude of salvation, requiring denial of bodily pleasures up to the point of martyrdom. Happiness was rather conceived of as a living experience, a pleasant one, which was purely good and could be pursued. The OED shows the first instance of this general meaning for the word happy in 1525; the same meaning of the noun form – happiness – doesn’t appear until 1591.
Happiness was knowing who and what one was, being content with one’s place in the world – in other words, successfully creating a satisfactory identity. But what was to become of those whose ambitions were left frustrated and unfulfilled? Of those who lost, or failed to find true love, or were kept, either by circumstance or society, from experiencing true love once it was found? What happened to those sensitive minds for whom the responsibility of building an identity proved too great a struggle?
In the same 16th century England which brought the world ambition and love, a new form of mental disease – Madness – appeared. While previously known forms of mental illness were temporary, related perhaps to an infection, an accident damaging the brain, a pregnancy, a bodily illness like “pox” (syphilis), or old age, madness was chronic – usually appearing at a fairly young age (without evidence of an organic cause) and lasting till death. Another of its names, lunacy, reflected the suspicion of a physical cause – specifically implicating the waxing and waning of the moon in the periodic alterations in the character and symptoms of the sufferers. The word insanity entered English at that time too, apparently referring to the same phenomenon as madness and lunacy.
The chronic nature of madness made it a legal issue from the very beginning; the first provision in English law for mentally disturbed individuals — referred to, specifically, as “madmen and lunatics” — dates back only to 1541. Also in the middle of the 16th century, Bethlehem Hospital – more commonly known as Bedlam, the world’s first mental asylum – became a public institution, transferred to the city of London in 1547. While there was probably little to be praised in terms of humane treatment and comfortable accommodations, Bedlam continued to expand into the 17th century to meet what seemed to be a growing need to house the severely mentally ill.
Important for this argument is the fact that folly was separated from madness. Though it sometimes referred to a moral deficiency, folly was generally a synonym for idiocy – a mental dysfunction or deficiency but not a disease. In ‘An Essay Concerning Human Understanding’ (1689), John Locke summarized the difference between madness and folly as such:
In fine the defect in naturals [fools], seems to proceed from want of quickness, activity, and motion, in the intellectual faculties, whereby they are deprived of reason, whereas madmen, on the other side, seem to suffer by the other extreme. For they do not appear to me to have lost the faculty of reasoning, but having joined together some ideas very wrongly they mistake them for truths… as though incoherent ideas have been cemented together so powerfully as to remain united. But there are degrees of madness, as of folly; the disorderly jumbling ideas together is in some more, and in some less. In short herein seems to lie the difference between idiots and madmen. That madmen put wrong ideas together, and so make wrong propositions but argue and reason right from them. But idiots make very few or no propositions, but argue and reason scarce at all.
Physicians of the day sought to describe and understand this new phenomenon, but their methods, sources, and interpretations were thoroughly mixed. Their reliance on classical Greek and Latin terms of mental disturbance resulted in a liberal blend of (their interpretation of) the old ideas with the new reality, and though they attempted to draw distinctions between conditions, they were far from clear. The cause was usually assumed to be organic. The common attribution of madness to an imbalance of the four humors shows the strong influence of the classical medical understanding. (The use of the term melancholy as a name for mental illness in general or a particular variety of it is a prime example). Insanity might also be explained by the stars under which one was born. Some authors distinguished between organic madness and spiritual madness caused by demonic influence. Still others focused on mental states that could in turn affect the body.
Obviously, early observers of madness were far from a uniform hypothesis as to its nature and cause. Nevertheless, these sources do contain some revealing descriptions and suggestions. Andrew Boorde recommended that the patient be kept from “musynge and studieng,” (implying very obviously a literate madman), and likewise Thomas Cogan, a physician and head master of a grammar school, advised against “studying in the night,” deeming “wearinesse of the minde” worse than “wearinesse of the bodie.” Sir Thomas Elyot noted a “sorowe,” or “hevynesse of mynde,” which affected the memory and the ability to reason properly, relating it to such experiences as the death of a child and even disappointed ambition. Christopher Langton saw “sorrow” as a chronic condition, the most serious of four “affections of the mynde” that could “make great alteration in all the body.” Philip Barrough’s description of melancholy, (which he calls “an alienation of the mind troubling reason…”), mentions mood swings, suicidal thinking, hallucinations, and paranoid delusions – in short, some of the most characteristic features of major psychosis which might be diagnosed alternately as bipolar or schizophrenia today. Timothy Bright’s ‘Treatise of Melancholie’ contains the idea that being “over-passionate” put one at risk for mental disease.
By far the longest and most famous book on the topic in the early modern period was The Anatomy of Melancholy by Robert Burton, first published in 1621. It was essentially a collection of all the information he could find on mental disease – both past and present – and therefore (unfortunately) contributed greatly to the confusion of terms, translating as “madness” a whole variety of words from Latin and Greek sources. Despite this mistake, which allowed him to find English madness scattered throughout history, madness seemed to him a particularly pressing problem in his own day. He noted among his “chief motives” for writing the book “the generality of the disease, the necessity of the cure, and the commodity or common good that will arise to all men by the knowledge of it.” Burton’s description of his society as a “world turned upside downward” is loaded with colorful yet tragic examples of apparent inconsistency and injustice – in a word, sources of anomie common to modern life. One can hypothesize that the inclusion of such a description of the contradictions within culture, in a work dedicated to the understanding of what is deemed a medical illness with an essentially organic cause, reflects Burton’s sense that the two phenomena – anomie and mental illness – are related. Indeed, some of the mental symptoms of melancholy “common to all or most” – “fear and sorrow without a just cause, suspicion, jealousy, discontent, solitariness, irksomeness, continual cogitations, restless thoughts, vain imaginations” – begin to make sense if mental illness is seen as stemming from fundamental problems with identity caused by anomie.
Some of these symptoms appear identical to the causes of melancholy which fall under Burton’s general category of “passions and perturbations of the mind.” Ambition and related passions like envy and emulation figure prominently here, but most striking of all is the inclusion of love – the cause, apparently, of a special madness called “love-melancholy” which afflicted primarily men of the upper classes.
But perhaps the greatest early chronicler of madness was William Shakespeare. Dr. Amariah Brigham and Dr. Isaac Ray, (two of the most important figures in 19th century American psychiatry), each devoted an extensive article in the early years of the American Journal of Insanity (today the American Journal of Psychiatry) to the consideration of his work. They saw in his plays, (in particular King Lear and Hamlet), such accurate portrayals of insanity that they were certain he must have drawn his inspiration at least partly from first-hand observation. Whatever might be said today in criticism of the method of these doctors, who had no qualms about using literary study to supplement clinical observation, it is significant that the mental illness they observed in their asylums was the same as that which Shakespeare brought to life in his tragedies more than two and a half centuries earlier.
Apparently, the medical understanding of madness, lunacy, insanity, melancholy – whichever name one chooses – had not advanced very far from the time of Shakespeare to the middle of the 1800’s. “But,” most of us would confidently assume and assert, “since then we have come a long way, we know so much more now.” But do we? Certainly at the time when Brigham and Ray were writing about Shakespeare, serious psychiatric establishments were already taking shape in a number of modern nations. The growth that has taken place since the 19th century within this medical specialization in terms of publications, practitioners, institutions, associations, research, and treatments would have been difficult to imagine. But are we really any closer to identifying a cause, or having a cure to offer to those who suffer from mental illness?
The next post will look at what we know about schizophrenia and the range of diagnoses which fall under the category of manic-depressive illness.
Posted on September 16, 2010 - by David
From part 1
…With the recognition of the autonomous new world of life, Darwin’s breakthrough not only opened the door to advances in biology, it also made possible our escape from the dualist cage. In place of two mutually inconsistent realms, reality may be imagined as consisting of three autonomous but related layers, with the two upper layers being emergent phenomena — the layer of matter, the layer of life, and the layer of the mind.
The mind emerges from three organic elements – the brain, the human larynx, and perception and communication by signs. Two of these, (the brain and the larynx), are specific organs, while the third – the use of signs – is a certain evolutionary stage of the process of perception and communication of perception within a biological group.
For animals, adaptation to the physical environment means developing the ability to perceive a stimulus (e.g., food or a predator) and communicate its presence to other members of the group. The more complex the environment, the more stimuli there are that signify to an organism, and thus more signs to which the organism must learn to respond appropriately. We can describe a sign as an aspect of a stimulus, or of the encoded reaction to it, signifying the stimulus, respectively, to the perceiving organism and to members of the organism’s group.
To reiterate, an emergent phenomenon is a complex phenomenon that cannot be reduced to the sum of its elements. Therefore, the mind’s emergence was not the result of a simple combination of the brain, the larynx, and the use of signs, since these elements were in place long before the transformation occurred. While it is impossible to reconstruct the moment of emergence, we can deduce logically the general nature of this most improbable event – the discovery that sound signs could be intentionally articulated.
Intentionally articulated signs are symbols. A sign corresponds directly to the phenomenon it signifies – it is not open to interpretation, for in the animal world, reading signs correctly is a matter of life and death. Unlike signs, whose meanings are fully contained in their referents in the environment, symbols are arbitrary, given their meaning from the context in which they appear. While the number of signs was essentially limited by the number of potential referents in the environment, symbols, being arbitrary, are not bound by the material environment, instead drawing their life and meaning mainly from the context of other symbols.
Until now, we have referred to this emergent phenomenon as the mind, but this symbolic reality that emerged with the transformation of sign to symbol is a process occurring simultaneously on the individual and collective level. On the individual level, we call this phenomenon “the mind”; on the collective level, we call it “culture.” Of course, the individual mind doesn’t generate its own symbolic process. It is dependent upon the symbolic process on the collective level – culture. For this reason, the mind can be conceptualized as “culture in the brain,” or “individualized culture.” Make no mistake though – these two terms denote one and the same process occurring on two different levels. The mind constantly borrows symbols from culture, but culture can only be processed – i.e., symbols can only have significance and be symbols – in the mind.
In distinction to all other animals, humans transmit their social ways of life symbolically, rather than genetically. This means that culture – the symbolic process of transmission of human ways of life – is what distinguishes us from the rest of the living world, and in fact, makes humanity itself an emergent phenomenon. The mistaken notion that society is what makes humanity unique is quite pervasive, but society – structured cooperation and collective organization for the purpose of collective survival and transmission of a form of life — is a corollary of life in numerous species. It is essentially a biological phenomenon, a product of evolution. What makes human society unique is that it is structured not genetically, but symbolically, on the basis of culture. Culture being a dynamic, historical process, not governed biologically, the social arrangements of humans are much less rigid than those of other animal species and are subject to change.
Just as animals adapt to the physical environment in which they live, so we too must adapt to the cultural environment in which we find ourselves. If we consider this process in animals, we see that it depends not only on the ability to perceive and remember information supplied by the environment, but also on the ability to create supplementary information to complete the picture. In humans, we call this imagination. There is ample evidence that animals possess this ability also – the success of rodents in tests of transitive inference, and the countless creative solutions to problems posed by the physical environment which animals come up with make this hard to deny. This must be an unconscious process – the imaginer is not aware of the steps that lead from the perceived and stored to new information, but, so to speak, “jumps to conclusions” over these steps. Humans must adapt primarily to the cultural (symbolic) environment, and so the largely unconscious process by which we create new information out of information already stored in memory can be called symbolic imagination.
Symbolic imagination, probably, is the central faculty of the human mind, the means by which we “discover” the operative logic of each of the many autonomous yet interdependent symbolic systems which make up the cultural environment. Most symbolic systems – language, fashion, class structure, etc. – are historical, and therefore changeable, with governing principles that have little to do with logic proper – that is, logic based on the principle of no contradiction. While this makes symbolic imagination almost infinitely more complex than imagination in other animal species, we are nevertheless able to find the organizing principles of culture with remarkable success, for the most part without thinking about them explicitly.
Culture, the symbolic process on the collective level, is organized on the individual level, (the mind), by symbolic imagination through the creation of three mental “structures.” It is useful to think of these mental processes as structures, since they are patterned and systematic, and so, we can deduce logically, they must be supported by corresponding patterned and systematic processes in the brain. These structures are compartments of the self and include: 1) identity – the relationally-constituted self; 2) the will, or acting self; and 3) the thinking self, or the I of self-consciousness.
Identity refers to symbolic self-definition. It is the image of one’s position in the socio-cultural “space,” within the larger image of the relevant socio-cultural terrain itself. This “cognitive map” displays the possibilities for adaptation to the particular cultural environment, allowing them to be ranked subjectively. As soon as a child begins to (unconsciously) figure out the organizing principles of various symbolic systems, he begins to form an identity, figuring out where he belongs in the symbolic environment which is still in the process of being constructed itself. It is reasonable to suppose that identity-formation is strongly influenced by the emotional charge with which certain stimuli are delivered. The simpler the (always very complex) cultural environment in which it is formed, the more quickly identity is likely to solidify. This is a largely unconscious process – questions about identity are usually only made explicit if the identity proves to be problematic. In other words, the question, “who am I?” would most likely only occur to someone who would have difficulty answering it.
The will is, simply put, the part of our mind that makes decisions. While identity is the product of a particular cultural environment at a specific time in history, the will is a product of culture in general – a function of symbols. To operate with symbols – intentional, thus arbitrary, signs – we internalize the principle of their intentionality. The will takes its direction from identity, choosing the appropriate “operative logic” to follow given the context. Usually, this is an unconscious process – the will decides without our having to reflect on our decision – but sometimes this process becomes explicit: we become aware that we are faced with options and must exercise our will, and think about our decisions. Because the will operates on the basis of identity, problems with identity may translate into impairment of the will – the person may become indecisive and unmotivated, or the decision making could become completely haphazard and unrestrained.
Finally there is the thinking self or the “I of self-consciousness.” This is consciousness turned upon itself, the phenomenon to which Descartes referred in the oft-quoted “I think, therefore I am.” The other mental processes described above remain hypotheses, but the existence of the thinking self cannot be doubted – it is the only certain knowledge we have. While identity and will are processes informed and directed by the symbolic environment, they are mostly unconscious. The thinking self, though, is explicitly symbolic, meaning that it actually operates with formal symbols – above all, language. This explicit, self-conscious symbolic process does not seem to be a requirement for individual adaptation in the same sense as identity and will, and there is no reason to assume it exists to the same degree in all people. Its most important function seems to be rather the continuation of the cultural process on the collective level. By thinking things through – talking to oneself using symbolic systems like language, math, and music – the mental process can be reconstructed and made explicit, packaged in formal symbolic media for delivery to other minds.
In the exceptionally rare cases when the thinking self is perfectly integrated with identity and will, true genius can appear and usher in dramatic cultural change. It seems much more common, though, that a very active thinking self is implicated in mental disease. As was mentioned earlier, problems with identity lead to impairment of the will, and without these mental “structures” working properly, the “I of self-consciousness” may become deindividualized, experienced as the explicit processing of the undirected resources of culture in general, and felt as a disturbing, alien presence within the self. This is essentially the new theory of mental illness that Greenfeld is offering. It will be developed in much greater detail over the next three posts.
Next, we’ll look at the historical development of this new form of mental disease.
9/24 – Madness: A Modern Phenomenon
Posted on September 14, 2010 - by David
Before the hypothesis that modern culture can cause biologically real mental disease can be given serious consideration, one major conceptual obstacle must be removed: this is the dualist vision of reality. In the dualist conception, central to Western thought for well over two thousand years, reality, (which is presumed to be consistent), expresses itself in two fundamentally distinct, mutually inconsistent ways: the material and the spiritual. This dichotomy has been formulated in a number of ways – the physical and the psychical, the real and the ideal, the mind/body split – but the idea remains basically the same.
Obviously, the concept of two mutually inconsistent realms existing in a world that is presumed to be consistent presents us with a logical problem. Until now, the only way to resolve this problem has been to take one or another monistic position, seeing only one of these expressions of reality as real in the sense of being causally active, the other being merely a secondary expression of the first one. For a long time, the dominant belief was that the spiritual element was causally active, with the material brought into being by some divine creative intelligence. But for several hundred years now, matter has been seen as the causal factor, and the spiritual element, (whichever specific name we give it), was gradually reduced to the status of only apparent reality.
It is important to realize, though, that the materialist view has come to reign supreme for reasons that are primarily historical. The secular focus of nationalism increased the importance of life here on earth, resulting in the emotional weakening of religious faith, while increasing the value placed on scientific inquiry into the empirical world. Likewise, science as an institution, rationally organized in its efforts toward increasing knowledge of empirical (material) reality, first came into being in England with the rise of nationalism. Science being the only epistemological system which has consistently led to humanity’s greater understanding, and control, of certain aspects of empirical reality, it is no surprise that its prestige is so great, and that beliefs associated with it quickly gain authority.
The dominance of the materialist position can be seen clearly in the history of psychiatry. While one approach aimed at addressing the “psychical,” (Freudian psychoanalysis), was extremely influential for about a 50-year span during the 20th century, the biological approach was destined to prevail. Psychiatry is, after all, a medical specialization, and medicine, with the body as its subject, is a decidedly scientific endeavor. After the publication of Darwin’s Origin of Species, the prestige of biology was solidified. To question the biological paradigm was to effectively exclude oneself from the medical field.
Around the turn of the 20th century, German-language psychiatrists, (above all Emil Kraepelin), worked hard to improve the scientific status of the profession. They carefully described and classified those mental diseases of unknown cause which remained for psychiatry after treatment of organic mental diseases like paresis, epilepsy, and puerperal insanity had shifted to their proper medical specializations. The main division of major psychoses into the broad classes of schizophrenia and manic-depressive illness dates to this time. While the etiology of these crippling illnesses remained a mystery, psychiatrists like Kraepelin were confident that they were brain diseases with organic causes which would one day be discovered.
In the United States, the foundation of the National Institute of Mental Health in 1949 strengthened the biological position, and with the discovery and development of several waves of anti-depressant, mood-stabilizing, and antipsychotic drugs from the 1950’s on, the interests of large pharmaceutical companies have further supported this view.
There is, no doubt, a constantly growing body of information about the brain and the various abnormalities in anatomy and neurochemistry which have been observed in psychiatric patients, and genetic researchers have made tentative progress in identifying certain genes which may increase vulnerability to schizophrenia and manic-depressive illness. But the much celebrated technological advances in this field of study have not led to any new, precise methods of diagnosis – there is no brain scan or genetic test psychiatrists can use to determine whether someone “has” schizophrenia. The data is descriptive, not explanatory, and any genetic vulnerability only represents, at most, a condition for mental illness (and so far we cannot even say a necessary condition). And of course, we must remember not to confuse conditions with causes. Finally, none of the drugs used to treat mental illness can be said to constitute a cure.
Despite the failure of these tools to transform our understanding of mental illness (which remains essentially unchanged since Emil Kraepelin’s classifications), the experts in the field have placed their faith in science, believing wholeheartedly (and without evidence) that schizophrenia and manic-depressive illness have a biological cause, and that its discovery is just around the corner.
The problem is, science is not supposed to be a set of beliefs but a method, that method being logical formulation of hypotheses, followed by attempts to refute them with the help of empirical evidence. Science is therefore, as a matter of principle, (though obviously not always as a matter of practice), skeptical of belief. Science is especially skeptical about the immaterial, because of the close association between the immaterial (or the spiritual, ideal, etc., call it what you will), and religion, since religion is always a matter of belief. Unfortunately, science has transformed this skepticism into a dogma – that there is nothing more to empirical reality than the material. This dogma is evident in the faithful expectation of the discovery of a biological cause for mental illness, and the belief that human consciousness is reducible to (i.e., caused by) the organ of the brain.
The materialist answer to the mind/body problem can only be that the mind is nothing more than the subjective experience of the functioning of the brain, which is in effect to say there is no such thing as the mind – that the brain is all that is real. But any amount of self-reflection reveals that the reality we experience, (i.e., that for which we have empirical proof), is always mental. A majority of our experiences involve words and images which are symbolic, and therefore part of a non-material reality. We see things in our “mind’s eye” that are not really there, we hear songs play in our heads, though no corresponding sound waves move through the air. Our emotions certainly have physical aspects – a quickened pulse, an upset stomach, a flushed face – but these physical reactions cannot be said to cause the specific thoughts that follow our change in mood. Ultimately, we are enclosed in the subjectivity of our mental experience. To insist on the material nature of empirically knowable reality is to deny reality as we actually experience it.
Nevertheless, we believe that there is more than this subjectivity. We believe that we have our experience through our bodies, which constitute part of an objective reality. We ignore the irrefutable solipsistic proposition – that reality is merely a product of my imagination – and go on feeding and clothing ourselves, because this fundamental belief in the objective world is literally necessary for our survival.
This belief in the objective world is obviously fundamental for science as well. But science also depends on the belief that this objective world is consistently ordered – that is to say, most scientists believe that empirical reality is actually a logical reality, and can therefore only accept reality to the extent that it fits this belief. But the belief that the objective world is consistently ordered is not, in fact, a fundamental belief – there are, or have been, societies in which chaos was assumed to be the condition of reality. Aristotelian logic, based on the principle of non-contradiction, is a historical, thus cultural, phenomenon. (Ironically for science, a case – based on logic and circumstantial evidence – can be made that it was through exposure to monotheistic culture that Thales of Miletus arrived at the idea of an unchanging organizing principle, which he introduced to Greek philosophy in the 6th century B.C., helping to bring about the transition from mythos to logos.)
So while the twin pillars of science are supposed to be logic and empirical evidence, we see that there is a great deal of belief mixed in. As stated before, we reject solipsism, believing in the objectivity of what we perceive physically through the senses, but the meaning we give to these physical perceptions is affected by our beliefs, beliefs which usually lack empirical proof. This is why some of the most important scientific beliefs remain theories. Where empirical evidence is lacking, we draw inferences using logic, whatever empirical evidence we do have, and our beliefs about the world. This is what circumstantial evidence is – the substitution of logical consistency for the information of sense perception. So, it turns out that science ultimately rests on logic.
But despite the shortcomings of science – that it is sometimes even more dogmatic than religion, and that the evidence it relies on is not strictly empirical, but circumstantial – it remains our only option for attaining objective knowledge about the subjective empirical reality of the mind. Without attaining such knowledge, no new theory of mental illness is possible. But in order to use science (which means to use logic), we still must deal with the logical contradiction of the dualist conception of reality.
It is in fact Darwin who helps us resolve this problem. Though many have mistakenly understood his theory of evolution by natural selection as proving the triumph of materialism in the dualist debate, it actually moved beyond this debate altogether. In distinction to the philosophical materialists of his day, Darwin proved that life was a reality of its own kind, irreducible to the inanimate matter of which each cell is composed, but, in distinction to philosophical idealists, or vitalists, who claimed that life was independent from the material reality studied by physics, he proved that laws of life could only operate within the conditions provided by physical laws.
Thanks to Darwin, we can conceptualize the objective world in terms of emergent phenomena. An emergent phenomenon is a complex phenomenon that cannot be reduced to the sum of its elements, a case in which a specific combination of elements, which no one element, and no law in accordance with which the elements function, renders likely, produces a certain new quality (in most important instances, a certain law or tendency) which in a large measure determines the nature and the existence of the phenomenon, as well as of its elements.
The fact that the emergent phenomenon cannot be reduced to its elements means that at the moment of emergence there is a break in continuity, a leap from one layer of reality into another, essentially distinct and yet fundamentally consistent with the initial layer. By definition, this transformation cannot be traced exclusively to the first reality, and is, at least in part, extraneous to it.
With the recognition of the autonomous new world of life, Darwin’s breakthrough not only opened the door to advances in biology; it also made possible our escape from the dualist cage. In place of two mutually inconsistent realms, reality may be imagined as consisting of three autonomous but related layers – matter, life, and the mind – with the two upper layers being emergent phenomena.
This top layer of the mind – the layer of symbolic reality – will be the subject of the next post in the series.
9/24 – Madness: A Modern Phenomenon
Posted on September 12, 2010 - by David
In her forthcoming book, Mind, Madness and Modernity: The Impact of Culture on Human Experience, Liah Greenfeld presents a new framework for understanding mental illness. Readers of this blog may be familiar with some of these ideas from earlier posts, but her position is so distinct from all other theoretical approaches to mental illness that the central claim should be clearly stated from the outset:
Schizophrenia and Manic-Depressive Illness are biologically real diseases caused by modern culture.
Until now, most of what has been written from a “social science” perspective has focused on attitudes toward mental illness or the history of the psychiatric establishment, rather than the phenomenon of mental illness itself. This is because the theoretical approach usually involves either…
- A tacit acceptance of the dominant model, which holds that mental diseases are caused biologically and therefore occur at equal rates across cultures and throughout history
- A denial of the biological reality of these illnesses, which comes with the view that mental illness is a social construction (derived from the likes of Michel Foucault and Thomas Szasz)
- Some in-between version of the first two (e.g., the recent work of Allan Horwitz), emphasizing the medicalization of some normal human conditions while leaving severe psychosis in the realm of the universal/biological
In light of these views, Greenfeld’s hypothesis that culture, a symbolic (and therefore non-material) reality, is capable of disrupting the normal functioning of the brain appears singular, possibly to the point of seeming outrageous.
Precisely because this idea must seem unbelievable to so many people, I am happy to announce that it will no longer go unsupported. Over the next two weeks, I will be doing an exposition of the new book through a series of posts, outlining the major elements of the argument, and summarizing logical, empirical, and historical evidence to support the claims.
To be clear, I am working directly from the unpublished text of the book. All the original ideas, and all interpretations and analysis of primary and secondary source materials used to support the ideas are attributable to Liah Greenfeld.
Here’s the schedule:
9/24 – Madness: A Modern Phenomenon
Posted on May 3, 2010 - by David
The following paper was presented May 1, 2010 at a student conference at Boston University called ‘Mentalism, Madness, and the Mind.’ Audio from the conference is available here. Thanks to all those who participated.
“Owing to this struggle for life, any variation, however slight and from whatever cause proceeding, if it be in any degree profitable to an individual of any species, in its infinitely complex relations to other organic beings and to external nature, will tend to the preservation of that individual, and will generally be inherited by its offspring.”
- Charles Darwin, The Origin of Species (1859)
In the conclusion to this book – a work of undeniable importance to modern science and modern thought in general – after tracking natural selection through amazingly detailed observations of nature and logical deductions, Darwin imagined what his discoveries might mean for the study of humanity. He writes:
“In the distant future I see open fields for far more important researches. Psychology will be based on a new foundation, that of the necessary acquirement of each mental power and capacity by gradation. Light will be thrown on the origin of man and his history.” (Darwin, Origin 476)
Empowered by the growing acceptance of his theories, Darwin himself endeavored to throw this light on human experience in The Descent of Man. He believed that there was “no fundamental difference between man and the higher mammals in their mental faculties” (Darwin, Descent 34) and attributed our sense of morality, in his eyes man’s most distinctive feature, to highly developed social instincts and advanced powers of reasoning.
Nevertheless, one can sense another phenomenon at work in Darwin’s descriptions. He notes man’s “large power of connecting definite sounds with definite ideas,” and sees the central place of language in the mind, writing that “a long and complex train of thought can no more be carried on without the aid of words, whether spoken or silent, than a long calculation without the use of figures or algebra.” (Darwin, Descent 53, 55) His insistence that the difference between man and animal was one of degree and not kind is challenged by obvious differences between cultures. He attributes the lower moral sense of “savages” partly to “insufficient powers of reasoning,” and speculates that moral tendencies might be inherited traits, though he admits that there is “hardly sufficient evidence on this head.” (Darwin, Descent 93, 98) In trying to account for the many “absurd rules of conduct” prescribed by various religions, Darwin makes a decidedly non-materialist observation, writing, “it is worthy of remark that a belief constantly inculcated in the early years of life, while the brain is impressible, appears to acquire almost the nature of an instinct; and the very essence of an instinct is that it is followed independently of reason.” (Darwin, Descent 95-96) This power of culture was literally right next to him, waiting to be discovered and explained, on the ship which took him around the world. He writes: “The Fuegians rank among the lowest barbarians; but I was continually struck with surprise how closely the three natives on board H.M.S. ‘Beagle,’ who had lived some years in England, and could talk a little English, resembled us in disposition, and in most of our mental faculties.” (Darwin, Descent 33-34) It seems that with all his powers of observation and scientific genius, his attachment to his newly embraced theory and the predictable (and accepted) prejudices of a 19th-century Englishman would not allow Darwin to see what he was missing.
I consider this extended Darwinian introduction justified, because despite the fact that these ideas were published nearly a hundred and fifty years ago, they are, with many of the same troublesome claims and implications, still very much alive today in the various strains of evolutionary psychology.
Before going any further, I must turn to mentalism, as developed by Liah Greenfeld, so that the basis for my objections to evolutionary psychology is clear. In her as yet unpublished work (to which this conference is directly related), Greenfeld actually demonstrates how Darwin’s work made her own theories possible. Here I quote Greenfeld directly:
“On the basis of meticulously constructed circumstantial evidence (that is, pieces of empirical evidence, gaps in empirical evidence, considerations of scholars in other fields, specifically geology, certain beliefs regarding the nature of reality, contradictions in other beliefs regarding it, etc., that were fitted perfectly together, creating a logically watertight argument) Darwin proved that there was a law pertaining to the development of life on earth that had nothing whatsoever to do with laws of physics, and yet was logically consistent with them, because it operated within the boundary conditions of the physical laws. That is, in distinction to philosophical materialists, Darwin proved that life indeed could be irreducible to inanimate matter, but, in distinction to philosophical idealists, or vitalists, who claimed that life was independent from the material reality studied by physics, he proved that laws of life could only operate within the conditions provided by physical laws. By proving that life was an autonomous reality, Darwin made biology independent from physics: biologists now could take physics for granted and explore the ways biological laws operated.” (Greenfeld 69-70)
In short, rather than create “a unified framework in which everything could be understood,” Darwin made it possible to see the world in terms of emergent phenomena. Greenfeld defines an emergent phenomenon as “a complex phenomenon that cannot be reduced to the sum of its elements, a case in which a specific combination of elements, which no one element, and no law in accordance with which the elements function, renders likely, produces a certain new quality (in most important instances, a certain law or tendency) which in a large measure determines the nature and the existence of the phenomenon, as well as of its elements.” (Greenfeld 71)
Unlike Darwin or the evolutionary psychologists, we see humanity as distinguished from all other forms of organic life by the emergent phenomenon of culture. Culture can be defined most generally as the symbolic transmission of human ways of life across generations and distances. In ‘Nationalism and the Mind,’ Greenfeld describes it like this:
The products of this cultural process are stored in the environment within which our biological life takes place, but the process itself goes on inside us. In other words, culture exists dynamically, develops, regenerates, transforms only by means of our minds – which makes culture a mental process. Let me reiterate: culture is a symbolic and a mental process. The fact that it is a mental process means that it occurs by means of the mechanisms of the brain. The fact that it is a symbolic process means that its logic cannot be reduced to the logic of the brain mechanisms, that it is an emergent phenomenon and a reality sui generis. (Greenfeld “N&M” 213)
Greenfeld has therefore described the mind as “individualized culture,” or “culture in the brain,” making the mind, like culture, an emergent phenomenon. Again I quote from her current work, to make clear that culture and mind should not be taken separately:
“These are not just two elements of the same — symbolic and mental — reality, they are one and the same process occurring on two different levels — the individual and the collective, similar to the life of an organism and of the species to which it belongs in the organic world. The fundamental laws governing this process on both levels are precisely the same laws and at every moment, at every stage in it, it moves back and forth between the levels; it cannot, not for a split second, occur on only one of them. The mind constantly borrows symbols from culture, but culture can only be processed – i.e., symbols can only have significance and be symbols — in the mind.” (Greenfeld 81)
In On the Origin of Societies by Natural Selection (2008), Jonathan Turner and Alexandra Maryanski provide a theoretical description of how our nature as individualistic apes has shaped the evolution of human societies. In the conclusion to the first chapter they make the following statement: “All societies, we argue, go against humans’ ancestral ape propensities for weak ties, individualism, and mobility, but some social formations impose greater conflicts with our ape ancestry than others.” (T&M 27) They hypothesize that the development of an emotional language would have played the most important role in creating solidarity among our ape ancestors. They imagine an almost infinitely deep pool of complex emotions that could have been formed through combinations of the four primary emotions – happiness, fear, anger, and sadness – which they say “all researchers agree” are hardwired into our brains. These emotions would have been communicated through vocalizations, gestures, and facial expressions to forge stronger social ties. Symbolic communication through language is given a kind of secondary status to this primary language of emotions. They tell us that “sociality is enhanced by speech because inflections of voice and substance of sentences can add extra layers of emotional content to interaction.” (T&M 117) The chapter titled ‘The Emergence of Culture’ is mostly devoted to describing the adaptations they believe would have been selected for to create an individualistic ape who forged social ties via an emotional language but also possessed the brain capacity for verbal, symbolic communication. Culture and its development, therefore, are just products of natural selection, though they admit the rules seem to change a bit once culture emerges. They outline five “forces of the social universe” which generate selection pressures – population, production, distribution, regulation, and reproduction. (T&M 125) Equipped with a theoretical image of our progenitors and these five forces, they proceed to explain the rest of human history, which for Turner and Maryanski consists of emotional, individualistic apes moving from one “sociocultural cage” to another.
In the final chapter, they defend individualistic modern society because it is less constraining than most of the “cages” we have lived in throughout history. They say that sociologists who see pathological elements are confused about our heritage. “Humans are not the descendants of monkey ancestors, as most sociological criticisms of modernity imply.” (T&M 316) I can’t rightly say what “most sociological criticisms of modernity imply,” but I know that it wasn’t confusion about our primate ancestors that led Émile Durkheim to first describe anomie; rather, it was the study of a real phenomenon, the fact that people in modern societies seemed to be killing themselves at an alarming rate. Likewise, Liah Greenfeld’s current work aims to address a similar phenomenon: the emergence of mental illness with the rise of modernity and its increasing prevalence in particularly individualistic and anomic societies like America.
For me, the most frustrating part of this book was the misuse of Durkheim. At one point, they give a one-paragraph summary of his work in The Elementary Forms of Religious Life and then guess, on the basis of occasionally observed gatherings of groups of chimpanzees, that “there may be a hardwired basis for this propensity to symbolize social relationships with sacred totems.” (T&M 148) Durkheim’s name is sprinkled throughout the book, and, in what appears to me an attempt to give sociological weight to a book more concerned with apes than humanity, they conclude by quoting Durkheim on the importance of turning to the past if we want to accomplish something useful. What they effectively do is turn all the way back past the emergence of the very thing that makes us human, which allows them to make such insightful speculations as, “Contemporary humans enjoy travel perhaps because they are evolved apes.” (T&M 307)
Unfortunately, Durkheim’s own mistake of making a god out of society allows his ideas to be easily misappropriated, such that Turner and Maryanski’s congregation of chimps howling out their innate emotions doesn’t seem a far cry from the effervescence of a religious ritual. We can summarize Durkheim’s misstep using Greenfeld’s words from an essay written 15 years ago: “…Durkheim imagined the emergent phenomenon of society as, fundamentally, physical energy generated by the physical proximity of individual biological organisms…” (Greenfeld “Praxis” 132) It is nevertheless clear that when Durkheim talks about society, he is describing something other than a material force, namely, the emergent phenomenon of culture and mind. The following comes from The Elementary Forms of Religious Life, interestingly enough, from his chapter on ‘The Notion of the Soul’:
“… there really is a part of ourselves that is not immediately subordinate to the organic factor: namely, everything inside us that represents society. The general ideas that religion or science imprint in our minds, the mental operations these ideas presuppose, the beliefs and feelings that are at the basis of moral life – all the higher forms of psychic activity that society awakens and develops in us – do not follow in the wake of the body, like our sensations and our bodily states. This is because, as we have shown, the world of representations in which social life unfolds is overlaid on its material substrate and does not originate there.” (Durkheim 201)
This “world of representations” is a world of symbols. We can define symbols as intentionally articulated signs. What I see described in the Origin of Societies is a kind of gradual emergence of culture, a kind of effortless slide from signs to symbols. Statements like “we know that symbolic capacities were enhanced as the brain grew,” and “with the first push for a larger brain in Homo habilis, it became possible to construct a more symbolic culture,” give me the impression that symbolic processing was occurring in the brains of our ancestors before the emergence of articulate speech. (T&M 113, 110) But how could symbolic thought take place without symbols to be processed?
The elements that made possible the emergence of symbols (and therefore culture and the mind) were a highly developed brain, the use of signs, and the larynx, but this combination in no way made the emergent phenomenon likely. Greenfeld writes:
“The biological species of homo sapiens had completely evolved — brain, larynx, and all — hundreds of thousands of years before the mind made its first appearance among its members. This means that it was not caused by the organic combination that made it possible, but a result of a most improbable accident — the transformation (a complete change in character) of one of its elements.” (Greenfeld 77-78)
This transformation of sign to symbol – however exactly this incredible accident occurred – was the point of emergence for culture and the mind. I quote Greenfeld again, because I believe highlighting the difference between signs and symbols is necessary to distinguish mentalism from evolutionary theories describing a weak, gradual emergence of culture:
“The meaning (the significance) of a symbol was not given in the phenomenon it was signifying – its referent, or genetically; it was given to it by the context in which it was used, and increasingly this context became mostly the context of other symbols. Thus the significance of symbols constantly changed. Unlike signs, which could be very many, but whose number was essentially limited by their referents in the environment, symbols were endlessly proliferating. (The very introduction of a symbol would change the environment and initiate a symbolic chain reaction.) Unlike signs, which exist in sets, they, from the first formed systems, ever changing and becoming more complex and connected by constantly transforming ties of inter-dependence. Symbols, in other words, constituted a world of their own; an autonomous, self-creative world in which things were happening according to laws of causation which did not apply anywhere else.” (Greenfeld 78 -79)
I feel it’s important to state, despite how obvious this may seem to some of you, that this symbolic reality was made possible only by some collectivity. The Homo sapiens who first discovered the ability to intentionally articulate a sign would have had to articulate it to someone in order to spark the symbolic process which has created the world we live in today. Therefore, while the symbolic process occurs only in individual brains, to see it as a product of individual brains is a mistake.
Steven Pinker, in The Stuff of Thought (2007), puts forth the theory of conceptual semantics, the idea that the true “language of thought” is a set of innate concepts, closely corresponding to the Kantian categories of space, time, causality, substance, and so on. Like Turner and Maryanski with their emotional proto-language hypothesis, Pinker tends to treat language itself as a kind of secondary phenomenon, almost coincidental to our innate conceptual processes. In a section of the book arguing against linguistic determinism, he writes:
“One reason that the language we speak can’t be too central in our mental functioning is that we had to learn it in the first place. It’s not hard to imagine how language acquisition might work if children could figure out some of the events and intentions around them and tried to map them onto the sounds coming out of their parents’ mouths. But how a raw stream of noise could conjure up concepts in the child’s mind out of nothing is a mystery. It’s not surprising that studies of the minds of prelinguistic infants have shown them to be sensitive to cause and effect, human agency, spatial relations, and other ideas that form the core of conceptual structure.” (Pinker 149)
But is Pinker saying that humans are the only animals sensitive to cause and effect or spatial relations? To demonstrate the existence and operation of this innate “language of thought,” he has to actually break down language itself. In other words, it is only when confronted with a system of symbols that these innate capacities or tendencies can have the sort of explicit work to do which Pinker describes. Apart from culture, we probably only possess these biological sensitivities to a slightly greater degree than other very intelligent animals.
At points, Pinker’s resistance to the idea of mind as a symbolic process is very clear, and rather weak. Using the example of Shakespeare, he argues that “a name really has no definition in terms of other words, concepts, or pictures,” but rather “points to a person in the world in the same way that I can point to a rock in front of me right now.” (Pinker 12) I understand that the thoughts that occur to me – what I know or feel about Shakespeare – are not a definition of Shakespeare. However, the flow of ideas and images which begins when I hear his name is much more than a connection to “the original act of christening,” as Pinker puts it – it is steeped in the cultural context in which I learned about Shakespeare, and includes innumerable strands of connection to other ideas which, removed from the context of the symbolic process happening in my head, bear no relation whatsoever to the sound Shakespeare’s parents decided would signify their newborn child.
Pinker’s theory still leaves us with the problem of the emergence of culture, and seems unable to account for the development of cultural differences apart from the idea that they are merely the result of the peculiar interplay of a set of biologically programmed concepts.
In Daniel Dennett’s Consciousness Explained (1992), it seems to me there is also no strict emergence of culture in terms of a transformation from signs to symbols. He mentions “communicative (or quasi-communicative) acts” in which hominids would have shared useful information with one another – “asked” and “answered” each other’s questions – and hypothesizes that a hominid would have discovered that he could ask and answer his own questions, this over time becoming a silent, internalized cognitive process. Again, as in Turner and Maryanski’s retelling, signs slide into symbols without much notice. (Dennett 194-197)
Once culture does exist (I don’t feel I can say “emerge” and remain consistent with Dennett’s rendition), the law that governs its evolution is natural selection. Dennett subscribes to the idea of the meme, defined by Richard Dawkins as a “unit of cultural transmission, or a unit of imitation.” (in Dennett 202) Dennett tells us that Dawkins meant this to be taken literally: “Meme evolution is not just analogous to biological or genetic evolution, not just a process that can be metaphorically described in these evolutionary idioms, but a phenomenon that obeys the laws of natural selection exactly. The theory of evolution by natural selection is neutral regarding the differences between memes and genes; these are just different kinds of replicators evolving in different media at different rates.” (Dennett 202) Memes spread (and mutate) not necessarily because they are good for the individuals whose brains they infest, but simply because they are good replicators. Meme vehicles are essentially physical – books, recordings, buildings, etc. – but “memes still depend at least indirectly on one or more of their vehicles spending at least a brief, pupal stage in a remarkable sort of meme nest: a human mind.” (Dennett 206) Memes are therefore in competition for residence in a limited number of minds – memes may aid or block or be neutral to other memes.
It is definitely possible to see the meme story as culture in the brain, but lest we get confused, we should remind ourselves that culture is a symbolic process. A meme is an artificial chunk, chopped out of the process, removed from the mind, the only context in which elements of culture come alive. Attempting to explain culture by a law that supposedly governs the transmission of discrete units of culture necessarily does violence to the idea of a symbolic process. It seems to me that with the mental gymnastics required to extend natural selection to units of culture – constantly grasping for analogies from biology to provide justification – there would be little time to even attempt a true historical analysis.
In an article published last year in Behavioral and Brain Sciences, Dennett and Ryan McKay look at ‘The Evolution of Misbelief.’ Dawkins’s account of cultural evolution says that memes (and both true and false beliefs would be memes) are selected because they are good self-replicators, not necessarily because they enhance fitness. But here Dennett and McKay take a much more biological approach, working from the general assumption that “evolution has designed us to appraise the world accurately and to form true beliefs.” (M&D 494) Not surprisingly, they dismiss psychotic delusions rather quickly as “instances of biological dysfunction,” and spend a good deal of their time addressing “religious (mis)beliefs.” (M&D 493) When Yorick Wilks makes an insightful response to the paper, asking why the discussion of misbeliefs which are not genetically heritable is “taking place in the context of natural selection and Darwinian evolution,” they seem to duck behind the cover of the cultural-evolution claim which they advanced with very little force or substance in the original article. (Wilks 539) If they can simultaneously apply the law of natural selection to two very different orders of phenomena – memes, as they call them, and genes – then it looks like they get to have their cake and eat it too. “Gene-culture co-evolution” looks to me like a seductive catch-all that explains very little.
While I feel I’ve only had time to briefly sketch the differences between mentalism and evolutionary psychology, and I especially regret not being able to address the specific structures of the mind as Greenfeld describes them, I felt it was important to first deal with this symbolic reality in general. The original title of this paper, ‘Evolutionary Psychology: A Stone Age Mindset,’ emerged simply out of the need to create a title for the conference program, but now that it’s all said and done I’m not sure it’s the most fitting. Still, it reflects my frustration with an approach that is stuck in the past, looking at humanity from tens of thousands to millions of years away from where we are now, telling us what we are based on what “science” seems to say we should be, dogmatically drawing authority from the name of Darwin, floundering out of its depth in a symbolic reality it has not even begun to explain. Humanity, a most worthy subject of study, deserves better.
Posted on April 18, 2010 - by David
A few weeks ago, the New York Times published an article on the “Next Big Thing” in English, discussing the growing movement of scholars looking to incorporate science, or more specifically, theories of evolutionary psychology, into the study of literature:
Jonathan Gottschall, who has written extensively about using evolutionary theory to explain fiction, said “it’s a new moment of hope” in an era when everyone is talking about “the death of the humanities.” To Mr. Gottschall a scientific approach can rescue literature departments from the malaise that has embraced them over the last decade and a half. Zealous enthusiasm for the politically charged and frequently arcane theories that energized departments in the 1970s, ’80s and early ’90s — Marxism, structuralism, psychoanalysis — has faded.
A student of literature myself, I was required on more than one occasion to read these ridiculous works of criticism and reference them in my own critical essays, so I can certainly feel Mr. Gottschall’s pain. But does rejecting the useless modes of interpretation that were popularized over the last few decades mean that Darwin becomes the authority on the modern novel?
About five and a half minutes into this video, part 3 in a series of 6 interviews with scholars “on literature and science,” listening to Gottschall made it clear to me that some people in the humanities are pretty much fed up with, and ashamed of, the failure of literary study to provide the kind of objective, enduring knowledge that science has been able to give us, and they feel it’s high time to start sharing in science’s success.
…the ideas of one generation of literary scholars can rarely survive the critique of the next generation of literary scholars, and it’s a very different model than what you find in the sciences, where again, in the sciences, they’re mostly wrong too, but there is also this slow accretion of information, knowledge, concepts, that most reasonable people have to admit are probably true, and so my hope is that we can retain the best aspects of our traditional modes and supplement them with new tools from the sciences.
Soon after the initial article was published, a debate titled ‘Can “Neuro Lit Crit” Save the Humanities?’ appeared on the New York Times website, with a number of authors and English professors contributing their opinions. There was nothing resembling unanimous agreement about the value of this new approach, but there was a general consensus that the humanities are suffering from a serious lack of funding and lack of interest. Without even considering the interaction of various social and institutional forces leading to this state of crisis in the humanities, I can say from experience that most people who don’t study literature (and plenty of us who do) are very skeptical about its usefulness. I remember the embarrassed, can’t-look-you-in-the-eyes-or-speak-clearly feeling that would wash over me when someone asked what I was studying and I had to admit that I was an English major. The response to my confession was usually, “so what do you wanna do, you wanna, like, be an English teacher?” and I would mumble something about wanting to write. What I wrote were poems, and a lot of my marginal work (i.e. poems scribbled in the margins of the notebooks I was supposed to be filling with pertinent information from class) was dedicated to the frustration and disillusionment of studying literature.
So I can understand the element of personal crisis that leads a literary scholar to look for a more solid foundation to stand on. In the last few minutes of the video posted above, Joseph Carroll describes how his frustration with the condition of literary scholarship drove him to a kind of intellectual breaking point:
“I need something more wholesome, more adequate, more coherent, closer to the truth, and then I went and read Darwin and I had a sort of cleansing vision of deep time, humans emerging out of millions of years of evolution, it just cut through all of the… intellectual confusion at superficial levels that prevailed in literary study, so I set about trying to reconstruct literary study… working from the ground up…using an evolutionary vision of human nature as the basis for reconstructing all the concepts that we need to understand literature”
So what does this kind of work actually look like?
One example is The Literary Animal: Evolution and the Nature of Narrative, a collection of essays from the evolutionary perspective published in 2005, with contributions from Gottschall and Carroll as well as several other authorities in the field. I think this review by Travis Landry of the University of Washington, published in Evolutionary Psychology (an obviously friendly audience), is telling:
On the heels of these dueling forewords [by E.O. Wilson and Frederick Crews] comes the editors’ anecdotal introduction, highlighted by a rather unremarkable recounting of the misunderstood Darwinian graduate student, Jonathan Gottschall (“Jon’s Story”), who finally meets his open-minded, maverick mentor, David Sloan Wilson (“David’s Story”). They state the collection’s three guiding questions: What is literature about? What is literature for? What does it mean to apply a scientific perspective like evolutionary theory to a non-scientific subject like literary studies? An attempt to resolve these queries begins with part one, “Evolution and Literary Theory.” This section aims “to grapple with some of the problems and opportunities presented by the collapse of the constructivist foundations of contemporary literary theory” (4). Perhaps someone should tell the constructivists (apparently all literary critics who are not naturalists) about this “collapse.” In any event, the editors are quick to qualify: “a more restrained version of social constructivism is fully compatible with the emerging evolutionary models of human nature” (4). Such disclaimers, grounded in the reassurance that “the nature-nurture dichotomy is a false one” (4), represent a recurrent strategy used throughout The Literary Animal to ease imagined misgivings about takeover, but they do little to temper an all too often pedantic tone and the unmistakable, unapologetic imbalance of power, evident each time it boils down to which side holds the knowledge key.
While Landry does find some things to praise in The Literary Animal, he’s clearly turned off by the air of superiority running throughout the work, and he restates this criticism in the final paragraph of his review:
There is little doubt that this text contains enough quality ideas to merit an attentive read, and its intended public, both in the humanities and the sciences, should take advantage of this resource in order to become better informed about a legitimate discourse that does not seem likely to fade away anytime soon. Nonetheless, one repeatedly gets the sense that these naturalist critics would be better served without their ‘us against the world’ mentality. The empirical chest thumping and emblazoned rhetoric that permeate this work quickly wear thin and may ultimately alienate the very literary critics these scholars hope to convert. In the final analysis, greater humility and a more respectful voice are certain to be more persuasive and will ultimately allow the interpretive fruits of this evolutionary enterprise, which is strong enough to stand on its own, to do the talking.
Landry, I’m afraid, misses the point. All the talk about an evolutionary perspective complementing existing approaches, about taking both culture (whatever they mean when they use that word) and biology into account, seems to amount to little more than a strategy of temporary appeasement before the final takeover. If that sounds paranoid or harsh, consider what Joseph Carroll has to say about the brand new journal, the Evolutionary Review, of which he is a founding member and co-editor:
…the aim of the journal is to give evidence that evolutionary perspective, “this view of life,” one of Darwin’s phrases, is adequate to encompass every aspect of human concern…
…the idea is that the evolutionary perspective is an ultimate, encompassing, final, absolute, total perspective… this is what makes people most nervous outside the field… you start talking about thousand year reich, you know, and you think “well you’ve got global, imperialist ambitions intellectually,” and it’s true [chuckles], it’s absolutely true… there’s a wager, the wager is that the evolutionary perspective is adequate… as the central linking conceptual framework that forms a genuine scientifically established foundation of knowledge for everything in the social sciences and the humanities, we think that’s true…
This attitude renders empty all their talk about “consilience” and “emergent properties” (see pt. 4 of the “on literature and science” series). In short, culture is seen as determined rather than constrained by biology. It’s difficult for me to understand how the theory of evolution and ideas about the behavior of stone-age man are adequate for understanding the humanities, which Joseph Carroll himself calls “the highest level of emerging complexity.”
William Deresiewicz, in an essay called ‘Adaptation: on Literary Darwinism’ published in The Nation last year, gives a strong critique of this movement, providing good background on the history and nature of its goals, and summarizes the dismal state of affairs which allowed the evolutionary perspective to enter the discussion. By touching on the work of a number of the prominent figures in this emerging field, Deresiewicz is able to address some of the most glaring problems with the evolutionary psychological approach. It’s definitely worth reading; I can say that he states some of my own criticisms in more detail and in better context than I am able to do here. Towards the end of the essay, he gets to what is, as far as literary scholarship is concerned, perhaps the most important point:
Seeking to displace Theory, literary Darwinism may end by becoming it. Each is reductive. Each leads in outlandish directions that make sense only to initiates. Each has a penchant for hero worship. (For Dutton, the father of natural selection is not “Darwin,” but “Darwin himself.” Carroll makes a trinity of Darwin, Wilson and Pinker.) Each is predictable. If Marxist criticism is always about the rise of the bourgeoisie, literary Darwinism is always about mate selection or status competition. Each looks to literature only for confirmation of its beliefs. Shakespeare, it turns out, agrees with Darwin, as he once agreed with Freud and Frye. (Though if science is the exclusive standard of truth for the Darwinists, it’s not clear why it matters whom Shakespeare agrees with.) Authors who won’t get with the program–who don’t deal with mate selection or status competition, or refuse to solicit our attention in evolutionarily correct ways–are demoted in rank. (Darwinian aesthetics exhibits a strong antimodernist animus, as if it were unnatural to prefer Conrad to Kipling, or Rothko to Rockwell.) That so many of the greatest works of literary art–the Iliad, the Aeneid, the Divine Comedy, Don Quixote, Hamlet, King Lear, Paradise Lost, Faust, Moby-Dick, the novels of Dostoyevsky, Joyce, Woolf and Coetzee–are ultimately concerned not with mate selection or status competition, however seriously they might consider such matters, but with the human place in the cosmos; that such a commitment is precisely what begins to distinguish these works from the kinds of things that are better studied with polling data and cheek swabs; that the finest books demand a criticism that attends to what makes them unique, not what makes them typical: these are not possibilities that literary Darwinism envisions.
From what I can see, evolutionary psychology used in the study of literature functions in essentially the same way as the theories which Carroll and Gottschall are so critical of. It is just another game for clever people to play, but the authority of the name ‘Darwin’ and the use of buzzwords like “selection,” “adaptation,” and “fitness” create the illusion that this is somehow a more empirical, ‘scientific’ approach.
In part 2 of the series, Philip Kitcher, philosophy professor from Columbia, criticizes the general and haphazard way in which a few pieces of evolutionary theory get applied to humans:
“It always amazes me the ease with which people who have spent years of their lives as it might be working on some other organism, social insects for example, very well studied, E.O Wilson has an amazingly deep and detailed knowledge of the behavior of social insects, people think “well you know I’ve done social insects and now it’s just a matter of applying the same principles to human beings,” but you know we aren’t actually that similar to the social insects, there are quite a lot of differences and those need to be taken into account…”
I’m obviously very skeptical not only of this approach to literature, but of evolutionary psychology in general. If biology studies life, and physics studies the laws governing the physical world, what does evolutionary psychology study? It can’t study so-called “evolutionary man,” because he’s not around for us to talk to, nor has he left records by which we can know him. Furthermore, the assumption that the human mind is a product of evolution over millions, or at least hundreds of thousands, of years means that historical comparison over the past few thousand years probably won’t tell us much. What emerges, then, is a universalistic view of the mind which looks at the complex, diverse, symbolic behavior of human individuals and societies and tries to pinpoint the more basic, animalistic drives that are working underneath to determine the behavior. Under the multitude of dramatically different cultures, they find a set of motives/traits common to all of mankind, destroying the possibility of a culture possessing its own internal logic that may not have developed towards “evolutionary fitness.”
As I said earlier, the occasional reassurances that they are not trying to replace or override discussion of culture, but to complement it, seem pretty hollow. When they use the word culture, they are talking about the shared practices of a particular society, not the general symbolic process of culture – Culture with a capital C, if you will. If there is no accounting for the appearance of particular cultures other than as products of biological evolution, as adaptations to specific physical environments (i.e. no theory of culture as an emergent phenomenon, something more than biology), then obviously Culture as such is reduced to evolution/genetics/the functioning of the brain, and with such a view there is in fact no reason that evolutionary psychology should defer to, or even consider, any other approach to the humanities.
My own approach to literature changed when I began to study with Liah Greenfeld in 2004, as a sophomore English major at Boston University. Those of you who have been reading this blog are hopefully becoming acquainted with her work. The following is a very brief summary of some of the fundamental principles of her view which led me to reject the position of evolutionary psychology.
- Humans lack a genetically given order necessary for survival
- We derive this order from society
- Society is structured symbolically, on the basis of culture – the process of symbolic transmission of human ways of life across generations.
- This symbolic process occurs simultaneously on the individual and collective levels, with individual human minds as the only active elements of culture.
- Culture is an emergent phenomenon and a reality sui generis. As Greenfeld writes, “the neural processes by means of which the cultural process occurs serve only as boundary conditions outside of which it cannot occur, but are powerless to shape the nature and direction of the cultural process.”
True ‘consilience’ would take full account of the emergent nature of culture. As the characteristic which distinguishes humanity from all other forms of life, culture is the proper subject for the empirical study of humanity in all its aspects, and it is precisely such a science that Greenfeld is attempting to establish. Up until now, her published work has dealt most directly with modern culture, but her forthcoming book seeks to establish the theoretical groundwork and philosophical justification for the empirical study of humanity, while examining a particular phenomenon which she believes is culturally caused: mental illness.
I think it’s evident to anyone who watches those videos that there is a highly personal aspect to the work these people are doing. Like myself, I bet they entered college with a passion for literature and a notion that they were embarking on a quest for deep truths – that they would, in fact, learn something about human nature. But the humanities, as an institution, could not live up to our hopes. So Carroll and Gottschall and the rest of them turned to science, and it’s easy enough to see why. Science enjoys a privileged place in modern society, and though part of this can be explained culturally and historically (Greenfeld has written extensively about the emergence of science as a social institution in 17th-century England), the fact remains that science has given us more objective knowledge of reality than any other method of inquiry. In ‘Literature and Science as Social Institutions,’ Greenfeld writes, “However indirect and imperfectly systematic and effective, science, one has to conclude in all fairness, is the most direct, systematic, and effective way to objective, valid empirical knowledge available to mankind.” The thing is, evolutionary psychology is not science. It is a set of theories (and not a particularly coherent one) that some people are using to understand the world around them, in effect, to provide the mental order that nature neglected to encode in our genes. For me, it is not a satisfying view of the world. It does not help me understand the society I live in, it does not explain why I think the way I do, it does not ring true with my experience. Science, at its most basic, is a method, and when a particular science is developed and equipped to study a specific aspect of empirical reality, valid and valuable knowledge can be gained. Because culture does not constitute merely a more complex level of a biologically given nature, but a qualitatively distinct layer of reality, a new science is called for.
With the construction of the ‘New Humanities,’ these scholars literally want to take us back to the Stone Age. My goal is to help build the alternative.
These thoughts are being developed into a longer piece comparing evolutionary psychology to mentalism – the name given to the theory Liah Greenfeld has developed, to be presented at a student conference at Boston University on May 1, 2010. Details will be posted soon.