There are a number of quite interesting observations in Taylor and Mbense's article on the concept of anger in Zulu.
First of all, when you're angry, your heart can be squashed like one squashes something soft (pp. 197-98). This is conceptualized with the words xhifi and fithi, which both mean the same thing.
They also note that anger is associated with nausea in Zulu. One can thus vomit from anger, and if one meets an annoying person, one's heart says "sick" (p. 199).
Further, being black-hearted or having a black heart in Zulu has nothing to do with depression -- it means either having literal nausea or being annoyed and irritable (p. 200).
This comes from a folk theory according to which the cause of irritability is black bile. One can thus provoke someone by splashing him with bile (p. 200).
Lastly, in Zulu, as in Hungarian, extreme anger is associated with being brought to tears (p. 207).
Tuesday, September 27, 2011
Afterthoughts on "The ANGER IS HEAT question"
Based on articles by Kövecses (2000), Yu (1995), and Taylor and Mbense (1995), Caroline Gevaert gives the following report on the existence of heat metaphors for anger in languages other than English:
Language | HEAT | FIRE | FLUID
---|---|---|---
English | + | + | +
Hungarian | + | ? | +
Japanese | + | + | +
Chinese | (–) | + | –
Zulu | + | + | (–)
Wolof | + | + | +
Chickasaw | + | ? | –
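For quick cross-checking, the table can also be encoded as data. This is just my own toy transcription of the values above (ASCII minus signs stand in for the dashes; "?" and "(-)" mark uncertain or marginal attestations):

```python
# Toy encoding of Gevaert's survey table, transcribed from the post above.
# "+" = attested, "-" = not attested, "?" = unclear, "(-)" = marginal/doubtful.
table = {
    "English":   {"HEAT": "+",   "FIRE": "+", "FLUID": "+"},
    "Hungarian": {"HEAT": "+",   "FIRE": "?", "FLUID": "+"},
    "Japanese":  {"HEAT": "+",   "FIRE": "+", "FLUID": "+"},
    "Chinese":   {"HEAT": "(-)", "FIRE": "+", "FLUID": "-"},
    "Zulu":      {"HEAT": "+",   "FIRE": "+", "FLUID": "(-)"},
    "Wolof":     {"HEAT": "+",   "FIRE": "+", "FLUID": "+"},
    "Chickasaw": {"HEAT": "+",   "FIRE": "?", "FLUID": "-"},
}

def counterexamples(metaphor):
    """Languages where the given metaphor is absent or doubtful."""
    return [lang for lang, row in table.items() if row[metaphor] in ("-", "(-)")]

print(counterexamples("HEAT"))   # ['Chinese']
print(counterexamples("FLUID"))  # ['Chinese', 'Zulu', 'Chickasaw']
```

Querying it this way makes the asymmetry visible at a glance: FIRE has no clear counterexamples, while HEAT and FLUID each have some.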
It's difficult to say what to conclude from this table. Hungarian is definitely a biased pick, since Hungary has had centuries of exposure to Christian doctrines and scholarship. We should thus expect it to reflect the same humoral theory as English.
Chinese is, as Gevaert notes, apparently a counterexample to the theory. The central metaphor, ANGER IS HEAT, does not seem to occur.
Anyway, here's my to-do list for today:
- Finish research on blindness, deafness, and physical disability
- Make a list of the arguments I've written so far and think about the ordering
- Contact the two corpus linguists that Martin put me in contact with
- Read some more of Feldman's From Molecule to Metaphor (2006)
Monday, September 26, 2011
Charles Forceville: "Non-verbal and multimodal metaphor in a cognitivist framework" (2009)
Charles Forceville is a media scholar who works here in Amsterdam. In this book contribution, as elsewhere, he argues that the methods of metaphor research should be applied more broadly to film, visual arts, music, etc., rather than just to (written) text.
For my purposes, this text is also notable for citing yet another couple of sources who point out the circularity of the reasoning in cognitive linguistics.
These are (1) Gibbs and Colston (1995) and (2) Alan Cienki's contribution to Discourse and cognition: Bridging the gap (1998). It seems that Cienki has reiterated his criticism in The Cambridge handbook of metaphor and thought (2008), but I don't know for sure. I guess I'll find out when the library gets the book.
Noel Carroll: "Visual Metaphor" (1994)
This paper is noteworthy for consistently noting that (visual) metaphors can work in two directions: e.g., a woman's body is like a violin, or a violin is like a woman's body.
These are symmetrical metaphors. That is, they are single instances that can be read in two directions.
What I want to focus on, though, is pairs of mappings that relate two domains in both directions. Such pairs can be realized in a number of distinct instances.
Thus, by the normal standards of cognitive metaphor theory, they should be recognized as a conceptual and cognitive (rather than verbal and linguistic) operation.
Alvaro Pascual-Leone et al.: "Paradoxical effects of sensory loss." (2011)
This article provides a pile of evidence for the fact that deprivation of sight---even for a period of a few days---increases perceptivity with respect to other senses, especially touch. The authors provocatively state that "it is the sighted world that seems not to have truly adapted to those without sight" (p. 15).
They also call attention to the fact that a blind person's perception of distance is different from a sighted person's. For a sighted person, the senses of touch and hearing will tend to be secondary to the sense of vision, yielding an essentially different layout of the environment.
They quote a character from an essay on blindness by Denis Diderot---the Enlightenment author who edited the Encyclopédie---as saying that he would rather have "really long arms" than functioning eyes (p. 20).
Caroline Gevaert: "The ANGER IS HEAT question" (2005)
In defense of Geeraerts and Grondelaers' (1995) hypothesis about the historical origin of the ANGER IS HEAT metaphor, Caroline Gevaert examines some Old English corpus material and concludes that the metaphor was indeed in all likelihood inherited from Latin scholarship.
As a part of her argument, she acknowledges that test subjects' body heat increases slightly (about 0.1 degrees Celsius) when they make an angry face, apparently supporting a physiological basis for ANGER IS HEAT. However, she then comments that this temperature increase also occurs when subjects mimic other emotions, suggesting that I was boiling with sadness should be equally natural (p. 197).
She also notes that the evidence from Chinese actually seems to contradict the theory, referring only to chilies (and, presumably, their red color) and not to heat or flames (p. 196). The Native American language Chickasaw similarly does not seem to support the hypothesis.
Farzad Sharifian: "Conceptualizations of chesm 'eye' in Persian" (2011)
From my perspective, this paper is mostly notable for bringing out the tight relationship between Persian Sufism and metaphors employing the eye as a source domain. This observation parallels the hypothesis that our conceptualizations of anger as a heated fluid have their roots in medieval medical scholarship.
Sharifian concludes that "there is a close interaction among language, body, and culture," and that "figurative meanings based on the body do not draw their power from the assumption that there is but one natural way in which we interact physically with our environment" (p. 197).
Further evidence that even the constraints theory of metaphor is wrong comes from inconsistent conceptualizations of the same domain. Examples are:
- DOWNHILL IS BAD vs. UPHILL IS BAD
- BAD IS NOT GOOD vs. BAD IS GOOD (in the sense of cool, impressive)
- In Danish, FAT IS IMPLAUSIBLE vs. THIN IS IMPLAUSIBLE
- IMPENETRABLE IS CONVINCING vs. TRANSPARENT IS INTELLIGIBLE
Zoltán Kövecses: "Anger" (1995)
Looking back at the case study on anger metaphors in Women, Fire, and Dangerous Things (1987), Zoltán Kövecses clarifies and elaborates his views on the respective roles of culture and body in the shaping of languages.
He reiterates that human physiology constrains metaphor systems (p. 195), but now also concedes that "there may be differences between cultures in both conceptualized and real physiology" (p. 193). Thus, upon reflection, he admits that concepts are "influenced by both culture and the human body" (p. 182).
The assertion that anger may differ physiologically across cultures is followed by a reference to a paper by Robert C. Solomon, "Getting Angry" (1984). From what I gather, the paper contains some anthropological reflection on William James' theory of emotions.
The paper is written partly as a response to Dirk Geeraerts and Stefan Grondelaers' contribution to the same volume, "Looking back at anger."
Geeraerts and Grondelaers' point is (according to Kövecses' summary) that the fluid concepts for anger in English are actually remnants of the medieval theory of the humors, in which moods were explained in terms of imbalances between bodily fluids.
Kövecses' paper seems quite sound and insightful. I would perhaps only add that separating body and culture appears increasingly difficult given that bodies differ, body images differ, and that behavior depends on both.
Saturday, September 24, 2011
More literature on blindness and metaphors
- Barbara Landau and Lila R. Gleitman: Language and Experience: Evidence from the blind child (1985)
- Charles Forceville: "Non-verbal and multimodal metaphor in a cognitivist framework" (2009)
- Alvaro Pascual-Leone et al.: "Paradoxical effects of sensory loss." (2011)
- Gili Hammer: "Blind Women and Invented Technologies" (2011)
- Maribel Tercedor Sánchez et al.: "The Depiction of Wheels by Blind Children" (2011)
Hurovitz et al.: "The Dreams of Blind Men and Women" (1999)
This is a report of a study in which 15 partially or totally blind participants were asked to keep a dream diary for two months. Analysis showed that even some congenitally blind subjects occasionally described their experiences in visual terms.
The article was published in the journal Dreaming, but it's also available at the UC Santa Cruz website.
Pretty to touch
The study is of course open to all the usual problems with introspection, and all the cases of visual terminology are ambiguous. In many cases, it seems as if the congenitally blind subjects have simply adopted visual concepts like pretty with reference to other senses.
For instance, one subject said that smooth things are "prettier to touch" and explained:
I think if anyone prefers rough textures it's because they are seeing them besides feeling them and the material might look pretty to them.
It is difficult to say whether this should count as a metaphor. One might say that this person is simply playing along in the language game of "pretty" with whatever sensory resources she has; but remember, so are we.
Glad to see you
Another class of ambiguities stems from the use of visual terms that may or may not have been instances of the common metaphors SEEING IS UNDERSTANDING or SEEING IS KNOWING.
For instance, the authors quote a description from a congenitally blind woman. She dreamt that
she and her husband (also blind) visited Thomas Jefferson at Monticello, and she reported that Jefferson "was glad to meet us and he didn't care if we couldn't see."
As far as I can understand, the authors read the word see in this sentence as meaning "notice." This is perhaps based on some contextual clues that were not quoted in the article.
They also interpret the phrase seeing the baby, occurring in another record, as a metaphor for "knowing." If these are indeed correct readings, then blindness does not seem to be a hindrance to the competent use of that metaphor.
The phenomenology of everything else
I just need to quote this beautiful description of the modalities that occurred most often in the dreams:
The participants "felt" the warmth of the sun, the texture of a coat, the edge of a knife, the slope of the ground, vibrations, snow, or the soft fur of a dog. They "smelled" fire, tobacco, aftershave lotion, fresh air, food, or coffee. They noted the "taste" of a cigar, a cup of coffee, or an orange. These dream sensations seemed to reflect their use of or pleasure in these sense modalities in waking life.
Mike Thelwall: "Fk yea I swear" (2008)
This is a corpus-based study on swearing on UK Myspace profiles. From my perspective, the article is mostly interesting because it contains some valuable statistics on the uses of swear words like fuck, cunt, twat, and shit.
It was published in the journal Corpora, but a preprint is available on Mike Thelwall's website.
Metaphors with taboo source domains
As one part of the study, Thelwall and a helper categorized 427 swear words from their custom-tailored corpus of Myspace comments and profiles.
They used a category scheme borrowed from a similar study on the British National Corpus. It includes categories like "Predicative negative adjective," "Emphatic adjective," etc. This taxonomy was proposed by Tom McEnery and Richard Xiao in "Swearing in modern British English" (2004).
Thelwall and his helper found, out of the 427 cases, not a single case of metaphorical use of a swear word. Thus, fuck was never used in a sense that extended its sexual meaning, as in, I suppose, I'm going to take this delicious cake back to my room and fuck it. It's even hard to force such a metaphorical reading on this sentence.
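As a rough illustration of what this kind of annotation exercise amounts to, here is a sketch with invented tokens and counts. The category labels echo the McEnery and Xiao scheme mentioned above, but nothing here is Thelwall's actual data:

```python
from collections import Counter

# Invented (token, category) annotations in the style of the study.
annotated = [
    ("fucking", "Emphatic adjective"),
    ("fucking", "Emphatic adjective"),
    ("shit", "Predicative negative adjective"),
    ("fuck", "Cursing expletive"),
    ("shit", "Literal"),
]

# Tally how often each category occurs across the annotated tokens.
by_category = Counter(cat for _, cat in annotated)
print(by_category.most_common(1))  # [('Emphatic adjective', 2)]

# The study's headline finding, restated in these terms: a "Metaphor"
# category would come out empty for all 427 tokens.
print(by_category["Metaphor"])  # 0
```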
Literal use of taboo terms
He did find some literal (sexual, religious, etc.) uses of some swear words, but they only constituted 3% of all cases.
Using my definition of "literal," I would probably have to categorize the emphatic use of curse words as literal, since that was their most frequent use in the corpus. This boils down to saying that when you process a phrase like fucking tired, you do not retrieve or need to retrieve the sexual meaning of fuck.
A word like bloody (used emphatically) might have a slightly higher tendency to evoke the "blood-stained" meaning, since it is less common as an emphatic adjective relative to its "literal" meaning.
Tim Rohrer: "Image Schemata in the Brain" (2005)
In this article, Tim Rohrer argues that hand metaphors like I found his ideas hard to grasp are understood using the same part of the cortex that plans actual hand movements. He cites evidence from neuroimaging studies.
The article was published in From perception to meaning, which was edited by Beate Hampe and Joseph E. Grady. He just sent me some more of his papers yesterday, including the 2001 presentation of the neuroimaging study that forms an important part of his argumentation.
However, in the present text, Rohrer does not recount very many details about this fMRI study, so I can't say for sure how solid the evidence is. But my first impression after reading his text is that he does seem to have a pretty strong case.
Since I read the text yesterday, some questions have cropped up in my mind, though:
- Does this picture hold for anything other than hand metaphors?
- To what extent does this say anything about actual language proficiency? (as opposed to some parallel process not necessary for fluent conversation)
- He cites several lesion studies that show that people need an intact somatosensory cortex in order to understand words like "hand" and "leg." Can such a brain-damaged person understand hand metaphors?
- What about people who are born blind? They use eye metaphors, right?
Thursday, September 22, 2011
Sweetser and Fauconnier: "Cognitive Links and Domains" (1996)
This paper introduces the concept of "mental spaces." It also serves as an introduction to the various papers in Spaces, Worlds, and Grammar (1996), which is edited by the two authors.
The bulk of the paper is concerned with the kind of ambiguity you get from sentences like these:
If Jack were older, his gray hair would inspire confidence. (p. 10)
Such a sentence can either be analyzed with the conditional modality modifying both the noun phrase and the verb phrase, or only the verb phrase.
On the first reading, the gray hair is a part of the fictive universe, as is its inspirational quality. On the second reading, only the inspirational quality is fictive. In modal logic, these are called the de re and de dicto readings.
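In modal-logic notation, the contrast can be sketched roughly as follows. This is my own reconstruction, reading the counterfactual as a possibility operator for simplicity; the authors themselves give no formulas:

```latex
% De dicto: the gray hair exists only in the counterfactual world.
\Diamond\,\exists x\,\bigl(\mathrm{GrayHair}(x, j) \wedge \mathrm{Inspires}(x)\bigr)
% De re: the gray hair exists actually; only its inspiring is counterfactual.
\exists x\,\bigl(\mathrm{GrayHair}(x, j) \wedge \Diamond\,\mathrm{Inspires}(x)\bigr)
```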
Sweetser and Fauconnier draw a lot of diagrams and talk a lot of brain talk, but they don't seem to consider the option of doing a logical analysis of the situation. The closest they get is when they confidently (and in a parenthesis) reject it---the two readings are namely
incorrectly viewed as logical properties of propositional-attitude sentences in many philosophical treatments. (p. 14)
Their only argument that this is bad seems to be that their own account has some shadowy relation to mental phenomena like association.
Finding out what depth a noun phrase should be nested at would have been interesting, though. It would, for instance, allow them to compute the number of readings for a sentence. This amounts to, for every definite noun phrase, counting how many diamond operators there are with a scope that spans the noun phrase.
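The counting procedure sketched in that last sentence can be put in a few lines of code. This is my own sketch of the idea, not anything from the paper: each definite noun phrase can be interpreted at any of the modal operators whose scope spans it, or in the actual world, so it contributes (spanning diamonds + 1) options, and the options multiply across noun phrases.

```python
from math import prod

def reading_count(spanning_diamonds_per_np):
    """Number of readings of a sentence.

    spanning_diamonds_per_np: for each definite noun phrase in the
    sentence, the number of diamond operators whose scope spans it.
    Each NP can be evaluated at any of those operators or in the
    actual world, and the choices are independent.
    """
    return prod(n + 1 for n in spanning_diamonds_per_np)

# "If Jack were older, his gray hair would inspire confidence":
# one definite NP ("his gray hair") under one conditional operator.
print(reading_count([1]))     # 2 readings (de re vs. de dicto)

# Two NPs, each under two nested modal operators:
print(reading_count([2, 2]))  # 9 readings
```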
George Lakoff: "The contemporary theory of metaphor" (1993)
George Lakoff's contribution to the second, expanded version of Metaphor and Thought (1979/1993) is written like a corrective to the "old," "traditional," and "classical" metaphor theories of the dark ages preceding Metaphors We Live By (1980). Lakoff's article gives paradigmatic examples of the thinking, rhetoric, and evidence he applies.
Conceptual mappings, a shy bird
One of the most interesting and controversial claims of cognitive metaphor theory is that there are no "dead" metaphors (see, e.g., pp. 227 and 245). We should be careful how we read this claim, though, to get its meaning and explanatory power right.
When asked how a particular metaphor is understood, we can pick an answer somewhere between two poles: at one extreme, we allege that the whole linguistic expression is stored in memory, along with its meaning, so that only recall and no "thinking" is required. At the other extreme, we claim that the metaphor requires a search from scratch for a good interpretation, perhaps in terms of similarity, relevance, or both.
Lakoff and Johnson's theory lies in between these two poles: They claim that the linguistic expressions are not stored, but that a non-linguistic infrastructure---"a huge system of thousands of cross-domain mappings" (p. 203)---is.
Thus, when I say that I'm boiling mad, you do not retrieve a stored meaning of the phrase. Instead, the metaphorical expression exploits an already existing mapping. This mapping works because activation in the concept "heated to the boiling point" will propagate into the concept "very, very angry." Thus, no stored phrase is needed; no search in conceptual space is needed.
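To keep the three positions apart, here is a deliberately crude sketch of the two poles and the intermediate mapping-based account. It is my own caricature, with invented table entries:

```python
# 1. Pure lookup: whole phrases are stored with their meanings.
stored_phrases = {"boiling mad": "very, very angry"}

# 2. Mapping-based (the Lakoff-Johnson position): no phrases are stored,
#    but a cross-domain HEAT -> ANGER correspondence is, and it carries
#    intensity along from source to target.
heat_to_anger = {
    "warm": "slightly angry",
    "hot": "angry",
    "boiling": "very, very angry",
}

def interpret_via_mapping(heat_word):
    """Understand a heat term applied to a person by following the mapping."""
    return heat_to_anger.get(heat_word)

# 3. Search from scratch: compute an interpretation by similarity or
#    relevance every time; no table of any kind is consulted (not sketched).

print(interpret_via_mapping("boiling"))  # 'very, very angry'
```

The point of the middle position is that it predicts novel heat expressions ("I was positively molten") should be interpretable without any stored phrase.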
Although there is nothing wrong with the picture of this neurological propagation, it fails to explain the linguistic data. The unfortunate effect is that clinging to a rigid structure behind metaphorical expressions (fixed mappings) is either wrong, or a good old similarity-based account in a new bottle.
The mapping account: Motivation and idea
The mapping account of metaphor in its most basic form explains a metaphor with reference to a structure-preserving function from the source domain to the target domain. The explanatory power of the theory lies in the fact that this function can be employed by different linguistic terms, not just a fixed, finite set of phrases.
Lakoff himself exemplifies this mechanism by his stipulated function from the domain of travel to the domain of love. The theory then says that travel concepts map onto love concepts, but not only that: The relations that hold between any specific set of travel concepts are preserved by the function so that we reason as if they also hold between the love concepts.
So for instance, the images of IMPEDIMENTS and VEHICLE under this mapping are DIFFICULTIES and RELATIONSHIP. And since IMPEDIMENTS can cause the VEHICLE to stop (a relation in the source domain), so can DIFFICULTIES cause the RELATIONSHIP to stop (the corresponding relationship in the target domain).
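The structure-preservation claim is easy to state concretely. The following is my own toy formalization, using the entity names from the example above:

```python
# LOVE IS A JOURNEY as a function from source-domain entities to
# target-domain entities (a toy formalization of the mapping idea).
mapping = {
    "TRAVELERS": "LOVERS",
    "VEHICLE": "RELATIONSHIP",
    "IMPEDIMENTS": "DIFFICULTIES",
    "DESTINATION": "LIFE GOALS",
}

# A relation that holds in the source domain ...
source_relation = ("IMPEDIMENTS", "can stop", "VEHICLE")

def transfer(relation, f):
    """Apply the mapping f pointwise, preserving the relation's structure."""
    subj, verb, obj = relation
    return (f[subj], verb, f[obj])

# ... is carried over into the target domain:
print(transfer(source_relation, mapping))
# ('DIFFICULTIES', 'can stop', 'RELATIONSHIP')
```

The explanatory payoff is exactly that `transfer` works for any relation over the mapped entities, not just for a fixed list of phrases.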
So far, the theory is simple and true. It does seem to warrant the conclusion that we use these mappings, and use them for reasoning and not just talk. But then the problems begin.
Ontological gaps in the mapping
The first problem is that not all objects fit into the function.
It goes fine if we plug entities like DESTINATION, FUEL, etc. into metaphorical expressions---these are then discovered to have been related to LIFE GOALS, EMOTIONS, etc. all along.
But other objects seem to have no equivalents in the target domain. GAS STATION, MAP, PASSPORTS, and numerous other entities are only feebly related, if at all, to entities in the domain of love.
As the quote above (from p. 245) shows, Lakoff is aware of this problem, and he recognizes that metaphors are only "partial" mappings. The functions can only be defined on a subset of the source domain.
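Stated concretely, the partiality amounts to the mapping being a partial function, undefined on much of the source domain. Again this is just my own toy restatement, using the entities named in the post:

```python
# The travel -> love mapping as a *partial* function: many source-domain
# entities simply have no image in the target domain.
journey_to_love = {
    "VEHICLE": "RELATIONSHIP",
    "IMPEDIMENTS": "DIFFICULTIES",
    "DESTINATION": "LIFE GOALS",
}

def image_of(entity):
    """Return the target-domain image, or None where the mapping is undefined."""
    return journey_to_love.get(entity)

for e in ["VEHICLE", "GAS STATION", "MAP", "PASSPORTS"]:
    print(e, "->", image_of(e))
# Only VEHICLE has an image; the other three map to nothing, which is
# Lakoff's "partial mapping" concession in miniature.
```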
There is no explicit discussion of why that might be the case, given that the mappings are real, coherent objects, but the arguments I discuss below may throw some light on the mystery.
Epistemic gaps in the mapping
The second problem is the fact that not all source domain structure is in fact preserved in the target domain. For instance, as Lakoff notes,
Lakoff explains this by saying that the "inherent target domain structure automatically limits what can be mapped" (p. 216). He calls this, in super-sinister fashion, "the Invariance Principle," and he launches it as an empirical hypothesis, not a patch on his theory (p. 215).
It should be clear, though, that this principle takes the piss out of the theory to a quite considerable extent. Remember that the empirical justification for introducing the invisible conceptual mappings was the fact that they could explain an array of facts without invoking conventionality or learning of particular phrases; and a part of its attraction lay in the fact that it warranted the hypothesis that these mappings governed thought as well as speech.
Modifying this strong hypothesis with a cautious "principle" that can be invoked every time a mapping begins to look like nothing but a handy label on a bunch of contingent conventions significantly reduces the attractions of the theory. If mappings can't predict anything and can only explain some things sometimes, then they should indeed be an endangered species.
The cyclicity of the language-thought-language argument
A last point that resonated well with McGlone's criticism is the fact that Lakoff sometimes seems to use his own hypotheses as evidence for his theory. An example concerns the mapping SEEING IS TOUCHING, which accounts for metaphors like their eyes met. Lakoff writes:
Apparently, we already have such extensive knowledge of argumentation that we can filter out siege his argument while accepting attack his argument. So why should the metaphor have anything to say with respect to our actions?
At best, the metaphor can fill out some roles that were not already filled or structured by our prior knowledge of the target domain -- but this is a much weaker statement than the one Lakoff (and Johnson) actually made.
Literal means understood-without-mappings
Lakoff defines "literal" in terms of his hypothetical entity, the conceptual mapping:
those concepts that are not comprehended via conceptual metaphor might be called "literal." (p. 205)
He gives the sentence the balloon went up as an example of a literal utterance. It is thus not, according to Lakoff, comprehended through the use of a mapping. A sentence like the line goes from A to B, on the other hand, does use a mapping (see also p. 215).
This definition is interesting, not only because it renders the concept of literal language dependent on Lakoff's theory. It also highlights the fact that what counts as literal and what counts as metaphorical is an empirical and psychological question. Lakoff should thus, according to his own theory, give an argument as to why the balloon went up does not require some pre-established bridge between various parts of the brain, and why the temperature went up does.
Note that he also implicitly hypothesizes that we do not need any conceptual infrastructure to comprehend surprising but presumably literal expressions such as the camel flew away or a wooden balloon.
If there is such a thing as conceptual mappings, it is not self-evident that they are involved only in what Lakoff calls metaphorical expressions, and he gives no argument to that effect.
Literal language is identified by intuition
Later in the text, he relies even more explicitly on his intuition. He wants to argue that the phrase ahead of is used metaphorically in the sentence
John is way ahead of Bill in intelligence. (p. 214)

To do so, he states:
To say that there is no metaphorical mapping from paths to scales is to say that "ahead of" is not fundamentally spatial and characterized with respect to heads; it is to claim rather that "ahead" is very abstract, neutral between space and linear scales, and has nothing to do with heads. This would be a bizarre analysis. (p. 214-15)

Again, there is some confusion here as to what counts as definition and what counts as data. Lakoff clearly has the intuition that the meaning of ahead of is traceable to its spatial meaning, but he provides no criterion for literality or centrality other than this case-by-case intuition.
Since less clear-cut examples can be given, his extrapolation to all conceptualizations of magnitudes and scales is somewhat dubious.
Metaphor relates the abstract to the concrete
According to Lakoff, the function of metaphor is to conceptualize the abstract in terms of the concrete:
as soon as one gets away from concrete physical experience and starts talking about abstractions or emotions, metaphorical understanding is the norm. (p. 205)
Metaphor allows us to understand a relatively abstract or inherently unstructured subject matter in terms of a more concrete, or at least highly structured subject matter. (245)
Metaphors are mappings across conceptual domains. [...] Such mappings are asymmetric and partial. [...] Mappings are not arbitrary, but grounded in the body and in everyday experience and knowledge. (p. 245)

This entails that we can picture the metaphorical relations as a partial order on the set of domains. In this order, the domains of concrete, physical experience would be the minimal elements, and all other domains could be placed somewhere higher up in the lattice corresponding to the order.
Conceptual mappings, a shy bird
One of the most interesting and controversial claims of cognitive metaphor theory is that there are no "dead" metaphors (see, e.g., pp. 227 and 245). We should be careful how we read this claim, though, to get its meaning and explanatory power right.
When asked how a particular metaphor is understood, we can pick an answer somewhere between two poles: At one extreme, we allege that the whole linguistic expression is stored in memory, along with its meaning, so that only recall and no "thinking" is required. At the other extreme, we claim that the metaphor requires a search from scratch for a good interpretation, perhaps in terms of similarity, relevance, or both.
Lakoff and Johnson's theory lies between these two poles: They claim that the linguistic expressions are not stored, but that a non-linguistic infrastructure---"a huge system of thousands of cross-domain mappings" (p. 203)---is.
Thus, when I say that I'm boiling mad, you do not retrieve a stored meaning of the phrase. Instead, the metaphorical expression exploits an already existing mapping. This mapping works because activation in the concept "heated to the boiling point" will propagate into the concept "very, very angry." Thus, no stored phrase is needed; no search in conceptual space is needed.
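On this picture, interpretation is a single lookup across a pre-stored link rather than a search. A minimal sketch of that idea, assuming a toy table of mappings (the node names and the single link are my own illustration, not Lakoff's):

```python
# Toy sketch of the "activation propagation" story: a pre-stored
# cross-domain link carries activation from a source concept to a
# target concept. All names here are illustrative assumptions.

# Pre-established cross-domain mappings (source concept -> target concept).
MAPPINGS = {
    "heated to the boiling point": "very, very angry",
}

def interpret(source_concept):
    """Return the target concept activated by a source concept.

    No stored phrase and no search through conceptual space is involved:
    the link is simply already in place.
    """
    return MAPPINGS.get(source_concept)

print(interpret("heated to the boiling point"))  # -> very, very angry
```

The sketch makes the attraction of the account visible: interpretation is constant-time recall over a fixed structure, not open-ended inference.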
Although there is nothing wrong with this picture of neurological propagation as such, it fails to explain the linguistic data. The unfortunate effect is that clinging to a rigid structure behind metaphorical expressions (fixed mappings) is either wrong or a good old similarity-based account in a new bottle.
The mapping account: Motivation and idea
The mapping account of metaphor in its most basic form explains a metaphor with reference to a structure-preserving function from the source domain to the target domain. The explanatory power of the theory lies in the fact that this function can be employed by different linguistic terms, not just a fixed, finite set of phrases.
Lakoff himself exemplifies this mechanism by his stipulated function from the domain of travel to the domain of love. The theory then says that travel concepts map onto love concepts, but not only that: The relations that hold between any specific set of travel concepts are preserved by the function so that we reason as if they also hold between the love concepts.
So for instance, the images of IMPEDIMENTS and VEHICLE under this mapping are DIFFICULTIES and RELATIONSHIP. And since IMPEDIMENTS can cause the VEHICLE to stop (a relation in the source domain), so can DIFFICULTIES cause the RELATIONSHIP to stop (the corresponding relationship in the target domain).
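This structure-preserving behavior, including the partiality discussed below, can be sketched in a few lines; the objects and relations are my own toy reconstruction of the LOVE IS A JOURNEY example, not a formalism from Lakoff:

```python
# Toy reconstruction of LOVE IS A JOURNEY as a structure-preserving
# (and partial) function. All objects and relations are illustrative.

# The partial function from the travel domain to the love domain.
F = {
    "VEHICLE": "RELATIONSHIP",
    "IMPEDIMENTS": "DIFFICULTIES",
    "DESTINATION": "LIFE GOALS",
    # No entry for "GAS STATION" or "PASSPORT" -- the mapping is partial.
}

def map_relation(rel):
    """Carry a source-domain relation (x, r, y) over to the target domain.

    Returns None when an argument has no image under the partial mapping.
    """
    x, r, y = rel
    if x in F and y in F:
        return (F[x], r, F[y])
    return None

print(map_relation(("IMPEDIMENTS", "can stop", "VEHICLE")))
# -> ('DIFFICULTIES', 'can stop', 'RELATIONSHIP')
print(map_relation(("PASSPORT", "is needed for", "DESTINATION")))  # -> None
```

The point of the sketch is that the relation "can stop" travels with the objects: because IMPEDIMENTS can stop the VEHICLE, the images DIFFICULTIES and RELATIONSHIP inherit the same relation.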
So far, the theory is simple and true. It does seem to warrant the conclusion that we use these mappings, and that we use them for reasoning and not just talk. But then the problems begin.
Ontological gaps in the mapping
The first problem is that not all objects fit into the function.
Things go fine as long as we plug entities like DESTINATION, FUEL, etc. into metaphorical expressions---these are then discovered to have been related to LIFE GOALS, EMOTIONS, etc. all along.
But other objects seem to have no equivalents in the target domain. GAS STATION, MAP, PASSPORTS, and numerous other entities are only feebly related, if at all, to entities in the domain of love.
As the quote above (from p. 245) shows, Lakoff is aware of this problem, and he recognizes that metaphors are only "partial" mappings. The functions can only be defined on a subset of the source domain.
There is no explicit discussion of why that might be the case, given that the mappings are real, coherent objects, but the arguments I discuss below may throw some light on the mystery.
Epistemic gaps in the mapping
The second problem is the fact that not all source domain structure is in fact preserved in the target domain. For instance, as Lakoff notes,
you can give someone a kick, even if that person doesn't have it afterward, and [...] you can give someone information, even if you don't lose it. (p. 216)

Thus "giving" a kick and "giving" information do not have the same structure as "giving" a present.
Lakoff explains this by saying that the "inherent target domain structure automatically limits what can be mapped" (p. 216). He calls this, in super-sinister fashion, "the Invariance Principle," and he launches it as an empirical hypothesis, not a patch on his theory (p. 215).
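Read this way, the Invariance Principle amounts to bolting a target-side filter onto the mapping machinery. A toy sketch, in which the entailment string and the "facts" table are my own illustrative assumptions, not anything Lakoff formalizes:

```python
# Toy sketch of the Invariance Principle as a target-side filter:
# the inherent structure of the target domain vetoes entailments that
# the source domain would otherwise export. All names are illustrative.

# The GIVING schema's source-domain entailment.
MAPPED_ENTAILMENT = "giver no longer has the object"

# Inherent target-domain structure: which entailments each target admits.
TARGET_DOMAIN_FACTS = {
    "a present": {"giver no longer has the object"},  # entailment survives
    "information": set(),                             # entailment blocked
}

def transfers(target):
    """The mapped entailment holds only if the target domain permits it."""
    return MAPPED_ENTAILMENT in TARGET_DOMAIN_FACTS[target]

print(transfers("a present"))    # -> True
print(transfers("information"))  # -> False
```

Notice that in this sketch the explanatory work is done by the table of target-domain facts, not by the mapping; that is the worry developed in the next paragraph.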
It should be clear, though, that this principle takes the piss out of the theory to a quite considerable extent. Remember that the empirical justification for introducing the invisible conceptual mappings was the fact that they could explain an array of facts without invoking conventionality or learning of particular phrases; and a part of its attraction lay in the fact that it warranted the hypothesis that these mappings governed thought as well as speech.
Modifying this strong hypothesis with a cautious "principle" that can be invoked every time a mapping begins to look like nothing but a handy label on a bunch of contingent conventions significantly reduces the attractions of the theory. If mappings can't predict anything and only explain some things some of the time, then they should indeed be an endangered species.
The cyclicity of the language-thought-language argument
A last point that resonated well with McGlone's criticism is the fact that Lakoff sometimes seems to use his own hypotheses as evidence for his theory. An example concerns the mapping SEEING IS TOUCHING, which accounts for metaphors like their eyes met. Lakoff writes:
This metaphor is made real in the social practice of avoiding eye "contact" on the street (p. 243)

The example is quite similar to the ARGUMENT IS WAR example in Metaphors We Live By:
It is important to see that we don't just talk about arguments in terms of war. We can actually win or lose arguments. We see the person we are arguing with as an opponent. We attack his positions and we defend our own. [...] It is in this sense that the ARGUMENT IS WAR metaphor is one that we live by in this culture; it structures the actions we perform in arguing. (p. 4)

Given what Lakoff has said about the "Invariance Principle," this suddenly seems a lot less likely. If the target domain can really override the relations of the source domain, why should the mapping provide any new information?
Apparently, we already have such an extensive knowledge of argumentation that we can filter out siege his argument from attack his argument. So why should the metaphor have anything to say with respect to our actions?
At best, the metaphor can fill out some roles that were not already filled or structured by our prior knowledge of the target domain -- but this is a much weaker statement than the one Lakoff (and Johnson) actually made.
Wednesday, September 21, 2011
Thomas Kuhn: "Metaphor in Science" (1979)
There's a nice little paper by Thomas Kuhn in the book Metaphor and Thought, edited by Andrew Ortony. The paper is an exercise in Wittgensteinian pragmatism with special applications to new, "metaphorical" usage patterns.
Kuhn begins by endorsing an observation:
However metaphor functions, it neither presupposes nor supplies a list of the respects in which the subjects juxtaposed by metaphor are similar. (p. 533)

Rather, he says, a metaphor works by "creating or calling forth the similarities" (p. 533).
His main example throughout the paper is the word planet:
The moon belonged to the family of planets before Copernicus, not afterwards; the earth to the family of planets afterwards, but not before. Eliminating the moon and adding the earth to the list of individuals that could be juxtaposed as paradigms for the term "planet" changed the list of features salient to determining the referents of that term. Removing the moon to a contrasting family increased the effect. (p. 540)

His idea thus rests on a theory of meaning that sometimes sounds a bit like a Bayesian learning algorithm, with the social environment providing a classified set of frequently occurring examples:
Exposed to tennis and football as paradigms for the term "game," the language learner is invited to examine the two (and soon, others as well) in an effort to discover the characteristics with respect to which they are alike (p. 537).

After having learned a term this way, the learner will probably have an easier time categorizing soccer than fencing or professional boxing, as Kuhn notes (p. 536).
The learning algorithm is in any event probably more efficient if it includes negative as well as positive evidence. Kuhn briefly mentions that wars, for instance, are not games, irrespective of the similarities they might have (p. 536).
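Read as a learning algorithm, Kuhn's picture resembles a nearest-exemplar classifier trained on positively and negatively labelled paradigm cases. A minimal sketch, with feature sets that are entirely my own invention rather than anything in Kuhn's text:

```python
# Toy nearest-exemplar classifier illustrating Kuhn's picture of learning
# a term from paradigm cases, including negative evidence ("wars are not
# games"). The features and exemplars are illustrative assumptions.

# Each exemplar: a set of salient features plus a label.
EXEMPLARS = {
    "tennis":   ({"competitive", "rules", "played for fun"}, "game"),
    "football": ({"competitive", "rules", "teams", "played for fun"}, "game"),
    "war":      ({"competitive", "violent", "deadly"}, "not a game"),
}

def similarity(a, b):
    """Jaccard similarity between two feature sets."""
    return len(a & b) / len(a | b)

def classify(features):
    """Label a new case by its most similar stored paradigm."""
    _, label = max(
        ((similarity(features, ex), lab) for ex, lab in EXEMPLARS.values()),
        key=lambda pair: pair[0],
    )
    return label

# Soccer shares all its features with the football paradigm.
print(classify({"competitive", "rules", "teams", "played for fun"}))  # -> game
```

As in Kuhn's story, nothing in the classifier presupposes a fixed list of defining features; the relevant similarities are whatever the paradigm cases happen to share.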
The paper also explicitly refers to his other nice discussion of meaning and categorization, the article "Second Thoughts on Paradigms," which is printed in The Essential Tension (1977).
That's the one in which he writes that "anything is similar to, and also different from, anything else" (p. 307) and argues that you can't learn the meaning of the term duck without acquiring some knowledge and beliefs about ducks as well.
Fauconnier and Turner: The Way We Think (2002)
Gilles Fauconnier and Mark Turner have written a number of texts on "conceptual blending." I have read their article "Rethinking Metaphor" from Gibbs (ed.): The Cambridge Handbook of Metaphor and Thought (2008) and their book The Way We Think (2002).
Both texts are easy-to-read introductions to their theory of conceptual blends, and both of them exhibit a shocking lack of academic context and depth.
Contrast With Mappings
The two main differences between the cognitive mapping account and the conceptual blending account of metaphor are the following:
- Conceptual blending allows nesting: After you have conceptualized the day as a rotation, you can conceptualize the rotating day as the face of a clock. This allows more complicated blends to be built up in a stepwise fashion.
- In a blend, source and target domains are on more equal footing than in a mapping: The conceptual mapping theory claims that we think about the target domain as if it were the source domain. The conceptual blending theory claims that we think of a certain slice of the world as if it were both the source and the target domain at once, like a double-exposure photograph.
The theory does not, however, explain why or when one domain gets more weight, or under which conditions new objects may appear in the blend. Some hand-waving is done in chapter 16 of The Way We Think, but this mostly amounts to throwing around the words "relevant" and "purpose" a little.
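The two points can be caricatured in a few lines of code, which also makes the complaint vivid: the merge needs a conflict-resolution parameter that the theory never supplies. Everything below is my own toy construction, not Fauconnier and Turner's formalism:

```python
# Caricature of a conceptual blend as a merge of two "input spaces",
# with conflict resolution delegated to an explicit 'weight' parameter.
# All names and structures here are illustrative assumptions.

def blend(source, target, weight=0.5):
    """Merge two input spaces into one; 'weight' decides conflicts.

    Nothing in the theory says where 'weight' should come from --
    which is precisely the complaint: why does one domain win?
    """
    merged = dict(target)
    for key, value in source.items():
        if key not in merged or weight > 0.5:
            merged[key] = value
    return merged

# Nesting: first conceptualize the day as a rotation, then the
# rotating day as the face of a clock (the stepwise build-up).
day = {"duration": "24 hours"}
rotation = {"motion": "rotation"}
clock_face = {"numerals": "1 to 12", "hands": "two"}

rotating_day = blend(day, rotation)
clock_day = blend(rotating_day, clock_face)
print(clock_day)
```

The nested calls capture the first point (stepwise build-up); the unexplained `weight` parameter captures the second (no account of which input space dominates).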
The Lack of Explanations
Conceptual blends explain surprisingly little. The awe that the authors display, faced with their own postulated entity, is only matched by how whimsical and cursory the examples and applications are.
Alarm bells should thus go off when we are told, in just two pages, that the cognition behind conceptual blends is "mysterious," "powerful," "complex," once again "complex," "exceptional," "remarkable," once again "remarkable," "elegant," once again "powerful," "marvelous," and, once again, "marvelous" (pp. xi-xii). Since "complex" is often nothing more than another word for "not covered by my theory," this should warn us that The Way We Think is exceptionally vague in its ideas.
And it is indeed. Already early on in the book, we are told, with reference to blending:
Nobody knows how people do it. (p. 20)
Later it is added that
[...] blending is not deterministic. (p. 55)
[...] it would be nonsense to predict that from two inputs a certain blend must result or that a specific blend must arise at such-and-such a place. (p. 55)
[...] constructing blended meaning is no simple task. (p. 69)

Apparently, then, predicting a blend can only be done on a case-by-case basis by a qualified individual. We should thus not expect much in the way of a strong theory of conceptual blends.
In light of that, it does seem slightly odd to find highly specific assertions like "Mental spaces are built up dynamically in working memory" later in the book, boldly thrown out there without the slightest shred of evidence or even a reference (p. 103). Thus, in spite of seemingly recognizing that their claims are virtually untestable, the authors do not seem to be afraid of spewing out new postulates and hypothesizing new cognitive operations.
Neither do they attempt to compare their analyses to real alternatives. Once in a while, a strawman is refuted with an undocumented reference (such as the case of Grice, p. 69). But more relevant reference points---like Ronald Langacker, James Pustejovsky, Ronald Kaplan, Joan Bresnan, Dan Sperber, or Deirdre Wilson---are nowhere to be found.
The Arbitrariness of Analysis
Even without going into the lack of psychological evidence for the claims in The Way We Think, there seems to be one huge problem that faces every single analysis in the book, namely the question: Why?
Let's take an example from their analysis of the character of the Grim Reaper.
According to Fauconnier and Turner's analysis, one of the "input spaces" in this mental image is a frame that they call "Causal Tautology" (pp. 291-92). This space is a highly general schema for understanding an event without any visible cause, and it works by imposing some generic cause upon an event that could otherwise not be explained. By imposing this very general frame on the event of a certain dying person, we create a generic cause called "Death." By mixing this blend with a killer and a reaper, we create the Grim Reaper.
The question this analysis raises is then: Why not the opposite? Here's another little story about this event: A particular person is dying, and we mix this event with a generic Killer frame, thus creating an unfilled killer role in our understanding of the scene. By mixing this with a Harvest space, we further fill the killer role with a reaper.
We thus have two different stories that can explain the same phenomenon. How do Fauconnier and Turner know that their account is the right one? They give no argument, perform no experiments, deduce no observable consequences, and compare it with no alternatives. If we should accept any story that fits the facts, then why not choose a more colorful one with some goblins and elves, or perhaps repressed desires and penis envy?
The Reaper and The Grave Digger
More examples can be given of the many possible analyses that Fauconnier and Turner seem to pick randomly from, without any underlying system or principle.
For instance, when they explain why the Reaper is insusceptible to persuasion, they state that this property is inherited from actual literal death (p. 293). Why not somewhere else?
For instance, since the character arose out of a blend with a Killer space, why not claim that the Reaper inherited this particular property from the killer? Killers are usually pretty bent on their activities and pretty difficult to reason with. Is there any way to tell the difference, or is all this just a story we're telling ourselves?
Similarly, in their analysis of the sentence "You're digging your own grave," Fauconnier and Turner state that the causality of the digging causing the death comes from the target domain, since normally, death causes grave-digging and not the other way around (pp. 132-33). But why not some other story?
We could equally well say that graves and deaths are fused in a conceptual blend, since a grave represents a dead person, and a dead person is the end product of a dying event. We could then let the relation of representation be compressed into a relation of causation and, voilà!, we have a genuine Fauconnier/Turnerian conceptual blend that explains the same data.
All of these examples point in the direction of the same problem: Fauconnier and Turner have invented a theory that can find structure and meaning in anything. Since their system is not kept within any set of well-defined boundaries, the world now appears to them to be one large pool of positive evidence. Nothing could ever go wrong.
Monday, September 19, 2011
The debate over the "Basic Metaphor of Infinity"
Volume 31, issue 6 of Behavioral and Brain Sciences (from 2008) is a special issue on the psychology of mathematics. The contribution to the issue by Lance J. Rips, Amber Bloomfield, and Jennifer Asmuth contains a criticism of, among other things, George Lakoff and Rafael Núñez' theory of number acquisition. There are a number of replies as well as a counterreply in the same issue.
Competing Inferences and Wrong Learning
The problem Rips et al. have with Lakoff and Núñez' theory is that it hypothesizes a cognitive apparatus that, although it does indeed predict the phenomena in question, could equally be used to predict the opposite patterns. So, Lakoff and Núñez can explain how children learn that addition is commutative or that there are infinitely many natural numbers; but their theory could explain the opposite just as well.
For instance, with respect to the "closure" (as they call it) of addition, they ask,
given everyday limits on the disposition of objects, why don't people acquire the opposite “nonclosure” property – that collections of objects cannot always be collected together – and project it to numbers? (p. 636)

In the same vein, they object to Lakoff and Núñez' idea of the "Basic Metaphor of Infinity":
Although there may be a potential metaphorical mapping from iterated physical processes to infinite sets of numbers, it is at least as easy to imagine other mappings from iterated processes to finite sets. Why would people follow the first type of inference rather than the second? (p. 636)

Thus, the Basic Metaphor of Infinity cannot save abstract arithmetic, since it is open to the exact same problem as arithmetic was in the first place.
In other words, real-world experience is not unambiguous---it doesn't crystallize into one single, coherent pattern with no competing alternatives. There is no reason why one pattern should end up winning out over all the alternatives.
The Generalizations of the Brain
Lakoff replies:
This is true of direct experience in the world, but not of the neural circuitry learned on the basis of repeated successful “small” cases of object collection, taking steps in a given direction, and so on. Our theory holds despite such physical limitations on large collections in the world. (p. 658)

I am not entirely sure how to read this objection. Apparently, neither are Rips et al.: in their counterreply, they state that they "don't see how transposing the problem from mind to brain helps solve it."
But here's a reading: Maybe Lakoff is saying that we don't have any basic experience (n < 4) of sets that we couldn't add more objects to? So because there's no experience of failure within that limited domain, we conclude that the process can go on indefinitely?
Categorizing Experience: Small and Big Counterexamples
There's a problem with this idea, though; it seems to entail that any process, if it goes on long enough, will be conceptualized as infinite. We should then learn that, for instance, counting a pile of coins is an unending process.
Conversely, suppose we do learn about the bounds of very long but finite processes (such as counting the leaves on a tree). Then Lakoff needs to explain why we do not generalize real-world limitations on indefinite processes (such as walking along a road until we get tired). Either we learn from actual physical bounds, or we don't. Lakoff seems to want it both ways, depending on what he finds in actual mathematics in the particular case.
His best bet here, I guess, is to insist on the unity of a process like counting---any counterexample to one instance of a counting process will automatically teach us something about all counting processes. This will allow him to fold the infinity of the number line into the infinity of possible future experiences with counting. Or, put differently, fold the infinity within the model into the infinity of models.
So, for instance, we do experience examples of bounds on counting within the sensible horizon. We can count three coins and be done. But if the process is indefinite, we do not encounter such examples. We never walk three steps and then feel that we have exhausted the possibilities of taking further steps. We might be physically blocked, but that's more like being interrupted during counting than like encountering a limit.
This would do the trick, as far as I can see. But of course, it would come at the price of presupposing a perfectly functioning ability to recognize different activities as instances of the same process as well as the ability to separate contingent from essential obstacles.
Wednesday, September 14, 2011
The Gendlin-Johnson debate
Eugene Gendlin is a philosopher and psychoanalyst who's been writing on the interaction between logical thought and intuitive thought since the 1970s.
He argues that we should view intuitive thinking as a resource that may inform logical thinking, but not be simulated by it. He thus recommends a kind of thinking that crosses back and forth between intuitive thought and formalizations of it.
In 1997, he and Mark Johnson engaged in a debate on metaphor. This sprang out of Gendlin's essay "Crossing and Dipping" (1997), but the debate took place in a volume of critical essays on Gendlin's philosophy (Kleinberg-Levin: Language Beyond Postmodernism, 1997).
Gendlin on Metaphor
With respect to metaphor, Gendlin stated in the essay that he sees the meaning of a metaphor---indeed, any sentence---as something that can be explicated, but only on a case-by-case basis and with reference to intuitive understanding.
This means that we may understand and explicate a given metaphor, but not in general formalize the mechanism that produces this understanding. He explicitly cites Wittgenstein for this idea. The resultant theory resembles the thought of both Donald Davidson and Derrida.
Consider for instance the way he tries to show how language can produce meaning without necessarily relying on a preexisting system:
Even conjunctions can say something when they come here: I promise to and your many viewpoints, rather than to but them. Once some words have worked in a slot, the slot can also speak alone: I will try to . . . . our discussion. (sec. II)
These are indeed creative as well as intelligible tokens of language use, although they come with a quite considerable amount of uncertainty.
Gendlin seems only partly to appreciate the fact that metaphors can be more or less easily understood. He would probably recognize uncertainty on the formal level, but not on the intuitive.
He further stresses the fact that understanding a metaphor requires quite a lot of knowledge, not only about the source domain, but about the target domain as well. A "use-family," as he calls the source concept, can easily be applied wrongly by an incompetent hearer.
A metaphor thus has a "precise new meaning" (p. 174), but this meaning can only be retrieved by "crossing," not by computation (p. 172).
Problems with the Ordering
Gendlin criticizes several key issues in Lakoff and Johnson's theory. For instance, he rejects the idea that the concreteness ordering has bottom elements, or maybe that it is an antisymmetric order. He thus writes of Johnson that
he sometimes sounds as if he were speaking literally about the physical motion domain as if it were original or "basic." (p. 169)
He similarly criticizes Lakoff and Johnson's claim that there is an identifiable set of elements in the ordering that serve as the ultimate basis of our conceptual system.
To exemplify this, he produces a typical cognitive-semantic interpretation of the prices rose. This bases the metaphor on the mapping MORE IS UP, which allegedly arises from our experience with piles and the like.
He then provocatively states:
But I think that prices "rise" because the numbers get larger, and we count up from 1. (p. 171)
This calls the empirical evidence for the cognitive analysis into question. Why would our concrete experience with numbers---including talking about them---not bear any weight on the issue? Could years of schooling not make any difference to whether we saw something as concrete?
Gendlin extends this critique to Johnson's way of tackling metaphors in general:
He imports a cognitive scheme in which he formulates the "correlations," and then selects the fewest that could account for the variety of instances. But what he calls "basic" or "experiential" correlation seems no different in character from all the rest, which he calls "resulting" or "subsequent" (p. 171)
This is very similar in style to some of the other critiques of Lakoff and Johnson. Often the components of their analyses seem to come out of nowhere, with no independent motivation, and do exactly the right thing at the right time.
Taking such an analysis as evidence of anything is therefore quite dubious. Gendlin says this in terms of "revers[ing] the order" of explanation (Sec. I) or "reading concepts back" (p. 169). Haser and McGlone put it in terms of circularity.
The Limits of Prediction
In Gendlin's reply to Mark Johnson (pp. 173-74) as well as in his essay (Sec. II), Gendlin gives examples of the limits of metaphorical inferences.
Johnson seems to admit that he has no systematic theory that can account for the precise layout of these limits:
What cognitive semantics cannot capture in its generalizations, however, is the affective dimension of this experiential grounding of meaning. We can point to it, but we cannot include in our mappings and generalizations the felt sense that is part of what the metaphor means to us, nor can we include the way it works in our experience. (pp. 167-68)
However, calling this an "affective" issue belies the fact that it actually leads to wrong inferences, not just to anemic descriptions.
What none of these authors seem to consider, though, is that the meaning of expressions may not be entirely settled in the head of one individual. It is true that prior knowledge will inform the best guess of any hearer, but as metaphors fossilize, a social decision process is also going on. This points in the direction of a role for convention.
Tendahl and Gibbs: "Complementary perspectives on metaphor" (2008)
Tendahl and Gibbs argue in this (excessively long) paper that cognitive metaphor theory and relevance theory have something to learn from each other.
From Relevance Theory to Cognitive Metaphor Theory
Cognitive metaphor theory already relies on a notion of "relevant" aspects of a source domain. Understanding a metaphor, like any disambiguation process, requires a hearer to identify the plausible readings of an utterance.
Relevance theory consequently treats metaphors like it treats all sentences. It hypothesizes that we produce the reading that has minimal distance to the logical form of the sentence and maximal relevance in the given context.
It is left relatively obscure how this mental algorithm decides in what order to examine the candidates; how it quantifies the relevance and plausibility of a reading; how it weighs those two properties; and when it finds a reading acceptable.
Occasional hints are given, like "The initial context usually consists of the proposition that has been processed most recently" (p. 1848). It never gets more specific than that, though. However, if these parameters were set, relevance theory could probably be (part of) an implementable parsing algorithm. It would differ from, say, Winograd's 1972 parser in that it uses relevance rather than truth as a measure of success.
A nice facet of this argument is that a relevance-theoretical account of metaphor doesn't have to treat all metaphors as dead, or all as live. It might be that some metaphors (e.g. "kick the bucket") simply contribute more to communication when they're processed as lexical items than when they're processed "deeply" (p. 1851).
Put in another way, relevance theory can pack the concept of conventionality into the concept of "early candidates." Since both speaker and hearer know this, they can use that common knowledge to optimize communication, i.e., they can choose to be conservative.
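To make the underspecification concrete, here is a minimal sketch of what a parameterized reading-selection procedure could look like. Everything here is my own assumption, not anything Tendahl and Gibbs (or Sperber and Wilson) propose: candidates are examined in order of conventionality (the "early candidates"), each is scored as relevance minus distance from the logical form, and the first candidate above a fixed threshold is accepted.

```python
# A toy model of relevance-theoretic reading selection. The field names,
# the linear scoring rule, and the threshold are illustrative assumptions;
# relevance theory itself leaves all of these parameters open.
from dataclasses import dataclass

@dataclass
class Reading:
    gloss: str              # paraphrase of the candidate interpretation
    distance: float         # divergence from the sentence's logical form
    relevance: float        # contextual effects per unit of effort
    conventionality: float  # how entrenched this reading is

def select_reading(candidates, threshold=0.5, w_rel=1.0, w_dist=1.0):
    # Conventional readings are tried first, so a dead metaphor like
    # "kick the bucket" is resolved lexically before any deep processing.
    ordered = sorted(candidates, key=lambda r: -r.conventionality)
    for r in ordered:
        score = w_rel * r.relevance - w_dist * r.distance
        if score >= threshold:
            return r
    # Fall back to the most entrenched reading if nothing passes.
    return ordered[0] if ordered else None

readings = [
    Reading("he died", distance=0.3, relevance=0.9, conventionality=0.95),
    Reading("he struck a pail", distance=0.0, relevance=0.1, conventionality=0.2),
]
print(select_reading(readings).gloss)  # → he died
```

Once the weights and threshold are fixed like this, the theory does become an implementable selection algorithm; the open question the paper leaves is precisely where these numbers would come from.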
From Cognitive Metaphor Theory to Relevance Theory
Conversely, Tendahl and Gibbs argue that relevance theory can't quite cope with metaphors without help from the cognitive account (pp. 1847-48).
If I understand their point correctly, it is that the internal search algorithm needs conceptual schemas like MIND AS CONTAINER and HEAD FOR MIND in order to arrive at the proper reading of He's full of ideas within reasonable time. In other words, if we don't have image schemas (or conceptual maps or conceptual blends or whatever), then we will not be sufficiently biased or guided in our production of candidate readings.
Plus or minus the cognitive language, this seems fair. I would probably suggest that the effect has more to do with pattern recognition than with searching through a semantic space. But still, the observation that we need skewed priors seems valid.
"Science says"
There's a really annoying aspect of Tendahl and Gibbs' paper that I really need to comment upon. Consider these quotes:
[C]ognitive linguistic research has argued that many idioms have specific figurative meanings that are partly motivated by people’s active metaphorical knowledge.(p. 1849)
[T]here is a significant body of work suggesting that most idioms are not understood as dead metaphors, and have meanings that are understood in relation to active conceptual metaphors. (p. 1850)
Many studies have shown that conceptual metaphors can be active in the online interpretation of utterances and in the creation of meaning. (p. 1853)
None of these examples are followed by a reference, nor by an argument. Imagine the forest of skeptical tags that such language would solicit if I wrote like that on a Wikipedia page.
What's worse, the content of these claims is---to my knowledge---wrong. There is at least some evidence to that effect.
Tuesday, September 13, 2011
Kertész and Rákosi: "Cyclic vs. Circular argumentation in the Conceptual Metaphor Theory"
This is a somewhat far-fetched scholastic exercise written in an attempt to defend Lakoff and Johnson's theory from charges of circular reasoning. The whole paper relies on a highly idiosyncratic philosophy of science and rhetoric.
The two texts that are cited as criticisms of Lakoff and Johnson are:
- Verena Haser's Metaphor, Metonymy, and Experientialist Philosophy: Challenging Cognitive Semantics (2005)
- "Concepts as Metaphors," Matthew McGlone's chapter in Sam Glucksberg: Understanding Figurative Language: From Metaphors to Idioms (2001)
Haser, for instance, writes:
Their premise (‘imagine a culture . . .’) can be spelt out as follows: Suppose that people in a certain culture view arguments in a different way than we do (i.e., not in terms of war, but in terms of a dance). Their conclusion says that in such a culture, people would ‘view arguments differently’ (Lakoff and Johnson 1980: 5).
McGlone (p. 95) charges Lakoff and Johnson with using linguistic data to illegitimately infer something about thought:
How do we know that people think of theories in terms of buildings? Because people often talk about theories using building-related expressions. Why do people often talk about theories using building-related expressions? Because people think about theories in terms of buildings.
As Keysar et al. point out, the strength of Lakoff and Johnson's argument is that they show that a number of linguistic expressions "cohere." It is a fair enough conjecture that the source of this coherence is something "in the head," but no amount of linguistic evidence alone can settle the issue.
Steen et al.: "Metaphor in Usage" (2010)
This is a report of a project done at the VU here in Amsterdam under the supervision of Gerard J. Steen. It's an effort to tag the tokens in the British National Corpus as metaphorical or not metaphorical.
Section 1.1 of the paper includes some very handy references to various work critical of cognitive metaphor theory from the perspective of psychology (p. 766), comparative linguistics (p. 767), and linguistics in general (p. 767).
The methodology behind the tagging procedure requires that the human annotators (six PhD students) compare the meaning of a disambiguated word to any "more basic contemporary meaning" of that word (p. 769).
They write that basic meanings "tend to be" characterized by being
– more concrete; what they evoke is easier to imagine, see, hear, feel, smell, and taste.
– related to bodily action.
– more precise (as opposed to vague).
– historically older. (p. 769)
This seems to involve a certain amount of confusion of criteria. They even state immediately after:
Basic meanings are not necessarily the most frequent meanings of the lexical unit. (p. 769)
This claim, however, comes without quantitative evidence (or footnote).
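The annotation procedure itself can be sketched as a comparison of candidate senses against the four "tend to be" criteria. The majority-count scoring rule below is my own assumption, purely for illustration; the actual protocol relies on dictionary senses and annotator judgment, not a numeric score.

```python
# A toy rendering of the annotators' basicness comparison, treating the
# four criteria from p. 769 as boolean features. The majority-count rule
# is an assumption of mine, not part of the published methodology.
CRITERIA = ("concrete", "bodily", "precise", "older")

def more_basic(sense_a: dict, sense_b: dict) -> bool:
    """True if sense_a wins on more of the four criteria than sense_b."""
    a_score = sum(bool(sense_a.get(c)) for c in CRITERIA)
    b_score = sum(bool(sense_b.get(c)) for c in CRITERIA)
    return a_score > b_score

# "grasp" = physically seize vs. "grasp" = understand
seize = {"concrete": True, "bodily": True, "precise": True, "older": True}
understand = {"concrete": False, "bodily": False, "precise": False, "older": False}

# A word is tagged as metaphor-related when its contextual sense contrasts
# with a more basic sense of the same lexical unit.
print(more_basic(seize, understand))  # → True: "grasp an idea" gets tagged
```

Even in this toy form, the confusion of criteria shows up: nothing guarantees that the concrete, bodily, precise, and historically older senses coincide, so the criteria can pull in different directions.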
Joseph E. Grady: "Theories Are Buildings Revisited" (1997)
This paper is a proposed solution to the fact that there are gaps in metaphors (*The theory has French windows). The proposed solution is to treat THEORIES ARE BUILDINGS as a "unification" of two simpler metaphors.
Grady's Criticism: Unpredicted Gaps and Lacking Basis
Grady has two empirical problems with Lakoff and Johnson's claims about the THEORIES ARE BUILDINGS metaphor. One concerns gaps, the other the lack of concrete experience that could ground the metaphor.
Regarding the gaps, Grady notes that the foundation of a theory, a solid fact, a shaky argument etc. are OK, but he then (p. 270) gives the examples
- ?This theory has French Windows.
- ?The tenants of her theory are behind in their rent.
Regarding the basis, he notes that there is no concrete experience that links THEORIES and BUILDINGS in the same sense that, e.g., MORE is associated with some visible feature going UP.
This is of course true, but we should remember that there isn't any experience linking INCREASED UNEMPLOYMENT and UP, either. Or in general, there is no concrete experience linking any abstract domain to anything at all, since we cannot per definition have concrete experience of something abstract.
Perhaps Grady would respond to this by treating The unemployment went up as a compound metaphor. In that case, his theory would probably soon turn out to overstretch quite a lot, since almost nothing would then count as a simple metaphor.
Grady's Proposal: Splitting the Metaphors
We can account for the gaps in THEORIES ARE BUILDINGS as well as its lack of basis by treating it as a compound motivated by two independent metaphors, Grady claims (p. 273). It would then consist of ORGANIZATION IS PHYSICAL STRUCTURE along with PERSISTING IS REMAINING ERECT.
He explains the operation that combines the two metaphors as a "unification" in the sense of lexical-functional grammar (p. 275). That's not quite in order, since a unification by definition can only produce more entailments, and he really wants to cancel some rather than add some. If his theory is to serve any purpose, he must look at the intersection of the two mappings, not their union.
But given this point, let's try his theory on. Our brains should now translate a theory into a physical structure and add that this physical structure is standing up if the theory is sound. The theory can then be described by any phrase that could be used equally well to describe a tall statue, the Empire State Building, a standing person, or a flagpole.
This explains metaphorical expressions such as The theory was shot down, His point still stands, The argument was supported by facts, and it predicts the infelicity of *The theory has French windows, *There's plenty of furniture in his points, and *a two-bedrooms argument.
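The set-theoretic point can be made concrete with feature sets standing in for the two source concepts. The feature inventories below are illustrative assumptions of mine, not Grady's: union licenses every feature of either concept (over-generating), while the intersection his theory actually needs licenses only what the two concepts share.

```python
# Union vs. intersection of source-domain features. The particular
# feature inventories are invented for illustration; only the contrast
# between the two operations matters here.
physical_structure = {"foundation", "framework", "architect",
                      "windows", "furniture", "location"}
remaining_erect = {"foundation", "framework", "collapse",
                   "topple", "support", "shaky"}

union = physical_structure | remaining_erect
intersection = physical_structure & remaining_erect

# Union wrongly licenses *"The theory has French windows":
print("windows" in union)            # → True: over-generation
# Intersection keeps the attested "foundation of a theory"
# while dropping the building-internal details:
print("windows" in intersection)     # → False
print("foundation" in intersection)  # → True
```

This is why "unification" is the wrong word: unification of feature structures accumulates constraints from both sides, which is exactly the union-like behavior that produces the French-windows problem.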
Problems With His Proposal
Unfortunately, that's not all. There are two problems with Grady's theory, one having to do with undergeneration, and one with overgeneration.
First, undergeneration: Flagpoles and standing people don't have foundations, so on my reading, Grady's theory ought to predict that THEORIES wouldn't have foundations, either. Since this is clearly wrong, either the intersection is the wrong operation to apply, or else the set of physical structures needs to be specified more narrowly.
Since Grady cites architects as a necessary part of a PHYSICAL STRUCTURE, he seems to exclude people and flagpoles from the category, although statues of Saddam Hussein or piles of books might possibly still be included. There seems to be no clear rule for deciding what counts as the essential features of a PHYSICAL STRUCTURE; apparently an architect is on the list, but a location is not (*His theory had a nice location).
Then overgeneration: If ERECT PHYSICAL STRUCTURES have architects, foundations, and frameworks, they can collapse, topple over, and be supported, and they can be solid, shaky, or stable. These features of PHYSICAL STRUCTURES are included because they are relevant to their standing up.
By any reasonable measure, then, they should also have been constructed by some team of (more or less competent) construction workers, they should be made out of a certain (more or less durable) material, they should be constructed by some (more or less reasonable) method. All of these can be eminently relevant to standing up, too.
However, that seems to be a stretch. There are plenty of structural features of a building that don't translate directly into theory-language:
- *the stainless steel frame of his theory
- *a tall, thin theory
- *That's a cheap material to build a theory from!
- *Take extra care when building your theory in an earthquake zone
Believing Too Hard in Systematicity
Grady's claim is that his theory provides an in-principle account of the difference between intelligible and unintelligible metaphors. As a matter of empirical fact, though, his criterion for (immediate) intelligibility does not seem to coincide with actual (immediate) intelligibility.
His own answer to the undergeneration is that sometimes, other metaphors might be triggered. The THEORIES ARE PHYSICAL STRUCTURES metaphor can then be combined with other metaphors such as, perhaps, BEHIND IS HIDDEN, to yield examples like the following:
- [...] this does not mean introducing quantum theory on a "back door" into classical theory.
Grady seems to think that consistency is the key (p. 286). But just as hierarchical models of metaphors will always create penguins-can-fly entailments, so will consistency criteria in metaphor theory always eventually run into the problem that anything is like anything and that any phrase is potentially meaningful.
Grady claims that his theory draws the line between felicitous and infelicitous metaphors systematically and once and for all. I don't think there is such a system to be found -- his approach will at best gloss over the fact that frequency and conventionality play a larger part than he or Lakoff and Johnson are willing to admit.
Given the many back doors in his theory, so to speak, it seems that his theory either allows anything to be intelligible (because anything potentially might be rationalized) or barely anything (because concepts can't live on abstract meaning alone).
Monday, September 12, 2011
Reuven Tsur: "Lakoff's Roads Not Taken" (1999)
This is another paper that criticizes Lakoff's metaphor theory for being "flat"---or in Tsur's terms, having "little literary subtlety" (sec. 6).
Tsur's point is basically that Lakoff conflates quick decoding with slow interpretation. This lumps together revolutionary language and conventional chatter, and consequently, "its literary application may be harmful" (sec. 4).
Metaphors: Fast or Slow?
Lakoff sees metaphors as instances of cognitive maps. This has the consequence that expressions like life's road or My career was at a crossroads receive stable meanings.
Tsur frowns at this description because he feels it conflates meaning (the product of reading) with understanding (the process of reading). This essentially accommodates metaphorical understanding to literal understanding: Both occur instantly and on the basis of standard tools.
This instant decoding, however, "is not necessarily a competent response to a piece of literature" (sec. 2). In particular, not all sentences are equally pregnant, significant, and ambiguous, and a theory that predicts certainty and intelligibility will hide this fact.
To illustrate this, he uses a sentence from Oedipus the King:
"Laius was slain where three highroads meet" (sec. 2).

As he says, this sentence's "'air' of significance" is not explained by standard mappings like LIFE IS A JOURNEY. Something else must be going on, both in terms of product and process.
His theory is that this sense of significance kicks in because the meeting of the three roads seems unmotivated. We therefore shift to the slower process of "delayed conceptualization," and this produces more readings and more uncertainty. (And the pieces, by the way, fall into place as it is later revealed that it was Oedipus himself who killed the old king).
Uncertainty, Confidence, and Openness
In a sense, we can see Tsur's theory like this: A reader reads words, understands their meaning, and everything is going fine; but then something---a textual clue or an obnoxious English teacher---pushes the reader out of equilibrium, and the text suddenly seems riddled with loose ends. This tension triggers a search process, and the tension is relieved when a locally most consistent reading is found.
This seems reasonable and consistent with Davidson's observation that, once we start thinking about it, there is no end to what a metaphor might mean. Man is a wolf might be an instantiation of PEOPLE ARE SAVAGE ANIMALS, and it might be an instantiation of PEOPLE ARE PACK ANIMALS, and the metaphor can easily borrow some meanings from both mappings at once.
Tsur wraps this in a quasi-cognitive language by saying that a metaphor is "an efficient coding of information" because it "increases the number of meanings encoded in one spatial image" (sec. 3).
The point, however, is still that there is no algorithm for understanding. Once we're in the land of deep reading, the increased expressivity is bought at the price of decreased certainty.
Contradiction and Falsehood as Metaphor Triggers
In fact, Tsur himself seems to be saying (following Monroe Beardsley) that this mode of reading is always triggered by "indirectly self-contradictory or obviously false" statements. Tsur is thus in line with Grice on this matter.
Unfortunately, truth and consistency are not the whole story, as Donald Davidson's example No man is an island shows. However, the Gricean analysis still holds if we allow other conversational oddities to trigger a "deep" reading (e.g. irrelevance).
Lakoff, though, seems to reject all Gricean analysis. That's problematic, since reading-time measures seem to suggest that original metaphors and conversational implicatures are understood in much the same way. (Tsur cites Rachel Giora for this claim.)
Friday, September 9, 2011
Ray Jackendoff and David Aaron's review of More Than Cool Reason (1991)
In 1991, George Lakoff and Mark Turner's book More Than Cool Reason: A Field Guide to Poetic Metaphor (1989) was reviewed by Ray Jackendoff and (the religious studies scholar) David Aaron.
The review was generally favorable, but also included an emphatic and very elaborate criticism of Lakoff and Turner's theory. The main points of this criticism are:
- The book is insufficiently sourced; it fails to situate itself properly within the fields of linguistics and literary studies (Secs. 2 and 3).
- It identifies concept applications as metaphors far beyond reasonable limits, as indicated by a linguistic test of metaphoricity devised by the two reviewers (Sec. 4).
- The book includes no theory of concept acquisition or learning; Aaron and (primarily?) Jackendoff suspect that such a theory would probably have to be much more nativist than Lakoff and Turner would like (Sec. 5).
- Their theory fails to account for the aesthetic and "affective" aspects of poetic metaphor, and their analyses consequently come off as quite "flat" (Sec. 6).
The test for metaphoricity that Jackendoff and Aaron propose involves explicating the metaphor. Compare the following two examples (p. 326):
- Of course, machines aren't people---but if they were, you might say that my computer died on me.
- *Of course, animals aren't people---but if they were, you might say that my dog ran down the street.
The incongruity of treating dogs as humans is acknowledged, but the relevance of this mapping to the expression my dog ran down the street is totally unclear. (p. 327)

The issue is thus whether we must necessarily think of running in human terms before we can unpack what it means for a dog to run.
I suspect that this test coincides pretty much with just asking an informant directly whether a word is used literally or metaphorically. They don't report having asked anybody else about their example sentences, and presumably, they haven't. In some cases, this leads to slightly dubious results, even though I think the test has some value (see, e.g., their discussion of Death carried him away, p. 330).
Literal belief or conventional expression?
There's a really deep issue buried in Jackendoff and Aaron's discussion of literal applications of words under conditions of false belief. They complain that
there is no parameter available in L&T's system to make the distinction between literal beliefs and conventionalized I-metaphors [= metaphors that pass their test] (p. 330)

So for instance life flowed out of him is probably metaphorical to a modern reader. But in ancient Hebrew culture, such expressions were literal, Jackendoff and Aaron state, since the life of an organism was then believed to be the blood of the organism (p. 327). (By the way, this example is strikingly similar to Julian Jaynes' discussion of the breath-metaphor in ancient Hindu culture.)
Later, they raise a similar objection with respect to seeing expressions such as a loyal dog as metaphorical:
[T]he attribute 'loyal to X' has a major component something like 'willingly stays with X when X is in trouble'; this component applies equally in characterizing people or dogs as loyal. (p. 331)

And in a footnote, they add that if dogs were indeed like computers or cars, then their loyalty would have to be construed as metaphorical. But dogs do in fact do things "willingly," and they are in fact "sentient," and therefore, the term loyal can be literally applied to them.
Both of these examples touch on the issue that the membrane between language use and world view in general is soft and transparent. There is no linguistic fact of the matter that could resolve this issue, nor a cognitive one; the correct vocabulary for discussing these matters is an anthropological one. To get on the right track with respect to these issues, we need to put our Wittgenstein hat on and ask, "How would a culture look if it really believed that life and blood were the same thing?"
The sticky issue of 'basis'
In More Than Cool Reason as well as in other publications, Lakoff (and Turner) hypothesize a basic vocabulary of literal terms, here called "autonomous concepts." Jackendoff and Aaron quote three revealing passages that try to define this region of conceptual space:
Semantically autonomous concepts [...] are grounded in the habitual and routine bodily and social patterns we experience, and in what we learn of the experience of others. (p. 113)
[D]epartures, journeys, plants, fire, sleep, days and nights, heat and cold, possessions, burdens, and locations are not themselves metaphorically understood, [...] but rather by virtue of their grounding in what we take to be our forms of life, or habitual and routine bodily and social experiences. (p. 59)
We acquire cognitive models in at least two ways: by our own direct experience and through our culture. Thus, people who have never seen millstones can nonetheless learn, via their culture, that they are used in mills to grind grain, and that they are the enormous round flat stones that rotate about an axis. (p. 66)

Jackendoff and Aaron's point in exhibiting these quotes is to show that Lakoff and Turner's theory can't really account for the emergence of abstract concepts (because it isn't Chomskyan enough, i.e., it doesn't include preprogrammed learning algorithms).
That's not my reason, though; I reproduce them here to show how ill-defined the class of "semantically autonomous" concepts is. Note, for instance, how the definition wobbles between different domains: In the original 1980 formulation of their theory, Lakoff and Johnson only accepted bodily experiences such as pressure, cold, or weight as basic. Now, "social" experiences such as possession have crept into the list, but still stick out like the ad hoc additions they are.
Even more strikingly, what we have experience with "through our culture" is now also an item in the conceptual base along with "social" experience. Of course, this accounts for war and ownership being on the list; but unfortunately for Lakoff and Turner, it also includes almost every single target domain that they have ever claimed to be abstract: Death, marriage, arguments, animals, computers, etc., etc.
Perhaps a few domains are still left outside the realm of "cultural" or "social" experience, such as abstract Newtonian time or musical structure, but the line seems nearly impossible to draw, and it definitely includes way more than it was supposed to.
Metaphors and "affect"
Just one last comment, because it's such a nice topic: Jackendoff and (primarily?) Aaron are disappointed that
the book fails to make sufficient contact with aesthetic concerns that distinguish poetic metaphor from ordinary metaphor. (p. 336)

They complain that the Lakovian theory depicts a metaphor as a machine that produces knowledge when in fact it should be something more like a machine that produces pictures or aesthetic experience (in any sense of the word). Thus, in comprehending a metaphor,
the entities of the source domain are vividly present to us [...] The cognitive effect is not unlike that in dreams, where we can experience a person who carries one individual's appearance but at the same time is 'known' to be someone else. (p. 334)

They footnote this comparison with a reference to Donald Davidson, but to me, this suggests much more a kind of "schizo-analysis" in the key of Deleuze, or specifically, the "double bookkeeping" or "double exposure" of schizophrenia, discussed by Louis Sass and others.
In any case, this theme suggests that there is more to the metaphor than a straightforward transferal of properties (be it one-to-many, many-to-one, or one-to-one). And, I would say, this depth partly stems from the fact that there is more to the source domain than meets the eye: An egg, a branch, a knife, or a snake is not just a "computable" object, it's a real, cultural item whose meaning can't be controlled. This essentially makes creative metaphor use an anarchic phenomenon that wildly overgenerates associations.
Jackendoff and Aaron explain:
[T]he superimposition operation itself has important effects. The most obvious is the affect contributed by using one entity as a symbol for another. This phenomenon is much more general than metaphor; it appears, for example, in the widespread use of ritual objects as symbols for religious abstractions. The object, just by virtue of being a symbol, is infused with a deep meaningfulness and immediacy that extends to actions in which the object is used. (p. 335)

Again, Louis Sass' discussion of the schizophrenic's sense of deep and inexplicable meaningfulness comes to my mind. Think for instance of the bizarre effect that obtains in the Nuer tradition of sacrificing a cucumber as if it were a cow if you don't have a cow to spare (as discussed by Evans-Pritchard in the 1960s and by Ralf Norrman in his contribution to the second volume of The Motivated Sign).
To the extent that we have something to say about "understanding" such practices, this goes far beyond what Lakoff and Turner's somewhat square theory can contain. Appropriately, Jackendoff and Aaron also admit:
We have no explanation to offer for this affect, but it is clearly a part of human cognitive and emotional life (p. 335).

That's hard not to agree with. Now they only have to realize that this has consequences for ordinary language use as well.
Keysar, Shen, Glucksberg, Horton: "Conventional Language: How Metaphorical Is It?" (2000)
This article reports a set of experiments on conventionalized metaphors. The experiments support the claim that stock metaphors such as He defended his argument are not in any real psychological sense understood by means of (literal) defense.
The experiments rest on the assumption that if the literal meanings of such metaphors are in fact "accessed" or "activated" during the reading of the metaphor, then they will be more readily available immediately after the reading (as in the priming effect known from other contexts). This availability is estimated by measuring the reading time of a sentence that uses the same source domain (e.g., defense) in a non-trivial way.
For instance, consider these primings and stimuli:
- prolific researcher, conceiving ideas ===> weaning her latest child
- fertile researcher, giving birth to ideas ===> weaning her latest child
This, however, is not the case---at least not if reading time is used to measure the priming effect. Reading the target stimulus (weaning her latest child) is significantly slower when subjects are primed as in the first example compared to when they are primed as in the second.
This, in essence, is the result reported in the article. As the authors say:
Thus, while his criticism was right on target might not require use of a mapping between argument and war, his criticism was a guided cruise missile might very well do so. (p. 580)

By the way, the article appeared in Journal of Memory and Language 43 (2000).
Thursday, September 8, 2011
Gibbs: "Introspection and Cognitive Linguistics" (2006)
A short and nice article about the pitfalls of linguistics without experiments. Concludes with a handy list of recommendations for empirical practices that may help root out far-fetched and thin evidence.
It also contains a short discussion of dead metaphors (pp. 144-45). Gibbs acknowledges that speakers and hearers may in fact not process a metaphor like flip your lid from scratch every time they hear it. He cites his own book The Poetics of Mind (1994) for this claim, as well as Lakoff's Women, Fire, and Dangerous Things (1987).
He also refers to an experiment with idioms that were deliberately picked to be obscure ("the goose hangs high" and the like). This experiment seems to confirm the intuition that anything can be rationalized in retrospect, and that we don't need to compute meanings in a compositional way to understand phrases. The study is reported in the article "Intuitions about the transparency of idioms: Can one keep a secret by spilling the beans?" (1995).
According to Gibbs, his qualms about introspection are shared by people like Lera Boroditsky, Sam Glucksberg, Gregory Murphy, and John Vervaeke (misspelled as "Veraeke") and John Kenndy (sic!).
Vervaeke and Green: "Women, Fire, and Dangerous Theories"
This 1997 paper is a critique of George Lakoff's theory of categorization from the perspective of traditional truth-conditional semantics. Lakoff's theory, as presented in Women, Fire, and Dangerous Things (1987), is in itself a critique of both the traditional view and prototype theories of categorization.
Vervaeke and Green are angry. They have a number of problems with Lakoff's style of argumentation, and as the title says, they find his theories "dangerous." In fact, they find that Lakoff is arguing for "cultural relativism" (p. 70), and that his theories have "unacceptable implications for the pursuit of science itself" (p. 77) as well as support "the dangerous result of providing strong grounds for skepticism" (p. 63).
Most of their criticisms regard specific arguments and studies that Lakoff employs, and these are irrelevant to, and largely logically independent of, my critique. I should try to focus on the positive side of their argument, i.e., the alternative model of categorization they assume.
They never explicitly put their own cards on the table, but both the style and the content of their arguments suggest that they think of categories as lists of necessary and sufficient conditions for membership. There are a number of problems with this account, but they briefly sketch counterarguments for two of them: prototype effects and bizarre class boundaries.
Prototype effects as distances, input errors, or ambiguities
A prototype effect is the phenomenon that deciding on category membership is sometimes easier than at other times. For instance, a chair can effortlessly be categorized as a piece of furniture, but a rug or a clock is more problematic. The extent of this effect can be measured in terms of reaction times, disagreement between subjects, or by making subjects explicitly rate the difficulty.
Prototype theories explain this by assuming that the further away from a category's central prototype we are, the more cognitive effort is required to decide on its membership. (In fact, they should assume that the problems occur in a belt of borderline cases, not in a large "outside" area; but I haven't seen anyone make that observation in print.)
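To make the "borderline belt" idea concrete, here is a toy sketch (my own illustration, not anything from the papers under discussion, with invented feature names): membership is graded similarity to a prototype, and the slow, hard cases are the ones whose similarity lands near the decision boundary, not everything far from the prototype.

```python
# Toy model of prototype-based categorization (illustrative only).
# A category is a weighted feature prototype; similarity is the share
# of prototype features an item exhibits.

FURNITURE = {"artifact": 1.0, "found_indoors": 1.0,
             "household_use": 1.0, "free_standing": 1.0}

def similarity(item, prototype):
    """Graded membership in [0, 1]: weighted feature overlap with the prototype."""
    total = sum(prototype.values())
    shared = sum(item.get(feature, 0.0) * weight
                 for feature, weight in prototype.items())
    return shared / total

def difficulty(item, prototype):
    """Decision effort peaks at the 0.5 boundary: a 'belt' of hard cases."""
    return 1.0 - 2.0 * abs(similarity(item, prototype) - 0.5)

chair = {"artifact": 1.0, "found_indoors": 1.0,
         "household_use": 1.0, "free_standing": 1.0}   # clear member
rug   = {"artifact": 1.0, "found_indoors": 1.0,
         "household_use": 1.0, "free_standing": 0.0}   # borderline case
stone = {}                                             # clear non-member
```

On this construal both the clear member (chair) and the clear non-member (stone) are easy, and only the borderline item (rug) is hard, so difficulty is not monotonic in distance from the prototype, which is exactly the "belt" observation above.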
Lakoff explains these effects by hypothesizing an "idealized cognitive model" that structures the division between furniture and non-furniture. He then assumes that the problems occur because the parameters used by the model to make the membership decisions are absent or ambiguous in the problematic cases.
Thus, a duck is not quite a farm animal but also not quite a non-farm animal. It doesn't quite fit the hole through which we feed our idealized model candidates for farm-animalness.
Vervaeke and Green seem to think that classical semantics will quite suffice:
OK, if I'm really charitable, I could say that the prototype effect should occur with equal intensity in the "All" set, the "Some-But-Not-All set", and the "No" set. But I would suspect that the question "Is a screwdriver a murder weapon?" would be met with reaction times quite distinct from those of "Is a duck a farm animal?"
"External properties" and "social properties"
More central to prototype theory, though, are examples like "Is a priest a bachelor?" or "Is a penguin a bird?" Vervaeke and Green work around these examples by postulating two distinct sets of criteria for membership: "objective or external properties" and "social properties" (p. 67).
Having feathers, flying, and singing then count as "social properties" of birds, I guess. Vervaeke and Green can now explain the increased reaction times in the case of priests:
If they are, then definitions seem to be dynamic and change as we acquire experience with the world, and that certainly is outside the realm of classical semantics. If they aren't, then apparently definitions aren't a very good model of how people actually decide on category membership. In fact, the "social properties" of a category would then amount to a codification of the prototype, while the "external properties" would amount to a codification of the most liberal application of the category. Thus, Vervaeke and Green would just have reiterated prototype theory under a different name.
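The double-criteria move can be made concrete with a small sketch (my own construal, with invented property names, not Vervaeke and Green's formalism): if "external" and "social" criteria are just two property sets, the priest counts as a bachelor by the first test and not by the second, and that conflict sits exactly where the slow membership judgments are.

```python
# Two candidate definitions of "bachelor" as classical property sets
# (illustrative only; the property names are my own inventions).
EXTERNAL_CRITERIA = {"male", "adult", "unmarried"}
SOCIAL_CRITERIA   = {"male", "adult", "unmarried", "eligible_to_marry"}

def satisfies(item_properties, criteria):
    # Classical, all-or-nothing membership: every criterion must hold.
    return criteria <= item_properties

priest     = {"male", "adult", "unmarried"}            # vowed to celibacy
single_man = {"male", "adult", "unmarried", "eligible_to_marry"}
```

The two tests disagree about the priest while agreeing about the single man, so the "definition" has quietly split into a strict extension and an expected profile, which is the sense in which the move just restates prototype theory.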
The hypernym ordering: is-a vs. is-a-kind-of
Another strategy that Vervaeke and Green use to save the classic conception of category membership is to accuse Lakoff and others of conflating the is-a hierarchy with the is-a-kind-of hierarchy (p. 71).
If I understand this correctly, their point is that the is-a relation obtains as a matter of contingent fact (all firetrucks happen to be red), while the is-a-kind-of relation builds on essential characteristics (the purpose of a firetruck is per definition to extinguish fires).
Allegedly, this should explain why penguins can't fly even though they are birds:
But if we really and consistently pursue that strategy, we end up killing all inferences that involve contingent factors, not just penguin-type cases. For instance, if I get 0 points on an exam (as a matter of contingent fact), then I fail (per definition)---we are indeed able to make at least some such inferences, and the theory should reflect that.
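One way to see what is at stake in the is-a-kind-of reading is the classic default-inheritance sketch from knowledge representation (my own illustration, not Vervaeke and Green's proposal): kind-level properties are inherited as defaults that more specific facts can defeat, rather than as strict entailments.

```python
# Defeasible inheritance (illustrative sketch): kind-level defaults can
# be overridden by exceptions, so "birds fly" survives as a default even
# though penguins are birds.
KIND_DEFAULTS = {"bird": {"can_fly": True, "has_feathers": True}}
EXCEPTIONS    = {"penguin": {"can_fly": False}}

def lookup(kind, parent_kind, prop):
    """Specific-first lookup: an exception defeats the inherited default."""
    if prop in EXCEPTIONS.get(kind, {}):
        return EXCEPTIONS[kind][prop]
    return KIND_DEFAULTS[parent_kind][prop]
```

A penguin then keeps its feathers but loses flight. The worry in the paragraph above still stands, though: a contingent-but-reliable inference like "0 points, therefore fail" needs a strict rule, and a theory that treats every contingent link as defeasible throws those inferences out too.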
Symbolic and nonsymbolic categories
The second explicit argument that Vervaeke and Green make concerns Lakoff's example of the Dyirbal word balan, which means roughly "women, fire, and dangerous things."
Lakoff uses this example to illustrate that there are sometimes quite elaborate chains of perceived similarity between one end of a word's meaning and the other. This is not what we would expect if categories were learned by a gradual acquisition of a list of necessary and sufficient properties (which, presumably, would be given in something like conjunctive normal form).
Vervaeke and Green diminish the significance of this observation by introducing a distinction between "symbolic" and "nonsymbolic" category systems (p. 71), very similar to their Fregean distinction between "external" and "social" properties. Symbolic systems are "driven by rational constraints such as consistency," while nonsymbolic systems are a more fuzzy business used to talk about God, the Trinity, and other irrational stuff (p. 71). Such systems "glory in the existence of contradiction and unresolvable conceptual mystery" (p. 71).
Using this distinction, Vervaeke and Green then state their suspicion that balan is in fact not part of the neat logical vocabulary of Dyirbal, but only part of the messy nonsymbolic vocabulary. Their theory can be saved if it is reformulated as "Real categories---neat, logical, consistent categories---are stored and processed as propositional definitions."
It seems fairly clear that this is a garbage can argument very much like the rejection of "wrong" sentences as data in syntactic theories of the '60s. Any theory can of course be true if you get to choose on a case-by-case basis what counts as proper data.
Vervaeke and Green are angry. They have a number of problems with Lakoff's style of argumentation, and as the title says, they find his theories "dangerous." In fact, they find that Lakoff is arguing for "cultural relativism" (p. 70), and that his theories have "unacceptable implications for the pursuit of science itself" (p. 77) as well as support "the dangerous result of providing strong grounds for skepticism" (p. 63).
Most of their criticisms regard specific arguments and studies that Lakoff employs, and these are irrelevant to, and largely logically independent of, my critique. I should try to focus on the positive side of their argument, i.e., the alternative model of categorization they assume.
They never explicitly put their own cards on the table, but both the style and the content of their arguments suggest that they think of categories as lists of necessary and sufficient conditions for membership. There are a number of problems with this account, but they briefly sketch counterarguments for only two of them: prototype effects and bizarre class boundaries.
Prototype effects as distances, input errors, or ambiguities
A prototype effect is the phenomenon that deciding on category membership is sometimes easier than at other times. For instance, a chair can effortlessly be categorized as a piece of furniture, but a rug or a clock is more problematic. The extent of this effect can be measured in terms of reaction times, disagreement between subjects, or by making subjects explicitly rate the difficulty.
Prototype theories explain this by assuming that the further away from a category's central prototype we are, the more cognitive effort is required to decide on its membership. (In fact, they should assume that the problems occur in a belt of borderline cases, not in a large "outside" area; but I haven't seen anyone make that observation in print.)
Lakoff explains these effects by hypothesizing an "idealized cognitive model" that structures the division between furniture and non-furniture. He then assumes that the problems occur because the parameters used by the model to make the membership decisions are absent or ambiguous in the problematic cases.
Thus, a duck is not quite a farm animal but also not quite a non-farm animal. It doesn't quite fit the hole through which we feed our idealized model candidates for farm-animalness.
Vervaeke and Green seem to think that classical semantics will quite suffice:
A far more elegant account, however, might hold that there is simply a quantifier ambiguity in the question, "Are ducks farm animals?" If the intended question is, "Are all ducks farm animals?" the answer is unequivocally "No." If it is, "Are some ducks farm animals?" the answer is unequivocally "Yes." (p. 67)

That's interesting---especially since it implies that prototype effects should occur equally with all objects and not just with borderliners.
OK, if I'm really charitable, I could say that the prototype effect should occur with equal intensity in the "All" set, the "Some-But-Not-All" set, and the "No" set. But I would suspect that the question "Is a screwdriver a murder weapon?" would be met with reaction times quite distinct from those of "Is a duck a farm animal?"
"External properties" and "social properties"
More central to prototype theory, though, are examples like "Is a priest a bachelor?" or "Is a penguin a bird?" Vervaeke and Green work around these examples by postulating two distinct sets of criteria for membership: "objective or external properties" and "social properties" (p. 67).
Having feathers, flying, and singing then count as "social properties" of birds, I guess. Vervaeke and Green can now explain the increased reaction times in the case of priests:
There is a match to external properties, and there is a failure of match to social properties, but the participant is unsure about the relevance of these social properties and so hesitates in responding. (p. 67)

This definitely calls for some explanation. For instance, are social properties actual parts of the definition?
If they are, then definitions seem to be dynamic and change as we acquire experience with the world, and that certainly is outside the realm of classical semantics. If they aren't, then apparently definitions aren't a very good model of how people actually decide on category membership. In fact, the "social properties" of a category would then amount to a codification of the prototype, while the "external properties" would amount to a codification of the most liberal application of the category. Thus, Vervaeke and Green would just have reiterated prototype theory under a different name.
The hypernym ordering: is-a vs. is-a-kind-of
Another strategy that Vervaeke and Green use to save the classic conception of category membership is to accuse Lakoff and others of conflating the is-a hierarchy with the is-a-kind-of hierarchy (p. 71).
If I understand this correctly, their point is that the is-a relation obtains as a matter of contingent fact (all firetrucks happen to be red), while the is-a-kind-of relation builds on essential characteristics (the purpose of a firetruck is by definition to extinguish fires).
Allegedly, this should explain why penguins can't fly even though they are birds:
For example, a styrofoam cup is a cup, and a cup is a kind of tableware. So a styrofoam cup is, thereby, a kind of tableware? Most people would say not. (p. 71; my emphasis)

I really can't see how this could fly (no pun intended). Certainly, we can block inferences as much as we like by hypothesizing two different modal operators (one being a fine-grained version of the other).
But if we really and consistently pursue that strategy, we end up killing all inferences that involve contingent factors, not just penguin-type cases. For instance, if I get 0 points on an exam (as a matter of contingent fact), then I fail (by definition)---we are indeed able to make at least some such inferences, and the theory should reflect that.
Symbolic and nonsymbolic categories
The second explicit argument that Vervaeke and Green make concerns Lakoff's example of the Dyirbal word balan, which means roughly "women, fire, and dangerous things."
Lakoff uses this example to illustrate that there are sometimes quite elaborate chains of perceived similarity between one end of a word's meaning and the other. This is not what we would expect if categories were learned by a gradual acquisition of a list of necessary and sufficient properties (which, presumably, would be given in something like conjunctive normal form).
Vervaeke and Green diminish the significance of this observation by introducing a distinction between "symbolic" and "nonsymbolic" category systems (p. 71), very similar to their Fregean distinction between "external" and "social" properties. Symbolic systems are "driven by rational constraints such as consistency," while nonsymbolic systems are a more fuzzy business used to talk about God, the Trinity, and other irrational stuff (p. 71). Such systems "glory in the existence of contradiction and unresolvable conceptual mystery" (p. 71).
Using this distinction, Vervaeke and Green then state their suspicion that balan is in fact not part of the neat logical vocabulary of Dyirbal, but only part of the messy nonsymbolic vocabulary. Their theory can be saved if it is reformulated as "Real categories---neat, logical, consistent categories---are stored and processed as propositional definitions."
It seems fairly clear that this is a garbage can argument very much like the rejection of "wrong" sentences as data in syntactic theories of the '60s. Any theory can of course be true if you get to choose on a case-by-case basis what counts as proper data.
Wednesday, September 7, 2011
Tuesday, September 6, 2011
Critiques of cognitive metaphor theory -- literature
I've been doing a superficial search for literature that criticizes cognitive metaphor theory. This is what I'll begin with:
- Vervaeke and Green: "Women, Fire, and Dangerous Theories: A Critique of Lakoff's Theory of Categorization" (1997)
- Markus Tendahl and Raymond W. Gibbs Jr.: "Complementary perspectives on metaphor: Cognitive linguistics and relevance theory" (2008).
- Gibbs: "Introspection and Cognitive Linguistics" (2006)
- Keysar and Glucksberg: "Conventional Language: How Metaphorical Is It?" (2000)
- Zlatev: "Embodiment, Language, and Mimesis," in Ziemke, Zlatev, and Frank (eds.): Body, language, and mind, Volume 1: Embodiment (2007)
- González-Márquez (ed.): Methods in Cognitive Linguistics (2006), especially Gibbs' chapter.
- Kristiansen (ed.): Cognitive Linguistics: Current Applications and Future Perspectives (2006), again, especially Gibbs and Perlman's chapter.
- Overton and Palermo (eds.): The Nature and Ontogenesis of Meaning (1994), maybe in particular chapter 5.
- The debate between Mark Johnson and Eugene Gendlin. This may include Gendlin's paper "Crossing and Dipping" (1995) but definitely Mark Johnson's chapter in Language Beyond Postmodernism (1997) and Gendlin's reply to that in the same volume.
- Sharifian, Dirven, Yu, and Niemeier "Culture and language: Looking for the 'mind' inside the body" in Culture, Body, and Language: Conceptualizations of Internal Body Organs across Cultures and Languages (edited by the article authors) might have some good examples of the non-universality of the body.
- There are two papers in the discussion section of Pragmatics & Cognition 7(2) that might be relevant: Reuven Tsur: "Lakoff's roads not taken" and Teresa Bejarano: "Prelinguistic metaphors?".
- Regarding the asymmetry of metaphors, Mark Tendahl and Ray Gibbs make a generic reference to Fauconnier and Turner: The Way We Think: Conceptual Blending and The Mind’s Hidden Complexities (Basic Books, New York, 2002). I'll have to look into that and see if it's relevant to bidirectional metaphor.
- By his own account, Markus Tendahl discusses the somewhat problematic claim that metaphors preserve inference structure (all inference structure!) in his PhD thesis. This might be relevant to my story about systematic gaps in metaphor systems.
More on Glazer and Rubinstein on debates
By the way, I computed the entire set of winning strategies that goes with the decision rule that Glazer and Rubinstein present on p. 255 of Game Theory and Pragmatics.
The rule concerns a case with five witnesses numbered 1 through 5. Debater 1 first cites one witness that supports his case, and debater 2 then cites a witness that supports her case. The decision rule then specifies the pairs of arguments after which the listener will guess that debater 2's case reflects the pool of witnesses better. Using the specific rule on p. 255, the listener does so after these pairs of arguments:
- <1,2>, <2,3>, <2,5>, <3,4>, <4,2>, <4,5>, <5,1>, <5,4>
Note that only 8 out of the 5 x 4 = 20 possible histories favour debater 2. This reflects the fact that the game is sequential, and that debater 2 moves last. She just has an information advantage.
The rule could also be specified by labeling the leaves of a tree. In that case, we should read each pair as a path from the root to a leaf, and we should label all these leaves "2" and the rest "1."
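As a sanity check, this leaf labeling can be written out in a few lines of Python (the pair set is copied from the list above; the variable names are my own):

```python
from itertools import permutations

# Pairs <debater 1's witness, debater 2's witness> after which the
# listener rules for debater 2 (copied from the list above).
WINS_FOR_2 = {(1, 2), (2, 3), (2, 5), (3, 4), (4, 2), (4, 5), (5, 1), (5, 4)}

# Each leaf of the game tree is a history <i, j> with i != j.
# Label a leaf 2 if the listener rules for debater 2 there, else 1.
leaves = {(i, j): 2 if (i, j) in WINS_FOR_2 else 1
          for i, j in permutations(range(1, 6), 2)}
```

This gives 20 leaves, 8 of which carry the label 2, matching the count above.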
Given this decision rule, we can compute the set of arguments E that are winning strategies for debater 1 in a given state s. It turns out that this function E = E(s) has the following values:
- E(<1,1,1,1,1>) = {1, 2, 3, 4, 5}
- E(<1,1,1,1,2>) = {1, 3}
- E(<1,1,1,2,1>) = {1, 2}
- E(<1,1,1,2,2>) = E(<1,1,2,1,2>) = E(<1,1,2,2,1>) = E(<1,1,2,2,2>) = {1}
- E(<1,1,2,1,1>) = {1, 4, 5}
- E(<1,2,1,1,1>) = {3, 5}
- E(<1,2,1,1,2>) = E(<2,1,1,1,2>) = E(<2,2,1,1,1>) = E(<2,2,1,1,2>) = {3}
- E(<1,2,2,1,1>) = {5}
- E(<2,1,1,1,1>) = {2, 3, 4}
- E(<2,1,1,2,1>) = {2}
- E(<2,1,2,1,1>) = {4}
As Glazer and Rubinstein note, this decision rule leads the listener to three mistakes: In the states <1,1,2,2,2> and <2,2,1,1,2>, debater 1 has a winning strategy (citing witness 1 and 3, respectively) even though a majority of the witnesses support debater 2's case; in the state <1,2,1,2,1>, he has no winning strategy even though a majority of the witnesses support his case.
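For what it's worth, the whole computation fits in a short Python sketch (the function name and the majority check are my own scaffolding; the decision rule is just the pair list above):

```python
from itertools import product

# Pairs <debater 1's witness, debater 2's witness> after which the
# listener rules for debater 2 (the rule from p. 255).
WINS_FOR_2 = {(1, 2), (2, 3), (2, 5), (3, 4), (4, 2), (4, 5), (5, 1), (5, 4)}

def winning_arguments(state):
    """E(s): witnesses debater 1 can cite so that no counter-witness of
    debater 2 makes the listener rule for debater 2. state[i-1] is 1 or 2
    according to which debater witness i supports."""
    ones = [i for i in range(1, 6) if state[i - 1] == 1]
    twos = [j for j in range(1, 6) if state[j - 1] == 2]
    return {i for i in ones
            if all((i, j) not in WINS_FOR_2 for j in twos)}

# The listener errs when the side with the witness majority is not
# the side that can force a win.
mistakes = [s for s in product((1, 2), repeat=5)
            if (s.count(1) >= 3) != bool(winning_arguments(s))]
```

Running this reproduces the values of E(s) and yields exactly the three mistake states <1,1,2,2,2>, <1,2,1,2,1>, and <2,2,1,1,2>.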
Labels: Ariel Rubinstein, debate, game theory, Jacob Glazer
Glazer and Rubinstein on debates
There's a really funny and interesting paper in Game Theory and Pragmatics about the pragmatics of debate (chapter 9, by Jacob Glazer and Ariel Rubinstein). I think their results might benefit from being reformulated both in information-theoretic language and in epistemic logic, but I'm not quite sure how.
In their model, a debate is a game played by three people: debater 1, debater 2, and a listener. The debaters can each refer to one witness to support their case, and the goal of the listener is then to guess whether a majority of the pool of possible witnesses actually supports the case of debater 1 or the case of debater 2. The task of the listener is to devise a guessing pattern such that debater 1 has a winning strategy if a majority of the witnesses support his case, and debater 2 has a winning strategy if the majority supports her case.
Maybe this can be seen as a cryptography problem? The listener doesn't know the state of the world, and the preference of the debaters is, in a sense, to keep things that way. But in cryptography, there is only an encoder and a spy, and the map from states of the world to messages must be injective. In the debating game, arguments do not need to be different in different states of the world, but there are other constraints, such as truthfulness and length. I don't know exactly how far this metaphor can be pushed.