Lewis' definition of "convention" (p. 78) crucially relies on the concept of common knowledge as well as rationality: For an equilibrium R to be a convention, it must be common knowledge among everyone that the majority conforms to R.
So mere behavioral adaptation without a mutual ascription of rationality doesn't count as "convention" (cf. p. 59). Every player has to think that every other player is rational and has the same knowledge as him- or herself. You can't think that you're the only one who's not a robot and still call it a "convention" according to Lewis.
He also excludes solutions to trivial coordination games (p. 78, item 5). There have to be at least two distinct equilibria available so that the fixation on one or the other becomes truly contingent. He doesn't seem to entertain the possibility that there could be something like degrees of contingency.
Wednesday, March 14, 2012
Tuesday, March 13, 2012
Lewis on bias and analogy in Convention (1969)
David K. Lewis' short book Convention: A Philosophical Study (1969) is an attempt to explain meaning in terms of equilibria in coordination games. The concepts that do most of the heavy lifting are precedent, analogy, and salience.
The Ambiguity of Precedent
Lewis is aware that no two situations are ever alike, and that agents thus have to engage in some kind of extrapolation:
Suppose not that we are given the original problem again, but rather that we are given a new coordination problem analogous somehow to the original one. Guided by whatever analogy we notice, we tend to follow precedent by trying for a coordination equilibrium in the new problem which uniquely corresponds to the one we reached before. (p. 37)

A consequence of this is ambiguity. He immediately continues:
There might be alternative analogies. If so, there is room for ambiguity about what would be following precedent and doing what we did before. Suppose that yesterday I called you on the telephone and I called back when we were cut off. We have a precedent in which I called back and a precedent—the same one—in which the original caller called back. But this time you are the original caller. No matter what I do this time, I do something analogous to what we did before. Our ambiguous precedent does not help us. (p. 37)

But that is exactly the big question: whether and when precedent decides, or even strictly determines, what a construction means in a new situation.
Finding A Relevant Precedent
We then have a two-dimensional similarity space (me/you differences and caller/receiver differences). If these dimensions have the same weight, then both analogies yield the same average fit.
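The telephone case can be put in toy numerical form. The dimensions, values, and weights below are my own illustration, not Lewis': each candidate analogy matches the precedent on exactly one of the two equally weighted dimensions, so both come out with the same average fit.

```python
# Yesterday: I was the original caller, and I was the one who called back.
PRECEDENT = {"agent": "me", "role": "caller"}

def average_fit(candidate, precedent, weights):
    """Weighted proportion of dimensions on which the candidate matches the precedent."""
    total = sum(weights.values())
    score = sum(w for dim, w in weights.items() if candidate[dim] == precedent[dim])
    return score / total

weights = {"agent": 1.0, "role": 1.0}  # equal weight on both dimensions

# Today you are the original caller. If I call back, the agent matches the
# precedent but the role does not; if you call back, it is the reverse.
i_call_back   = {"agent": "me",  "role": "receiver"}
you_call_back = {"agent": "you", "role": "caller"}

print(average_fit(i_call_back, PRECEDENT, weights))    # 0.5
print(average_fit(you_call_back, PRECEDENT, weights))  # 0.5
```

With unequal weights one analogy would win, which is one way of cashing out "degrees of contingency."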
The big question is then how convention is possible at all, if everything is similar to everything else in some respects. Some sense of immediate similarity must be picked up along the way or have been there all along.
Lewis doesn't seem to have any psychological theory of salience or of similarity between new and old situations. However, one could construct a similarity measure based on possible courses of action, so that situations are similar when their elements can be handled in a similar way.
For instance, electricity can be much like water because many of our intuitions about how it behaves are reliable. Thus source elements such as "source," "direction," "pressure," etc. can be paired with certain target elements without needing much behavioral adjustment.
Similarly, source elements such as "front" and "back" can be paired with the screen-side and the wall-side of a television in either of two ways. These pairings will on average require less behavioral adjustment than, say, pairing "head" and "feet" with screen-side and wall-side. A few candidate analogies are thus consistent with our prior knowledge, although no single analogy takes absolute priority.
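One way to make this action-based similarity measure concrete is a toy scoring function over candidate source/target pairings. The cost numbers below are invented for illustration and stand in for "required behavioral adjustment":

```python
# Hypothetical adjustment costs for pairing a source element (body/orientation
# vocabulary) with a target element (sides of a television). Low cost means
# our existing habits carry over with little modification.
adjustment_cost = {
    ("front", "screen-side"): 0.1,
    ("back",  "wall-side"):   0.1,
    ("front", "wall-side"):   0.4,
    ("back",  "screen-side"): 0.4,
    ("head",  "screen-side"): 0.9,
    ("feet",  "wall-side"):   0.9,
}

def analogy_cost(pairs):
    """Average adjustment cost of a candidate set of source/target pairings."""
    return sum(adjustment_cost[p] for p in pairs) / len(pairs)

front_back = [("front", "screen-side"), ("back", "wall-side")]
head_feet  = [("head", "screen-side"), ("feet", "wall-side")]

print(analogy_cost(front_back) < analogy_cost(head_feet))  # True
```

On this sketch, several low-cost analogies can coexist (front/back either way around), which matches the observation that no single analogy takes absolute priority.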
Friday, March 2, 2012
Clark: "Responding to Indirect Speech Acts" (1979)
I haven't read all of this (quite long) paper, but it's intriguing because it employs a quite unconventional methodology: Herbert Clark makes inferences about people's processing of indirect speech acts by looking at how they respond to them verbally, in particular whether they respond to the literal meaning, the conveyed meaning, or both.
The method is this: You decide on a stimulus question, for instance Could you tell me what time it is?; then you look up a dozen shops in the phone directory and call them; you ask them the question, and you record the exact wording of their response. Nice and simple.
The result that comes out of this exercise is that people frequently respond to both the literal and the derived meaning of a query, in that order. So for instance, you might ask someone Could you tell me what time it is? and that person might respond Yes, it's four o'clock.
Clark's conclusion from this data is that both the literal and the derived meaning of the question must enter the hearer's mind. I'm not sure whether that actually follows from the data (there are other competing explanations), but the observation that people actually say Yes is indeed worth taking seriously for a psycholinguistic theory.
Parrot Responses
One of the reasons that Clark has to doubt his own conclusion is that people in fact also respond Yes when the literal answer to the question should in fact be No. For instance, you may ask me Would you mind telling me what time it is?, and I may respond Yes, it's four o'clock. Only a very small minority responds with a No (pp. 447-48).
This of course undermines Clark's use of the data slightly, since it may imply that people only respond Yes as a matter of verbal habit or perhaps to convey a more general sense of acceptance or affirmation—and not because they actually process the literal meaning of the request.
Clark himself explains the problem away by expanding the Gricean two-stage theory into a three-stage theory: He supposes that the question is gradually broken down according to the following progression:
- Would you mind telling me what time it is?
- Will you tell me what time it is?
- Tell me what time it is!
That doesn't seem quite right—but OK, it's a theory.
A Reanalysis By Gibbs
Raymond Gibbs explains the same data by assuming that people include the Yes because it "is conventionally thought of as being polite" rather than because they interpret the question literally as well as figuratively (Poetics of Mind, p. 89).
His support for this claim comes from an experiment in which he forced subjects into a literal reading of a question of the form Can't you ... ? This turns out to be difficult, and people take longer to do this than to read the same question when it functions as an indirect request.
This data is quite dubious, both because the sentences are quite odd (Can't you be friendly?) and because the stories are quite badly written and don't unequivocally exclude an indirect request reading of the question in the so-called "literal" context.
However, Gibbs follows up with the following comment:
In general, people are biased toward the conventional interpretations of sentences even when these conventional meanings are nonliteral or figurative. Certain sentence forms, such as Can you … ? and May I … ?, conventionally seem to be used as indirect requests. Listeners' familiarity with these sentence forms, along with the context, helps them immediately comprehend the indirect meaning of these indirect requests. People may not automatically compute both the literal and indirect meanings of indirect speech acts. (Poetics of Mind, p. 91)

This seems more reasonable and in fact brings him much closer to the keyword theory of Cacciari and Tabossi (cf. Idioms (1995), chapters 2 and 11).
Conventionality and Frequency
The intuition that Gibbs has about the frequencies is not entirely unreasonable, although the story becomes a little bit more complex when we look at actual empirical frequencies. I've done a quick count based on the MICASE corpus and found the following estimates:
| Sentence form | Literal | Indirect | Unclear | Other |
|---|---|---|---|---|
| Can you …? | 17 | 17 | 12 | 9 |
| Could you …? | 12 | 23 | 13 | 18 |
| May I …? | 0 | 13 | 0 | 6 |
| Would you mind …? | 0 | 9 | 0 | 0 |
The numbers in the first two rows are based on a search in the "highly interactive" section of the corpus, and the numbers in the last two rows are based on a search in the entire corpus.
The phrase can you is so common in the corpus that I just picked 55 occurrences at random and categorized those. All other numbers are based on exhaustive search within the ranges specified above.
The category "Unclear" covers cases where both a question reading and a request reading are compatible with the phrase, for instance:
- can you remember that?
- can you cut it up so that everybody gets a piece?
- can you predict you know what it's gonna be?
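For what it's worth, the counts above can be turned into rough estimates of how strongly each form leans toward the indirect reading, if we set aside the Unclear and Other cases:

```python
# Counts from the MICASE table above: form -> (literal, indirect, unclear, other).
counts = {
    "Can you ...?":        (17, 17, 12, 9),
    "Could you ...?":      (12, 23, 13, 18),
    "May I ...?":          (0, 13, 0, 6),
    "Would you mind ...?": (0, 9, 0, 0),
}

for form, (literal, indirect, unclear, other) in counts.items():
    classified = literal + indirect  # leave out the ambiguous and unrelated cases
    share = indirect / classified
    print(f"{form:20s} indirect share: {share:.2f}")
```

By this crude measure, May I and Would you mind are all but frozen into the request function, while Can you is genuinely split, which fits the point about real ambiguity below.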
It's interesting that there is in fact a relatively large overlap between the direct and the indirect function of the sentences. These are the cases where there is an actual way out for the hearer of the request, such as Could you say anything about that? The existence of such real ambiguities is of course what motivates the use of indirect speech acts as politeness strategies in the first place.
Wednesday, September 14, 2011
Tendahl and Gibbs: "Complementary perspectives on metaphor" (2008)
Tendahl and Gibbs argue in this (excessively long) paper that cognitive metaphor theory and relevance theory have something to learn from each other.
From Relevance Theory to Cognitive Metaphor Theory
Cognitive metaphor theory already relies on a notion of "relevant" aspects of a source domain. Understanding a metaphor, like any disambiguation process, requires a hearer to identify the plausible readings of an utterance.
Relevance theory consequently treats metaphors like it treats all sentences. It hypothesizes that we produce the reading that has minimal distance to the logical form of the sentence, and that has maximal relevance given the context.
It is left relatively obscure how this mental algorithm decides in what order to examine the candidates; how it quantifies the relevance and plausibility of a reading; how it weighs those two properties; and when it finds a reading acceptable.
Occasional hints are given, like "The initial context usually consists of the proposition that has been processed most recently" (p. 1848). It never gets more specific than that, though. However, if these parameters were set, relevance theory could probably be (part of) an implementable parsing algorithm. It would differ from, say, Winograd's 1972 parser in that it uses relevance rather than truth as a measure of success.
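To make the gap concrete, here is a minimal sketch of what such an algorithm might look like once the unspecified parameters (candidate order, relevance score, acceptance threshold) are filled in. All names and numbers are placeholders of my own; none of this is prescribed by relevance theory itself.

```python
def choose_reading(candidates, relevance, threshold=0.5):
    """Examine candidate readings in order; accept the first one whose
    relevance in the current context clears the threshold."""
    for reading in candidates:  # the order itself encodes conventionality
        if relevance(reading) >= threshold:
            return reading
    return None  # no acceptable reading found

# Hypothetical example: the conventional (idiomatic) reading is tried first,
# and in an obituary-like context it is also the more relevant one.
candidates = ["idiomatic: 'he died'", "literal: 'he struck a pail'"]
scores = {"idiomatic: 'he died'": 0.8, "literal: 'he struck a pail'": 0.2}

print(choose_reading(candidates, scores.get))  # idiomatic: 'he died'
```

Putting the conventional reading early in the candidate list is one way to cash out the "early candidates" idea discussed below: dead metaphors win by default, but a context that deflates their relevance score can still push the search on to the literal reading.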
A nice facet of this argument is that a relevance-theoretical account of metaphor doesn't have to treat all metaphors as dead, or all as live. It might be that some metaphors (e.g. "kick the bucket") simply contribute more to communication when they're processed as lexical items than when they're processed "deeply" (p. 1851).
Put in another way, relevance theory can pack the concept of conventionality into the concept of "early candidates." Since both speaker and hearer know this, they can use that common knowledge to optimize communication, i.e., they can choose to be conservative.
From Cognitive Metaphor Theory to Relevance Theory
Conversely, Tendahl and Gibbs argue that relevance theory can't quite cope with metaphors without help from the cognitive account (pp. 1847-48).
If I understand their point correctly, it is that the internal search algorithm needs conceptual schemas like MIND AS CONTAINER and HEAD FOR MIND in order to arrive at the proper reading of He's full of ideas within reasonable time. In other words, if we don't have image schemas (or conceptual maps or conceptual blends or whatever), then our production of candidate readings will not be sufficiently biased or guided.
Plus or minus the cognitive language, this seems fair. I would probably suggest that the effect has more to do with pattern recognition than with searching through a semantic space. But still, the observation that we need skewed priors seems valid.
"Science says"
There's a really annoying aspect of Tendahl and Gibbs' paper that I need to comment upon. Consider these quotes:
[C]ognitive linguistic research has argued that many idioms have specific figurative meanings that are partly motivated by people’s active metaphorical knowledge. (p. 1849)
[T]here is a significant body of work suggesting that most idioms are not understood as dead metaphors, and have meanings that are understood in relation to active conceptual metaphors. (p. 1850)
Many studies have shown that conceptual metaphors can be active in the online interpretation of utterances and in the creation of meaning. (p. 1853)

None of these examples are followed by a reference, nor by an argument. Imagine the forest of skeptical tags that such language would solicit if I wrote like that on a Wikipedia page.
What's worse, the actual content of the claims is, to my knowledge, wrong: there is at least some evidence to that effect.