Tuesday, July 23, 2013

Sereno, Pacht, and Rayner: "The Effect of Meaning Frequency on Processing Lexically Ambiguous Words" (1992)

This experiment measured how long people took to read target words embedded in sentences of the following kind:
  • The dinner party was proceeding smoothly when, just as Mary was serving the port, one of the guests had a heart attack. (Ambiguous word used in its low-frequency sense)
  • The dinner party was proceeding smoothly when, just as Mary was serving the soup, one of the guests had a heart attack. (Unambiguous high-frequency word)
  • The dinner party was proceeding smoothly when, just as Mary was serving the veal, one of the guests had a heart attack. (Unambiguous low-frequency word)
The idea here is that the two control words (soup and veal) are matched in frequency to the two senses of the ambiguous word port ("harbor" and "wine"). The word soup thus has approximately the same frequency as the "harbor" meaning of port, and the word veal approximately the same frequency as the "wine" meaning.
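To make the matching logic concrete: the design amounts to picking, for each sense of the ambiguous word, an unambiguous control word of roughly the same frequency. The following toy sketch illustrates that logic in Python. The frequency numbers and the candidate list are invented for illustration; they do not come from the paper.

  # Toy sketch of the frequency matching behind the stimulus design.
  # All numbers are invented (occurrences per million, say).
  port_meanings = {"harbor": 45.0, "wine": 4.0}   # hypothetical per-sense frequencies
  candidates = {"soup": 42.0, "veal": 3.5, "bread": 90.0, "stew": 11.0}

  def closest_control(target_freq, candidates):
      """Pick the candidate word whose frequency is closest to target_freq."""
      return min(candidates, key=lambda w: abs(candidates[w] - target_freq))

  high_control = closest_control(port_meanings["harbor"], candidates)  # "soup"
  low_control = closest_control(port_meanings["wine"], candidates)     # "veal"
  print(high_control, low_control)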

Skipping to the conclusion: it turns out that the ambiguous word (here, port) takes more time to process than either of the unambiguous words. That seems fairly natural, since a reader would not only have to retrieve the (infrequent) meaning of the word but also resolve the ambiguity, and that may take some time.

Four Stories About Comprehension

One can imagine the psychological process of reading and understanding an ambiguous snippet of text in several different ways. Sereno, Pacht, and Rayner cite four models in particular:
  1. One model claims that "only the contextually appropriate meaning is activated in the lexicon," that is, context dictatorially determines access (p. 296).
  2. Another claims that "all meanings are accessed automatically," that is, context has no influence at all (p. 296).
  3. A third model liberally accepts that "access of the alternative meanings is influenced by the frequency of each meaning and also by the context" (p. 296).
  4. Lastly, a fourth model claims that "the language processing mechanism automatically attempts to access all meanings of an ambiguous word in order of their frequency. […] Incomplete access procedures are terminated when one or more meanings that have been accessed are successfully integrated with prior context." (pp. 296–97)
I have deliberately stripped the names off these hypotheses to avoid the implication that there is some precise, complex, and quantitative machinery hidden behind them. There are, in fact, just these verbal descriptions, and that's it.

The hypotheses are neither exhaustive nor mutually exclusive. There are some textual clues that the authors regard models 3 and 4 as subspecies of model 2, but also some indications of the opposite.

The Stories and the Evidence

However, even at this level of resolution, the first hypothesis does not hold up to the evidence: if we were capable of discarding irrelevant word senses instantly, before we had even looked them up, then the first model would predict equal reading times for the ambiguous word and the low-frequency control word. This is not the case.

The remaining models may or may not do better, depending on more specific assumptions.

The authors argue that the third model is consistent with their findings, since the longer fixations on ambiguous words can be explained by "competition between the dominant meaning and the subordinate" (p. 299). However, since we are now in the business of invoking auxiliary hypotheses, I don't see why this new concept of competition could not have been invoked to defend the first theory, too.

The fourth model also passes the test, on the grounds that "the dominant sense is accessed first but is not successfully integrated because context supports the subordinate sense" (p. 299). This failed attempt at integrating word sense and context could then plausibly explain why the whole process takes longer than a simple look-up of the correct word sense.
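Since the paper offers only the verbal description quoted above, it may help to see what this story could look like as an actual procedure. The sketch below is my own guess at a formalization, not anything from the paper: the sorting by frequency, the integration predicate, and the use of the number of access attempts as a crude stand-in for reading time are all assumptions.

  # A toy formalization of the ordered-access model (model 4 above).
  def ordered_access(meanings, integrates_with_context):
      """Try the senses of a word in order of descending frequency.

      meanings: list of (sense, frequency) pairs.
      integrates_with_context: predicate standing in for whatever
      "successful integration with prior context" means.
      Returns the chosen sense and the number of access attempts.
      """
      attempts = 0
      for sense, _freq in sorted(meanings, key=lambda m: -m[1]):
          attempts += 1
          if integrates_with_context(sense):
              return sense, attempts
      return None, attempts  # no sense integrated: comprehension fails

  # "Serving the port": context supports the subordinate "wine" sense,
  # so the dominant "harbor" sense is accessed first and fails to integrate.
  print(ordered_access([("harbor", 45.0), ("wine", 4.0)], lambda s: s == "wine"))
  # -> ('wine', 2)
  # An unambiguous control like "veal" has one sense and costs one attempt.
  print(ordered_access([("calf meat", 3.5)], lambda s: True))
  # -> ('calf meat', 1)

On this toy account, the subordinate-sense use of port costs two access attempts while the frequency-matched control costs one, which at least points in the same direction as the observed reading times.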

Conclusions?

As I see it, the main lesson we can draw from this experiment is that ambiguity is costly. We can rephrase this message in terms of various informal "models" of the disambiguation process, but that doesn't really add much, to my mind. Only the models that were grotesque caricatures anyway can be excluded with any confidence.

But perhaps the first of the models cited above — the authors identify it as the "selective access model" but do not pin it to anybody in particular — had some problems to begin with. Specifically, how exactly would a person be able to discard the meaning of a word before retrieving it from memory? Without recall, there can't be any conflict, and hence no discarding.

I thus think that arguing against the selective access model is a bit like tilting at windmills. On the most rigid reading of the slogan "only the contextually appropriate meaning is activated in the lexicon," the brain would literally need to be capable of time travel; on a more charitable reading, the theory is not necessarily inconsistent with the data at hand.
