Thursday, June 20, 2013

Meier, Robinson, and Clore: "Why Good Guys Wear White" (2004)

OK, now I'm officially confused.

In their 2004 study "Why the Sunny Side Is Up" (discussed below), Meier and Robinson found that affective cues prime spatial position, but not the other way around:

TARGET (affect) →  SOURCE (position)

In this 2004 study — which contains the same references, the same statistical methods, the same ideas — Meier, Robinson, and Clore show that color primes affect, but not the other way around:

TARGET (affect)  ←  SOURCE (color)

In other words, they show two completely opposite things, and no one seems to have pointed this out.

Experimental Design

In the 2004 paper, the set-up is the following:

They show a number of words on a computer screen. Some are shown in a dark font, some in a bright one; some words are positive, some negative. For instance, you could be shown the word candy in a dark gray font, or the word fickle in a bright gray font.

In the first batch of experiments, the subjects were told to determine the valence of the words as quickly as possible; this turns out to be easier when the color and the valence are congruent (e.g., the word sincere in a bright gray color).

In the second part, the subjects were told to determine the color of the font. This is easy, and word valence didn't make a difference.

Here's my own little mock-up of such test materials, so you can try it out yourself — say as quickly as possible whether the following words are positive or negative (the shade each word was printed in is given in brackets):

gentle [bright]

bitter [dark]

cruel [bright]

trust [dark]

If you're like the average person in this study, then you're about 2% or 3% slower at making the judgment about the last two items (although you might be much faster at recognizing positive words in general).

As the authors say, this effect can be compared to that of the Stroop experiment, although the conflict here is indirect — there is no literal contradiction between trust and "black" (p. 86).

Order and Association

The authors interpret the asymmetry between color and word valence in terms of "a race model in which physical cues … are available before stimulus valence is" (p. 85).

If this is true, then the order is: First, you notice the white letters; then, the whiteness primes your valence judgment; and then, you finish reading the word. This would then mean that the appreciation of the color and its associated valence is "obligatory," and that people can't ignore it while making semantic judgments (p. 86).

This, however, does not show that we think about the abstract concept in terms of the concrete: If anything, it shows the opposite.

It thus seems that cognitive metaphor theorists can choose between two empirical results when they dig through the psychological literature — one that supports the idea of understanding-in-terms-of, and another that supports a "deep" metaphorical comprehension process. This would be wonderful if it weren't for the fact that each of the two papers forcefully argues against the point made in the other.

Daston: Classical Probability in the Enlightenment (1988)

I've been wanting to read Lorraine Daston's book on the early history of probability for about eight years now, but there was always something else that seemed more important. Recently, however, I finally got around to checking it out of the library along with Abraham De Moivre's The Doctrine of Chances (1718).

There are two things that I have found interesting about the narrative of the book so far: First, Daston's claim that, despite appearances, gambling in fact was not the real engine behind the emergence of probability theory; and second, her discussion of the controversy surrounding the proper definition of expectation.

Gambling or Law?

With respect to the claim about gambling, she cites contract law (and to some extent, criminal law) as a plausible interpretation of what the people of the Enlightenment were really talking about when they talked about "gambling." Specifically, she points to the problem of determining a fair price for an aleatory contract (such as an insurance policy) as an important new question that gave meaning to the young subject.

As the name suggests, contracts of this kind were from the outset conceptualized as a kind of gamble. All the dice-throwing in the classical texts about probability may thus have been a kind of crypto-legal pedagogy (just like a math problem about dividing a cake might really aim at teaching you something about accounting).

Reasonable Expectation

With respect to the second point, she discusses at length Daniel Bernoulli's solution to the St. Petersburg problem: Assume that the marginal utility of money decreases in inverse proportion to your capital.

This leads to a logarithmic utility function and thus aligns Bernoulli's solution with Kelly gambling. It corresponds to the intuition that bankruptcy is qualitatively different from other levels of bankroll, and that one should not adopt a gambling strategy that assigns a positive (let alone a high) probability to going bankrupt.
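To spell the connection out (my own compression of the standard account, not Daston's exposition): if the utility of one extra ducat is inversely proportional to the capital x you already own, then

$$\frac{du}{dx} = \frac{c}{x} \quad\Longrightarrow\quad u(x) = c\,\ln x + \text{const},$$

and for the St. Petersburg lottery, which pays 2^k ducats with probability 2^(-k), the divergent expected payoff turns into the finite expected utility

$$\sum_{k=1}^{\infty} 2^{-k}\,\ln\!\left(2^{k}\right) \;=\; \ln 2 \sum_{k=1}^{\infty} k\,2^{-k} \;=\; 2\ln 2 \;=\; \ln 4$$

(ignoring initial capital, a simplification Bernoulli himself did not make). The fair price thus collapses from infinity to a modest 4 ducats.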

A similar and much simpler example of the same ambiguity comes up when comparing extreme gains that have very low probability to moderately large gains that have substantial probability. Gambling a lot of money on lotteries of the first kind tends to lead to bankruptcy with quite high probability.

Wilson and Gibbs: "Real and Imagined Body Movement Primes Metaphor Comprehension" (2007)

One of the recurring problems with experimental tests of cognitive metaphor theory is that it is exceedingly difficult to disentangle lexical priming from semantic priming. For instance, the notion of dragging is not only semantically related to "boredom" and "delay," but also discursively related to it: About 4 out of the 10 results for dragging out in the BNC are time-related metaphors.

In an attempt to circumvent this problem, Nicole L. Wilson and Ray Gibbs have performed two experiments in which a non-verbal movement served as the prime for a reading task. Specifically, they had their subjects learn to make certain movements like a stretching motion on cue, and then had them read small phrases like stretch for understanding. It turns out that performing these actions decreases reading time.

Are the Clichés Really Clichés?

The phrases they used were the following (p. 725):
  • Stamp out fear
  • Push the argument
  • Swallow your pride
  • Sniff out the truth
  • Spit out the facts
  • Shake off a feeling
  • Grasp a concept
  • Chew on an idea
  • Stretch for understanding
These phrases are not all equally standard. This is a bit problematic because Wilson and Gibbs explicitly use the data to argue against "phrasal lexicon" and "clichés or dead metaphors" accounts (p. 723). It would thus have been more convincing if they had used actual clichés instead of constructed and not completely natural phrases.

These differences can be quantified by counting co-occurrences. To do so, I've taken all the verb/noun pairs above and looked for cases in which they co-occur in the BNC.

For instance, I took all the forms of the verb shake (shake, shakes, shook, shaken) and paired them with all the forms of the noun feeling (feeling, feelings). I then checked whether any combination of a word form from the first list co-occurred with one from the second list up to 20 words apart, and in any order.
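In code, the procedure amounts to something like the following sketch, run here on a toy token list rather than the actual BNC (the form lists are the ones just mentioned; the corpus handling is obviously simplified):

```python
# Count verb/noun co-occurrences within a 20-word window, in either order.
# Toy example standing in for the BNC.

VERB_FORMS = {"shake", "shakes", "shook", "shaken"}
NOUN_FORMS = {"feeling", "feelings"}

def count_cooccurrences(tokens, verbs, nouns, window=20):
    """Number of (i, j) pairs with a verb form at position i and a noun
    form at position j such that they are at most `window` words apart."""
    verb_positions = [i for i, t in enumerate(tokens) if t in verbs]
    noun_positions = [j for j, t in enumerate(tokens) if t in nouns]
    return sum(1 for i in verb_positions
                 for j in noun_positions
                 if abs(i - j) <= window)

toy_corpus = "he could not shake the feeling that something was wrong".split()
print(count_cooccurrences(toy_corpus, VERB_FORMS, NOUN_FORMS))  # -> 1
```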

Compiling such counts gives the following table:

v        n              #(v,n)   #(v)    #(n)    P(n|v)   P(v|n)
grasp    concept           280    2485    8988   11.27%   3.12%
chew     idea               72    1116   31876    6.45%   0.23%
swallow  pride             112    2585    2913    4.33%   3.84%
sniff    truth              40    1165    8397    3.43%   0.48%
spit     fact               40    1371   41801    2.92%   0.10%
shake    feeling           228    9109   17559    2.50%   1.30%
stretch  understanding      40    6239    9552    0.64%   0.42%
push     argument           48   10703   12006    0.45%   0.40%
stamp    fear                8    3086   14578    0.26%   0.05%

So it turns out that we quite often grasp concepts, but we rarely if ever stamp out fear.

We should thus expect such a phrase to be experienced as much more "fresh," or alternatively, much more awkward. It would be interesting to check whether these statistics correlate in any way with the priming effect, but there's no way to do so directly, because Wilson and Gibbs do not report reading times for the individual phrases in the experiment.

Wednesday, June 19, 2013

Meier and Robinson: "Why the Sunny Side Is Up" (2004)

This Psychological Science paper by Brian P. Meier and Michael D. Robinson is a report on three experiments which collectively show that
  1. reading positive and negative words facilitates the recognition of letters that are placed high or low, respectively,
  2. but looking up and down does not facilitate the comprehension of positive and negative words.
In the terminology of Judea Pearl, we can thus say that activation of GOOD causes activation of UP, and BAD of DOWN.


Another result that came out of the experiments was that people recognize positive words faster than negative words, regardless of condition. Meier and Robinson do not discuss this effect explicitly, and I'm also not sure what to make of it.

From GOOD to UP, Not from UP to GOOD

The paper opens with a discussion of an experiment which shows that people are faster at judging whether a word is "positive" or "negative" if it's placed in a position on the computer screen which is congruent with the meaning. When the word polite is flashed at the top of the screen, for instance, you're faster at pressing the button corresponding to "positive" than when it flashes at the bottom.

This shows a correlation, but does not disentangle the question of causation. To do this, the two variables need to be manipulated independently of each other. This was achieved in two follow-up experiments which placed a spatial judgment task before or after a valence judgment task.

In the first of these follow-up experiments, the GOOD-to-UP direction was set up by a design looking roughly as follows:


That is, first the subject made a judgment about the valence of a word, and then about the identity of a letter which was presented in either a high or a low position. The last screen showed the word INCORRECT if the subject failed at the second task (as I've assumed in this example). This study found a significant facilitation effect.

The second follow-up experiment set up the UP-to-GOOD direction in a design that may be pictured thus:
 

Here, the subject first had to identify where a spatial cue was, and then to judge the valence of a word. In this task, they found no facilitation effect.

Is This Consistent With Cognitive Metaphor Theory?

In their brief conclusion, Meier and Robinson write:
These findings suggest that, when making evaluations, people automatically assume that objects that are high in visual space are good, whereas objects that are low in visual space are bad. (p. 247)
But there's an issue buried here which is usually evaded in cognitive metaphor theory: In which direction do the connections between source and target domain run? Meier and Robinson's results as well as the notion of "understanding in terms of" suggest that the connections run from target domain to source domain.

This analysis, however, leaves cognitive metaphor theory with little explanatory value with respect to word semantics. Consider for instance this paradigmatic instance of a "conceptual" metaphor:
  • I'm feeling up.
According to the standard analysis, I understand that the word up here means "good" because the domain of spatial position translates into the domain of moods.

But this is exactly the opposite of the association we need. Meier and Robinson show that we have a completely literal understanding of spatial position, and that looking upwards is not in itself associated with feeling good:
People can use their senses to determine whether an object is up or down, white or black. There is no need to borrow metaphor to achieve an understanding of vertical position. Because of these considerations, we view it as unlikely that physical cues, in the absence of an evaluative context, activate evaluations. (p. 245)
There is thus no neurological reason why the word up should activate the meaning "good."

While Meier and Robinson thus offer excellent evidence in favor of a system of analogies in thought, it is not obvious that these should be responsible for the way we understand metaphorical language—only that they can be seen during our comprehension of literal language. To my mind, this creates a huge problem for cognitive metaphor theory, in particular in its more recent brain-talky incarnation.

Rapp et al.: "Neural correlates of metaphor processing" (2004)

This study used fMRI to investigate which parts of the brain are involved in the processing of metaphors of the form A is B.

Contrary to expectation, it turned out that all the extra activity that the metaphors induced (compared to the literal control sentences) was in the left hemisphere. A previous PET study had found activation in the right hemisphere, but this may have been due to the more complex syntax they used in their materials.

The authors assert that they used 120 sentences, but they do not include their materials in the paper. This obviously makes the paper much less useful from the point of view of linguistics.

The only two sentences they cite are the following (p. 397):
  • Die Worte des Liebhabers sind Harfenklänge ("The lover's words are harp sounds"; metaphorical)
  • Die Worte des Liebhabers sind Lügen ("The lover's words are lies"; literal)
It's difficult to capture just exactly how poetically tone-deaf this sounds, but think A lover's voice is the sweetest sound, and you're roughly in the right ballpark.

At any rate, we now know that you need to pump slightly more blood to the left-hand side of your brain if I call your lover's words sweet music than if I call them lies. Interpret at your own risk.

Wednesday, June 12, 2013

Kuperberg: "Neural mechanisms of language comprehension" (2006)

Suppose I put you in front of a computer screen that flashes a word every second:
  • The … cheese … was … eaten … by …
While you're reading, I record the electrical activity off the scalp of your head with an EEG scanner. Possibly, I also ask you to do something with the sentence when you're done reading, like judge its plausibility, or answer a question about it.

Once you've gotten used to this task, I perform a manipulation: Without warning, I insert a weird or unexpected word:
  • The … cheese … was … eaten … by … the … cat
When this happens, you'll obviously have to work harder than usual to make sense of this unexpected stimulus. This means more brain activity, and more brain activity means more electrical charge.

Typical Responses: The N400 and the P600

There are, in particular, two specific ways that the electrical activity at your scalp changes measurably when this happens: You may exhibit an excess of negative electrical potential about 400 milliseconds after the unexpected word, or an excess of positive potential about 600 milliseconds after it.

These two events are called the N400 and the P600. They can occur together, separately, or not at all.

The N400 was first described in 1980 by Martha Kutas and Steven A. Hillyard. They explained it as a kind of "second look" effect and found that it was provoked by "semantic incongruity" (p. 204).


The P600 was described in 1992 by Lee Osterhout and Phillip J. Holcomb. They were explicitly interested in teasing apart syntactic from semantic effects, and they found that the P600 appeared specifically after syntactic anomalies like The librarian trusted to buy the books.

This was great news for the Chomskyan theory of language: At last, solid evidence that semantics and syntax are independent. And what could be more convincing to a linguist than "brain stuff"?


Delineation Problems

But of course, the story is a bit more complicated than that. In a wonderful paper from 2006, Gina R. Kuperberg reviews the large and growing pool of experimental findings related to the N400 and the P600.

Her conclusion is that the two electrical responses are the trace of two different processes, "one that links incoming semantic information with existing information stored in semantic memory, and another that combines relationships between people, objects and actions to construct new meaning" (p. 45).

If we want to evaluate a synthesis like that, we need to keep two separate issues apart: First, can we predict when the two different waveforms will come up? And second, if we can predict this, by what cues?

This dichotomy reflects the familiar problem of, on one hand, assessing whether people have stable intuitions about grammaticality, and, on the other hand, trying to articulate those intuitions in an adequate grammatical formalism. We can get both of these tasks wrong, independently of each other.

So here's what I want to do: I'll just give you a huge list with examples, and then you'll get a sense of where the N400 shows up, and where the P600 shows up. If it looks as if there is a system to this, then we can throw some grammatical vocabulary at this system; but first we need to get a sense of what the system is.

A Bunch of Examples

This section contains all of the examples that Kuperberg cites in her review. They come from a wide range of different sources, so I can't appropriately cite every one of them. I'll just repeat her examples without attribution.

The N400 was originally described as a reaction to semantic anomalies. It shows up in contrastive pairs like the following:
  • It was his first day at work (baseline)
  • He spread the warm bread with socks (strong N400)
It shows up strongly when you read sentences that are completely unambiguous as to what they are saying, but just say something really weird:
  • The honey is being murdered (strong N400)
It's also visible when words are semantically permissible, but less expected:
  • He mailed the letter without a stamp (baseline)
  • He mailed the letter without a thought (moderate N400)
In fact, the N400 also appears when a sentence expresses completely legitimate assertions which just happen to be inconsistent with experience:
  • Dutch trains are white (strong N400; they are in fact yellow)
The P600, on the other hand, was originally described as sensitive to syntactic violations. This conclusion was based on contrasts like the following:
  • The broker hoped to sell the stock (baseline)
  • The broker persuaded to sell the stock (strong P600)
Similarly, we find contrasts such as these:
  • The doctor believed the patient was lying (baseline)
  • The doctor charged the patient was lying (strong P600)
This morphosyntactic account of the causes of the P600 is also consistent with the fact that it responds to grammatical incongruence and weird word orders:
  • The spoiled child throw the toys on the floor (strong P600)
  • The expensive very tulip (strong P600)
  • Jennifer rode a gray huge elephant (P600; compare huge gray)
Somewhat strangely, though, the predictability of the word that carries the incongruence can affect how strong the P600 effect is:
  • Sie bereist den Land … ("She travels around the country"; strong P600; Land is expected, but the article should be das)
  • Sie befährt den Land … ("She drives across the country"; milder P600)
The P600 can also be provoked by sentences in which the subject and object seem to be swapped or replaced by a wrong word:
  • Every morning at breakfast the eggs would eat … (P600)
  • Every morning at breakfast the eggs would plant … (P600)
This contrasts with the N400 effect that is visible when the word is merely unexpected:
  • Every morning at breakfast the boys would plant … (N400)
A similar example is the following:
  • The hearty meal was devoured … (baseline)
  • The hearty meal was devouring … (P600)
  • The dusty tabletops were devouring … (N400)
Or, again:
  • Tyler cancelled the subscription (baseline)
  • Tyler cancelled the birthday (N400)
  • Tyler cancelled the tongue (N400 + P600)
This also has the consequence that when a cat flees a mouse, or when a javelin throws an athlete, you see a strong P600 effect rather than an N400:
  • De kat die voor de muizen vluchtte … ("The cat that fled from the mice"; P600)
  • De speer heeft de athleten geworpen … ("The javelin threw the athletes"; P600)
However, if the javelin summarizes the athletes, you get both a P600 and an N400 effect:
  • De speer heeft de athleten opgesomd … ("The javelin summarized the athletes"; P600 + N400)
So it seems that when there is some sort of normal relationship between verb and object, but the sentence expresses the wrong one, both effects occur simultaneously:
  • The trees that in the park played … (P600 + N400)
  • The apple that in the tree climbed … (P600 + N400)
This doesn't depend on the distinction between subject and object, as can be seen by using a passive alternation:
  • To make good documentaries cameras must interview … (P600)
  • To make good documentaries cameras must be interviewed … (P600)
Another example of this contrast comes up if you let an elephant do various things to a tree, such as toppling it, pruning it, or spoiling it like a child:
  • … dat de olifanten de bomen omduwden … ("… that the elephants pushed over the trees"; baseline)
  • … dat de olifanten de bomen snoeiden … ("… that the elephants pruned the trees"; P600)
  • … dat de olifanten de bomen verwenden … ("… that the elephants pampered the trees"; N400)
When a detective "strips the paint off" a banker instead of wiretapping him, both effects also occur:
  • … dass der Kommissar den Banker abhörte … ("… that the detective wiretapped the banker"; baseline)
  • … dass der Kommissar den Banker abbeizte … ("… that the detective stripped the paint off the banker"; N400 + P600)
Also, when the verb understood gets an inanimate noun as its agent, a P600 effect is visible:
  • At long last, the man's pain was understood by the doctor (baseline)
  • At long last, the man's pain was understood by the hypochondriac (weak N400)
  • At long last, the man's pain was understood by the violinist (N400)
  • At long last, the man's pain was understood by the medicine (strong N400 + P600)
  • At long last, the man's pain was understood by the pens (strong N400 + P600)
A very nice example that also brings out the nature of the P600 waveform is the following:
  • The novelist that the movie inspired … (P600)
This sentence is, strictly speaking, perfectly grammatical: A movie can inspire a novelist. However, you are much more likely to hear someone talk about a novel that inspired a movie, and something thus seems to have gone wrong with the sentence.

Interestingly, context can also heavily influence whether something counts as bizarre or as scrambled:
  • [In a story about traveling] … the woman told the suitcase … (P600)
  • [In a story about something else] … the woman told the suitcase … (N400)
Misspellings also seem to trigger a P600 effect if the context offers an obvious candidate for the correct word:
  • In that library the pupils borrow bouks … (P600)
  • The pillows were stuffed with bouks … (no P600)
I don't know whether such examples also cause an N400 effect, but presumably they do.

Sense-Making and Decoding

As may be apparent from the way I presented these examples, I'm not quite comfortable with the way that both the traditional accounts and Kuperberg's paper talk about "what the brain does" and "what the P600 picks up on." I think that we can largely make sense of the N400 and the P600 waveforms in terms of the kinds of repair practices the sentences suggest.

More precisely, you can react to an unexpected sentence in two ways: Either you can suspect that it was scrambled or otherwise corrupted by noise, or you can believe that it came through uncorrupted, but just expresses a really weird idea. You might be pushed towards the first hypothesis if there is a really obvious nearby expression that makes much more sense, and towards the second if there isn't.

Looking at it this way explains the differences between the two effects quite accurately, I think, and it does so essentially without invoking any grammatical notions. Instead, we can think of comprehension as a kind of Bayesian decoding process along the following lines:
  1. Recover the intended message from the received codeword.
  2. Find a reasonable interpretation of the intended message.
When the received codeword corresponds unproblematically to a message, the first step is fast, and we can proceed directly to the interpretation within the first 500 milliseconds.

If you then afterwards find that the intended message is really weird, you can either go back and check whether the codeword really wasn't corrupted, or you can just work harder trying to interpret it. The first response will yield a P600 effect, and the second an N400.
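To make this concrete, here is a toy sketch of such a decoder. Everything in it — the candidate messages, their prior probabilities, and the cost of positing a one-word corruption — is invented for illustration and has no connection to Kuperberg's actual data:

```python
# A toy noisy-channel decoder illustrating the repair-vs-reinterpret choice.
# All numbers are made up.

import math

# A tiny "language model": prior plausibility of candidate messages.
PRIOR = {
    "the eggs would eat": 1e-9,   # grammatical but bizarre message
    "the boys would eat": 1e-3,   # plausible nearby message
}

def channel_logprob(received, candidate, edit_cost=6.0):
    """Log-probability that `candidate` was corrupted into `received`;
    each word substitution costs `edit_cost` nats (an arbitrary choice)."""
    edits = sum(a != b for a, b in zip(received.split(), candidate.split()))
    return -edit_cost * edits

def decode(received):
    # Posterior score of each message: log P(message) + log P(received | message)
    scores = {msg: math.log(prior) + channel_logprob(received, msg)
              for msg, prior in PRIOR.items()}
    best = max(scores, key=scores.get)
    if best != received:
        return f"repair to '{best}' -- reanalysis, P600-like"
    return "accept literally, work harder on interpretation -- N400-like"

print(decode("the eggs would eat"))
```

On this toy model, the bizarre sentence gets "repaired" because a plausible message is only one cheap edit away; if no such neighbor existed, the literal reading would win the race and the remaining work would be interpretive.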

An advantage of this story is also that it accounts for the strange fact that "semantics" should be processed before "syntax." The order of the N400 and the P600 may be due to the fact that reconstructing a likely message (like so many backwards-reasoning tasks) is much more computationally expensive than interpreting one. Kuperberg also hints at this possibility by attributing the P600 to "combinatory processing."

Of course, I also like this story because it doesn't drive a wedge in between syntax and semantics when there doesn't have to be one. But you can disagree with me on that.

Monday, June 10, 2013

Mohanty: Lattice Path Counting and Applications (1979)

This book contains quite a lot of interesting material on the combinatorics of paths through two-dimensional grids.

Unfortunately, it's also quite stingy with the explanations, and way too many important things are just brushed off as simple, trivial, obvious, and easy. This makes it quite difficult to use if you're not already familiar with the subject.

The Ballot Problem and Path Counting

One of the central problems that kicked off the field around the turn of the last century was the ballot problem. In one version, this can be formulated as follows:
If candidates A and B have received n and m votes in an election, respectively, what is the probability that the losing candidate will lead at some point during the ballot counting process?
To answer this question, we have to come up with a way to count all the possible ways in which the ballots can be sorted, and then count the subset in which the losing candidate leads the count at some point.

Both of these tasks are solved more easily when we think of the counting as a unit-step path through a rectangle of dimensions n x m, beginning at (0, 0) and ending at (n, m). Such a path will consist of n + m steps, corresponding to the fact that every ballot has to be counted exactly once.


Putting the ballots in some order thus corresponds to deciding when you want to walk upwards rather than to the right. Since there are n ballots supporting candidate A out of a grand total of n + m, the number of choices is the binomial coefficient C(n + m, n) = C(n + m, m). This correspondence is also explained in detail in the first chapter of Victor Bryant's textbook.

Conjugation and Misleading Counts

The other figure we need is more difficult to count, but the lattice path representation helps a lot. Assuming that n > m, we are looking for the number of paths which at some point touch the line x = y – 1, i.e., y = x + 1. These paths correspond to ballot counts in which the losing candidate leads by at least one vote at least once.


The crucial trick to finding the number of such paths is to define the notion of the conjugate path of a path touching the line. You obtain the conjugate of a path that touches the line y = x + 1 at a point p by replacing the section preceding p by its mirror image around y = x + 1.

This operation always transforms the starting point (0,0) into the point (–1,1), and it transforms steps up to steps to the right and vice versa. It thus defines a new path from (–1,1) to (n,m).


Because all such paths have to cross the line y = x + 1 at some point, each of them corresponds, via reflection, to a path that starts at (0,0), ends at (n,m), and touches the line. The operation of conjugation thus creates a one-to-one map between the paths from (0,0) to (n,m) which touch the line and the paths from (–1,1) to (n,m).

In this way, the second problem is thus reduced to the first, and the number of paths touching the line turns out to be C(n + 1 + m – 1, m – 1) = C(n + 1 + m – 1, n + 1). If we divide this by the total number of paths and do a bit of cancellation of factorials, we find that the proportion of ballot counts that are at some point misleading is m/(n + 1).
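Spelling out that cancellation:

$$\frac{\binom{n+m}{m-1}}{\binom{n+m}{m}} \;=\; \frac{m!\;n!}{(m-1)!\;(n+1)!} \;=\; \frac{m}{n+1}.$$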

Plugging in some of the extremes also gives the expected results: When n and m are roughly equal and very large, this proportion is very close to (and smaller than) one half; when the winning candidate leads by a very large number of votes, the proportion is very close to 0.
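As a sanity check, here is a small brute-force verification of the m/(n + 1) result for a few small elections (my own throwaway script, nothing from Mohanty):

```python
# Brute-force check of the m/(n+1) result for small elections.
# A count is "misleading" if the loser (candidate B) is strictly
# ahead at some point while the ballots are being counted.

from fractions import Fraction
from itertools import combinations

def misleading_fraction(n, m):
    """Exact fraction of the C(n+m, n) counting orders in which
    B leads at least once (assuming n > m, so A wins)."""
    total = misleading = 0
    for a_positions in combinations(range(n + m), n):
        total += 1
        a_set = set(a_positions)
        lead = 0
        for i in range(n + m):
            lead += 1 if i in a_set else -1  # +1 for an A ballot, -1 for B
            if lead < 0:                     # B strictly ahead
                misleading += 1
                break
    return Fraction(misleading, total)

for n, m in [(3, 2), (5, 3), (6, 4)]:
    print((n, m), misleading_fraction(n, m), "vs", Fraction(m, n + 1))
```

For these three cases the enumeration gives 1/2, 1/2, and 4/7, matching the formula.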

Of course, this trick in no way hinges on the fact that the line crosses the y-axis at y = 1. In fact, Mohanty's construction only discusses the general case x = y + t, using negative integral values for t (so that t = –1 recovers the line above).

A Bit of Archeology

Mohanty refers, as it seems everybody in the field does, to a series of popular mathematics papers by Howard D. Grossman. The papers were published in Scripta Mathematica between 1946 and 1954 under the title "Fun with lattice points" (O the joys of being a mathematician).

And then the troubles begin — Scripta Mathematica has gone south, and I can't find copies of it online or in any of the university libraries I have access to.

Grossman himself apparently published many articles on "popular" mathematics (which later generations could then discover were in fact "serious" math in a cheerful wrapping).

However, he doesn't seem to have a Wikipedia page, and I don't know anything about him. In the papers that I do have access to, he is simply "Howard Grossman, New York City" or, more elaborately,
Howard D Grossman
100 La Salle Street, 13 B,
New York 27, New York, U.S.A.
OK, so apparently, he lived in Morningside Gardens, New York. This points in the direction of Columbia University, but the style and direction of his papers point in the direction of high school teaching. I don't know.