Monday, October 29, 2012

Horn and Kato: Introduction to Negation and Polarity (2000)

My "negative polarity" reading list currently includes
So far, I've read a chunk of Ladusaw's thesis and the introduction to Negation and Polarity.

The Dull Edge of Negation

Horn and Kato quote an interesting observation by Otto Jespersen (1917) about the historical trajectory of negation marking:
The history of negative expressions in various languages makes us witness the following curious fluctuation: the original negative adverb is first weakened, then found insufficient and therefore strengthened, generally through some additional word, and this in its turn may be felt as the negative proper and may then in course of time be subject to the same development as the original word. (Jespersen 1917, p. 4, quoted by Horn and Kato on p. 3)
The pragmatic choice of words is thus always in a kind of arms race with itself, because pragmatic usage feeds back into semantics.

We see the same phenomenon with curses, politeness markers, and slang words – all of which regularly have to be discarded because their original force wears off. A similar thing happens with taboo concepts like disease, stupidity, or madness, which regularly have to be renamed because the last generation of terms has lost its neutral and clinical value.

The Historical Emergence of Negative Polarity

With respect to negation, Horn and Kato dub this phenomenon "Jespersen's cycle" and claim that it "plays a central role in the development of negative polarity and negative concord" (p. 3).

I am not quite sure how they imagine the mechanics of this development look, and neither of their contributions to the volume seems to focus specifically on this (etymological) question. However, somebody at the ACLC recently told me that it is only by recent convention that the Dutch word hoeven ("need") has become ungrammatical in positive contexts, and this seems to support their claim.

Another case that might support this claim is the existence of very obviously conventional negative polarity items like lift a finger, hurt a fly, and so on. As with many insults and politeness markers, these were presumably scalar implicatures before they were turned into conventional lexical items.

Notice also the syntactic parallel between such cases and the traditional examples of negative polarity:
  • Would you mind helping me?
  • *You would mind helping me, wouldn't you?
  • You wouldn't mind helping me, would you?
compared to
  • Do you want anything?
  • *You want anything, don't you?
  • You don't want anything, do you?
I don't know how far this analogy could be stretched, but there does seem to be something buried here.

Gunkel: The Legends of Genesis (1901)

If I understand this correctly, the origin of this text is the following: In 1901, Hermann Gunkel published a book simply entitled Genesis, often referred to as his "commentary" on the Book of Genesis.

However, as far as I could tell from a quick skim through the German original, it is in fact a translation rather than a commentary. I don't know whether Gunkel produced the translation himself.

It is, however, a very heavily annotated translation. Almost every single line of the book is equipped with a footnote, and the footnotes frequently take up more than half of the page. So perhaps it isn't entirely unfair to consider it a work of interpretation in its own right after all.

At any rate, this new annotated translation was prefaced by a quite substantial essay, and this essay was translated into English as The Legends of Genesis. Unlike the German original, this text could be checked out of the library here in Amsterdam, and I've read most of it by now.

The Framing of Text Genres

Gunkel is one of those dead white men whom everybody seems to cite, but no one seems to read (a bit like Durkheim in sociology). In Hebrew Bible scholarship, his name is strongly connected with the concept of "Sitz im Leben," the situation in which a particular genre of poem or narrative was recited.

Actually, I think he only used the exact phrase "Sitz im Leben" two or three times in his life, but it is not entirely inaccurate to say that his interest in the topic went far beyond the phrase itself. His preface to the translation of Genesis is rife with observations about the possible real-life uses and contexts of the various genres we find in the Hebrew scriptures.

This is most obvious in the chapter on "the literary form of the legends" (pp. 37-87). This is the part of the essay in which he most intensely speculates about the social function that the stories must have had in the everyday life of the ancient Hebrews:
Accordingly, we should attempt in considering Genesis to realise first of all the form of its contents when they existed as oral tradition. This point of view has been ignored altogether too much hitherto, and investigators have instead treated the legendary books too much as "books." If we desire to understand the legends better we must recall to view the situations in which the legends were recited. […] But the common situation which we have to suppose is this: In the leisure of a winter evening the family sits about the hearth; the grown people, but more especially the children, listen intently to the beautiful old stories of the dawn of the world, which they have heard so often yet never tire of hearing repeated. (pp. 40-41)
An interesting piece of advice on exegesis, and incidentally also the clearest possible illustration of 19th century family norms that one could imagine.

Nevertheless, Gunkel's formal and contextual approach to the scriptures must have stood in quite stark contrast to the alternatives of his time. In a sense, his insistence on seeing meaning in context can be seen as a precursor of the archaeology of knowledge and other strongly textual techniques from later in the 20th century.

Wednesday, October 24, 2012

Deborah Cameron: "Men are from Earth, Women are from Earth" (2003)

The annual Feminist Lecture at Leeds University was delivered by Deborah Cameron. It is a very nice piece of intellectual history which brings together many of the themes she has been writing about elsewhere.

It does not refer explicitly to Foucault (ever, I think), but reading her lecture in parallel with The History of Sexuality makes very good sense. Her acute sensitivity to the subtle changes of valorization in the quasi-scientific literature on gender differences is exactly the kind of archaeology of knowledge that Foucault was engaged in, and just as good.

The Caveman Model

The topic of the lecture is the change in background values that writings about gender and language underwent between the 1970s and the 1990s. According to Cameron, who has been both an observer of and a participant in these discussions throughout the period, our culture has changed from seeing "female" speech strategies as deficient to seeing "male" speech strategies as such.

Specifically, while the differing speech styles associated with the genders were attributed in the 1970s to the subdued and unassertive nature of women, in the 1990s the difference was attributed instead to the emotional and social ineptitude of men. An example:
In a 1993 Ofsted report called Boys and English, the authors include the following description of the differences they observed in the behaviour of boys and girls during group discussion sessions. '[Boys] were more likely to interrupt one another, to argue openly and to voice strong opinions. They were also less likely to listen carefully to and build upon one another's contributions.' (p. 135)
What is striking about this example, if you have followed debates about education and gender over a longer period of time, is that the actual descriptions of boys' behaviour could just as easily have been published in the 1970s. The behaviour itself is completely unchanged. But the evaluation of it has changed significantly: the boys' ways of conducting a discussion are represented here as an obstacle to their educational progress. In the 1970s or even the mid-1980s, the same observations would probably have been framed within a feminist 'dominance' model focusing on girls' lack of assertiveness as an obstacle to success. Rather than praising girls for their sympathetic listening, such a report might have noted, regretfully, that 'girls were more likely to listen than to speak. They were also less likely to disagree with others or show a strong commitment to their own opinions.' Our hypothetical 1970s author would undoubtedly have presented girls as victims rather than villains, but her phrasing would have made clear she saw their unassertive linguistic strategies as a problem. Yet today these same strategies are held up as the ideal for boys to emulate. (p. 136)
Cameron goes on to state that this ideal for communication "can be traced pretty directly to the clinical practice of psychotherapy" (p. 139). Thus:
In my own view, then, the current belief in female verbal superiority does not reflect the cultural ascendance of feminism so much as the cultural pervasiveness and influence of therapy, whose definition of good communication happens to include some key features of what is widely perceived as a typically female speech style. (p. 140)
In a less convincing section, she proposes a quite speculative explanation of the emergence of this therapeutic ideal, citing globalization and related socio-economic changes (pp. 140–42).

Subtexts and Ideals

More important than the material causes of the change are its effects. As she says about the valorization of "female" speech strategies,
this change has little to do with feminism, and does little or nothing to advance the interests of women. On the contrary, what looks on the surface like anti-male discourse […] is more fundamentally an anti-feminist discourse. (p. 134)
The reason for this is, as always, that praising the delicate sensibilities of women can be a pretext for keeping them in place. In her words,
the slogan 'different but equal' is always a lie: when difference becomes naturalized, inequality becomes institutionalized. (p. 144)
This comment is itself the conclusion of a short discussion of Simon Baron-Cohen's book The Essential Difference (2003), which claims that men are evolutionarily selected to be "systematizers," while women have evolved to be "empathizers."

Although Baron-Cohen is careful to disavow any overt political commitment behind this idea, he still eventually "gives the game away" (in Cameron's words, p. 144) when he draws the political conclusion from his evolutionary claims:
People with the female brain make the most wonderful counsellors, primary school teachers, nurses, carers, therapists, social workers, mediators, group facilitators or personnel staff […] People with the male brain make the most wonderful scientists, engineers, mechanics, technicians, musicians, architects, electricians, plumbers, taxonomists, catalogists, bankers, toolmakers, programmers or even lawyers. (Baron-Cohen 2003, p. 287)
So many things could be said about this list, but Cameron says it nicely:
If this is really the cutting edge of twenty-first-century science, 1970s school careers advisers were clearly ahead of their time. Creative professions like music and architecture are off-limits to those who possess a female brain; so is anything to do with science, numbers or classification (though mysteriously a lot of librarians seem to be women); and so too are the high-paying craft occupations like plumbing. Female brains are better suited to occupations like nursing and primary school teaching, which apparently do not involve any systematizing, but only the ability to empathize – and of course, to live on a much smaller salary than a plumber or an engineer. (p. 144)

Thursday, October 11, 2012

Bob van Tiel: "Embedded scalars and typicality" (2012)

Bob van Tiel has, as far as I understand, been arguing for a while that the various empirical problems surrounding scalar implicatures can be explained in terms of typicality. So the strangeness of saying that I ate some of the apples if I in fact ate all of them should be compared to the strangeness of saying there's a bird in the garden if there is in fact an ostrich in the garden.

This argument is nicely and succinctly presented in a manuscript archived at the online repository The Semantics Archive. It contains a fair amount of nice empirical data.

A Bibliography

First of all, the paper contains pointers to most of the interesting recent literature on the subject. Let me just liberally snip out a handful of good references that I either have read or should read:
This list should probably also include the following, which I still have to read:

Quantification According to van Tiel

In sections 6 and 8 of the paper, van Tiel suggests a very particular semantics for the use of some and all, both extracted from "goodness" ratings by 30 American subjects regarding the sentences All the circles are black and Some of the circles are black.

Semantics for All

His suggestion for the semantics of all is, loosely speaking, that the truth value V("all x are F") should be computed as the harmonic mean of the truth values V("x1 is F"), V("x2 is F"), etc.

This obviously only makes sense for finite sets, but more strangely, it does not make sense if the truth value 0 occurs anywhere (since the harmonic mean involves a division). Consequently, he has to assume that V("x is black") = .1 when x is white, and = .9 when x is black.

While this is not completely unreasonable, it does introduce yet another degree of freedom into his statistical fit (remember, he already chose the aggregation function himself), and it should be a cause for some caution when interpreting his significance levels.
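To see how this aggregation behaves, here is a minimal Python sketch of the harmonic-mean proposal. The .1/.9 clamping follows the paper; everything else is my own illustration, not van Tiel's code:

```python
from statistics import harmonic_mean

# Atomic truth values, clamped to .1/.9 as van Tiel assumes, so that the
# harmonic mean never has to divide by zero.
def atomic_value(is_black: bool) -> float:
    return 0.9 if is_black else 0.1

def truth_all(circles: list) -> float:
    """V("all circles are black") as the harmonic mean of the atomic values."""
    return harmonic_mean([atomic_value(c) for c in circles])

# Ten black circles: the value stays at .9.
print(round(truth_all([True] * 10), 3))            # 0.9
# A single white circle drags the harmonic mean down sharply.
print(round(truth_all([True] * 9 + [False]), 3))   # 0.5
```

Note how a single .1 pulls the value to .5, where an arithmetic mean would give .82; that steepness is presumably why the harmonic mean is an attractive aggregator for a universal quantifier.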

Semantics for Some

With respect to some, his suggestion is that the paradigmatic case of some circles are black is half of the circles are black. He thus sets the truth value V("some x are F") to be 1 minus the squared difference between the actual case and the half-of-the-individuals case. Ideally, this should give rise to a truth value computation of the form
T(k) = 1 – (n/2 – k)².
However, on the graph on page 17 of the paper, we can see that T(5) < 7 (7 being the maximal "goodness" level), so even when exactly half of the circles are black, we do not get maximal truth. This must be due to some additional assumption like the .9 parameter introduced above, but as far as I can see, he doesn't explain this anywhere in the paper.

One assumption he does make explicit is that
this definition is supplemented with penalties for the situations where the target sentence is unequivocally false (i.e., the 0 and 1 situations) (p. 18)
While this seems relatively innocuous as a general move, we should note that it makes the situation in which exactly one circle is black count as a counterexample to Some of the circles are black. It also seems to postulate two different mechanisms for evaluating a sentence: first comparing it to a prototype example, and then additionally checking whether it is "really" true. This extra postulation makes his typicality model lose a lot of its attraction, since it discreetly smuggles conventional truth-conditional semantics back into the system rather than superseding it.
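For concreteness, here is one way the prototype-plus-penalty semantics for some might be implemented. The normalization by (n/2)² and the size of the penalty are my own assumptions; the paper states neither:

```python
PENALTY = 0.5  # assumed size of the penalty; not reported in the paper

def truth_some(k: int, n: int) -> float:
    """Typicality of "some of the n circles are black" when k are black.
    Peaks at the prototype k = n/2; the normalization is an assumption."""
    typicality = 1 - ((n / 2 - k) / (n / 2)) ** 2
    if k <= 1:  # the "unequivocally false" 0 and 1 situations from the quote
        typicality -= PENALTY
    return typicality

# Six circles: typicality rises toward k = 3 and falls off symmetrically,
# except for the penalized 0 and 1 situations.
for k in range(7):
    print(k, round(truth_some(k, 6), 3))
```

On this sketch the k = 1 situation is punished twice: once for its distance from the prototype and once by the penalty, which is exactly the two-mechanism worry.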

Van Tiel's Comments on Chemla and Spector

While the rest of the paper is reasonably clear, there is one part that I do not understand. This is the part where van Tiel recreates the results from Chemla and Spector's letter-and-circle judgment task.

Here's what I do get: He says that the sentence used by Chemla and Spector,
Every letter is connected to some of its circles
suggests most strongly a some-but-not-all reading (labeled "Mixed"), less strongly an all reading, and least strongly a none reading. So however a subject rates the seven different pictures given by Chemla and Spector (0 to 6 connections), they should respect this constraint on appropriateness orderings.

But then van Tiel says the following:

Using Excel, I randomly generated 5,000 values for each of the three cases such that every triplet obeyed the constraint [that some suggests Mixed more than All, and All more than None]. For every triplet, I calculated the typicality value for the seven situations. Ultimately, I derived the mean from these values for comparison with the results of Chemla & Spector. The product-moment correlation between the mean typicality values from the Monte Carlo simulation and the mean suitability values found by Chemla & Spector was nearly perfect (r = 0.99, p < .001). This demonstrates that Chemla & Spector’s results can almost entirely be explained as typicality effects. (p. 19)
I don't get what it is that he is simulating here. Since he randomly generates triplets (not 7-tuples), the stochastic part must be the proposed "goodness" intuition of a random subject. But how does he go from those three numbers to assigning ratings to all seven cases? I suppose you could compute backwards from the three values to the parameter settings for the model discussed above, but that doesn't seem to be what he's doing. So what is he doing?
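For what it's worth, the triplet-generation step itself is easy to reconstruct; it's the jump from three numbers to seven picture ratings that the paper leaves opaque. Here is a minimal guess at the first step, using rejection sampling (the uniform distribution is my assumption; the paper only mentions Excel):

```python
import random

def random_triplet():
    """Draw three ratings and keep them only if they obey the constraint
    that Mixed > All > None, as van Tiel describes."""
    while True:
        mixed, all_, none = random.random(), random.random(), random.random()
        if mixed > all_ > none:
            return mixed, all_, none

triplets = [random_triplet() for _ in range(5000)]
assert all(m > a > n for m, a, n in triplets)
```

But nothing in this step determines how the seven situations get their values, which is precisely the missing link.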

I think it would have made more sense to compute the theoretically expected truth value of Chemla and Spector's sentence directly, now that he has gone to such pains to construct a compositional semantics for some and every.

We have the number of connections for each picture, so we can compute the truth value of, say, The letter A is connected to some of its circles; and we also have, in each condition, the set of pictures, so we could compute the harmonic mean of these values for the six truth values that are presented to the subject. Why not do that instead if we really want to test the model?
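To make this alternative concrete, the computation I have in mind would look roughly like the following. The .1 clamp and the normalization inside truth_some are my own assumptions, as is the six-letter, six-circle layout of the pictures:

```python
def truth_some(k: int, n: int) -> float:
    """Typicality of "the letter is connected to some of its n circles"
    with k connections; prototype at k = n/2. Normalization is assumed."""
    typ = 1 - ((n / 2 - k) / (n / 2)) ** 2
    return max(typ, 0.1)  # clamp, so the harmonic mean below stays defined

def truth_every_some(connections, n_circles):
    """V("every letter is connected to some of its circles"): the harmonic
    mean, over the letters, of the per-letter "some" values."""
    values = [truth_some(k, n_circles) for k in connections]
    return len(values) / sum(1 / v for v in values)

# A Chemla & Spector-style picture: six letters with six circles each,
# every letter connected to exactly half of its circles.
print(round(truth_every_some([3] * 6, 6), 3))   # 1.0 (the prototype case)
# The "all connected" condition scores lower, as a typicality account predicts.
print(round(truth_every_some([6] * 6, 6), 3))   # 0.1
```

This would test the compositional model directly against the mean suitability ratings, with no Monte Carlo detour.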