Monday, September 10, 2012

Hintikka: "Quantifiers in Natural Languages" (1977)

In this paper, Jaakko Hintikka yet again presents his "game-theoretical semantics" and its extension to games with imperfect information. However, he also proposes his "any-thesis": a conjecture stating that any is grammatical in exactly the contexts in which it means something different from every.

The paper was originally published in the second-ever issue of Linguistics and Philosophy (1977), but was reprinted in the anthology Game-Theoretical Semantics (1979).

Ordering Principles

The most notable new idea in the paper is the set of ordering principles that Hintikka introduces in section 10. These are principles that govern whether, say, the modality player or the quantification player should move first when we encounter sentences like some people might not be nice. These principles can essentially be translated into rules about the relative scope of various operations.

The reason he introduces these principles is that he wants to account for the fact that any sometimes behaves like an existential quantifier rather than a universal:
  • I can do anything. (universal)
  • I can't do anything. (existential)
He explains this by introducing an ordering principle that requires any to "scope out" over a negation whenever there is one. Using this principle, "not any" will then be equivalent to "every not," which in turn is equivalent to "not some," the existential reading we were looking for. The change of quantifier type is, in other words, achieved by swapping negation and quantification.

One possible problem with this approach is double negation. Hintikka doesn't explicitly discuss this, but somewhere in his system, he needs to bar the quantifier from scoping out over both negations in sentences like
  • It's not true that we haven't done anything about the crisis.
Otherwise this sentence would come out as equivalent to We have done everything about the crisis instead of We have done something about the crisis.
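
To make the swap concrete, here is a minimal Python sketch (the domain and the predicate are toy choices of mine, not Hintikka's) that checks the single-negation equivalence and shows that the two candidate scopings of the double-negation sentence come apart:

    # Toy check of the quantifier/negation swap over a small finite domain.
    domain = [0, 1, 2]

    def P(x):
        # an arbitrary stand-in predicate, e.g. "we have done x about the crisis"
        return x > 0

    # "not any" with any scoping out over the negation: for every x, not P(x) ...
    every_not = all(not P(x) for x in domain)
    # ... which coincides with "not some": not (some x such that P(x)).
    not_some = not any(P(x) for x in domain)
    assert every_not == not_some  # the single-negation swap is a real equivalence

    # Double negation: the two candidate scopings differ.
    over_both = all(P(x) for x in domain)   # any scopes over BOTH negations: "everything"
    over_inner = any(P(x) for x in domain)  # any scopes over the inner one only: "something"
    print(over_both, over_inner)            # False True on this domain: not equivalent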

The Any-Thesis

So, Hintikka's self-titled any-thesis can be put as follows: the quantifier any is grammatical if and only if it appears under a negation, or under some other operation that it can scope out of with a resulting change of meaning. In all other respects, it is synonymous with every.

This explains distributions like the following:
  • *I know anything.
  • I don't know anything.
If we take If A, then B to be equivalent to Not A, or B, then it further explains the following pattern:
  • If I have any medical issues, the test comes out positive.
  • *If the test comes out positive, I have any medical issues.
In both cases, any can scope out over the negation and thus effectively change its meaning from every to some.
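
Spelling out the conditional case in symbols (my own formalization, with M(x) for "x is a medical issue I have" and p for the test coming out positive): reading If A, then B as Not A, or B and letting any scope out over that negation gives
  • ∀x: (¬M(x) ∨ p), which is equivalent to ¬(∃x: M(x)) ∨ p, i.e., (∃x: M(x)) → p.
So the wide-scope universal ends up meaning if I have some medical issue, the test comes out positive. In the starred sentence, any sits in the consequent B, where there is no negation to scope over, so no change of meaning is available.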

The Status of the Context

But then things get a little hairy. We can note that some tenses apparently interact with the universal quantifier in a meaning-changing way, while others don't:
  • *I have been anywhere.
  • ?*I am going anywhere.
  • ?*I went anywhere.
  • *I am anywhere.
  • ?I will go anywhere.
  • I would go anywhere.
The asterisks here are based on my estimates. I'm quite unsure what native speakers would think about them.

Hintikka has a conjecture about this (sec. 18). He thinks that when we're thinking about the future or about counterfactual scenarios, we're dealing with a special kind of modal logic in which the domain of quantification changes from world to world. We consequently get a logical difference between sentence pairs like these:
  • In every scenario, everybody wins.
  • Everybody wins in every scenario.
To see this, consider for instance a model in which some possible world contains one more loser than the actual world. In that case, the first sentence might be false, while the second is true. (I am here assuming that entities have all properties in scenarios where they don't exist.)
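
A toy Kripke model in Python (my own construction, not Hintikka's) makes the non-equivalence concrete:

    # A varying-domain ("grow/shrink") model: each world has its own domain.
    # Per the assumption above, an entity vacuously satisfies every predicate
    # at worlds where it does not exist.
    worlds = {
        "actual": {"domain": {"ann", "bob"}, "wins": {"ann", "bob"}},
        "w1": {"domain": {"ann", "bob", "carl"}, "wins": {"ann", "bob"}},  # carl: an extra loser
    }

    def wins(entity, w):
        return entity not in worlds[w]["domain"] or entity in worlds[w]["wins"]

    # "In every scenario, everybody wins": quantify over EACH world's own domain.
    box_forall = all(all(e in worlds[w]["wins"] for e in worlds[w]["domain"])
                     for w in worlds)
    # "Everybody wins in every scenario": quantify over the ACTUAL domain only.
    forall_box = all(all(wins(e, w) for w in worlds)
                     for e in worlds["actual"]["domain"])

    print(box_forall, forall_box)  # False True: the pair comes apart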

Since such pairs are no longer equivalent in a grow/shrink logic, it makes a difference whether a universal quantifier comes before or after a necessity modal. This introduces a difference in meaning and accounts for the (possible) grammaticality of I will steal anything.

An Alternative Approach

However, I would explain these facts in terms of how odd it is to use a free-choice operator in a context that essentially excludes any real choice. This would also explain the apparent context-sensitivity in the acceptability of any.

As far as I can tell, at least, we are more inclined to accept present tense uses of any when the sentence can be interpreted as offering a real choice. Thus, according to my personal intuitions (and some googling), we have:
  • I buy anything if the price is right.
  • *I have any problems if the test is positive.
  • I switch off any device that consumes electricity.
  • *I experienced any emotion that is humanly possible.
This seems to support a free-choice reading of any over the scoping-out story.

Methodology: Some Quotes

In this paper as elsewhere, Hintikka is pretty dismissive of asking other people for their opinions about sample sentences. He thus brushes off "different speakers' more or less confused uneducated intuitions" (p. 91) as misrepresenting "what a truly competent speaker would do" (p. 90).

In footnote 13, he writes:
I have been amazed time and again by linguists who claim that they are dealing with competence and not performance and then go on to base their theories on people's uneducated and unanalysed reactions to complicated sentences. (p. 115)
It's difficult to see what the object of semantics is, then, if only the intuitions of trained logicians really count as data. With the right training, people can come to see whatever sentence meaning we want them to see.

Methodology: Examples

Just to illustrate the problem, let me briefly cite a couple of the sentences that Hintikka takes to be good, grammatical English sentences with definite meaning:
  • Every townsman admires a friend and every villager envies a cousin who have met each other. (p. 89)
  • Every actor of each theatre envies a film star, every review of each critic mentions a novelist, and every book by each chess writer describes a grand master, of whom the star admires the grand master and hates the novelist while the novelist looks down on the grand master. (p. 97)
  • Some product of some subdivision of every company of every conglomerate is advertised in some page of some number of every magazine of every newspaper chain. (p. 97)
  • Every girl has not been dated by John. (p. 101)
  • If Jane wins, anybody who has bet on her is happy. (p. 113)
How thin is the line between "complicated sentences" and word salad? Well, compare these "grammatical" sentences to the ones that Hintikka stars as ungrammatical:
  • If Jane has won any match, she has won any match. (p. 100)
  • John must pick any apple. (p. 101)
  • If everyone loses, anyone loses. (p. 110)
  • Mary hopes that Jane will win any match. (p. 112)
  • Mary believes that Jane will win any match. (p. 112)
According to Hintikka's methodology, if we ask an average English speaker about these sentences, we would get nothing but "confused uneducated intuitions." Instead, we should look for a "consistent, general set of rules of semantic interpretation" (p. 91) that can accommodate the entailments that we educated logicians think the sentences ought to have.

That sounds like a recipe for injecting the theory into the data, even quite openly and deliberately. I find it difficult to see how any rational discussion could follow if we value our own speculative and introspective intuitions about foreign languages over the judgments of other people.

Thursday, May 31, 2012

Janssen: "Independent Choices and the Interpretation of IF Logic" (2002)

In this paper, Theo Janssen argues that Jaakko Hintikka and Gabriel Sandu's notion of independence-friendly logic does not adequately formalize the notion of quantifier independence, at least according to his intuitions about what independence should mean. He has essentially two arguments, of which the first is the stronger.

Dependence Chains

The most serious problem that Janssen points out is that IF logic may require that a choice A be independent of a choice C without ruling out that there is some intermediate choice B such that A depends on B and B depends on C.

Such cases create some quite strange examples, e.g.:
  • TRUE: ∀x: (x ≠ 2) ∨ (∃u/x: x = u).
  • FALSE: ∀x: (x = 2) ∨ (∃u/x: x = u).
  • TRUE: ∀x: (x ≠ 2) ∨ (∃u/x: x ≠ u).
  • TRUE: ∀x: (x = 2) ∨ (∃u/x: x ≠ u).
The true sentences are here true in spite of the independence of u from x, the reason being that the choice of disjunct is not independent of x. The Verifier can thus circumvent the quantifier independence. For instance, in the first sentence, he can set u := 2 and then pick the left disjunct if and only if x ≠ 2.
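
A brute-force Python check (my own sketch; the four-element domain and the encoding are toy choices) makes the smuggled dependence visible: the witness u must be chosen independently of x, but the choice of disjunct may be a function of x:

    from itertools import product

    domain = [0, 1, 2, 3]

    def if_true(left, right):
        """Is  ∀x: left(x) ∨ (∃u/x: right(x, u))  true?  The Verifier picks a
        constant u (independent of x) plus a disjunct for each value of x."""
        for u in domain:
            for choice in product("LR", repeat=len(domain)):
                if all(left(x) if c == "L" else right(x, u)
                       for x, c in zip(domain, choice)):
                    return True
        return False

    print(if_true(lambda x: x != 2, lambda x, u: x == u))  # True
    print(if_true(lambda x: x == 2, lambda x, u: x == u))  # False
    print(if_true(lambda x: x != 2, lambda x, u: x != u))  # True
    print(if_true(lambda x: x == 2, lambda x, u: x != u))  # True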

Similar examples exist where the middle term that smuggles in the dependence is not a disjunction, but another quantifier.

Naming Problems

Another problem occurs, according to Janssen, when "a variable is bound within the scope of a quantifier that binds the same variable" (p. 375). This occurs for instance in sentences like
  • ∀x∃x: R(x,x).
He claims that such sentences come about by "classically allowed" substitutions from, in this case,
  • ∀x∃y: R(x,y).
After such a substitution, indirect dependencies referring to the value of y might be lost, and an otherwise winning Verifier strategy might be broken. However, I don't know whether there would be any problem with simply banning doubly-bound variables like the x above; the construction doesn't seem to have any necessary or positive effect.

Solutions

To avoid the problems of Hintikka's system, Janssen defines a new game with explicit extra conditions such as "The strategy does not have variables in W as arguments" and "If the values of variables in W are changed, and there is a winning choice, then the same choice is a step towards winning" (p. 382).

This solves the problem, but doesn't bring about much transparency, it seems to me. A better solution would probably be to describe the instantiation of the quantifiers and the selection of branches at the connectives as a probability distribution on a suitable power of the domain and of the branch options {L,R}. Then independence could be clearly described as statistical independence.

Such a system would require the domains to be finite, which is not good. However, within finite domains, results about the logical strength of solution concepts would be easy to extract, because they would simply correspond to different constraints on the dependencies between the choices, i.e., on the marginal distributions. It would, in fact, allow us to quantify the amount of information transmitted from one choice to another by computing the mutual information between the two choices.
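
As a sketch of what that would look like (entirely my own construction, not Janssen's): treat two choices as jointly distributed random variables and measure the information flow between them:

    import math

    def mutual_information(joint):
        """Mutual information (in bits) between two choices, given their
        joint distribution as a dict {(a, b): probability}."""
        pa, pb = {}, {}
        for (a, b), p in joint.items():
            pa[a] = pa.get(a, 0) + p
            pb[b] = pb.get(b, 0) + p
        return sum(p * math.log2(p / (pa[a] * pb[b]))
                   for (a, b), p in joint.items() if p > 0)

    # Fully independent choices over a two-element domain: no information flow.
    independent = {(a, b): 0.25 for a in "ab" for b in "ab"}
    # Fully coordinated choices: the second choice copies the first.
    coordinated = {("a", "a"): 0.5, ("b", "b"): 0.5}

    print(mutual_information(independent), mutual_information(coordinated))  # 0.0 1.0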

Wednesday, May 16, 2012

Sevenster: "A Strategic Perspective on IF Games" (2009)

This is a commentary on Hintikka and Sandu's game-theoretical approach to independence-friendly logic. Sevenster considers how the notion of truth is changed when one changes the information flow or the solution concept for the falsification game over a sentence.

Information Access

When playing an extensive game, one can be "forgetful" to different degrees. In the general setting, forgetfulness has various effects, such as blocking the possibility of threat behavior. In terms of the semantics of quantifiers, the various degrees of forgetfulness also allow for different types of independence:

Memory capacity                  Solution concept               Independence relations
Global strategy and past moves   Nash equilibrium               None
Global strategy                  Other                          Existentials of universals
Neither                          Subgame perfect equilibrium    Anything of anything

To see the difference between the two degrees of independence, consider the following sentence:
  • There is an x and there is a y such that x = y.
Assume that we are in a world with two objects, and that the two existential quantifiers in the sentence are independent of each other. Then the verifier can at most achieve a 50% chance of verifying the sentence, since there is no information flow from the first choice to the other.

If, on the other hand, the second choice is dependent on the first, the verifier can achieve a 100% success rate. The difference is that between a sequential and a simultaneous coordination game.
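
A quick Python computation of the two success rates (my own toy encoding; the uniform randomization models the total absence of coordination between the two choices):

    import random

    domain = ["a", "b"]
    trials = 10_000

    # Simultaneous: the second choice gets no information about the first.
    simultaneous = sum(random.choice(domain) == random.choice(domain)
                       for _ in range(trials)) / trials

    def copy_first_choice(x):
        # the dependent strategy: echo whatever the first choice was
        return x

    # Sequential: the second choice may depend on the first, so just copy it.
    sequential = sum(copy_first_choice(x) == x
                     for x in (random.choice(domain) for _ in range(trials))) / trials

    print(round(simultaneous, 1), sequential)  # ~0.5 and 1.0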

This example is exactly the one that I have felt was missing in Hintikka's discussions, so it's nice to see that I'm not alone. Apparently, Theo Janssen has discussed the problem in a paper from 2002 (cf. Sevenster's article, p. 106).

Solution Concepts

Sevenster uses three different solution concepts in his article:
  1. Nash equilibrium
  2. WDS + P
  3. WDS
WDS strategy profiles are profiles in which all players play a weakly dominant strategy, i.e., one that is (weakly) optimal whatever everyone else does. This is a very strong condition.

WDS + P strategy profiles are, as far as I can see from Sevenster's Definition 12, the ones that remain after the removal of the weakly dominated strategies for player n, then for player n – 1, and so on. This is weaker than WDS, since a WDS + P strategy for player i does not need to be (weakly) optimal with respect to every other strategy, but only optimal with respect to the WDS + P strategies for players j with j > i.
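
For illustration, here is a sketch of such an ordered elimination in Python (my own reconstruction of the general procedure, not a verbatim rendering of Sevenster's Definition 12), pruning the last player first:

    # A 2x2 game: game[row][col] = (payoff to player 1, payoff to player 2).
    game = [[(1, 1), (1, 0)],
            [(0, 0), (1, 1)]]

    def weakly_dominated(own, others, payoff):
        """Strategies in `own` weakly dominated by another strategy in `own`,
        judged against the surviving strategies in `others`."""
        out = set()
        for s in own:
            for t in own - {s}:
                diffs = [payoff(t, o) - payoff(s, o) for o in others]
                if all(d >= 0 for d in diffs) and any(d > 0 for d in diffs):
                    out.add(s)
                    break
        return out

    rows, cols = {0, 1}, {0, 1}
    # Remove weakly dominated strategies for the later player first ...
    cols -= weakly_dominated(cols, rows, lambda c, r: game[r][c][1])
    # ... and only then for the earlier player, against what survived.
    rows -= weakly_dominated(rows, cols, lambda r, c: game[r][c][0])
    print(rows, cols)  # {0} {0, 1}: player 1's second strategy is pruned

Enumerating the players in the other order can leave a different set of survivors, which is the order-dependence mentioned below.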

Neither of these last two solution concepts is standard. The middle one is slightly problematic because it may give different results when players are enumerated differently. But that's a drawback it shares with all solution concepts based on elimination of weakly dominated strategies.

Some Mixed Equilibria

Just for the sake of it, let me briefly review two example sentences:
  • There is an x and there is a y such that x = y.
  • There is an x such that for all y, x = y.
Assume further that we are in a model in which there are exactly two objects, a and b. These two sentences then correspond to a simple coordination game and to Matching Pennies, respectively. The coordination game has the three equilibria (0, 0), (1/2, 1/2), and (1, 1), while Matching Pennies only has the equilibrium (1/2, 1/2).
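
A small Python check of the coordination game's equilibria (my own encoding: a strategy is the probability of picking object b, and both of the Verifier's choices aim to match):

    # Expected payoff in the coordination game: the probability of a match.
    def coord_payoff(p, q):
        return p * q + (1 - p) * (1 - q)

    def is_equilibrium(p, q, payoff, grid):
        # (p, q) is an equilibrium if neither side can do better unilaterally
        best_p = max(payoff(p2, q) for p2 in grid)
        best_q = max(payoff(p, q2) for q2 in grid)
        return payoff(p, q) >= best_p - 1e-9 and payoff(p, q) >= best_q - 1e-9

    grid = [i / 10 for i in range(11)]
    print([(p, q) for p in grid for q in grid
           if is_equilibrium(p, q, coord_payoff, grid)])
    # [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)]

For Matching Pennies, where the Falsifier's payoff is 1 minus the Verifier's, the analogous search leaves only (1/2, 1/2).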

The interpretation of this in classical logic is that the double existential sentence is supported by two pieces of evidence (a = a and b = b), while the sentence with the universal quantifier is supported by no evidence.

However, in the mixed-strategy equilibrium, both strategies pay the same for both players. This means that the players gain nothing by knowing that the other player is going to play (1/2, 1/2). Accordingly, the mixed equilibrium corresponds to the case without any (useful) information flow between the players.

The existence of such equilibria thus witnesses the existence of a true, independence-friendly reading of the sentence in the relevant model. Note, however, that this does not mean that the sentences are tautological: the Verifier does not have a strategy that guarantees the payoff 1 in either of the two cases.

Tuesday, May 15, 2012

Hintikka: "What is elementary logic?" (1995)

The claim of the paper is that independence-friendly logic is more "natural" than ordinary first-order logic. That is, the restriction to quantifiers with nested scopes is unnecessary and unfounded.
In this article, as in everything he has written, there are some serious linguistic issues with the examples he uses, and it is by no means clear that his own semantic intuitions are generalizable.

The paper is reprinted in a 1998 anthology of Hintikka's work, but Hintikka referred to the paper as "forthcoming" in 1991, and it was published for the first time in 1995.

The Old Example: Villagers and Townsmen

His old natural-language argument for the usefulness of independence-friendly logic comes from his introspective intuitions about the following sentence:
  • Some relative of each villager hates some relative of each townsman.
This sentence has two readings, a classical and an independence-friendly. These readings can be distinguished by the following model: Suppose that there is one villager and one townsman, and that they are related to themselves and to each other; suppose further that they hate each other, but do not hate themselves.

The Verifier then instantiates some relative of each (= the only) villager by picking either the villager or the townsman, since everyone is related. The same goes for some relative of each (= the only) townsman. The sentence is true exactly when the two choices are different, and not true when they are the same (since no one, per assumption, hates themselves).

When the two choices are independent, the Verifier has no winning strategy, and the sentence is thus not true. The Falsifier, on the other hand, doesn't have a winning strategy either, since some combinations of Verifier choices do in fact make the sentence true, and others don't. In the independence-friendly reading, the sentence is thus neither true nor false. In the classical reading, it's true.

Now Hintikka's claim is that the independence-friendly reading of this English sentence is a plausible reading (or the most plausible?). He does not give any empirical arguments for the claim.

Note that the same logical structure can be replicated with a slightly less far-fetched example:
  • A north-going driver and a south-going driver can choose to drive on a side of the road so that they avoid a collision.
If you think that this sentence is true, you have read it in the classical way. If you think it is false, you have read it in the independence-friendly way.

The New Example: The Boy that Loved the Girl that Loved Him

In support of his claim, he provides the following "perfectly understandable English sentence" as evidence (p. 10):
  • The boy who was fooling her kissed the girl who loved him.
He then claims that this sentence cannot be expressed in first-order logic "no matter how you analyze the definite descriptions" (p. 10).

So how can we analyze the definite descriptions? I guess we have at least the following options:
  • The boy1 who was fooling her2 kissed the girl2 who loved him1.
  • The boy1 who was fooling her2 kissed the girl3 who loved him1.
  • The boy1 who was fooling her2 kissed the girl2 who loved him4.
  • The boy1 who was fooling her2 kissed the girl3 who loved him4.
I suppose that the problematic case that he is thinking about is the first one. That's the one where the sentence implicitly states that the identity of the girl uniquely identifies a beloved boy, while the identity of the boy also uniquely identifies a fooled girl.

This is obviously a circular dependence, but it can still meaningfully apply (or not apply) to various cases. For instance, if x fools y and y loves x, then it applies. If x fools y, and y loves z, or loves both x and z, then it doesn't.

But unlike the villagers sentence, I can't see how this is not expressible in terms of first-order logic, given the usual legitimate moves in sentence formalization. But perhaps Hintikka has some strange and far-fetched "natural" reading of the sentence in mind?
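
For what it's worth, one standard Russellian rendering of the first, co-indexed reading (my own formalization, with B, G, F, L, K for boy, girl, fooling, loving, and kissing) would be:
  • ∃x∃y: B(x) ∧ G(y) ∧ F(x,y) ∧ L(y,x) ∧ ∀x': ((B(x') ∧ F(x',y)) → x' = x) ∧ ∀y': ((G(y') ∧ L(y',x)) → y' = y) ∧ K(x,y).
Whether this captures the reading Hintikka has in mind is, of course, exactly what is at issue.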