Thursday, May 31, 2012

Janssen: "Independent Choices and the Interpretation of IF Logic" (2002)

In this paper, Theo Janssen argues that Jaakko Hintikka and Gabriel Sandu's independence-friendly (IF) logic does not adequately formalize the notion of quantifier independence, at least according to his intuitions about what independence should mean. He has essentially two arguments, of which the first is the stronger.

Dependence Chains

The most serious problem that Janssen points out is that IF logic may require that a choice A be independent of a choice C without ruling out that there is some intermediate choice B such that A depends on B and B depends on C.

Such cases create some quite strange examples, e.g.:
  • TRUE: ∀x: (x ≠ 2) ∨ (∃u/x: x = u).
  • FALSE: ∀x: (x = 2) ∨ (∃u/x: x = u).
  • TRUE: ∀x: (x ≠ 2) ∨ (∃u/x: x ≠ u).
  • TRUE: ∀x: (x = 2) ∨ (∃u/x: x ≠ u).
The true sentences here are true in spite of the independence between u and x, the reason being that the disjunction is not independent of x. The Verifier can thus circumvent the quantifier independence. For instance, in the first sentence, he can set u := 2 and then pick the right disjunct (x = u) if and only if x = 2.
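
To make the mechanism concrete, here is a minimal sketch (my own reconstruction in Python, not Janssen's formalism) of the Verifier's options in the four sentences above: the value of u must be a single constant, chosen without looking at x, but the choice of disjunct may still depend on x.

    from itertools import product

    DOMAIN = range(5)   # hypothetical small finite domain

    def true_under_some_strategy(left, right):
        # A Verifier strategy is a constant u (independent of x) plus a
        # branch choice in {L, R} for each x (allowed to depend on x).
        for u in DOMAIN:
            for branches in product("LR", repeat=len(DOMAIN)):
                if all(left(x) if b == "L" else right(x, u)
                       for x, b in zip(DOMAIN, branches)):
                    return True
        return False

    # The four sentences: the left disjunct tests x against 2,
    # the right disjunct tests x against the independently chosen u.
    print(true_under_some_strategy(lambda x: x != 2, lambda x, u: x == u))  # True
    print(true_under_some_strategy(lambda x: x == 2, lambda x, u: x == u))  # False
    print(true_under_some_strategy(lambda x: x != 2, lambda x, u: x != u))  # True
    print(true_under_some_strategy(lambda x: x == 2, lambda x, u: x != u))  # True

The second sentence fails because, once x ≠ 2, the Verifier is forced into the right disjunct, and a single constant u cannot match every such x.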

Similar examples exist where the middle term that smuggles in the dependence is not a disjunction, but another quantifier.
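
One well-known way to construct such a case (my own illustration in the spirit of this remark, not an example quoted from the paper) is to let an intermediate existential quantifier copy the value of x, e.g. ∀x ∃y (∃u/x): u = x, which is true because u may depend on y even though it may not depend on x:

    DOMAIN = range(5)   # hypothetical small finite domain

    def signalling_sentence_true():
        # One Verifier strategy for ∀x ∃y (∃u/x): u = x.
        y_of = lambda x: x   # y may depend on x: copy x into y
        u_of = lambda y: y   # u may depend on y (only /x is forbidden): read x back off y
        return all(u_of(y_of(x)) == x for x in DOMAIN)

    print(signalling_sentence_true())   # True, despite u being "independent" of x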

Naming Problems

Another problem occurs, according to Janssen, when "a variable is bound within the scope of a quantifier that binds the same variable" (p. 375). This occurs, for instance, in sentences like
  • ∀x ∃x: R(x,x).
He claims that such sentences come about by "classically allowed" substitutions from, in this case,
  • ∀x ∃y: R(x,y).
After such a substitution, an indirect dependency on the value of y might be lost, and an otherwise winning Verifier strategy might be broken. However, I don't know whether there would be any problem with simply banning doubly bound variables like the x above; allowing them doesn't seem to have any necessary or positive effect.
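
A rough sketch of what goes wrong (my own illustration, with a made-up relation R, not Janssen's example): the inner quantifier shadows the outer value, so any choice that depended on it is silently cut off, and the truth value can change.

    DOMAIN = range(4)
    R = lambda a, b: b == (a + 1) % 4   # hypothetical relation: "b is the successor of a"

    def forall_exists_distinct():
        # ∀x ∃y: R(x,y) -- the Verifier may let y depend on x (here y := x + 1 mod 4).
        return all(any(R(x, y) for y in DOMAIN) for x in DOMAIN)

    def forall_exists_shadowed():
        # ∀x ∃x: R(x,x) -- the inner ∃x rebinds x, the outer value is no longer
        # accessible, and the matrix can only compare the new x with itself.
        return all(any(R(x_inner, x_inner) for x_inner in DOMAIN) for _x_outer in DOMAIN)

    print(forall_exists_distinct())    # True: y can track x
    print(forall_exists_shadowed())    # False: the strategy that tracked x is lost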

Solutions

To avoid the problems of Hintikka's system, Janssen defines a new game with explicit extra conditions such as "The strategy does not have variables in W as arguments" and "If the values of variables in W are changed, and there is a winning choice, then the same choice is a step towards winning" (p. 382).

This solves the problem, but doesn't bring about much transparency, it seems to me. A better solution would probably be to describe the instantiation of the quantifiers and the selection of branches at the connectives as a probability distribution on a suitable power of the domain and of the branch options {L,R}. Then independence could be clearly described as statistical independence.
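
A minimal sketch of what that could look like (my own construction, assuming a finite domain; none of this is from the paper): represent the choices as a joint distribution over (x, u, branch) and read "u is independent of x" as the factorization of the corresponding pairwise marginal.

    from collections import defaultdict
    from itertools import product

    DOMAIN = range(3)

    def marginal(joint, idx):
        m = defaultdict(float)
        for outcome, p in joint.items():
            m[outcome[idx]] += p
        return m

    def independent(joint, i, j, tol=1e-9):
        # Coordinates i and j are independent iff their pairwise marginal
        # factorizes into the product of the two single-coordinate marginals.
        mi, mj = marginal(joint, i), marginal(joint, j)
        mij = defaultdict(float)
        for outcome, p in joint.items():
            mij[(outcome[i], outcome[j])] += p
        return all(abs(mij[(a, b)] - mi[a] * mj[b]) < tol
                   for a, b in product(mi, mj))

    # Outcomes are triples (x, u, branch): u := 2 regardless of x, while the
    # branch choice tracks x -- mirroring the Verifier strategy sketched above.
    joint = {(x, 2, "L" if x != 2 else "R"): 1.0 / len(DOMAIN) for x in DOMAIN}
    print(independent(joint, 0, 1))   # True: u carries no information about x
    print(independent(joint, 0, 2))   # False: the branch choice depends on x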

Such a system would require the domains to be finite, which is not good. However, within finite domains, results about the logical strength of solution concepts would be easy to extract, because they would simply correspond to different constraints on the dependencies between the choices, i.e., on the pairwise marginals of the joint distribution. It would, in fact, allow us to quantify the amount of information transmitted from one choice to another by computing the mutual information between the two choices.
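
Continuing the same made-up distribution as above, the information smuggled from x into another choice could then be measured directly:

    from collections import defaultdict
    from math import log2

    def mutual_information(joint, i, j):
        # I = sum over (a, b) of p(a,b) * log2( p(a,b) / (p(a) * p(b)) ).
        pi, pj, pij = defaultdict(float), defaultdict(float), defaultdict(float)
        for outcome, p in joint.items():
            pi[outcome[i]] += p
            pj[outcome[j]] += p
            pij[(outcome[i], outcome[j])] += p
        return sum(p * log2(p / (pi[a] * pj[b]))
                   for (a, b), p in pij.items() if p > 0)

    # u is constant, the branch choice tracks x.
    joint = {(0, 2, "L"): 1/3, (1, 2, "L"): 1/3, (2, 2, "R"): 1/3}
    print(mutual_information(joint, 0, 1))   # 0.0 bits: u reveals nothing about x
    print(mutual_information(joint, 0, 2))   # ~0.92 bits: the branch choice leaks x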
