Tuesday, May 6, 2014

de Finetti: Probability, Induction, and Statistics (1972)

In Chapters 8 and 9 of this anthology, Bruno de Finetti reiterates his reasons for espousing Bayesian probability theory as the unique optimal calculus of reasoning. This brings him into a discussion of several controversies surrounding the two paradigms of statistics.

Bruno de Finetti and a computer; image from www.moebiusonline.eu.

No Unknown Unknowns

According to de Finetti, the ordinary meaning of the word "probability" is "a degree of belief" (p. 148), and he rejects any attempt to define it in terms of frequency:
… we reject the idea that the ostensible notion of identical events or trials gives a suitable basis for an empirical formulation of a frequentist theory of probability or for some objectivistic form of the "law of large numbers". (p. 154)
Consequently:
The probability of an event conditional on, or in the light of, a specified result is a different probability, not a better evaluation of the original probability. (p. 149)
There is thus no such thing as an "unknown probability." You always know your own uncertainty:
Any assertion concerning probabilities of events is merely the expression of somebody's opinion and not itself an event. There is no meaning, therefore, in asking whether such an assertion is true or false or more or less probable. (p. 189)
Thus, "speaking of unknown probabilities must be forbidden as meaningless" (p. 190) and in fact rejected as a "superstition" (p. 154–55).

But of course we do have trouble assigning precise numbers to our beliefs, so de Finetti has some explaining to do. He thus invokes the analogy of choosing a price for a commodity:
A personal probability is, in effect, a quantitative decision closely akin to deciding on a price. In seeking to fix such a number with precision the person will sooner or later encounter difficulties that evoke the expressions "vagueness", "insecurity", or "vacillation". Analysis of this omnipresent phenomenon has given rise to misunderstandings. Thus, attempts to say that the exact probabilities are "meaningless" or "non-existent" pose more severe problems than they are intended to resolve, similarly for replacements of individual probabilities by intervals or by second-order probabilities. […] Sight should not be lost of the fact that a person may find himself in an economic situation that entails acting in accordance with a sharply defined probability, whether the person chooses his act with security or not. (p. 145)
In spite of this seeming pluralism about personal opinion, he still maintains that the mathematical concept of probability is an idealization:
The (subjectivistic) theory of probability is a normative theory (p. 151).
But of course, the latter refers only to the mechanics of the calculus, not the choice of priors.

Rants Against Frequentism

De Finetti hates frequentist statistics. In his brief historical sketch, he says that the frequentist theory is a set of "substitutes" for Bayesian reasoning which were supposed to fill the "void" left after the analysis by Bayes was rejected (p. 161).

He adds:
The method pursued in the construction of such substitutes consists in general of adopting or imitating some case where the correct method reduces to a simple form based on summarizing parameters, however substituting for the true formulation and justification some incomplete and fragmentary justification or even no justification at all, as comes to seem legitimate when each notion is interpreted as something autonomous and arbitrary. For each isolated problem it appeared thus legitimate to devise as many ad hoc expedients as desired, and in fact it often happens that several are devised, proposed, and applied, to a single problem. (p. 161)
Shortly after, another rant follows:
In this manner, any notion of a systematic and meaningful interpretation of the problem of statistical inference is abandoned for the position of devising, case by case, "tests" of hypotheses or methods of "estimating" parameters. This means formulating, as an autonomous and largely arbitrary question, the problem of extracting from experience something that is apparently to be employed as though it were a conclusion or conviction, while asserting that it is neither one nor the other. (p. 162)
He is specifically angry about the "grossly inconsistent" notion of tests and hypothesis rejections, which he finds to be perverse distortions of the proper use of Bayes' rule (p. 163):
The severest of these mutilations is that of the oversimplified criteria according to which a probability P(E | H) is taken as a basis for rejecting the isolated hypothesis H if this probability, for the observation E, is small. (p. 163)
Such hypothesis rejections are, namely, ambiguous about which event E the observed data actually testifies to, as in the problem of choosing between one-sided and two-sided tests:
If, for example, as is often the case, E consists in having observed the exact value x of a random number X (such as a deviation), the probability of that exact value is ordinarily zero. In order to eliminate the evident meaninglessness of this criterion that rejects the hypothesis no matter what value x may have, some other is substituted for it, such as observation of a value equal to or greater than x in absolute value, or equal or greater in absolute value and of the same sign. But all these variants are arbitrary, at least in the framework of so crudely mutilated a formulation. (p. 163)
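The arbitrariness is easy to exhibit numerically. Here is a minimal sketch (the observed value and the 5% cutoff are invented for illustration) assuming the hypothesis H says X is a standard normal variable; the verdict of the "test" flips with the choice of E:

```python
# Sketch: one observation, three choices of the event E it is taken to represent.
# Assumes H: X ~ Normal(0, 1); the observed x and the 5% level are invented.
from scipy.stats import norm

x = 1.8  # the observed value

p_exact = 0.0                      # P(X = x | H): zero for any continuous X
p_one_sided = norm.sf(x)           # P(X >= x | H): equal or greater, same sign
p_two_sided = 2 * norm.sf(abs(x))  # P(|X| >= |x| | H): equal or greater in absolute value

print(f"one-sided p = {p_one_sided:.4f}")  # ~0.036 -> "reject H" at the 5% level
print(f"two-sided p = {p_two_sided:.4f}")  # ~0.072 -> "do not reject H" at 5%
```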
On the following page, he also gives the example of having to decide whether a point on a target was hit by a particular marksman. He gives various examples of sets that such a point can belong to: The singleton set containing only the point itself, a circle having the point as a center, a slice of the target containing the point, a circle having the center of the target as its center, etc.

Various ways of construing the acceptance region for a test.

He goes on to say that "One might say that all the deficiencies of objectivistic statistics stem from insistence on using only what appears to be soundly based" (p. 165). This, he says, is like setting a price according to the things that are easiest to measure rather than the things that are most relevant.

Building a statistical enterprise on likelihoods alone is, he contends, a systematic attempt to find P(E | H) when you are looking for P(H | E). In an example he attributes to Halphen:
We need a cement that will not be harmed by water. The merchant advises us to buy a certain kind that, he assures us, will not harm water. He does not try to cheat us by saying that the two things are equivalent but he wants to convince us not to insist on asking for what we need (p. 173).
This is apparently a commentary on a related example used by Neyman.
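The cement analogy is, at bottom, a remark about Bayes' rule: a large P(E | H) tells you nothing about P(H | E) until a prior is supplied. A hedged numeric illustration (all three input probabilities are invented):

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E); all numbers invented.
p_H = 0.01              # prior probability of the hypothesis H
p_E_given_H = 0.95      # likelihood of the data under H
p_E_given_notH = 0.10   # likelihood of the data under not-H

p_E = p_E_given_H * p_H + p_E_given_notH * (1 - p_H)  # total probability of E
p_H_given_E = p_E_given_H * p_H / p_E

print(f"P(E | H) = {p_E_given_H:.2f}")   # what the merchant offers us
print(f"P(H | E) = {p_H_given_E:.3f}")   # ~0.088 -- what we actually asked for
```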

De Finetti on Wald

In a series of papers from the 1940s and 50s, Abraham Wald developed a theory of "admissible decision functions" for decision problems under uncertainty (see, e.g., here). His minimax principle recommends the decision that minimizes the maximal damage that could obtain in the given situation; this corresponds to the solution of a two-person zero-sum game against a malevolent nature.

In his discussion of Wald's theory, de Finetti helpfully "completes" the specification of a decision problem by putting a prior probability on the various hypotheses. Having provided these marginal probabilities, he comments:
Of course, these marginal elements do not appear in Wald's formulation; their absence there is just what prevents the problem of decision from having the solution that is obvious when the table is thus completed. Namely, choose the decision corresponding to the minimal (mean) loss, or equivalently to the maximal (mean) gain or the maximal (mean) utility. Here we have always put "mean" between parentheses but from now on shall suppress the word altogether; for value and utility in an uncertain situation is, by definition, the mathematical expectation of the values of utilities. (p. 179)
In his own work, Wald concluded that the admissible strategies are the mixed strategies whose support consists of pure strategies that are optimal for some parameter setting. But these are also the ones that can be rationalized by some prior probability distribution, so de Finetti happily concludes that
… the admissible decisions are the Bayesian ones; that is, those that minimize the loss with respect to some evaluation of the [prior probabilities]. (p. 181).
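This equivalence is easy to check in miniature. The sketch below (loss table and priors invented) computes Wald's minimax decision and then, following de Finetti's completion, the Bayes decision under a few priors; each admissible decision minimizes mean loss for some prior, while a dominated one never does:

```python
# Sketch of de Finetti's reading of Wald; the loss table and priors are invented.
import numpy as np

# losses[d, s] = loss of decision d when state of nature s obtains
losses = np.array([
    [ 0.0, 10.0],   # d0: bets everything on state 0
    [10.0,  0.0],   # d1: bets everything on state 1
    [ 4.0,  4.0],   # d2: hedges
    [ 5.0,  5.0],   # d3: dominated by d2, hence inadmissible
])

# Wald's minimax principle: minimize the worst-case loss
minimax = losses.max(axis=1).argmin()
print(f"minimax decision: d{minimax}")        # d2

# de Finetti's completion: put a prior on the states, minimize the mean loss
for p0 in (0.2, 0.5, 0.8):
    prior = np.array([p0, 1.0 - p0])
    bayes = (losses @ prior).argmin()
    print(f"prior ({p0:.1f}, {1 - p0:.1f}) -> Bayes decision: d{bayes}")
# d0, d1 and d2 are each Bayes for some prior; the dominated d3 never is.
```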
Abraham Wald; image from Wikimedia.
Having thus turned Wald into a closet Bayesian, de Finetti only needs to object a bit to the distribution-free worst-case reasoning that Wald applied in order to reach his conclusion:
Wald did not explicitly recognize the role of the probability evaluation in induction and, even more, he seemed inclined to emphasize everywhere the application of the minimax principle, which is reasonable only in strategic situations (like the zero-sum-two-person case in the theory of games) or under such a superstition as that of a "malevolent nature". In spite of its shortcomings, Wald's formulation avoids the narrow interpretation of decisions as acceptance of hypotheses, and offers freedom to choose the proper decision according to a not yet openly recognized prior opinion. (p. 183)
He also later criticizes the minimax solutions on the grounds that "their initial assumptions seem rather arbitrary and artificial" (p. 198). He thus notes:
If the subjectivistic formulation were to lead to conclusions diverging from the objectivistic ones, opposition would be understandable; but the conclusions are the same. Among the admissible rules, the objectivistic theory requires that one be chosen arbitrarily, and it cannot give any criterion of preference; the subjectivistic theory does the same but explains each possible choice as corresponding to a suitable initial opinion. Why then reject this compelling unification? (p. 185)
This is, I think, quite crude, and it also misses the essential concern about statistical consistency, which plays such a large role in frequentist reasoning and has no place in Bayesian reasoning, where all priors are considered equal. Another way of saying this is that Wald, had he turned Bayesian, would have worried as much about the admissible priors as he worried about the admissible decisions. A foundation for statistical reasoning cannot itself be statistical.
