## Monday, December 8, 2014

### Edwards: Likelihood (1972)

*Edwards (from his Cambridge site)*
The geneticist A. W. F. Edwards is a (now retired) professor of biometry whose scientific writing was deeply influenced by Ronald Fisher. His book *Likelihood* argues that the likelihood concept is the only sound basis for scientific inference, but at times it reads almost like one long rant against Bayesian statistics (particularly ch. 4) and Neyman–Pearson theory (particularly ch. 9).

### Don't Do Probs

As an alternative to these approaches to statistics, Edwards proposes that we limit ourselves to making assertions only in terms of likelihood, "support" (log-likelihood, p. 12), and likelihood ratios. In the brief epilogue of the book, he states that this

> … allows us to do most of the things which we want to do, whilst restraining us from doing some things which, perhaps, we should not do. (p. 212)
In particular, this approach emphatically prohibits the comparison of hypotheses in probabilistic terms. The kind of uncertainty we have about scientific theories is simply not, Edwards states, of a nature that can be quantified in terms of probabilities: "The beliefs are of a different kind," and they are "not commensurate" (p. 53).
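Edwards' basic quantities are easy to make concrete. Here is a minimal Python sketch, using an invented dataset (7 successes in 10 Bernoulli trials; the function name is mine, not Edwards'), of support as log-likelihood and of relative support as a difference of supports, i.e. a log likelihood ratio:

```python
import math

def support(p, k, n):
    """Edwards' 'support': the log-likelihood of binomial parameter p
    given k successes in n trials. The binomial coefficient is dropped,
    since only *differences* of support between hypotheses matter."""
    return k * math.log(p) + (n - k) * math.log(1 - p)

# Hypothetical data: 7 successes in 10 trials.
k, n = 7, 10

s_half = support(0.5, k, n)       # support for the hypothesis p = 1/2
s_alt = support(0.7, k, n)        # support for the hypothesis p = 0.7

# Relative support (log likelihood ratio) of p = 0.7 over p = 1/2:
relative_support = s_alt - s_half
```

On these made-up data the relative support is positive, which in Edwards' terms says only that the data support p = 0.7 better than p = 1/2, not that either hypothesis has any probability.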

### The Difference Between Bad and Worse

He briefly mentions Ramsey and his Dutch book-style argument for the calculus of probability, and then goes on to speculate that, had Ramsey not died so young,

> … perhaps he would have argued that his demonstration that absolute degrees of belief in propositions must, for consistency's sake, obey the law of probability, did not compel anyone to apply such a theory to scientific hypotheses. Should they decline to do so (as I do), then they might consider a theory of relative degrees of belief, such as likelihood supplies. (p. 28)
In other words, it may be true that you cannot assign numbers to propositions in any way other than according to the calculus of probabilities, but you can always decline to hold a quantitative opinion in the first place (or not make a bet).

### Nulls Only

Consistent with Fisher's approach to statistics, Edwards finds it important to distinguish between null and non-null hypotheses: that is, in opposition to Neyman–Pearson theory, he refuses to explicitly formulate the alternative hypothesis against which a chance hypothesis is tested.

Here as elsewhere, this is a serious limitation with quite profound consequences:
> It should be noted that the class of hypotheses we call 'statistical' is not necessarily closed with respect to the logical operations of alternation ('or') and negation ('not'). For a hypothesis resulting from either of these operations is likely to be composite, and composite hypotheses do not have well-defined statistical consequences, because the probabilities of occurrence of the component simple hypotheses are undefined. For example, if $p$ is the parameter of a binomial model, about which inferences are to be made from some particular binomial results, '$p=\frac{1}{2}$' is a statistical hypothesis because its consequences are well-defined in probability terms, but its negation, '$p\neq\frac{1}{2}$', is not a statistical hypothesis, its consequences being ill-defined. Similarly, '$p=\frac{1}{4}$ or $p=\frac{1}{2}$' is not a statistical hypothesis, except in the trivial case of each simple hypothesis having identical consequences. (p. 5)
This should also be contrasted with Jeffreys' approach, in which the alternative hypothesis has a free parameter and thus is allowed to 'learn', while the null has the parameter fixed at a certain value.
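The quoted point can be made concrete: a simple hypothesis like $p=\frac{1}{2}$ assigns the data a single support value, while the composite '$p\neq\frac{1}{2}$' yields a whole support *function* of $p$, and collapsing that function to one number requires extra machinery (maximization over the free parameter, or a Jeffreys-style prior) which Edwards refuses to supply. A small Python sketch with invented data (7 successes in 10 trials):

```python
import math

def support(p, k, n):
    # Log-likelihood of binomial parameter p given k successes in n
    # trials (binomial coefficient dropped, as only differences matter).
    return k * math.log(p) + (n - k) * math.log(1 - p)

k, n = 7, 10

# The simple null p = 1/2 has well-defined consequences: one number.
s_null = support(0.5, k, n)

# The composite 'p != 1/2' gives a support value for *each* admissible p.
# To reduce this to a single number we must add structure the likelihood
# framework itself does not provide -- here, maximization over a grid of
# candidate values (a profile-likelihood-style choice, my illustration):
grid = [i / 100 for i in range(1, 100) if i != 50]
s_alt_max = max(support(p, k, n) for p in grid)
```

The maximized alternative can never score worse than the fixed null, which is one way of seeing why the composite hypothesis is allowed to 'learn' from the data in Jeffreys' setup while the null is not.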

### Scientists With Attitude

At several points in the book, Edwards uses the concerns of the working scientist as an argument in favor of a likelihood-based reasoning calculus. He thus faults Bayesian statistics for "fail[ing] to answer questions of the type many scientists ask" (p. 54).

This question, I presume, is "What do the data tell me about my hypotheses?" It is distinct from "What should I do?" or "Which of these hypotheses is correct?" in that it only supplies an objective, quantitative measure of support, not the conclusion:
> The scientist must be the judge of his own hypotheses, not the statistician. The perpetual sniping which statisticians suffer at the hands of practising scientists is largely due to their collective arrogance in presuming to direct the scientist in his consideration of hypotheses; the best contribution they can make is to provide some measure of 'support', and the failure of all but a few to admit the weaknesses of the conventional approaches has not improved the scientists' opinion. (p. 34)
In brief form, this leads to the following tirade against Bayesian statistics:
> Inverse probability, in its various forms, is considered and rejected on the grounds of logic (concerning the representation of ignorance), utility (it does not allow answers in the form desired), oversimplicity (in problems involving the treatment of frequency probabilities) and inconsistency (in the allocation of prior probability distributions). (p. 67–68)