At various online locations I am referred to as a ‘philosopher’ in addition to being a politician. Nonetheless, in recent months philosophy has very much taken a back seat to my political work. While this is to be expected, the realities of the European campaign being what they are, a recent late-night (and not altogether sober) conversation reminded me of the joys of philosophical inquiry, and so I thought I would take the time to share a little snippet of philosophical reasoning with you: a follow-up to my previous post on the paradox of the ravens…
A central concern in the philosophy of science has been to provide a precise account of the relation between theory and evidence, one that explains when a theory is well confirmed by supporting evidence. An approach based on Bayes’ theorem has emerged as a promising candidate in this field, explicating the relationship between theory and evidence precisely and doing so in a way which potentially solves the apparent paradoxes of confirmation. I have previously written about one such paradox, the paradox of the ravens, and there suggested that Bayesian confirmation theory, or Bayesianism, might provide a way out of this philosophical quandary. However, Bayesianism is not without its critics, and many unanswered questions arise from my earlier treatment of the theory. Therefore, in what follows I will examine the main criticisms that have been or could be made of Bayesian confirmation theory. Throughout, I will argue that Bayesianism is an innovative and useful approach in the philosophy of science; one that, when put in its proper context, illuminates precisely what any theory of confirmation should: the general relationship between theory and evidence that ought to exist for any rational agent.
Before examining the main criticisms that can be made of the theory, it will be useful to remind ourselves what exactly the approach of Bayesianism consists in. Bayes’ theorem results from a close analysis of the relationship between two conditional probability relations, and conditional probability itself can be explicated precisely in mathematical terms. To see this, let ‘P(A)’ stand for the probability that proposition A is true, ‘P(A|B)’ stand for the probability that A is true on the assumption that B is true (i.e. the conditional probability of A given B), and ‘P(A∧B)’ stand for the probability of both A and B being true. Conditional probabilities can thus be worked out as follows:

P(A|B) = P(A∧B) / P(B)
Bayes’ theorem stems from the relationship between this equation and its converse:

P(B|A) = P(A∧B) / P(A)
By multiplying the first equation by P(B) we get this:

P(A|B) × P(B) = P(A∧B)
And by multiplying the second by P(A) we get:

P(B|A) × P(A) = P(A∧B)
We can then combine the two equations thus:

P(A|B) × P(B) = P(A∧B) = P(B|A) × P(A)
Finally, we remove the middle term and divide both sides by P(B), giving us this:

P(A|B) = P(B|A) × P(A) / P(B)
This is Bayes’ theorem: it precisely explicates the relation between the conditional probability of A given B, the conditional probability of B given A, and the prior probabilities of both A and B. The genius of Bayesian confirmation theory was to realise that one could substitute for A and B the propositions that a given theory is true and that certain evidence obtains, respectively. Using ‘T’ for the theory and ‘E’ for any given evidence, the theorem as it is used by Bayesianism can be restated thus:

P(T|E) = P(E|T) × P(T) / P(E)
Moreover, we can rearrange the theorem to explicate still more precisely the relationship between theory and evidence which it describes:

P(T|E) = P(T) × [P(E|T) / P(E)]
This means that the probability of the theory’s being true given that certain evidence obtains is equal to the prior probability of the theory’s being true, multiplied by the probability of the evidence obtaining given the theory, divided by the probability of the evidence obtaining independently of the theory. I submit that this accords well with pre-philosophical intuitions about the relationship between theory and potential confirmatory evidence.
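The arithmetic involved is simple enough to sketch in a few lines of code. The following is purely illustrative: the function name and all of the probability values are my own invented examples, not anything prescribed by the theorem itself.

```python
# Illustrative sketch of Bayes' theorem applied to theory and evidence.
# All figures are invented for illustration.

def posterior(p_t, p_e_given_t, p_e):
    """P(T|E) = P(T) * P(E|T) / P(E)."""
    return p_t * p_e_given_t / p_e

# Suppose we start fairly sceptical of a theory (prior 0.2), the theory
# makes the evidence very likely (0.9), and the evidence is otherwise
# only moderately likely (0.3).
print(posterior(0.2, 0.9, 0.3))  # 0.6: the evidence strongly confirms T
```

Notice that the posterior rises above the prior exactly when the theory makes the evidence more likely than it would otherwise be, which is just what a confirmation relation should deliver.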
Nevertheless, all this talk about probability is rather vague. Just what is being talked about here? Well, there are many different ways to characterise probability, some subjective and some objective. However, in engaging with confirmation theory we are attempting to explicate the way in which a theory is or is not well supported by the evidence. In talking about ‘support’ and ‘confirmation’ we are dealing with epistemic notions, not metaphysical ones. As far as the metaphysics is concerned, whatever theories correctly describe reality are true and those which do not are false, irrespective of any available evidence either way. These considerations suggest that if Bayes’ theorem is to have any application to confirmation theory, as Bayesians wish it to, the probabilities it refers to must be subjective probabilities: epistemic notions, best considered as measures of an agent’s degree of belief in the proposition in question. Like all probabilities, however, subjective probabilities can and, if the agent concerned is being rational, should conform to the axioms of the probability calculus. In short, all values assigned to degrees of belief should lie between 0 and 1, with 0 reserved for impossible propositions and 1 for necessary ones, and the probabilities of mutually exclusive and exhaustive propositions such as P(A) and P(¬A) should sum to 1.
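The coherence constraints just described can be illustrated with a small sketch. The `coherent` helper and the example credences are hypothetical; the point is only that the axioms act as mechanical checks on an assignment of degrees of belief.

```python
# A minimal coherence check for degrees of belief, following the axioms
# stated above: every credence lies in [0, 1], and credences in A and
# not-A sum to 1. The propositions and values are hypothetical examples.

def coherent(credences):
    """credences maps each proposition name to a pair (P(A), P(not-A))."""
    for _name, (yes, no) in credences.items():
        if not (0.0 <= yes <= 1.0 and 0.0 <= no <= 1.0):
            return False            # violates the 0-to-1 axiom
        if abs(yes + no - 1.0) > 1e-9:
            return False            # P(A) + P(not-A) must equal 1
    return True

print(coherent({"all ravens are black": (0.8, 0.2)}))  # True
print(coherent({"all ravens are black": (0.8, 0.3)}))  # False: sums to 1.1
```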
It thus seems that we have the equation philosophers have been seeking for specifying the precise relationship between theory and evidence, discovered in a theorem some 250 years old. However, not all is as it seems. There are many potential problems with this approach and, rather frustratingly, most of them stem from the very fact that the probabilities Bayesianism deals in are precisely the subjective probabilities we have just established are needed for the theorem to have any application to confirmation theory.
Most notably, there are problems in quantifying the prior probabilities involved in the equation. Firstly, it is hard to imagine that for obscure theories one would have any prior degree of belief that the theory is true, and likewise perhaps for the evidence in question. If one has no degree of belief in the proposition at issue, then surely one’s degree of belief in that proposition is 0. However, if that is the case, then the theorem says that, regardless of what evidence one is presented with, one’s degree of belief in the theory given that evidence will always remain 0, since a prior of 0 multiplied by anything is still 0. This is a clear divergence from our common-sense notions about theory and evidence, since one would expect even the most hard-line sceptics about a particular theory at least to have the potential to change their minds in light of strong evidence in its favour.
Secondly, there is the problem of divergent prior probabilities. The theorem does give us a precise explication of the effect of new evidence on one’s degree of belief in a given theory. Nevertheless, if different people have different prior degrees of belief in that same theory, then the theorem will operate differently in each of their cases. This seems to be a problem, since surely we want an account of confirmation on which the acceptance of theories, given certain supporting evidence, is more or less uniform. In other words, either a theory is well supported or it is not: it simply will not do to have different people holding different degrees of belief on the basis of the same evidence.
Neither of these quantification concerns comes close to providing a fatal objection to the Bayesian approach, however. With regard to the first problem, that of zero prior probability, the Bayesian can simply point to the axioms of probability and remind the critic that 0 is a value to be reserved only for necessarily false propositions. It may prima facie seem the commonsensical thing to attribute 0 to the lack of any obvious degree of belief, but this would be to miss the point of the axioms. Instead one should remember that one’s beliefs fit into a cognitive web and, when attributing a value to the prior probability of a hitherto uncomprehended possibility, give a little thought to how likely or unlikely it is given the other beliefs that one possesses. If the proposition is not literally impossible but is outlandish and unlikely given the great bulk of one’s other beliefs, then it is appropriate to attribute a value close to, but greater than, 0. If, conversely, it is not particularly unlikely and is potentially congruent with one’s existing beliefs, but has simply never before been conceived, something closer to an entirely non-committal 0.5 would be called for.
As regards the second quantification concern, divergent prior probabilities, Bayesians can likewise remind their critics that this is a theory which, if it is to be valuable, needs to be at least in principle compatible with actual scientific practice. In the practice of science it is in fact often the case that different scientists have entirely divergent views as to the appropriateness of accepting a theory given certain evidence. It takes a long time for a theory that, even if well supported evidentially, otherwise conflicts with established doctrine to be considered acceptable by the scientific community. This is a phenomenon on which Kuhn in particular has focused attention. Nonetheless, what is clear from scientific practice is that divergent opinions as to the likelihood of theories being true, the likelihood of evidence obtaining independently, and so on, are eventually equalised over time. To use the much-repeated phrase in the literature, divergent prior probabilities are ‘washed out’ across the scientific population over a suitable period. Thus, the mere fact that divergent prior probabilities are possible does not pose a threat to Bayesianism, since they eventually converge.
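The washing-out claim can be given a toy illustration, on the simplifying assumption that two agents share the same likelihoods, P(E|T) and P(E|¬T), and repeatedly conditionalise on the same stream of evidence. All of the figures below are invented for the example.

```python
# A toy illustration of priors being 'washed out'. Two agents assign very
# different prior probabilities to the same theory T, but both update on
# the same stream of evidence via Bayes' theorem. The shared likelihoods
# (0.8 and 0.3) are invented for this sketch.

def update(p_t, p_e_given_t=0.8, p_e_given_not_t=0.3):
    # P(E) by the law of total probability, then Bayes' theorem.
    p_e = p_e_given_t * p_t + p_e_given_not_t * (1 - p_t)
    return p_e_given_t * p_t / p_e

sceptic, believer = 0.01, 0.9
for _ in range(20):  # twenty pieces of evidence predicted by T
    sceptic, believer = update(sceptic), update(believer)

print(abs(sceptic - believer) < 0.01)  # the two credences have converged
```

On these assumptions the convergence is mechanical; the philosophical question pursued below is whether pointing to such convergence amounts to an explanation of it.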
These kinds of response are, however, not accepted by one famous critic. Glymour, in his paper ‘Why I am not a Bayesian’, criticises this ‘washing out’ defence in particular, saying that it is no defence at all. In his view, Bayesians often point to the phenomenon of washing out without ever providing an explanation as to how or why it takes place. It is simply not sufficient to say that convergence takes place, no matter how formally such convergence may be characterised (indeed it can be symbolically expressed, though for brevity I have resisted the urge to restate such formal characterisation here). The whole point of any confirmation theory is to provide an account that can apply to scientific practice, but crucially to explain how it may so apply. To point to the phenomenon of washing out in actual scientific practice, without demonstrating how such convergence is actually accommodated by the theory in question, is to obfuscate the issue at hand and to fail to deal with the objection directly.
Is this a fair characterisation of the initial washing-out defence? I think so, particularly because it is hard to see how a formula so precise in its characterisation of the relation of evidence to theory, and vice versa, could allow for such convergence from initially divergent figures: surely the maths could not allow for it? Glymour raises further objections too, among them that one cannot determine a person’s subjective probabilities from their betting behaviour (a criticism of those Bayesians who have made precisely the opposite claim) and that one might have all sorts of reasons for not amending one’s degree of belief in a theory in light of the evidence. On the latter point, he asserts that since no contradiction is involved in declining so to amend one’s degree of belief, there is no rational obligation on the agent to amend his or her view on the matter.
Nevertheless, I remain committed to the view that Bayesianism is a useful and innovative approach in the philosophy of science, precisely because it does demonstrate a constraint on the intellectual actions of would-be rational agents. As has been demonstrated, the theorem derives from pure mathematical considerations, and it is hard to get more rational than that. That said, the theorem when applied to confirmation theory has to involve subjective probabilities, and these are vague, divergent, and often indeterminable. This presents something of a dilemma. However, Horwich quite rightly reminds us that Bayesianism is not to be considered anything like a complete explanation of scientific confirmation. Rather, it stands as a useful but very general guide to how one ought to react to evidence. It is useful because it explicates the relationship between theory and evidence so precisely, but it is general because the precise figures at issue are so often difficult to come by, in virtue of the elusiveness of the prior probabilities in question.
We are now, then, in a position to engage directly with Glymour’s criticisms. As regards the wash-out problem, the point is that Bayesianism is a general approach precisely because the prior probabilities involved are so often indeterminate. This is not a fault with the theory but a fault with us. We are not such rational creatures as we like to think ourselves and, remembering that, we can see Bayesianism in its proper light. It is there to guide us, to act as a prescription for proper scientific thinking. Bayesian confirmation theory cannot fully account for how and why wash out occurs, except to say that in a perfectly rational world it would not need to occur. Wash out occurs in actual scientific practice because the scientific population is more irrational than it ought to be in its assignment of prior probabilities, an irrationality that is eventually corrected over time through greater adherence to something like a Bayesian way of thinking. Bayesians could therefore account for wash out in a given population of thinkers by saying that the population tends towards being more rational, and so the resultant probabilities converge. However, I think this is to focus too much attention on what is not a great problem. The same goes for any criticism of the theory concerning the identification of prior probabilities, i.e. the zero-prior-probability problem and the betting-behaviour determinacy critique. The point is, as I have argued throughout this discussion, that Bayesianism is a prescription for being more rational. It is important that it can map onto our practice and explain in a general way a hitherto mysterious relation, but this is not to be confused with a need for the theory to conform to every initial intuition we have about prior probabilities. I have argued that we should think more carefully about just what our prior probabilities are, and in so doing be more rational in our assessment of our doxastic structure.
This is the same kind of ‘being more rational’ that is referred to when we speak of the scientific population tending towards greater rationality, and so we do in fact have a working model of how Bayesianism can overcome the objections of Glymour and others.
In conclusion, therefore, I submit that the Bayesian approach, despite the many criticisms which have been explored here, is indeed a useful and innovative theory in the philosophy of scientific confirmation. I submit that there has been a confusion as to the status of the theorem as regards its descriptive versus prescriptive function, and that when one takes note of its proper role as a prescription for rational intellectual action in the face of hitherto unconsidered evidence, it is indeed a most illuminating theory.
 McLaren, M.J., Why a pink blancmange confirms that all ravens are black…, dated 26th August 2010, available online, URL = <http://mjmclaren.blogspot.co.uk/2010/08/why-pink-blancmange-confirms-that-all.html>
 See Kuhn, T., The Structure of Scientific Revolutions (Second Edition), London: Chicago University Press, 1969
 Glymour, C., ‘Why I am not a Bayesian’, in Curd, M. and Cover, J.A. (eds.), Philosophy of Science: The Central Issues, London: W. W. Norton & Company Ltd., 1998, pp. 584-606
 Horwich, P., ‘Wittgensteinian Bayesianism’, in Curd, M. and Cover, J.A. (eds.), Philosophy of Science: The Central Issues, London: W. W. Norton & Company Ltd., 1998, pp. 607-624
- Anonymous, In praise of Bayes, dated 30th September 2000, available online, URL = <http://www.cs.ubc.ca/~murphyk/Bayes/economist.html>
- Caruana, L., PH329 Philosophy of Science Lecture Series and Associated Notes, as delivered at Heythrop College, University of London, Michaelmas Term 2010
- Glymour, C., ‘Why I am not a Bayesian’, in Curd, M. and Cover, J.A. (eds.), Philosophy of Science: The Central Issues, London: W. W. Norton & Company Ltd., 1998, pp. 584-606
- Horwich, P., ‘Wittgensteinian Bayesianism’, in Curd, M. and Cover, J.A. (eds.), Philosophy of Science: The Central Issues, London: W. W. Norton & Company Ltd., 1998, pp. 607-624
- Kuhn, T., The Structure of Scientific Revolutions (Second Edition), London: Chicago University Press, 1969
- McLaren, M.J., Why a pink blancmange confirms that all ravens are black…, dated 26th August 2010, available online, URL = <http://mjmclaren.blogspot.co.uk/2010/08/why-pink-blancmange-confirms-that-all.html>
- Papineau, D., ‘Methodology: Elements of the Philosophy of Science’, in Grayling, A.C. (ed.), Philosophy: A guide through the subject, Oxford: Oxford University Press, 1995, pp. 123-180
- Popper, K.R., ‘The propensity interpretation of probability’, in British Journal for the Philosophy of Science 37 (1959), pp. 25-42
- Worrall, J., ‘Philosophy and the Natural Sciences’, in Grayling, A.C. (ed.), Philosophy 2: Further through the subject, Oxford: Oxford University Press, 1998, pp. 199-266