One hope has been that science and philosophy, partly by extrapolating from obvious cases, might be able to fill out a set of framework principles: regulative principles of rationality in assessing beliefs, methodological rules to steer people towards the more plausible interpretations of the world. If epistemology and philosophy of science gave this guidance, we would have a map of the regulative part of the plausibility constraints on beliefs. But those parts of philosophy largely disappoint this hope. Despite some efforts starting in this direction by Popper and Lakatos, books on philosophy of science are not rulebooks for scientists trying to choose between hypotheses. One question is whether this is because, like chemists before the periodic table, we have not yet worked out the map, or whether it is because there is no “best” set of framework principles.


It is worth starting with a reminder of a familiar reason for a degree of modesty about our own beliefs. Many of them clearly result from highly contingent features of our lives and experience.

In the emotion-charged abortion debate, the rival positions are usually defended as derivable from abstract principles about rights (whether the right to life or the right to choose), personhood, ownership of one’s body, or about when human life begins. Yet it is hard to believe that people really work out some definition of personhood or some theory of rights and then find, to their surprise, that some view about abortion follows. It seems more likely that often people have some experience that pushes them to take one side in the debate and then invent the reasons afterwards.

Experience (even at second hand) of the consequences of leaving desperate women to seek out back street abortionists could push someone towards the pro-choice view. The experience of working in a hospital, and on the same day helping both to save the life of a premature baby and to abort a fetus at the same gestational age, could push someone towards the pro-life view. Our deeply held commitments may often depend in this way on the particular experiences we have had. If, instead of clinging to the abstract arguments that we hope will force our opponents to accept our view, we were more open about the contingency of how we came to acquire that view, debates could change. Some of the notorious hostility of the abortion debate could be replaced by an awareness that, if each of us had had the experiences of our opponent, we might well have ended up with each other’s beliefs.

The relatively recent discipline of social epistemology, pioneered by Alvin Goldman, explores the social conditions likely to assist or retard the acquisition of true beliefs.i And, of particular relevance to our topic, under the name social moral epistemology, this approach has been applied by Allen Buchanan to the mixture of factual and moral beliefs that characteristically make up an ideology.ii Buchanan draws on his own experience of being indoctrinated in the racist world-view of many in the Deep South of the United States. He came to see this outlook as being “built on a web of false beliefs about natural differences between blacks and whites” and felt he had been betrayed: “Those I had trusted and looked up to (my parents, aunts and uncles, pastor, teachers, and local government officials) had been sources of dangerous error, not truth.”

Knowledge and belief are not just an individual affair, but are a shared enterprise. On many matters we have to accept the authority or testimony of others. But Buchanan points out that this inevitable “epistemic deference” brings with it the danger of being misled as he had been. He goes on to explore the kinds of social institutions and attitudes that may help to reduce the dangers of our dependence on authority figures. The dangers are acute because these figures (parents, pastor, teachers, etc.) have such a large influence on the “anchoring context” part of the core of a person’s belief system.

Both the case of abortion and this indoctrination into racist ideology can trigger recognition of the contingency of how we come to believe things. It is a familiar point (and in teenagers sometimes the start of an interest in philosophy) that, if I had been born in another part of the world or at a different time, I would have had different beliefs. The explicit recognition that this is so might at least alter the over-confident tone on both sides in debates between “Islam” and “the West”.


Much modern epistemology has its origins in reactions against Descartes’ heroic attempt to escape from the contingency of his own beliefs: the project of making a completely fresh start, doubting everything he believed and rebuilding a new belief system on secure foundations.

The reaction against this has generated the central platitude of modern epistemology. You cannot give up all your current beliefs and still rebuild a belief system. If you are looking for new foundations, there has to be some reason for thinking the ones you choose are secure. To have a reason for accepting a belief into the new system, you have to believe in some criteria or principles of selection.

We cannot reconstruct our beliefs from some Archimedean point quite outside the system. New axioms can themselves be challenged. If they are not based on reasons they are dogma, and if they are based on reasons they are no longer axioms. Karl Popper suggested that the myth of the framework was linked to the axiomatic or foundationalist approach to knowledge. If we are uncomfortable with asserting the axioms as dogma, we may be driven to a kind of relativism. “Your axioms are as good as mine, and so we can have fruitful discussion only when we share some axioms.” If the belief in foundational axioms is given up, we can criticize each other’s views without this inhibiting relativism.

The point that there is no rebuilding out of nothing is often dramatized by using Otto Neurath’s famous image. In reconstructing our belief system, we are not, as “foundations” suggests, rebuilding a house. We are like a sailor having to rebuild a boat at sea. The whole boat may need rebuilding, but at any one time we have to keep enough of it afloat to enable us to reconstruct other parts. This is a liberating image. But our ideological disagreements suggest that we disagree about which bits of the boat need replacing.


If we cannot rebuild out of nothing, what we build is likely to depend on which beliefs are accepted provisionally at the start. If we say we will start only with indubitable axioms, the results are likely to be meagre. A potentially more fruitful approach is that of naturalized epistemology. Why not start, at least provisionally, with the broad picture of the world given to us by science? Then we can give scientific reasons, for instance, to support the general reliability of our senses. A species with senses that often gave its members radically wrong information about the world would be unlikely to survive in the evolutionary competition. The same would be true of a species whose ways of interpreting evidence regularly led its members off on the wrong track.

I share the optimism of Quine and others about starting by keeping these (large) extra bits of the boat intact. It is more promising than approaches that keep only a few planks afloat, whether those planks are axioms or some minimal reports of sensory experiences.

Critics, however, rightly point out the circularity involved. The scientific world picture, including evolutionary theory, rests on evidence coming from sensory observation. And the support for the reliability of sensory observation comes from evolutionary theory. Surely our whole system of knowledge cannot be based on a circular argument of this kind?

But, if the picture conveyed by Neurath’s boat is right, some circularity is unavoidable. We have to keep some beliefs afloat to look at the reliability of others. This means that the beliefs we end up with will inevitably be in mutual support. In particular, our general system of factual beliefs about the world and our framework principles will be in mutual support, as they are in the case of naturalized epistemology.

Hilary Putnam once wrote, “Madmen sometimes have consistent delusional systems; so madness and sanity can both have a ‘circular’ aspect… If we have to choose between ‘circles’, the circle of reason is to be preferred to any of the many circles of unreason”.iii But how can the preference for one circle over others be defended? Why is it the circle of reason? Or, if there is some definitional strategy that makes it so, why is “the circle of reason” to be preferred? Could we refute someone who took a different view?


We cannot escape a degree of circularity in our belief systems. Which particular beliefs we hold is likely to be influenced by the contingencies of our history and experience. Do these admissions lead to epistemological relativism, with no serious prospects of ever resolving our disagreements? My belief is that there has been some progress in epistemology and that its continued progress holds out some hope of contributing to the alleviation of our ideological conflicts.

Since Socrates, philosophy has been taught by getting people to spell out what they believe and their reasons for believing it, followed by the method of reductio ad absurdum: producing counterexamples to the beliefs or to the reasons given for them. “Surely you are not prepared to accept this implication of what you say?” In this way epistemology argues against beliefs by showing their costs. Unwelcome consequences are an implicit invitation to abandon or modify a belief. But the fact that they are unwelcome is not itself generated by logic, but by an intuitive sense of what is plausible. Logic alone is enough to exclude inconsistent belief systems, but not enough to choose between consistent ones. An epistemologist with no intuitive sense of plausibility or implausibility could still produce a map of the costs of different belief systems, but would have no way of deciding which costs were acceptable or unacceptable. Something extra is needed.

The “something extra” needed in addition to logic may be a “feel” for plausibility. It is clear that up to a point we do have such a feel. Most of us have a feel for the plausibility constraints lacking in some people with psychiatric delusions. But, if it is right that those plausibility constraints come from the core structure of a belief system, from the framework principles and the anchoring context, there may be no neutral plausibility constraints to be used in adjudicating between rival belief systems. The frequency of ideological conflict suggests that people differ in their intuitive feel for which beliefs to take as fixed points, in their intuitive feel for when to give up a belief, and in their feel for when we should be agnostic between alternative systems of belief. This could be because these questions do not have objectively correct answers. Are our different intuitions about plausibility just the product of our particular personal or group histories?

But perhaps this does not have to be so. Could there one day be the possibility of agreement on the superiority of some cognitive strategies over others? Epistemology not only shows the costs of holding some particular belief about the world, but also shows the cost of adopting a particular cognitive strategy. Few of us are impressed by the argument used by people who believe they were once abducted by aliens that they do not remember the experience because the aliens deliberately erased all memory of it.iv Few of us think much of Gosse’s way of dealing with the fossil evidence for evolution or Palme Dutt’s way of dismissing doubts about the new Communist Party line in 1939. If you can show me that my strategy for defending a cherished belief is exactly parallel to one of those ploys, you have gone some way towards undermining it. Just as we revise particular beliefs by seeing their unacceptable logical consequences, we revise our cognitive strategies for the same kind of reason.


It may be that at some deep level we humans are programmed with dispositions towards some cognitive strategies rather than others. It is obvious from our disagreements over cognitive strategies that these dispositions, if they exist, can be overridden by structural distortions caused by the desire to defend some ideological commitment or by the kinds of social brainwashing described by Allen Buchanan. But, if there is in this way a shared human epistemological deep structure, there is the hope that patient investigation of the implications of different cognitive strategies might help us to escape these structural distortions, as Allen Buchanan broke free from racist ideology. It seems to hold out the possibility of a degree of agreement if we can penetrate below our surface differences of belief to the shared cognitive deep structure.

And, if we make some of the optimistic assumptions of evolutionary psychology, the existence of such programming might even support the view that the programmed strategies are objectively good ones. One argument for holding that our senses can in general be relied on is that a species whose senses systematically gave wrong information about the environment would probably lose in the evolutionary struggle, both against species it preys on and against its own predators. In a parallel way, we can expect a species that survives, especially one that has proved as successful as ours, to have cognitive strategies that also lead to successful interpretations of what the world is like.

(Perhaps Martians, with very different brains from ours, might have very different intuitions about strategies for interpreting evidence. Our evolution might have programmed our brains to deal with one environment in one way and their evolution in a different environment might have programmed their brains in a different way. Especially if Martians were as combative as us, this might be a recipe for disastrous interplanetary ideological conflict. But, in the context of ideological conflicts within the human species, the idea of biologically programmed shared cognitive strategies has to be a bit encouraging.)

But this optimism may be unfounded. Perhaps there are no shared dispositions towards particular cognitive strategies. It could be that Jerry Fodor is right that the central psychology of human thinking is holistic in a way that excludes prescribing more specific cognitive strategies.v Or there may be a degree of shared cognitive deep structure, but not enough. Or there may be other reasons making unattainable the “regulative ideal” of human convergence on a particular set of cognitive strategies.


Suppose we take the more pessimistic view that we will continue to disagree over which costs of a cognitive strategy are great enough to make it unacceptable. There is still the possibility that we will come to agree on the epistemological map. That is, we may agree about how different beliefs and cognitive strategies hang together, and about the different costs of each of those systems of beliefs and strategies. Our disagreements would be about the relative importance of different intellectual costs and benefits. There is a political analogy. Our situation would be like that of people who agree that there is a trade-off (for instance in education) between liberty and equality of opportunity, and agree exactly on the degree of restriction of liberty needed for a given increase in equality of opportunity, but who disagree on whether it is worth it. Even to reach the epistemological equivalent of this will be a long and difficult business. But it is not obviously impossible.

If we were to reach this agreement on the epistemological map, it would be an enormous gain for dialogue. We would know that we differ over acceptable intellectual costs, and that neither of us has a decisive argument to overthrow the view of the other. (For one of us not to accept what the other regards as a decisive argument is to disagree on the epistemological map.) This would establish that, while we have differences, we have a shared view about their nature. It would also show that we are each aware of how a reasonable person could adopt the other position. It would support both mutual understanding and a respectful agreement to differ.vi


The idea of a shared epistemological deep structure may be wildly utopian. To expect continuing divergence both of beliefs and of cognitive strategies may be more realistic. Such divergence would be compatible with agreement on the epistemological map. But this more cautious hope also may be far too optimistic. Even if so, we still have grounds for thinking that (very gradually, as part of a long, slow strategy) epistemology can make a substantial contribution to alleviating ideological conflict.

Those of us who teach philosophy see its effects. It is not that people converge on a set of right answers. Sometimes they change from their original views, though not always in the same direction. Sometimes they stick to their original views. But, in either case, often (though not always) they hold their new or old views in a different way. The change is sometimes hard to pin down. There is perhaps less certainty, more awareness of the variety of alternative views, and an awareness of how any view rests on a precarious basis and may be vulnerable to objections not yet considered. This tempered confidence can be reflected in a change in the tone in which things are asserted. It often arises out of the experience of not being certain how to reply to a criticism. Perhaps sometimes it comes from having caught a glimpse of how disagreements might seem against the background of a shared epistemological map.

If, in education, experience of epistemology were as common as experience of mathematical thinking, the same tempered confidence and change of tone might start to permeate our ideological debates about religion and politics. It is not just philosophers who have interpreted the world in different ways. But they can help to change it. We are now used to the thought that through “applied ethics” philosophy can do a bit to help people think and act differently in some of their practical decisions, and do the same for some kinds of public policy decisions. Philosophy may make a new kind of contribution if the idea of “applied epistemology” starts to take root in our divided world.