
Why Do Views Differ? Bad reasoning, bad evidence or bad faith?

August 19, 2015

Follow on Twitter: @leightonvw

When two parties to a discussion differ, it is useful, in seeking to resolve this, to determine from where the difference arises, and whether it is resolvable in principle.

The reason might be that the parties to the difference have access to different evidence, or else interpret that evidence differently. Another possibility is that one of the parties is applying a different, or better, process of reasoning than the other. Finally, differences might arise from each party adopting a different axiomatic starting point. So, for example, if two parties differ in a discussion on euthanasia or abortion, or even the minimum wage, with one party strongly in favour of one side of the issue and the other strongly opposed, it is critical to establish whether this difference is evidence-based, reason-based, or derived from axiomatic differences. We are assuming here that the stance adopted by each party on an issue is genuinely held, and is not part of a strategy designed to advance some other objective or interest.

The first thing is to establish whether any amount of evidence could in principle change the mind of an advocate of a position. If not, that leads us to ask where the viewpoint comes from. Is it purely reason-based, in which case (in the sense I use the term) it should in principle be either provable or demonstrably obvious to any rational person who holds a different view? Or is the viewpoint held axiomatically, so that it is not refutable by appeal to reason or evidence? If the different viewpoints are held axiomatically by the parties to the difference, then the discussion should fall silent.

Before concluding that, however, we should determine whether the conflict in beliefs between one person and another actually is axiomatic. The first question is whether the beliefs are held by one or more parties as self-evident truths, or whether they are open to debate based on reason or evidence. In seeking to clarify this, it is instructive to determine whether the confidence of the parties to the difference could in principle or in practice be shaken if others who might be considered equally qualified to hold a view on the matter disagree. In other words, when other people’s beliefs conflict with ours, does that in any way challenge the confidence we hold in these beliefs?

Let us pose this in a slightly different way. If two parties hold conflicting views or beliefs, and each of these parties has no good reason to believe that they are the person more likely to be right, does that at least make them doubt their view, even marginally? Does it perhaps give a reason to doubt that either party is right? The more closely the answer to these questions converges on the negative, the more likely it is that the divergent views or beliefs are held axiomatically.

If the differences are not held axiomatically, both parties should in principle be able to converge on agreement. So the question reduces to establishing whether the differences arise from divergences in reasoning, which should be resolvable in principle, or else from differences in access to evidence or in the evaluation of that evidence. Again, the latter should be resolvable in principle. In some cases, a viewpoint is held almost but not completely axiomatically. It is therefore in principle open to an appeal to evidence and/or reason. The bar may be set so high, though, that the viewpoint is in practice axiomatically held.

If only one side to the difference holds a view axiomatically, or so close as to make it indistinguishable in practical terms, then the views could in principle converge by appeal to reason and evidence, but only converge to one side, i.e. the side which is holding the view axiomatically. This creates a situation in which it is in the interest of a party seeking to change the view of the other party to conceal that their own viewpoint is held axiomatically, and to represent it as reason-based or evidence-based, but only where the other party is not known to also hold their divergent view axiomatically. This leads to a game-theoretic framework in which the optimal strategy, where both parties know that the other holds a view axiomatically, is to depart the game.

In all other cases, the optimal strategy depends on how much each party knows about the drivers of the viewpoint of the other party, and the estimated marginal costs and benefits of continuing the game in an uncertain environment. In attempting to resolve such differences of viewpoint, therefore, it is critical to establish whence they arise, in order to determine the next step. If they are irresolvable in principle, it is important to establish that at the outset. If they are resolvable in principle, setting this framework out at the beginning will help identify the cause of the differences, and thus help to resolve them. What applies to two parties is generalizable to any number, though the game-theoretic framework in any particular state of the game may be more complex.
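The cost-benefit logic above can be sketched in a few lines. This is purely illustrative: the function name and the simple linear payoff model are my assumptions, not part of the argument. The idea is that a party keeps debating only while the expected gain from another round exceeds its cost, and that a known axiomatically held view on the other side drives the probability of resolution to zero.

```python
def continue_debate(p_resolvable, benefit_if_resolved, cost_per_round):
    """Toy decision rule for the persuasion 'game' described above.

    Keep debating only while the expected gain from one more round
    (chance the difference is resolvable times the benefit of resolving
    it) exceeds the marginal cost of that round.
    """
    expected_gain = p_resolvable * benefit_if_resolved
    return expected_gain > cost_per_round

# If both parties know the other holds their view axiomatically,
# p_resolvable is effectively zero, so the optimal strategy is to
# depart the game:
print(continue_debate(0.0, 100.0, 1.0))  # False: exit the game
print(continue_debate(0.4, 100.0, 1.0))  # True: keep talking
```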

In all cases, transparency in establishing whether each party’s viewpoint is axiomatically held, reason-based or evidence-based, is the welfare-superior environment, and should be aimed for by an independent facilitator at the start of the game. Addressing differences in this way helps also to distinguish whether views are being proposed out of conviction, or whether they are being advanced out of self-interest or as part of a strategy designed to achieve some other objective or interest.

So in seeking to derive a solution to the divergence of view or belief, we need to ascertain whether the differences are actually held axiomatically. To do this, we need to examine the source of the belief, and whether this can be dissected by appeal to reason or evidence.

To do so, we need to ask whether or not there are absolute ethical imperatives which can be agreed upon by all reasonable people. For example, is it reasonable to agree that people should not be treated merely as means but as ends in themselves?

Of course, people might out of self-interest choose to treat others as means, disregarding their status as human beings of equal worth, but this is different to holding that to be ethically true. The appeal to a Rawlsian ‘veil of ignorance’ argument helps here, where each person must choose whether to hold their ethical framework without knowing in advance who or where they are, poor or rich, able-bodied or disabled, male or female, free or slave.

In this context, we can ask whether there are ethical principles, views, or beliefs that all reasonable people could agree with, or that no one could reasonably reject.

Are there such?

Take, for example, this idea that people should be treated as ends, not means. What if we harm someone in order to prevent a greater harm to someone else? If we do all we can to minimize harm to them in so doing, are we treating them merely as means? Suppose, for example, you need to jam someone’s foot in a mechanism, so crushing it, in order to save another person from being killed by the mechanism. You are certainly using the other person as a means to an end, but if you deliberately did the least harm to the person commensurate with saving the other person, is that really treating the person merely as a means, given that you paid full attention to the harmed person’s well-being in saving the life of the other person?

Looked at another way, does treating people as ends and not means imply that their interests should not be sacrificed even if that creates greater overall good? In other words, is it sufficient to take each person’s interests into account or must they be fundamentally and absolutely protected? Does everyone, in other words, have an innate right to freedom, which includes independence from being constrained by another’s choice? Do we have absolute duties we owe each other arising from our equal status as persons? To what extent, in other words, are the rights and freedoms of people absolute as opposed to instrumental, and is it possible to formalize an ethical code which all reasonable people could assent to, or not reasonably reject, around this? If not, axiomatic differences remain possible. To the extent that we can, less so.

More generally, are there absolute ethical principles which hold regardless of consequences, regardless of how much benefit or harm accrues from acting upon them? This reduces to whether we conceive of morality as grounded in relations between people and the duties we owe each other, or whether it is about the relations of people to states of affairs, such as the maximizing of overall well-being, however defined. Such a definition could include happiness, knowledge, creativity, love, friendship, or else simply realisation of personal preferences or desires.

So, to summarize, I suggest that views and beliefs can be usefully classified into fundamental (or axiomatic) ethical imperatives and ethical imperatives based on reason and evidence. While reason-based and evidence-based ethical imperatives can, of course, be influenced by evidence and reason, fundamental ethical imperatives cannot.

In considering ethical imperatives which are duty-based, grounded in moral duties owed by all people to each other, as opposed to the effect upon general well-being, we can benefit from the use of a Bayesian framework.

Bayes’ theorem concerns how we formulate beliefs about the world when we encounter new data or information. The original presentation of Rev. Thomas Bayes’ work, ‘An Essay towards Solving a Problem in the Doctrine of Chances’, gave the example of a person who emerges into the world and sees the sun rise for the first time. At first, he does not know whether this is typical or unusual, or even a one-off event. However, each day that he sees the sun rise again, his confidence increases that it is a permanent feature of nature. Gradually, through a process of statistical inference, the probability he assigns to his prediction that the sun will rise again tomorrow approaches 100 per cent. The Bayesian viewpoint is that we learn about the universe and everything in it through approximation, getting closer and closer to the truth as we gather more evidence. The Bayesian view of the world thus sees rationality probabilistically.
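The sunrise example can be made concrete with Laplace’s ‘rule of succession’: starting from a uniform Beta(1, 1) prior over the chance of a sunrise, each observed sunrise nudges the posterior probability of another one towards 100 per cent. A minimal sketch, in which the function name and the choice of uniform prior are my illustrative assumptions:

```python
from fractions import Fraction

def updated_sunrise_probability(days_observed, prior_alpha=1, prior_beta=1):
    """Beta-Bernoulli updating (Laplace's rule of succession).

    After seeing the sun rise on `days_observed` consecutive days with
    no failures, the posterior probability that it rises tomorrow is
    (alpha + successes) / (alpha + beta + trials).
    """
    return Fraction(prior_alpha + days_observed,
                    prior_alpha + prior_beta + days_observed)

# The posterior climbs towards certainty: 2/3, 11/12, 101/102, 10001/10002
for days in (1, 10, 100, 10000):
    print(days, updated_sunrise_probability(days))
```

With a uniform prior the observer never reaches probability 1 after finitely many sunrises, which matches the text: confidence approaches, but only approaches, 100 per cent.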

I propose that we apply the same Bayesian perspective to Immanuel Kant’s duty-based ‘Categorical Imperative.’ This can be summarised in the form: ‘Act only according to that maxim which you could simultaneously will to be a universal law.’ On this basis, to lie or to break a promise doesn’t work as a general code of conduct, because if everyone lied or broke their promises, then the very concept of telling the truth or keeping one’s promises would be turned on its head. A society that operated according to the universal principle of lying or promise-breaking would be unworkable. Kant thus argues that we have a perfect duty not to lie or break our promises, or indeed do anything else that we could not justify being turned into a universal law.

The problem with this approach, in many eyes, is that it is too restrictive. If a crazed gunman demands that you reveal which way his potential victim has fled, you must not lie to save the victim, because lying could not reasonably be universalised as a rule of behaviour.

I propose that the application of a justification argument can solve the problem. This argument from justification, which I propose, is that we have no duty to respond to anything which is posed without reasonable appeal to duty. So, in this example, the gunman has no reasonable appeal to a duty of truth-telling from us, so we can make an exception to the general rule.

In any case, we need to assess the practical implications of Kant’s ‘universal law’ maxim from a probabilistic perspective. In the great majority of situations, we have no defence based on the argument from justification for lying or breaking a promise. So the universal expectation is that truth-telling and promise-keeping are overwhelmingly probable. The more often this turns out to be true in practice, the closer this approach converges on Kant’s absolute imperative by a process of simple Bayesian updating. As such, this is the ethical default position, a default position based on appeal to rules of conduct which should be universally willable as a general rule of behaviour, or are unreasonable to be rejected as such. Because it is rare that we need to deviate from this, the loss to the general good arising from the reduction in the credibility of truth-telling is commensurately less.
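The convergence claimed here — that rare, justified exceptions barely dent confidence in truth-telling — can be illustrated with a toy simulation of an observer applying the same Beta-Bernoulli updating to the interactions they witness. The parameter names and the fixed exception rate are my illustrative assumptions:

```python
import random

def posterior_truthfulness(n_interactions, exception_rate, alpha=1, beta=1, seed=0):
    """Simulate an observer's Bayesian confidence that people tell the truth.

    Each interaction is truthful except with probability `exception_rate`
    (the rare 'crazed gunman' style exceptions). The observer updates a
    Beta(alpha, beta) prior and reports the posterior mean.
    """
    rng = random.Random(seed)
    truths = sum(rng.random() > exception_rate for _ in range(n_interactions))
    return (alpha + truths) / (alpha + beta + n_interactions)

# With exceptions at one in a thousand, confidence in truth-telling
# remains very close to certainty after many interactions:
print(posterior_truthfulness(10000, exception_rate=0.001))
# Whereas if lying were the norm half the time, confidence collapses:
print(posterior_truthfulness(10000, exception_rate=0.5))
```

The point is the asymmetry: rare deviations leave the default position almost fully credible, while frequent deviations destroy it — which is the probabilistic reading of Kant’s maxim suggested in the text.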

In a world in which ethics is indeed based on duty, I suggest in any case that the broader conception of duty, including the appeal to the argument from justification, should inform our actions. As long as this is clearly formulated within the universal law, i.e. tell the truth except where the person asking you to do so has no right to ask it of you, the core ethical rule of action is not weakened by lying in the crazed gunman example.

But can we use this sort of approach to arrive, in principle, at an ethical framework to which all reasonable people might subscribe?

I suggest the following.

First, that there are certain principles, most fundamentally that we owe each person a duty of respect, to be treated as an end in themselves and not as a means to an end, based on our equal status as people. This is most clearly seen if we are asked to decide on the merit of this through a ‘veil of ignorance’ as to our position in the world. Secondly, we should adopt ethical codes of behaviour on the basis that they would be agreeable to all reasonable people, or could not reasonably be rejected by any of them.

As such, we have Kant’s framework of ethical imperatives, as well as T.M. Scanlon’s idea of a contract between human beings based on what we owe each other, which he summarizes as the requirement that everyone ought to follow principles that no one could reasonably reject.

As such, prohibitions on cheating, lying and breaking promises are uncontroversial as core ethical principles, because they respect our duty to treat others as ends, not as means to an end, and because a world in which cheating, lying and breaking promises is the norm is not a world in which things go for the best. So we should aim to obey these rules. Indeed, by the principle of simple Bayesian updating, we can demonstrate that on the great majority of occasions this works, and so we converge on these principles of behaviour as the default position. If occasions arise when it is clear that adherence to the principle does not work for the best, however, we may have a duty to deviate from the default position, but only on the basis of committed and sure deliberation that the exception is warranted. It is not a position to be taken lightly.

The crazed gunman demanding to know where his potential victim has fled is one such occasion. The value of the duty to the potential victim as a unique human being, as well as the damage to overall well-being, will likely outweigh the value of the consequent reduction in the value of truth-telling, though no decision to deviate from the default position should be taken lightly. There is the additional issue of the argument from justification to consider here, i.e. that the duty to tell the truth is conflicted when the person demanding you tell the truth has no moral justification to do so.

Take another example. By crushing a person’s foot in a sophisticated explosive device, you will save the lives of fifty people. The default position is not to use another person as means, but this conflicts with the duty to protect others as well.

Derek Parfit seeks to reconcile the ethical theories derivable from Kantian deontology, Scanlon-type contractualism, and consequentialism, into a so-called ‘Triple Theory’, i.e.

An act is wrong if and only if, or just when, such acts are disallowed by some principle that is:

a. One of the principles whose being universal laws would make things go best.

b. One of the only principles whose being universal laws everyone could rationally will.

c. A principle that no one could reasonably reject.

To the duty-based approach, I would add the argument from justification, i.e. there is no duty to respond to any request which is posed without reasonable appeal to duty.

The next step is to identify actual examples of differences of view or belief or action, and determine whether we can resolve these differences through a synthesis of re-configured (by appeal to justification) Kantian deontology, contractualism and consequentialism. We could call this Adapted Kantian rule consequentialism, mediated through a Bayesian filter.

To do so, I will consider the well-known stylised examples of the Trolley Problem and the Bridge Problem. In one version of the Trolley Problem, a trolley on a rail track is heading straight for five unsuspecting people, but a switch can be thrown to divert it so that it hits just one person. Is it right, with no other knowledge, to divert the trolley? In the Bridge Problem, a very heavy man can be pushed off a bridge without his knowledge to prevent a runaway locomotive from striking and killing, say, five people on the track.

Is either of these scenarios consistent with adapted Kantian rule consequentialism? In the case of the Bridge, the default position is clearly that it is wrong to deliberately take someone’s life, an extreme case of interfering with the human right to be considered an end and not a means to an end, and this would hold in almost all cases. Appeal to a consequentialist viewpoint, based on the saving of five lives for one, would seem to conflict with the idea that this sort of action would make things go best if adopted as a general rule of behaviour. A world so structured ethically would very arguably not be one in which things go best. What if the pushing of the man off the bridge crushed an explosive device that would have saved a million lives? The question that arises here is whether it is right to take each case on its merits.

To take each case on its merits, with a view to the actual consequences in each case, is an act utilitarian ethical prescription. This is, however, a very damaging ethical prescription, as it leaves the default rule very seriously, if not fatally, weakened, which in the bigger picture would conflict with the objective of making things go best.

This leaves us with a very difficult ethical dilemma. In the first case, saving the five people, it is reasonable to reject the pushing of the man off the bridge as part of the universal rule that people be treated as ends, not means. Killing one man to save a million would mean weakening the default position, just as would the act of judging the right to take an innocent life on a case-by-case basis.

The Trolley Problem is easier, because it is not a matter of deliberately killing someone, but of saving the five by diverting the trolley from their path. It is unfortunate that the one person is on the other track, but there is no deliberate intention to kill that person in order to save the others. In effect, though, the outcome is the same, so the question is whether the intention matters. It does if we address this in terms of willingness to make the action a universalisable code of behaviour.

In both cases, though, we need to consider the value of the default position, and how much damage is incurred to making things go for the best if we act in such a way as to weaken or even destroy this default position. It is this consideration which, it seems to me, lies at the heart of synthesising deontological (duty-based) and contractualist ethics-based criteria of behaviour and those based on pure consequentialism, such as maximising the greatest happiness of the greatest number. In particular, damage to the default position works to weaken the consequentialist outcome, and the more damage is done to it, the more damage is done on its own terms to the consequentialist measurement of outcome.

It is this default-based ethical calculus which to me synthesises the deontological, contractualist and consequentialist ethical frameworks, and differences of opinion in the proper ethical judgement of good and bad behaviour derive, it seems to me, from differences in the value attached to the maintenance of this default.

So belief in the absolute value of the default position would mean that it is never right to push the man off the bridge, however many millions are saved.

The value of the default position might be set by a calculus based on weighing the overall loss to the sum of well-being from weakening confidence and trust in the default position against the directly caused gain to the well-being of those who benefit from the action. This would be a fundamentally consequentialist view of the world, albeit grounded in the strategic benefits to well-being of a universally trusted default position. The value of the default position might, on the other hand, be considered absolute and axiomatically held: that it is always wrong, say, to sacrifice an innocent and unwilling person to save any number of lives, or even to use a human being as a means to achieve a goal based on some wider conception of general well-being.
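The consequentialist reading of this calculus can be written out as a toy decision rule: an exception to the default is justified only when the direct gain exceeds the expected systemic loss from eroded trust, and an axiomatically held default then corresponds to placing an infinite value on the default position. All names and the simple additive model are illustrative assumptions, not a claim about how such values could actually be measured:

```python
def exception_justified(direct_gain, p_trust_erosion, value_of_default):
    """Sketch of the default-based calculus described above.

    Deviate from the default rule (e.g. 'never sacrifice an innocent
    person') only when the directly caused gain to well-being exceeds
    the expected loss from weakened trust in the default position.
    """
    expected_systemic_loss = p_trust_erosion * value_of_default
    return direct_gain > expected_systemic_loss

# An axiomatically held default amounts to an infinite value_of_default,
# so no finite gain ever justifies an exception:
print(exception_justified(1_000_000, 0.01, float("inf")))  # False
```

On this formulation, the difference between the consequentialist and the absolutist is reduced to a single parameter: whether the value placed on the default position is finite or infinite.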

Insofar as these differences are axiomatically held, no resolution can be achieved. It might, on the other hand, be the case that the differences are the result of faulty reasoning or evidence. Either way, consideration through the lens of this default-based ethical framework might help clarify the reason for differences of view, belief and action, and even change those views, beliefs or actions.

One task now is to apply this ethical lens, not least to some of the great moral and touchstone issues which divide opinion, with a view to at least making progress in resolving differences of view, belief and action.
