
Why do views differ, and how can we resolve these differences?

When two parties to a discussion differ, it is useful, in seeking to resolve the ‘argument’, to determine from where the differences arise, and whether these differences are resolvable in principle. The reason for the difference might be that the parties have access to different evidence, or else interpret that evidence differently. Another possibility is that one of the parties is applying a different or better process of non-evidence-based reasoning from the other. Finally, the differences might arise from each party adopting a different axiomatic starting point. So, for example, if two parties differ in a discussion on euthanasia or abortion, or even the minimum wage, with one party strongly in favour of one side of the issue and the other strongly opposed, it is critical to establish whether this difference is evidence-based, reason-based, or derived from axiomatic differences. We are assuming here that the stance adopted by each party on an issue is genuinely held, and is not part of a strategy designed to advance some other objective or interest. The first thing is to establish whether any amount of evidence could in principle change the mind of an advocate of a position. If not, that leads us to ask where the viewpoint comes from. Is it purely reason-based? In that case (in the sense I use the term) it should in principle be either provable or demonstrably obvious to any rational person who holds a different view, without the need to appeal to evidence. Or is the viewpoint held axiomatically, so that it is not refutable by appeal to reason or evidence? If the divergent viewpoints are held axiomatically by the parties, then the discussion should fall silent. If they are not held axiomatically, both parties should in principle be able to converge on agreement.
So the question reduces to establishing whether the differences arise from divergences in reasoning, which should be resolvable in principle, or else from differences in access to the evidence or in the evaluation of that evidence. Again, the latter should be resolvable in principle. In some cases, a viewpoint is held almost but not completely axiomatically. It is therefore in principle open to an appeal to evidence and/or reason, but the bar may be set so high that the viewpoint is in practice axiomatically held. If only one side to the difference holds a view axiomatically, or so nearly so as to make it indistinguishable in practical terms, then the views could in principle converge by appeal to reason and evidence, but only towards one side, i.e. the side which holds its view axiomatically. This leads to a situation in which it is in the interest of a party seeking to change the view of the other party to conceal that their viewpoint is held axiomatically, and to represent it as reason-based or evidence-based, but only where the other party is not known to also hold their divergent view axiomatically. This leads to a game-theoretic framework in which the optimal strategy, in a case where both parties know that the other party holds a view axiomatically, is to depart the game. In all other cases, the optimal strategy depends on how much each party knows about the drivers of the viewpoint of the other party, and on the estimated marginal costs and benefits of continuing the game in an uncertain environment. It is therefore critical, in attempting to resolve such differences of viewpoint, to determine whence they arise, so as to decide on the next step. If they are irresolvable in principle, it is important to establish that at the outset. If they are resolvable in principle, setting this framework out at the beginning will help identify the cause of the differences, and thus help to resolve them.
What applies to two parties is generalizable to any number, though the game-theoretic framework in any particular state of the game may be more complex. In all cases, transparency in establishing whether each party’s viewpoint is axiomatically held, reason-based or evidence-based is the welfare-superior environment, and should be aimed for by an independent facilitator at the start of the game. Addressing differences in this way also helps to distinguish whether views are being proposed out of conviction, or whether they are being advanced out of self-interest or as part of a strategy designed to achieve some other objective or interest.

The Shortest Day

December 21st, 2014 was the shortest day of the year, at least in the UK, located in the Northern hemisphere of our planet.

So does that mean that the mornings should start to get lighter after that day (earlier sunrise), as well as the evenings (later sunset)? Not so, and there’s a simple reason why. The length of a solar day, i.e. the period between solar noon (the time when the sun is at its highest elevation in the sky) on one day and solar noon on the next, is not 24 hours in December, but about 30 seconds longer than that.

For this reason, solar noon drifts about 30 seconds later each day throughout December, so that by the end of the month solar events are occurring roughly 15 minutes later by a standard 24-hour clock than they did at the start of it.

Let’s say just for a moment that the hours of daylight (the time between sunrise and sunset) stayed constant through December. If the sun set at 3.50pm by the clock one day, the next solar day would run 30 seconds longer than the clock’s 24 hours, so the sun would not set the next day until 3.50pm and 30 seconds by the clock. After ten days the sun would not set until 3.55pm. So, on this assumption, the sunset would get later through the whole of December, and for the same reason the sunrise would get later through the whole of December too.
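This cumulative drift is simple enough to tabulate. Below is a minimal Python sketch of the arithmetic, assuming (as in the example above) a constant 30-second excess per solar day and a fixed 3.50pm starting sunset; the date and times are purely illustrative.

```python
from datetime import datetime, timedelta

# Assumption (from the example above): each solar day in December runs
# about 30 seconds longer than 24 clock hours, and the hours of daylight
# are held constant, so sunset drifts later in step with solar noon.
EXCESS_PER_SOLAR_DAY = timedelta(seconds=30)

sunset = datetime(2014, 12, 1, 15, 50)  # a 3.50pm clock sunset to start
for _ in range(10):
    sunset += EXCESS_PER_SOLAR_DAY      # solar events slip 30s later each day

print(sunset.strftime("%H:%M:%S"))      # ten days on: 15:55:00
```

Ten daily increments of 30 seconds add up to the five minutes in the prose above.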

In fact, the sunset doesn’t get progressively later through all of December, because the hours of daylight shorten for about the first three weeks of the month. Taken alone, this effect would make the sun set earlier and rise later each day.

These two things (the shortening hours of daylight and the extended solar day) work in opposite directions. The overall effect is that the sun starts to set later from a week or so before the shortest day, but doesn’t start to rise earlier until about a week or so after the shortest day.

So the old adage that the evenings will only start to draw out, and the mornings to get lighter, after the end of the third week of December or so, is false. The evenings have already been drawing out for several days by the shortest day, and the mornings continue to grow darker for several days after it.

There’s one other curious thing. Solar noon coincides with noon on our 24-hour clocks just four times a year (on the Greenwich meridian, at least). One of those days is Christmas Day! So when your clock reads noon on December 25th, look up to the sky and you will see the sun at its highest point. Just perfect!



Re-evaluating Kant’s ‘Categorical Imperative’

Bayes’ theorem concerns how we formulate beliefs about the world when we encounter new data or information. The original presentation of Rev. Thomas Bayes’ work, ‘An Essay towards Solving a Problem in the Doctrine of Chances’, was read in 1763, after Bayes’ death, to the Royal Society by Richard Price. In framing Bayes’ work, Price gave the example of a person who emerges into the world and sees the sun rise for the first time. At first, he does not know whether this is typical or unusual, or even a one-off event. However, each day that he sees the sun rise again, his confidence increases that it is a permanent feature of nature. Gradually, through a process of statistical inference, the probability he assigns to his prediction that the sun will rise again tomorrow approaches 100 per cent. The Bayesian viewpoint is that we learn about the universe and everything in it through approximation, getting closer and closer to the truth as we gather more evidence. The Bayesian view of the world thus sees rationality probabilistically.
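Price’s sunrise example can be made concrete with Laplace’s later ‘rule of succession’, which gives the posterior probability of another sunrise after n consecutive observed sunrises, assuming a uniform prior over the unknown underlying rate. A short Python sketch:

```python
from fractions import Fraction

def prob_sunrise_tomorrow(n_sunrises: int) -> Fraction:
    """Laplace's rule of succession: after n successes in n trials,
    under a uniform prior on the unknown success rate, the posterior
    probability of a further success is (n + 1) / (n + 2)."""
    return Fraction(n_sunrises + 1, n_sunrises + 2)

# Confidence grows with each sunrise, approaching (but never reaching) 1.
print(prob_sunrise_tomorrow(1))     # 2/3
print(prob_sunrise_tomorrow(10))    # 11/12
print(prob_sunrise_tomorrow(3650))  # after ten years: 3651/3652
```

The probability approaches 100 per cent only in the limit, matching the prose above: no finite run of sunrises makes tomorrow’s sunrise certain.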

As such, Bayes’ perspective on cause and effect can be contrasted with that of David Hume, the logic of whose argument on this issue is contained in ‘An Enquiry Concerning Human Understanding’. According to Hume, we cannot justify our assumptions about the future based on past experience unless there is a law that the future will always resemble the past. No such law exists. Therefore, we have no fundamentally rational support for believing in causation. Bayes instead applies and formalizes the laws of probability to the science of reason, to the issue of cause and effect.

I propose that we apply the same Bayesian perspective to Immanuel Kant’s duty-based ‘Categorical Imperative.’ This can be summarised in the form: ‘Act only according to that maxim which you could simultaneously will to be a universal law.’ On this basis, to lie or to break a promise doesn’t work as a practical imperative, because if everyone lied or broke their promises, then the very concept of telling the truth or keeping one’s promises would be turned on its head. A society that worked according to the universal principle of lying or promise-breaking would be unworkable. Kant thus argues that we have a perfect duty not to lie or break our promises, or indeed do anything else that we could not justify being turned into a universal law.

The problem with this approach, in many eyes, is that it is too restrictive. If a crazed gunman demands that you reveal which way his potential victim has fled, you must not lie to save the victim, because lying could not be universalised as a rule of behaviour.

I propose that the application of a justification argument can solve the problem. This argument from justification is that you have no duty to respond to any request which is posed without reasonable appeal to duty. So, in this example, the gunman has no reasonable appeal to duty from you, so you can make an exception to the general rule.

Why is this consistent with the practical implications of Kant’s ‘universal law’ maxim? It’s an issue of probability. In the great majority of situations, you have no defence based on the argument from justification for lying or breaking a promise. So the universal expectation is that truth-telling and promise-keeping are overwhelmingly probable. The more often this turns out to be true in practice, the closer this approach converges on Kant’s absolute imperative by a process of simple Bayesian updating.

In a world in which ethics is indeed based on duty, it is this broader conception of duty which, I propose, should inform our actions.

The Abilene Paradox

The Abilene Paradox is a classic management parable. Does it sound familiar in your family or workplace? If so, it may be time to do something about it.


On a hot afternoon visiting in Coleman, Texas, the family is comfortably playing dominoes on a porch, until the father-in-law suggests that they take a trip to Abilene [53 miles north] for dinner. The wife says, “Sounds like a great idea.” The husband, despite having reservations because the drive is long and hot, thinks that his preferences must be out-of-step with the group and says, “Sounds good to me. I just hope your mother wants to go.” The mother-in-law then says, “Of course I want to go. I haven’t been to Abilene in a long time.”

The drive is hot, dusty, and long. When they arrive at the cafeteria, the food is as bad as the drive. They arrive back home four hours later, exhausted.

One of them dishonestly says, “It was a great trip, wasn’t it?” The mother-in-law says that, actually, she would rather have stayed home, but went along since the other three were so enthusiastic. The husband says, “I wasn’t delighted to be doing what we were doing. I only went to satisfy the rest of you.” The wife says, “I just went along to keep you happy. I would have had to be crazy to want to go out in the heat like that.” The father-in-law then says that he only suggested it because he thought the others might be bored.

The group sits back, perplexed that they together decided to take a trip which none of them wanted. They each would have preferred to sit comfortably, but did not admit to it when they still had time to enjoy the afternoon.


Originally stated by George Washington University professor Jerry B. Harvey.

The Vaughan Williams ‘Completeness Problem’

If there is more than one possible universe, impenetrable to the others, is it enough that God is God of this universe, in order to be God, or does God have to be God of all of the possible universes?

Are ethical differences a state of mind or a state of evidence?

Ethical imperatives can, I suggest, be usefully classified into reason-based ethical imperatives and evidence-based ethical imperatives. Evidence-based ethical imperatives can, of course, be influenced by evidence. But reason-based ethical imperatives cannot. Both are subject to the application of reason for their formulation, however, and both can be challenged by reason.

Both types of ethical imperative are duty-based. To the extent that their justification depends on evaluating their particular consequences, they are not true imperatives.

When acting in accordance with an ethical imperative, I suggest also that a Law of Justification holds, i.e. nobody has a duty to undertake any particular action in response to others unless that person has a right to demand that action arising out of an absolute or personal ethical imperative of those being called upon to act. In other words, there is no duty to respond to any request which is posed without reasonable appeal to duty. This is, I suggest, a universal principle.

In the context of evaluating an evidence-based ethical imperative, I propose that greater weight should be attached (other things equal) to the evidence of a person whose personal incentive (including self-interest or self-regard) to offer that evidence or opinion is less. Particular weight should be attached, other things equal, to evidence offered by a person who has a personal disincentive (including harm to self-interest or self-regard) to offer it.

In evaluating subjective perception of evidence, weight should be given to a consideration of any implicit incentives, self-interest or self-regard which might affect that perception.

In assessing the value of an evidence-based ethical imperative, all evidence and opinion relevant to that imperative should reflect an objective evaluation of how consistent that evidence is with the imperative relative to its consistency with an alternative and competing ethical imperative, mediated by the degree of prior belief in these alternative personal ethical frameworks.

Simply put, then, I advocate making a clear distinction between reason-based ethics and evidence-based ethics. Appeal to evidence cannot influence the first; appeal to reason can influence both. If a position is taken that some action is absolutely wrong, which no amount of evidence could contradict, I term this a reason-based ethical judgement, because it is open to influence, at least in principle, by reason, but not by evidence. If it is possible to change one’s mind based on the production of some evidence, then that is an evidence-based ethical judgement. No mind should ever be closed to reason when formulating or revising any ethical imperative.

It is very important to distinguish these, I propose, in order to help resolve or arbitrate ethical differences or to decide whether they are likely to be resolvable.

The next step is to identify actual examples of these ethical imperatives, and to probe how we might best resolve them.

If it is determined that the ethical positions under consideration contain no ethical imperatives but are instead consequence-based judgements, matters move on in a different direction, but can again be categorised into evidence-based and reason-based consequentialist ethics, and then considered from that perspective.

Truth and sound justice depend on sound ethics, among other things.

The task now is to try to establish, by the application of reason and (perhaps) evidence, which ethical frameworks are sound and which are not.

The Most Important Idea in Probability. Truth and Justice Depend on Us Getting it Right.

People have been wrongly hanged because of it. People have been wrongly punished because of it. And people have suffered unnecessarily because of it. It’s the mistaken belief that the probability of something being true, given the evidence, is the same as the probability of seeing that evidence if the thing is true. This has had, and continues to have, devastating implications. In fact, the probabilities of these two things will often diverge enormously.

To dramatize the problem I will introduce you to the Shakespearean tragedy, Othello. In the play, Othello’s wife Desdemona is set up by the evil Iago, who plants a treasured keepsake that Othello had given her in the home of young Cassio. When Othello comes upon the keepsake, he soon leaps to the mistaken conclusion that Desdemona has been unfaithful to him, with tragic consequences.

Othello made the mistake of believing that the probability that Desdemona was unfaithful, given the evidence of the treasured keepsake being found in Cassio’s home, was the same as the probability that the keepsake would have been found in Cassio’s home if Desdemona had been unfaithful to him. Easy mistake. We do it all the time in everyday life, usually with less dramatic implications. More importantly, juries do it all the time, as do practitioners in other fields, like medicine.

Let’s put it another way. What is the chance that someone who has been repeatedly shot in a flat that you rent out would die? Very high. The evidence here is the dead person, the gunshot wounds and the fact that you have access to the flat. The hypothesis is that you are the murderer. Now, the probability we would see that evidence if you ARE the murderer is 100%. But the probability that you are the murderer given that we see that evidence is much lower. There are perhaps many different people who could have committed the murder, even if you are one of them. This seems obvious, and when stated this way it IS obvious, but in real life the problem is usually not stated or understood so clearly, and is often disguised.

This is sometimes referred to as the ‘Prosecutor’s Fallacy.’ It is the fallacy of arguing that someone is guilty because the evidence is consistent with their guilt, i.e. because the evidence would be likely if they were guilty. This is often enough to convict, because that measure is often confused with the probability that the accused is guilty given that the evidence exists. They are totally different things. But even when they are clearly distinguished, the probability we assign to guilt can be seriously over-estimated because of a common cognitive failing known as the prior indifference fallacy. This is the fallacy of believing that the likelihood that something is true rather than false, when we have little prior idea, starts out as 50-50. This is just not so without proper justification, but the implications of this belief, which may be implicit, are potentially huge. The prior probability, in the absence of any evidence, is simply not 50-50 unless there is a very good reason to believe that to be so before we see any evidence. But without the evidence, what good reason could there be for believing that to be so? Unless we can anchor this prior properly, all successive evidence-based reasoning will be flawed.

Fortunately for us, there is a rule used by those conversant with the laws of probability which can in fact help determine the actual relation between the truth of a hypothesis and the evidence relating to that hypothesis. The solution it arrives at is very rarely the same as would be arrived at without it. It is called Bayes’ Rule, but not many people know it, or how to apply it. Until more people do, the relationship between truth and justice is likely to remain severely strained.
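As a minimal sketch of how Bayes’ Rule separates the two quantities, here is the flat example from above in Python. The numbers are illustrative assumptions, not from the text: suppose 20 people had access to the flat, giving a prior of 1-in-20, and suppose the evidence would exist with certainty whichever of them were the murderer.

```python
def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' Rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Illustrative assumptions: 20 people had access to the flat, so the prior
# that you in particular are the murderer is 1/20. The evidence (the body,
# the wounds, your access) is certain if you are guilty, P(E|H) = 1, but it
# would equally exist if any of the other 19 were guilty, so P(E|~H) = 1 too.
p_guilt = posterior(prior=1 / 20, p_e_given_h=1.0, p_e_given_not_h=1.0)
print(p_guilt)  # 0.05 — evidence fully consistent with guilt, yet guilt stays improbable
```

Confusing P(E|H) = 1 with P(H|E) here would turn a 5 per cent probability of guilt into an apparent certainty, which is precisely the Prosecutor’s Fallacy.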

