
The Abilene Paradox


The Abilene Paradox is a classic management parable. Does it sound familiar in your family or workplace? If so, it may be time to do something about it.

On a hot afternoon in Coleman, Texas, a family is comfortably playing dominoes on a porch, until the father-in-law suggests that they take a trip to Abilene (53 miles north) for dinner. The wife says, “Sounds like a great idea.” The husband, despite having reservations because the drive is long and hot, thinks that his preferences must be out of step with the group and says, “Sounds good to me. I just hope your mother wants to go.” The mother-in-law then says, “Of course I want to go. I haven’t been to Abilene in a long time.”

The drive is hot, dusty, and long. When they arrive at the cafeteria, the food is as bad as the drive. They arrive back home four hours later, exhausted. One of them dishonestly says, “It was a great trip, wasn’t it?” The mother-in-law says that, actually, she would rather have stayed home, but went along since the other three were so enthusiastic. The husband says, “I wasn’t delighted to be doing what we were doing. I only went to satisfy the rest of you.” The wife says, “I just went along to keep you happy. I would have had to be crazy to want to go out in the heat like that.” The father-in-law then says that he only suggested it because he thought the others might be bored.

The group sits back, perplexed that they together decided to take a trip which none of them wanted. Each would have preferred to sit comfortably, but did not admit to it when they still had time to enjoy the afternoon.

The paradox was originally described by George Washington University professor Jerry B. Harvey.

The Vaughan Williams ‘Completeness Problem’

If there is more than one possible universe, impenetrable to the others, is it enough that God is God of this universe, in order to be God, or does God have to be God of all of the possible universes?

Are ethical differences a state of mind or a state of evidence?

Ethical imperatives can, I suggest, be usefully classified into fundamental (or axiomatic) ethical imperatives and ethical imperatives based on reason and evidence. While reason-based and evidence-based ethical imperatives can, of course, be influenced by evidence and reason, fundamental ethical imperatives cannot.

Fundamental ethical imperatives are duty-based. To the extent that their justification depends on evaluating their particular consequences, they are not fundamental ethical imperatives in this sense.

When acting in accordance with an ethical imperative, I suggest that a Law of Justification holds, i.e. nobody has a duty to undertake any particular action in response to another person unless that person has a reasonable duty-based right to demand that action. In other words, there is no duty to respond to any request which is posed without reasonable appeal to duty. This is, I suggest, a universal principle.

In the context of evaluating a reason-based or evidence-based ethical imperative, I propose that greater weight should be attached (other things equal) to the evidence or opinion of a person whose personal incentive (including self-interest or self-regard) to offer that evidence or opinion is less. Particular weight should be attached, other things equal, to evidence or opinion offered by a person who has a personal disincentive (including harm to self-interest or self-regard) to offer it.

In evaluating subjective perception of evidence, weight should be given to a consideration of any implicit incentives, self-interest or self-regard which might affect that perception.

In assessing the value of a reason or evidence-based ethical imperative, all evidence and reason relevant to that imperative should reflect an objective evaluation of how consistent that evidence is with the imperative relative to its consistency with an alternative and competing ethical imperative, mediated by the degree of prior belief in these alternative personal ethical frameworks.

Simply put then, I advocate making a clear distinction between fundamental (or axiomatic) ethics and reason-based or evidence-based ethics. Appeal to reason and evidence cannot influence the former, but both can influence the latter. So if a position is taken that some action is absolutely wrong, which no amount of reason or evidence could contradict, I term this a fundamental ethical judgement. If it is possible to change one’s mind based on the production of some reasoning or evidence, then that is a reason-based or evidence-based ethical judgement.

It is very important to distinguish these in order to help resolve or arbitrate ethical differences or to decide whether they are likely to be resolvable.

The next step is to identify actual examples of these ethical imperatives, and to probe how we might best resolve them.

If it is determined that the ethical positions under consideration contain no ethical imperatives but are instead consequence-based judgements, matters move on in a different direction, but can again be categorised into evidence-based and reason-based consequentialist ethics, and then considered from that perspective.

Truth and sound justice depend on sound ethics, among other things.

The task now is to try to establish which ethical frameworks are sound and which are not, where relevant by the application of reason and (perhaps) evidence.

Update at: https://leightonvw.com/2015/08/19/who-do-views-differ-is-there-any-reason-for-it/

The Most Important Idea in Probability. Truth and Justice Depend on Us Getting it Right.

People have been wrongly hanged because of it. People have been wrongly punished because of it. And people have suffered unnecessarily because of it. It’s the mistaken belief that the probability of something being true, given the evidence, is the same as the probability of seeing that evidence if the thing is true. This can have, has had, and continues to have devastating implications. In fact, the probabilities of these two things will often diverge enormously.

To dramatize the problem I will introduce you to the Shakespearean tragedy, Othello. In the play, Othello’s wife Desdemona is set up by the evil Iago, who plants a treasured keepsake that Othello had given her in the home of young Cassio. When Othello comes upon the keepsake, he soon leaps to the mistaken conclusion that Desdemona has been unfaithful to him, with tragic consequences.

Othello made the mistake of believing that the probability that Desdemona was unfaithful, given the evidence of the treasured keepsake being found in Cassio’s home, was the same as the probability that the keepsake would have been found in Cassio’s home if Desdemona had been unfaithful to him. Easy mistake. We do it all the time in everyday life, usually with less dramatic implications. More importantly, juries do it all the time, as do practitioners in other fields, like medicine.

Let’s put it another way. What is the chance that someone who has been repeatedly shot in a flat that you rent out would die? Very high. The evidence here is the dead person, the gunshot wounds and the fact that you have access to the flat. The hypothesis is that you are the murderer. Now, the probability we would see that evidence if you ARE the murderer is 100%. But the probability that you are the murderer given that we see that evidence is much lower. There are perhaps many different people who could have committed the murder, even if you are one of them. This seems obvious, and when stated this way it IS obvious, but in real life the problem is usually not stated or understood so clearly, and is often disguised.

This is sometimes referred to as the ‘Prosecutor’s Fallacy.’ It is the fallacy of making out that someone is guilty because the evidence is consistent with their guilt. This is often enough to convict, because this measure, the probability of the evidence given guilt, is often confused with the probability that the accused is guilty given that the evidence exists. They are totally different things. But even when they are clearly distinguished, the probability we assign to guilt can be seriously over-estimated because of a common cognitive failing known as the prior indifference fallacy. This is the fallacy of believing that the likelihood that something is true rather than false, when we have little prior idea, starts out as 50-50. This is just not so without proper justification, but the implications of this belief, which may be implicit, are potentially huge. The prior probability, in the absence of any evidence, is simply not 50-50 unless there is a very good reason to believe that to be so before we see any evidence. Unless we can anchor this prior properly, all successive evidence-based reasoning will be flawed.

Fortunately for us, there is a rule used by those conversant with the laws of probability which can in fact help determine the actual relation between the truth of a hypothesis and the evidence relating to that hypothesis. The solution it arrives at is very rarely the same as would be arrived at without it. It is called Bayes’ Rule, but not many people know it, or how to apply it. Until more people do, the relationship between truth and justice is likely to remain severely strained.
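As a minimal sketch of how Bayes’ Rule resolves the flat example above, suppose, purely for illustration, that 100 people could plausibly have committed the murder, so the prior probability that any particular one of them (including you) is guilty is 1/100, and suppose the evidence would look exactly the same whichever of them did it. The numbers are mine, not established facts of any case:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' Rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|not H)P(not H)]."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Evidence is certain if you are guilty, and just as certain if one of
# the other 99 plausible suspects is guilty instead.
p = posterior(prior=0.01, p_e_given_h=1.0, p_e_given_not_h=1.0)
print(round(p, 2))  # 0.01: P(evidence | guilty) is 100%, yet P(guilty | evidence) is 1%
```

The gap between those two numbers is exactly the gap the Prosecutor’s Fallacy papers over; the prior does the work that the prior indifference fallacy skips.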

Further Reading and Links

https://selectnetworks.net

Why is Everything in the Universe Just Right?

It shouldn’t be possible for us to exist. But we do. That’s the sort of puzzle I like exploring. So I will.

Let’s start with the so-called ‘Cosmological Constant.’ This is an extra term Einstein added to the equations of general relativity when describing a non-expanding universe. It is needed to explain why a static universe doesn’t collapse in upon itself through the action of gravity. It’s true that the force of gravity is infinitesimally small compared to the electromagnetic force, but it has a lot more influence on the universe because all the positive and negative electrical charges in the universe somehow seem to balance out. Indeed, if there were just a 0.00001 per cent difference in the net positive and negative electrical charges within a body, it would be torn apart and cease to exist. The cosmological constant, therefore, is added to the laws of physics simply to balance the force of gravity contributed by all of the matter in the universe. What it represents is a sort of unobserved “energy” in the vacuum of space which possesses density and pressure, and which prevents a static universe from collapsing in upon itself.

But we now know from observation that galaxies are actually moving away from us and that the universe is expanding. In fact, observations in 1998 of very distant supernovae, including with the Hubble Space Telescope, showed that the Universe is expanding more quickly now than in the past. So the expansion of the Universe has not been slowing due to gravity, but has been accelerating. We also know how much unobserved energy there is, because we know how it affects the Universe’s expansion.

But how much should there be? We can calculate this using quantum mechanics. The easiest way to picture this is to visualize “empty space” as containing “virtual” particles that continually form and then disappear. This “empty space”, it turns out, should “weigh” about 10 to the power of 93 grams per cubic centimetre.

Yet the actual figure differs from that predicted by a factor of 10 to the power of 120: the predicted “vacuum energy density” is 10 to the power of 120 times too big. That’s a 1 with 120 zeros after it. So something is cancelling out all this “dark” energy, making it 10 to the power of 120 times smaller in practice than it should be in theory.

Now this is very fortuitous. If the cancellation were just one power of ten different, 10 to the power of 119, then galaxies could not form, as matter would not be able to condense: no stars, no planets, no life. So we are faced with the mind-boggling fact that the positive and negative contributions to the cosmological constant cancel to 120-digit accuracy, yet fail to cancel beginning at the 121st digit. In fact, the cosmological constant must be zero to within one part in roughly 10 to the power of 120 (and yet be non-zero), or else the universe either would have dispersed too fast for stars and galaxies to have formed, or would have collapsed upon itself long ago.

How likely is this by chance? Essentially, it is the equivalent of tossing a coin, needing to get heads 400 times in a row, and achieving it. Go on. Do you feel lucky?

Now, that’s just one constant that needs to be just right for galaxies and stars and planets and life to exist. There are quite a few others, independent of this, which have to be equally just right. We can talk about those another time, but this I think sets the stage. I’ve heard this called the fine-tuning argument. I’m now rather interested in finding out who or what is composing this very fine tune.
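The coin-toss analogy can be checked directly: the probability of 400 heads in a row is (1/2) to the power of 400, and taking logarithms shows this is indeed odds of roughly 1 in 10 to the power of 120:

```python
import math

# The chance of 400 heads in a row is (1/2)**400.  Its base-10 logarithm
# shows this matches a 1-in-10**120 coincidence, as claimed above.
log10_odds = 400 * math.log10(2)
print(round(log10_odds, 1))  # 120.4, i.e. odds of about 1 in 10**120
```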

Update at: https://leightonvw.com/2015/08/03/why-is-there-something-rather-than-nothing/

Further Reading and Links

https://selectnetworks.net

The Vaughan Williams Possibility Theorem

If something might or might not exist, but is unobservable, it is more likely to exist than something which could be observed with some positive probability but is not observed. More generally, if something might or might not exist, it is more likely to exist if it is less likely to be observed, and is not observed, than something else which is more likely to be observed, and is not observed.

Further Reading and Links

https://selectnetworks.net

See application of the VW Possibility Theorem to the Black Raven Paradox. Link: https://leightonvw.com/2014/12/11/seeing-a-blue-tennis-shoe-makes-it-more-likely-that-flamingos-are-pink/

See application of the VW Possibility Theorem to a unified theory of science. Link: https://leightonvw.com/2014/11/28/430/

See application of the VW Possibility Theorem to the Boy-Girl Paradox

https://leightonvw.com/2014/11/27/the-boy-girl-paradox/


Towards a Unified Theory of Science: The Search for Truth

When we pose a question, it is usual that we want an answer. Sometimes the answer is clear, because it is defined to be what it is, or else is true as a matter of logic.

For example, what is 2 plus 2? If your number system defines the answer to be 4, then it is 4.

If I roll two standard dice, the highest total I can achieve is 12. So if I am asked what is the probability that I will roll a total of 14, the answer is zero.
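Both claims about the dice can be verified by brute enumeration of the 36 equally likely outcomes:

```python
from itertools import product

# Enumerate all 36 equally likely outcomes of two standard dice.
totals = [a + b for a, b in product(range(1, 7), repeat=2)]
print(max(totals))            # 12: the highest achievable total
print(totals.count(14) / 36)  # 0.0: a total of 14 is impossible
```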

For most questions, however, which are not true by definition or logic, there is no sure answer, only various levels of probability.

Similarly, for any set of observations, the rule or set of rules that gave rise to these observations might not be clear. There may be a large number of different explanations which are consistent with the data.

For example, what rule gives rise to the number sequence 1,3,5,7? If we know this, it will help us to predict what the next number in the sequence is likely to be, if there is one.

Two hypotheses spring instantly to mind. It could be: 2n-1, where n is the step in the sequence. So the third step, for example, gives 2×3-1 = 5. If this is the correct rule generating the observations, the next step in the sequence will be 9 (2×5-1).

But it’s possible that the rule generating the number sequence is: 2n-1 + (n-1)(n-2)(n-3)(n-4). For each of the first four steps the added product contains a zero factor, so the sequence is again 1, 3, 5, 7: the third step, for example, gives 2×3-1 + (3-1)(3-2)(3-3)(3-4) = 5 + 0 = 5. In this case, however, the next step in the sequence will be 2×5-1 + (4)(3)(2)(1) = 9 + 24 = 33.

So if this is all the information we have, we have two different hypotheses about the rule generating the data. How do we decide which is more likely to be true? In general, when we have more than one hypothesis, each of which could be true, how can we decide which one actually is true?
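The two rival rules can be written out directly, confirming that they agree on every observed term and disagree at the very next one:

```python
def rule_a(n):
    """Simple hypothesis: 2n - 1 (the odd numbers)."""
    return 2 * n - 1

def rule_b(n):
    """Rival hypothesis: 2n - 1 plus a product that vanishes for n = 1..4."""
    return 2 * n - 1 + (n - 1) * (n - 2) * (n - 3) * (n - 4)

# Both rules reproduce the observed data exactly...
print([rule_a(n) for n in range(1, 5)])  # [1, 3, 5, 7]
print([rule_b(n) for n in range(1, 5)])  # [1, 3, 5, 7]

# ...but diverge at the very next step.
print(rule_a(5), rule_b(5))  # 9 33
```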

For the answer, we need to turn to some basic principles of scientific enquiry. I list these as Epicurus’ Principle, Occam’s Razor, Bayes’ Theorem, Popperian ‘Falsifiability’, Solomonoff Induction and the Vaughan Williams ‘Possibility Theorem.’

To address these, and how they contribute to a grand unified theory of scientific enquiry, is beyond the scope of this post, but I can at least provide a basic explanation of the terms.

Epicurus’ Principle is the idea that if there are a number of different possible truths, we should keep open the possibility that any of them might be true until we are forced by the evidence to do otherwise. Otherwise stated, it is the maxim that if more than one theory is consistent with the known observations, keep them all.

Occam’s Razor is the idea that the theory which explains all (or the most) and assumes the least is most likely. This is totally consistent with Epicurus’ Principle, with the additional insight that a simpler theory consistent with known observations is more likely to be true.

Bayes’ Theorem is the idea that the likelihood of a hypothesis being true is a combination of the likelihood of it being true before some new evidence arises and the likelihood of the new evidence arising if the hypothesis is true and if the hypothesis is false. Its most critical insight is that the probability of a hypothesis being true given the evidence is a very different thing to the probability of the evidence arising given that the hypothesis is true.

Popperian ‘Falsifiability’ is the idea that a scientific hypothesis should be testable and falsifiable. Otherwise stated, it notes that a single observational event may prove hypotheses wrong, but no finite sequence of events can verify them correct.

Solomonoff induction is the idea that the information contained in the various explanations consistent with known observations can in principle be reduced to binary sequences, and that the shorter the binary sequence the more likely that explanation of the observations is to be true.

The Vaughan Williams ‘Possibility Theorem’ states that: “If something that might or might not exist is unobservable, or is less likely to be observed, it is more likely to exist than if it can be observed (or is more likely to be observed) but is not observed.” This is critical when assessing how the probability of a hypothesis being true might be affected by information which potentially exists and is relevant but is missing because it is for whatever reason unobserved or unobservable.

Combining these principles into a unified framework can help identify the truth based on known and potentially missing observations.
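As a toy sketch of how some of these principles might combine (with illustrative numbers only): keep every hypothesis consistent with the data (Epicurus), weight each by a simplicity prior (Occam and Solomonoff, here crudely proxied by the written length of the rule rather than a true binary code length), and let Bayes’ Theorem update those weights as observations arrive:

```python
# Toy combination of the principles above.  Description length is a crude
# stand-in for Solomonoff's binary-code length; the lengths are illustrative.
hypotheses = {
    "2n-1": {"desc_len": 4, "predict": lambda n: 2 * n - 1},
    "2n-1+(n-1)(n-2)(n-3)(n-4)": {
        "desc_len": 25,
        "predict": lambda n: 2 * n - 1 + (n - 1) * (n - 2) * (n - 3) * (n - 4),
    },
}

# Occam/Solomonoff-style prior: weight 2**(-description length), normalised.
priors = {h: 2.0 ** -v["desc_len"] for h, v in hypotheses.items()}
total = sum(priors.values())
priors = {h: p / total for h, p in priors.items()}

# Both hypotheses fit the data 1, 3, 5, 7 perfectly (Epicurus: keep both),
# so Bayes' Theorem leaves the priors unchanged.  The simpler rule dominates
# unless a later observation (e.g. a fifth term of 33) falsifies it (Popper).
print(max(priors, key=priors.get))  # 2n-1
```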

That is the next step.

Further Reading and Links

https://selectnetworks.net

The Halloween Stock Market Indicator – Guide Notes

In 1694, the Bank of England was founded by Act of Parliament, with the original purpose of acting as the Government’s banker and debt-manager. 1694 was also the year in which the French Enlightenment writer and philosopher Francois-Marie Arouet, better known as Voltaire, was born.

It is also the year from which we can trace the so-called ‘Halloween Effect’. This is the effect seminally confirmed in the leading academic journal, the American Economic Review, in 2002, by researchers Sven Bouman and Ben Jacobsen. In a paper titled ‘The Halloween Indicator, Sell in May and Go Away: Another Puzzle’, they test the hypothesis that stock market returns tend to be significantly higher in the November-April period than in the May-October period, and find it to be true in 36 of the 37 developed and emerging markets in their sample between 1973 and 1998, though it can be observed in their UK data back to 1694. They further find that the effect is particularly pronounced in the countries of Europe and that it persists over time. The puzzle was especially noteworthy because the anomaly had been widely recognised for years, though not previously rigorously tested, yet it still persists.

Some analysts trace this back to the practice of the landed classes of selling off stocks in May of each year as they headed to their country estates for the summer months, and re-investing later in the year. Times have moved on, but summer vacations and attitudes may have not. That’s a theory, at least, but a strange one to explain modern-day investment strategies if true.

In a bigger follow-up study, published in 2012, Jacobsen, now working with finance professor Cherry Yi Zhang, looked at 108 countries, using over 55,000 observations, including more than 300 years of UK data. And guess what? They found the effect confirmed in 81 of the 108 countries, with post-Halloween returns out-performing pre-Halloween returns by 4.52 per cent on average, and by 6.25 per cent over the most recent 50 years taken alone.

Strange but true! The Halloween Indicator seems to be working better than ever.

So let’s say it’s Halloween today. Time to get stuck into the stock market? You decide!
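As a rough sketch, here is how one might compute the Halloween split on a monthly return series. The data below are synthetic, invented purely to show the calculation; substitute a real monthly series to test the indicator for yourself:

```python
from statistics import mean

# November-April is the post-Halloween "winter" window described above.
WINTER = {11, 12, 1, 2, 3, 4}

def halloween_split(monthly_returns):
    """monthly_returns: list of (month_number, return) pairs.
    Returns (average Nov-Apr return, average May-Oct return)."""
    winter = [r for m, r in monthly_returns if m in WINTER]
    summer = [r for m, r in monthly_returns if m not in WINTER]
    return mean(winter), mean(summer)

# Ten years of made-up data: winter months return 1.2%, summer months 0.3%.
data = [(m, 0.012 if m in WINTER else 0.003) for m in range(1, 13)] * 10
w, s = halloween_split(data)
print(f"Nov-Apr avg: {w:.3f}, May-Oct avg: {s:.3f}")
```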

Reading and Links

Bouman, S. and Jacobsen, B. (2002). The Halloween Indicator, ‘Sell in May and Go Away’: Another Puzzle. American Economic Review, 92, 5, 1618-1635.


https://www.sciencedirect.com/science/article/pii/S1057521910000608

https://www.sciencedirect.com/science/article/pii/S0275531916300034

Calendar Effects. Wikipedia. https://en.wikipedia.org/wiki/Calendar_effect

Market anomaly. Wikipedia. https://en.wikipedia.org/wiki/Market_anomaly

Why Do Used Cars Sell For So Much Less Than New?

If Mr. Smith wants to SELL me his horse, do I really WANT to buy it? It’s a question as old as markets and horses themselves, but it was, for many, many years, one of the unspoken questions of economics.

It is a question Groucho Marx once posed in a slightly different way, when he declared that he refused to join any club which was prepared to accept him as a member.

So how do we solve this paradox? The paradox of the horse that is, not the Groucho paradox.

For most of the history of economics, the answer was quite simple. Simply assume perfect markets and perfect information, so the horse buyer would know everything about the horse, and so would the seller, and in those cases where the horse is worth more to the buyer than the seller, both can strike a mutually beneficial deal. Gains from trade, it’s called.

In the real world, of course, life is not so straightforward, and the person selling the horse is likely to know rather more about it than the potential purchaser. This is called ‘asymmetric information’, and the buyer is facing what is called an ‘adverse selection’ problem, as he has adverse information relative to the seller.

This is a classic case of what economics professor George Akerlof sought to address in 1970, in a seminal paper called ‘The Market for Lemons’.

Akerlof had become intrigued by the limited tools that economists were using in the late 1960s. Unemployment, so went much of the general thinking, was caused by money wages adjusting too slowly to changes in the demand for labour. This was the so-called ‘neo-classical synthesis’ and it assumed classic markets, albeit they could be a bit slow to work.

At the same time, economists had come to doubt that changes in the availability of capital and labour could in themselves explain economic growth. The role of education was called upon as a sort of magic bullet to explain why an economy grew as fast as it did. But that posed a problem for Akerlof. How can we distinguish the impact on productivity of the education itself from the extent to which education simply helped grade people, he asked. The idea here is that more able people will tend on average to seek out more education. So how far does education in itself contribute to growth, and how far does it help simply as a signal and a screen for employers? In the real world, of course, these signals could be useful because employers are like the horse buyers – they know less about the potential employees than the employees know about themselves, the classic adverse selection problem.

Akerlof turned to the market for used cars for the answer, not least because at the time a major factor in the business cycle was the big fluctuation in sales of new cars. He quickly spotted the problem. Just like in the market for horses, the first thing a potential used car buyer is likely to ask is “Why should I WANT to buy that used car if he wants so much to SELL it to me?” The suspicion is that the car is what Americans call a ‘lemon’, a sub-standard pick of the crop. Owners of better-quality used cars, called ‘plums’, are much less likely to want to sell.

Now let’s say you’re willing to spend £10,000 on a plum but only £5,000 on a lemon. In such a case, the best price you’d be willing to pay is about £7,500, and only then if you thought there was an equal chance of a lemon and a plum. At this price, though, sellers of the plums will tend to back out, but sellers of the troublesome lemons will be very happy to accept your offer. But as a buyer you know this, so will not be willing to pay £7,500 for what is very likely to be a lemon. The prices that will be offered in this scenario may well spiral down to £5,000 and only the worst used cars will be bought and sold. The bad lemons have effectively driven out the good plums, and buyers will start buying new cars instead of plums. Just as with horses, asymmetric and imperfect information in the used car market has the potential, therefore, to severely compromise its effective operation.
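The unraveling described above can be sketched using the text’s numbers, plus one assumption of my own: a reservation price of £8,000 below which plum owners refuse to sell. Everything else follows the paragraph above:

```python
def market_price(p_plum, plum_value=10_000, lemon_value=5_000,
                 plum_reservation=8_000):
    """Return the price a buyer ends up paying, given the believed
    probability p_plum that a car on offer is a plum."""
    # The buyer's best offer is the expected value of a random car.
    offer = p_plum * plum_value + (1 - p_plum) * lemon_value
    if offer < plum_reservation:
        # Plum sellers exit the market; buyers, knowing this, will
        # only pay lemon value for what remains.
        return lemon_value
    return offer

print(market_price(0.5))  # 5000: the £7,500 offer drives the plums out
print(market_price(0.8))  # 9000: enough believed plums to sustain trade
```

The £8,000 reservation price is hypothetical; any figure above £7,500 produces the same collapse at 50-50 beliefs.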

What can be done about this? For the answer we must go back to part of the reason why people seek education, which is to signal personal qualities which might otherwise be difficult to discern. This is part of the wider theory of signalling and screening, and it takes us to another place, on another day.

Further Reading and Links

Akerlof, G. (1970), The Market for Lemons: Quality, Uncertainty and the Market Mechanism. Quarterly Journal of Economics. 84:488-500.

https://selectnetworks.net

Is Altruism Rational?

In a game first popularized in the academic literature in the early 1980s, two people are invited separately to play a game with a monetary prize. The players act anonymously, playing the game via a computer terminal. The game, widely known these days as the ‘Ultimatum Game’, involves one of the players, who we will call Jack, being given £50, say, by the experimenter. He must now decide whether, and how much, he should offer the other player, who we will call Jill. Remember that Jack and Jill don’t know each other, and will remain anonymous to each other. The game is only played once, so there is no comeback from Jill whatever Jack does. There is, however, one consideration for Jack to think about. If Jill turns down the offer, they both walk away empty-handed.

So how much should Jack offer Jill? And how much will he? Traditional economic theory about rational behaviour would suggest that Jack, as a profit-maximizing agent, should offer Jill a very small amount, and that Jill should accept this very small amount rather than get nothing. In fact, early experimenters who put real people into this scenario usually found that the amount of money offered lay somewhere between 50-50 and 65-35. Sometimes, nevertheless, the second player was indeed offered only a small amount, and in those cases where this was less than 30% of the prize, she usually refused it. In other words, when Jill was offered less than £15 of the £50, she usually walked away from the deal, leaving both with nothing.

Is this reconcilable with rational economic behaviour? One explanation often proposed is that offers of less than 30% or so are considered desultory, even insulting, and Jill is getting utility (as economists would call it) from punishing Jack. Yet the low offer made by Jack is not in fact a personal insult, and arises as part of the design of the game. Indeed, neither player will ever know who the other person is. It is certainly not profit-maximizing behaviour by Jill.
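The payoffs can be set out in a few lines. The 30% rejection threshold below is the empirical rule of thumb reported in the experiments above, not part of the game’s formal rules:

```python
def play(pot, offer, rejection_threshold=0.30):
    """Return (Jack's payoff, Jill's payoff) for a given offer.
    Jill is assumed to reject offers below rejection_threshold * pot."""
    if offer < rejection_threshold * pot:
        return (0, 0)  # Jill walks away; both get nothing
    return (pot - offer, offer)

print(play(50, 10))  # (0, 0): a £10 offer is refused
print(play(50, 15))  # (35, 15): a £15 offer is (just) accepted
```

Note that the rejecting strategy leaves Jill strictly worse off in the one-shot game, which is exactly the puzzle the rest of the post addresses.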
Is there another explanation? One that makes some sense, proposed in the mid-1980s by the distinguished mathematician and game theorist Robert Aumann, is that people tend to evolve rules of thumb according to which they behave in their day-to-day lives. One such rule he identifies is “Don’t be a sucker; don’t let people walk all over you.” This might indeed work well as a general rule for Jill to live by, insofar as it helps build up her reputation for people’s future reference. But in this particular situation, turning down £15 does nothing to build up her reputation, because she is anonymous. Aumann’s explanation is that Jill doesn’t think like that. She has built up this rule-of-thumb behavioural code over a lifetime, and will not so easily abandon it in a particular context just because the situation is different. This is what we might call ‘bounded rationality’, in that people do not usually consciously maximize in each decision situation, but instead use rules of thumb that work well “on the whole”.

So, that leaves a couple of questions. The first is whether Jack is being rational when he offers a small slice of the cake to Jill, and the second is whether he is being altruistic or self-interested when he offers her a bigger slice.

Reference: Robert Aumann, Rationality and Bounded Rationality: The 1986 Nancy L. Schwartz Memorial Lecture.