Prime Minister Harold Wilson used to say that there were few jobs more stressful and more precarious than that of a politician but that the post of football manager was certainly one of them. Well, opinions are divided about the benefits of sacking those who run the country, and it is difficult to devise a measure which all would agree on, but there is a great deal of published analysis available which allows us to judge the effect of a change of management in other fields.
A seminal study in this regard, published by Professors Lieberson and O’Connor, found little evidence of any link between changes of Chief Executive Officer (CEO) and subsequent movements in company performance indicators such as sales and profits.
Other studies have found, in contrast, that changes of top managers do tend to be followed by sharp improvements in performance, particularly in certain sectors, such as computer equipment manufacture. Some areas of activity do seem impervious to changes at the top, however, as witnessed by a study of the effect of changes of Methodist ministers on church attendance, membership and donations. There was no discernible effect.
There is also a well-established literature which considers the impact of management change on team performance in professional sport, and this can be sub-divided into three distinct theories. According to the “common sense” theory, when a team is under-performing the manager is replaced, and if a better manager takes over, performance should improve. In the “vicious circle” theory, poor performance tends to trigger managerial change, but the disruption caused tends to make things worse. Then there is the “ritual scapegoating” theory, in which the appointment of a new manager makes no difference, on average, to team performance.
Rick Audas, Stephen Dobson and John Goddard disentangled the competing theories, for the case of English football, in an article published in the Journal of Economics and Business. A detailed examination of the results of their study reveals that on average it takes up to 16 matches for a team subject to a within-season change of manager to adapt to the usual changes of tactics and playing style which ensue, and even then the team’s win rate tends to revert only to where it was prior to the change. The transition period is, on average, simply a sink into which some of the points that would have been earned are emptied.
What these results appear to tell us, then, is that the rate at which managers are replaced in English football is not optimal. The turnover is simply too fast.
So in light of these findings, is it possible to offer a rational explanation for existing attitudes to management change? Well, there may just be such an explanation, and it lies in a thing called “variance”, which is the dispersion of results about the average.
The idea here is that a change of manager may not improve performance but it does shake things up a bit. This can only be good news, of course, for a team which is likely to go down anyway. The reason is that while a change of manager may on average mean even fewer points, the change does at least improve the small chance of pulling clear of the relegation zone. Seen like this, it can be likened to an all-or-nothing throw of the dice. That’s the rational side of the argument. The problem comes when those in charge of the team’s future grow just a little too fond of the dice.
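The arithmetic of that dice throw can be sketched with a minimal Monte Carlo simulation in Python. All the numbers here are hypothetical: a "keep the manager" season with a higher average points total but little spread, against a "new manager" season with a lower average but much wider spread, measured against an assumed 38-point safety cutoff.

```python
import random

def survival_probability(mean_points, sd_points, cutoff, trials=100_000, seed=1):
    """Estimate the chance of clearing a relegation cutoff, modelling the
    team's end-of-season points as a normal draw (illustrative model only)."""
    rng = random.Random(seed)
    survived = sum(1 for _ in range(trials)
                   if rng.gauss(mean_points, sd_points) > cutoff)
    return survived / trials

# Hypothetical numbers: keeping the manager yields 34 points on average
# with little spread; a new manager lowers the average to 32 but widens
# the spread. The assumed safety cutoff is 38 points.
p_keep = survival_probability(mean_points=34, sd_points=2, cutoff=38)
p_sack = survival_probability(mean_points=32, sd_points=6, cutoff=38)
print(f"survive if manager kept:   {p_keep:.1%}")
print(f"survive if manager sacked: {p_sack:.1%}")
```

Run with these assumptions, the sacking strategy markedly improves the (still small) chance of survival even though it lowers the expected points total – exactly the all-or-nothing logic described above.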
Links:
http://www.jstor.org/pss/2392715
http://onlinelibrary.wiley.com/doi/10.1111/1468-0270.00039/abstract
http://thevideoanalyst.com/does-sacking-the-manager-improve-results
Traditional finance is more concerned with checking that the price of two 8-ounce bottles of ketchup is close to the price of one 16-ounce bottle than it is with understanding the price of the 16-ounce bottle. Such is the view of Lawrence Henry (‘Larry’) Summers, currently Director of the White House’s National Economic Council, writing in the ‘Journal of Finance’ in 1985. “They have shown”, he went on, “that two quart bottles of ketchup invariably sell for twice as much as one quart bottle of ketchup except for deviations traceable to transactions costs … Indeed, most ketchup economists regard the efficiency of the ketchup market as the best established fact in empirical economics.” If so, this represents an example of the LOOP (‘Law of One Price’) principle in economics, i.e. identical goods should have identical prices.
But are they right? To find out, I checked the prices on offer at my local branch of a well-known supermarket chain and found the following pricing structure. A 460g bottle of a leading brand of tomato ketchup was priced at £1.63, while the bigger (by 73.9%) 800g bottle sold at £2.19 (an extra 34.4%). According to the LOOP principle, one might have thought that the 800g bottle would have sold for 73.9% more than the 460g bottle, i.e. for £2.83. So is this a mispricing of 64p? Does it indicate that the market is inefficient? Well, the answer is pretty simple here. There is nothing wrong with the market, since there’s no clear way to exploit the mispricing, short of tipping the contents of the bigger bottle into the smaller bottles and selling them yourself. Summers would call this a “deviation due to transactions costs”. More fundamentally, the smaller bottle offers advantages that the larger bottle doesn’t have. Most obviously, it’s easier to store. Perhaps it also looks nicer on the table.
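The arithmetic above can be checked in a few lines of Python, using the shelf prices just quoted:

```python
# The ketchup arithmetic from the text, checked step by step.
small_price, small_weight = 1.63, 460   # £ and grams
large_price, large_weight = 2.19, 800

weight_premium = large_weight / small_weight - 1          # how much more ketchup
price_premium = large_price / small_price - 1             # how much more money
loop_price = small_price * large_weight / small_weight    # LOOP-implied price
mispricing = loop_price - large_price                     # gap vs the actual price

print(f"weight premium: {weight_premium:.1%}")   # 73.9%
print(f"price premium:  {price_premium:.1%}")    # 34.4%
print(f"LOOP price:     £{loop_price:.2f}")      # £2.83
print(f"mispricing:     £{mispricing:.2f}")      # £0.64
```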
Trading financial assets, on the other hand, is a different issue altogether. Transactions costs are relatively small and assets trading in different markets are often identical, so in these cases one would expect the LOOP principle to more clearly apply. What’s the evidence? Well, one well-known apparent violation is the case of Royal Dutch Shell. Royal Dutch and Shell are separate legal entities but merged their interests in 1907 on a 60/40 basis. On this basis, the Royal Dutch shares should automatically have been priced at 50% more than Shell shares. However, they diverged from this by up to 15% until their final merger in 2005.
When the company 3Com spun off shares of its handheld computer subsidiary Palm into a separate stock offering, 3Com kept most of Palm’s shares for itself (Lamont and Thaler, 2003). So a trader could invest in Palm simply by buying 3Com stock. 3Com stockholders were guaranteed to receive three shares in Palm for every two shares in 3Com that they held. This seemed to imply that Palm shares could trade at an absolute maximum of 2/3 of the value of 3Com shares. Rather than being worth less than 3Com shares, however, Palm shares instead traded at a higher price for a period of several months. This should have allowed an investor to make a guaranteed profit by buying 3Com shares and shorting Palm – a virtually risk-free arbitrage opportunity, the equivalent of exchanging, say, $1,000 for £600 in the UK and almost simultaneously exchanging the £600 for $1,500 in the US.
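The mispricing can be expressed as a "stub value": what the market implicitly said the rest of 3Com was worth once its Palm stake is stripped out. Here is a sketch in Python, using illustrative prices of roughly the magnitude reported for Palm's first trading day (the exact figures are assumptions, not quotes from the paper):

```python
def implied_stub_value(parent_price, subsidiary_price, shares_per_parent):
    """Value the market assigns to everything the parent owns *besides*
    its stake in the subsidiary (the so-called 'stub')."""
    return parent_price - shares_per_parent * subsidiary_price

# Illustrative prices: 3Com at $82, Palm at $95, with 1.5 Palm shares
# (three for every two) promised per 3Com share held.
stub = implied_stub_value(parent_price=82.0, subsidiary_price=95.0,
                          shares_per_parent=1.5)
# A negative stub: the market priced the rest of 3Com below zero.
print(f"implied value of the rest of 3Com: ${stub:.2f} per share")
```

A negative stub means the market was valuing 3Com's remaining businesses, plus its cash, at less than nothing – the hallmark of the violation Lamont and Thaler document.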
How about prediction markets (speculative markets used for making predictions)? Is it possible to buy low and sell high across different prediction markets? Seems so! For an example, we need only point to the 2008 and 2012 US Presidential elections when it was for several days possible to back John McCain and Mitt Romney on the Betfair betting exchange at a healthy shade of odds against and simultaneously to do likewise with Barack Obama on the Intrade exchange. A guaranteed profit, even net of commission.
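The mechanics of such a cross-market bet are simple to sketch. Given decimal odds for two mutually exclusive outcomes quoted on different exchanges, a stake can be split so that the payout is identical either way; the profit is positive whenever the implied probabilities sum to less than one. The odds below are hypothetical, purely for illustration:

```python
def arbitrage(odds_a, odds_b, total_stake=100.0):
    """Split a stake across two mutually exclusive outcomes quoted in
    decimal odds on different markets. Returns (stake_a, stake_b, profit);
    the profit is positive only if 1/odds_a + 1/odds_b < 1."""
    book_total = 1 / odds_a + 1 / odds_b
    stake_a = total_stake * (1 / odds_a) / book_total
    stake_b = total_stake - stake_a
    payout = stake_a * odds_a          # identical whichever outcome wins
    return stake_a, stake_b, payout - total_stake

# Hypothetical quotes: one candidate at 1.8 on one exchange, the
# opponent at 2.6 on another (implied probabilities sum to ~0.94).
sa, sb, profit = arbitrage(1.8, 2.6)
print(f"stake {sa:.2f} on A, {sb:.2f} on B, guaranteed profit {profit:.2f}")
```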
Professor Eugene Fama once defined an efficient market as one in which “deviations from the extreme version of the efficiency hypothesis are within information and transactions costs.” On this basis, there would appear to be some evidence that markets (in particular in respect of the ‘Law of One Price’) are not always efficient.
Reading and Links
Lamont, O.A. and Thaler, R.H. (2003). Anomalies: The Law of One Price in Financial Markets. Journal of Economic Perspectives, 17, 4, 191-202.
Summers, L.H. (1985). On Economics and Finance. Journal of Finance, 40, 3, 633-635. http://m.blog.hu/el/eltecon/file/summers_ketchup%5B1%5D.pdf
Law of One Price. Wikipedia. https://en.wikipedia.org/wiki/Law_of_one_price
It is said that on returning from a day at the races, a certain Lord Falmouth was asked by a friend how he had fared. “I’m quits on the day”, came the triumphant reply. “You mean by that,” asked the friend, “that you are glad when you are quits?” When the said Lord replied that indeed he was, his companion suggested that there was a far easier way of breaking even, and without the trouble or annoyance. “By not betting at all!” The noble lord said that he had never looked at it like that and, according to legend, gave up betting from that very moment.
While this may well serve as a very instructive tale for many, a certain Edward O. Thorp, writing in 1962, took a rather different view. He had devised a strategy, based on probability theory, for consistently beating the house at Blackjack (or ’21’). In his book, ‘Beat the Dealer: A Winning Strategy for the Game of Twenty-One’, Thorp presents the system. On the inside cover of the dust jacket he claims that “the player can gain and keep a decided advantage over the house by relying on the strategy”.
The basic rules of blackjack are simple. To win a round, the player has to draw cards to beat the dealer’s total and not exceed a total of 21.
Because players have choices to make, most obviously as to whether to take another card or not, there is an optimal strategy for playing the game. The precise strategy depends on the house rules, but generally speaking it pays, for example, to hit (take another card) when the total of your cards is 14 and the dealer’s face-up card is 7 or higher. If the dealer’s face-up card is a 6 or lower, on the other hand, you should stand (decline another card). This is known as ‘basic strategy.’
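The particular rule just quoted can be written as a tiny decision function. This is a deliberately partial sketch – it covers only the hard-14 example in the text, not the full basic-strategy table, which varies with the house rules:

```python
def action_on_14(dealer_up_card: int) -> str:
    """Basic-strategy action when your hand totals a hard 14, given the
    dealer's face-up card (2-11, with 11 standing for an ace).
    Covers only the rule quoted in the text, not the full table."""
    return "hit" if dealer_up_card >= 7 else "stand"

print(action_on_14(7))   # hit
print(action_on_14(6))   # stand
```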
While basic strategy will reduce the house edge, it is generally not enough to turn the edge in the player’s favour. That requires exploitation of the additional factor inherent in the tradition that the used cards are put to one side and not shuffled back into the deck.
This means that by counting which cards have been removed from the deck, we can re-evaluate the probabilities of particular cards or card sizes being dealt moving forward. For example, a disproportionate number of high cards in the deck is good for the player, not least because in those situations where the rules dictate that the house is obliged to take a card, a plethora of remaining high cards increases the dealer’s probability of going bust (exceeding a total of 21).
Thorp’s genius was in devising a method of reducing this strategy to a few simple rules which could be understood, memorized and made operational by the average player in real time. As the book blurb puts it, “The presentation of the system lends itself readily to the rapid play normally encountered in the casinos.”
Since the publication of the book, the strategy has been amended and improved, but Ed Thorp’s original insights stand. The problem simply changed to one historically familiar to many successful horse players – how to get your money on before you are closed down or kicked out.
When you have the edge in a competition, it would seem that the rational strategy is to minimize the role of luck in determining the outcome. So if Roger Federer agrees to play one point against me at tennis, on his own serve, with a forfeit of £100,000 if he loses the point, I doubt that his optimal strategy would be to serve flat out. This would be to increase the risk that he double faults, which is by some margin my best hope of winning the point. In other words, variance is my friend but Mr. Federer’s enemy. The same would seem to apply in a horse race, where the jockey’s optimal strategy when aboard the best horse in the field is unlikely to be that of boxing himself in on the rail behind a wall of horses. When I talk about this, I like to use the example of Mick Kinane and his 2009 Prix de l’Arc de Triomphe triumph atop one of the best racehorses ever, Sea The Stars. Kinane put it like this: “I didn’t have any worries because I knew I was on the fastest horse in the race. His acceleration was fantastic.” If so, it is clear that he must have had nerves of steel, because there certainly wasn’t anyone else watching who shared that sentiment less than three furlongs out. To my mind, the horse won quite simply because he was so far superior to his rivals that he was able to overcome those obstacles which beset ordinary flesh and blood. And that includes variance! Did Kinane know that? Perhaps! Which brings us to another puzzle arising from the result of the race. Why were the bookmakers afterwards bemoaning a lost fortune on the race? According to a spokesman for Coral, for example, the success of Sea the Stars in the Arc cost bookmakers “a sizeable seven-figure sum.” It is a lament we hear so often. The bookmakers regularly decry a series of winning hot favourites. But why? 
If favourites are such losers in the book for the layers, why do the layers not simply shorten up the price of those particular horses or football teams or whatever, and lengthen the price of their rivals? In the case of the Arc, this would have meant cutting the odds offered about Sea the Stars by a couple of notches, or even more, and lengthening the others accordingly. Is it because the betting public would bet the same amount on the favourite regardless of the price? Unlikely! But even if that were true, the liability would in any case be less. So who is behaving irrationally? Is it the bookmakers or those who bet with them? And does the same apply on the betting exchanges? Anyway, do bookmakers actually lose when hot favourites win? Sometimes, but not always. The bottom line, though, is that bookmakers will on average do better when longshots win. But why? Of course, there will be fewer winners, but why is this not fully offset by the larger payouts? There are answers to this puzzle, based in the main around attitudes to risk and information misperceptions, but still no single definitive answer. That may well be because there actually is no definitive answer, but a range of them. The puzzle remains, therefore, but we are closer than ever to solving it.
The weight of opinion among the spokesmen of the major bookmakers, as reported on the morning of Epsom Derby Day, was that the John Oxx-trained ‘Sea the Stars’ would go off an even more solid favourite than he was in the early trading. And indeed, all the 7 to 2 soon disappeared, to be replaced on the bookmakers’ boards by 11 to 4, and by the time the market opened on course, that price (bar the odd 3 to 1 and 5 to 2 in places) was pretty much set in stone. Meanwhile, Criterium de Saint-Cloud winner ‘Fame and Glory’, available at a general 4 to 1 in the morning, opened on course at 7 to 2, touched 4 to 1 in places, and after frantic late trading, went off as 9 to 4 favourite. What happened? Well, an enormous late plunge, including one confirmed wager of £40,000 to win £110,000, might have had something to do with it! All those 7 to 2 and 4 to 1 offers were soon wiped off the boards, and market-watchers who like to follow in those bettors who unload the biggest satchels might perhaps have been forgiven for thinking the horse was home and hosed before it even exited the stalls. In the event, the Montjeu colt performed creditably enough, and might well have benefited from a stiffer pace, but was never going to prevent Mick Kinane from following up his 2001 Derby success on Sea the Stars’ half-brother Galileo. So what can we learn from this? Well, the consistent money pointed firmly in the direction of Sea the Stars. The money for ‘Fame and Glory’ was late and big, but from what we can ascertain derived from a few very large individual punts. Still, money is money, and prices in a market respond to the weight of it, wherever it comes from. But live, flesh-and-blood price-setters need not respond solely to the sheer relative volume of money about different horses, but also to what information the money is imparting.
Would you as a price-setter respond in the same way to ten bets of £4,000, placed gradually throughout the day, as you would to one £40,000 punt three minutes before the off? And should you? In the event, we know that the late and very large plunge came for the unbeaten colt that was already known to travel and to stay. And we were confirmed in our knowledge that he travels and stays. The only part of the triumvirate of qualities that wasn’t confirmed was his unbeaten status. If the market was like a ballot box in a first-past-the-post election, the winner of the 2009 Investec Derby and the winner in the market would, I judge, have been one and the same. But betting markets don’t work quite like ballot boxes. Most obviously, you can buy more than one vote. And so the market got it wrong and the ballot box (most probably) got it right. Would that the same were always true in the world of politics!
When all the votes cast in the 2008 US Presidential election were counted and tallied, it emerged that Barack Obama secured 52.9% of the popular vote, John McCain took 45.7% of the vote, and the remaining 1.4% was split between assorted third-party candidates. So how did the final polls published by the respective opinion polling organisations perform? RealClearPolitics published most of them, and displayed them on its website on election day. These ranged in sample size from just 714 likely voters (CBS News) to 3,000 (Rasmussen Reports), and covered survey dates ranging from as early as 29th October to 1st November (Pew Research) at one end to 3rd November only at the other (Marist).

The largest-sample polls were as follows:

– Rasmussen Reports (3,000 likely voters): Obama 52%; McCain 46%
– Pew Research (2,587 likely voters): Obama 52%; McCain 46%
– Gallup (2,472 likely voters): Obama 55%; McCain 44%
– ABC News/Washington Post (2,470 likely voters): Obama 53%; McCain 44%

This gives a big-sample average of Obama 53%; McCain 45% – an 8% margin in favour of Obama, compared to the actual margin of 7.2%.

The latest-dated polls were these:

– Marist (3rd November): Obama 52%; McCain 43%
– Battleground (Lake projection – 2/3 November): Obama 52%; McCain 47%
– Battleground (Tarrance projection – 2/3 November): Obama 50%; McCain 48%

The two Battleground figures are actually one poll divided into two according to different methodologies, so should be counted only once (an average of the methodologies gives: Obama 51%; McCain 47.5%). This gives a late-survey average of Obama 51.5%; McCain 45.3% – a 6.2% margin in favour of Obama, compared to the actual margin of 7.2%. So we have something of an over-estimate in one case and an under-estimate in the other.

There were a total of 14 polls (counting the alternative Battleground methodologies as one poll), ranging from highs of 55% (Gallup) and 54% (Reuters/C-SPAN/Zogby) for Obama to lows of 50% (Fox News) and 51% (NBC News/Wall Street Journal).
For McCain the polls ranged from a high of 47.5% (Battleground average) to a low of 42% (CBS News). So what happens if we simply take all the final polls published on RealClearPolitics and divide by the number of polls? This is without regard to the dates of the surveys or the sample sizes or the methodology of the poll, but simply taking a bare average of everything on offer. Well, we obtain the following: Obama 52.1% (actual: 52.9%); McCain 44.5% (actual: 45.7%). This represents an advantage of 7.6% for Obama (taking an unweighted, unadjusted average of all these polls), 0.4% off the actual margin in favour of Obama of 7.2%. Now add in the final Daily Kos/Research 2000 poll, which RealClearPolitics for their own reasons decided to exclude from any of their daily summaries, and what do we obtain? An Obama spread of 7.4%, within 0.2% of the final tally! And so there we have it! The election day polls performed almost perfectly – on average!
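The sub-averages quoted above are easy to reproduce. The snippet below includes only the polls itemised in the text, with the two Battleground methodologies averaged into one entry:

```python
# The big-sample and late-survey averages from the text, recomputed.
big_sample = {            # the four largest samples: (Obama %, McCain %)
    "Rasmussen": (52, 46),
    "Pew": (52, 46),
    "Gallup": (55, 44),
    "ABC/WaPo": (53, 44),
}
late_survey = {           # the latest field dates, Battleground averaged
    "Marist": (52, 43),
    "Battleground (avg)": (51, 47.5),
}

def average_margin(polls):
    """Unweighted averages and the Obama-minus-McCain margin."""
    obama = sum(o for o, m in polls.values()) / len(polls)
    mccain = sum(m for o, m in polls.values()) / len(polls)
    return obama, mccain, obama - mccain

print(average_margin(big_sample))    # (53.0, 45.0, 8.0)
print(average_margin(late_survey))   # (51.5, 45.25, 6.25)
```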
Link:
Undertaking an online search for dictionary definitions of the word ‘Wisdom’, I quickly came up with this: “The ability to use your experience and knowledge in order to make sensible decisions or judgments”. The second definition I hit upon gives the following: “The ability to discern or judge what is true, right, or lasting; insight”. There are plenty of dictionaries to skim, but these two definitions pretty much sum up what people usually mean when they use the word ‘Wisdom’. The two definitions I’ve highlighted are not identical, of course. It might, for example, be sensible from the point of view of a jury wanting to go home to make a hasty and ill-considered decision about the defendant’s guilt, but does this make their action wise? The jury may indeed be acting wisely in their immediate personal interests, but can we justifiably describe such behaviour as wise in a greater context?

So when we say that a crowd is wise, what exactly are we saying, and how does this fit in with the idea of a prediction market? A classic example is that of ‘Galton’s ox’, a seminal study in which Sir Francis Galton noted down the estimates of 800 or so entrants in a competition to guess the weight of an ox. He found that the average (mean) estimate of the crowd was almost exactly correct. Similar accuracy was reproduced in classic experiments in which students were asked to guess the number of jelly beans in a jar or the weight of a range of objects.

Prediction markets are essentially betting markets created for the purpose of making predictions. The idea behind the use of these markets stems from the view that information concerning the likelihood of future events is dispersed among many people, i.e. the ‘crowd’, and that these markets allow for the aggregation of this information.
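Galton's result is easy to illustrate with a toy model: if individual guesses scatter around the truth, the errors largely cancel in the mean. The true weight below is the figure usually cited for Galton's ox; the size and shape of the guessing noise are pure assumptions.

```python
import random

random.seed(42)

TRUE_WEIGHT = 1198          # pounds: the figure usually cited for Galton's ox
CROWD_SIZE = 800

# Each guess is the truth plus individual noise (an illustrative model
# which assumes unbiased guessing).
guesses = [TRUE_WEIGHT + random.gauss(0, 75) for _ in range(CROWD_SIZE)]

crowd_estimate = sum(guesses) / len(guesses)
mean_individual_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)

print(f"crowd estimate: {crowd_estimate:.0f} lb")
print(f"crowd error: {abs(crowd_estimate - TRUE_WEIGHT):.1f} lb")
print(f"typical individual error: {mean_individual_error:.1f} lb")
```

The crowd's collective estimate lands within a few pounds of the truth even though a typical individual is out by around sixty. The catch, of course, is the assumption that guesses are unbiased: aggregation cancels noise, not shared error.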
Prediction markets have been used to forecast what sometimes seems like almost everything, from the outcome of elections to the timing of influenza outbreaks to sales of the latest printer to box office receipts, and have often proved astonishingly accurate. But what does this really tell us about the ‘crowd’ and the market which reflects the views of the crowd? James Surowiecki extols the virtues of prediction markets in his book, ‘The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations’. But does the success of prediction markets really tell us anything about the wisdom of the crowd? After all, there is a big difference between estimating the number of jelly beans in a jar and judging whether to put the defendant to death or go to war. At a recent seminar presented on December 7, 2009 at the University of Hamburg, Julie Vaughan Williams coined the phrase ‘The Accuracy of the Crowd’ as a better representation of what the empirical evidence tells us so far. As such, maybe we should see the crowd as clever rather than wise. To deduce something about its wisdom from its cleverness is perhaps a step too far. So let’s coin a new phrase. Let’s call it ‘The Cleverness of the Crowd’.

Reference: Vaughan Williams, Leighton and Vaughan Williams, Julie, The Cleverness of Crowds, The Journal of Prediction Markets, Vol. 3, 3, December, 2009, 45-47.

Link: http://www.ingentaconnect.com/content/ubpl/jpm/2009/00000003/00000003/art00004
“Would you like to open the box or take the money?” This was the standard line of the original ‘quiz inquisitor’, Michael Miles, on his show ‘Take Your Pick’, a popular favourite in the early days of commercial television. The idea was to present contestants with a choice between opening a box, which might contain anything from a car to a rotten tomato, and receiving a wad of notes in the hand. Many, though not most, chose the money.

Now let’s change the format of the game a little and offer contestants a choice between three boxes, one of which contains a cheque for a million pounds and the other two of which are empty. Let’s call the boxes “Gold, Silver and Lead”, the choice of caskets offered to Portia’s suitors in Shakespeare’s ‘The Merchant of Venice’. Let’s say that, like Bassanio, you choose the box made of lead. I am the quiz inquisitor on this occasion and I now open the box made of gold. It is empty. At this point I offer you the opportunity to stick with your original choice or to switch. What should you do, and does it matter?

Basic intuition may tell you that there are two remaining boxes, of silver and lead, and so it should be an even chance of winning whichever of these boxes you select. If this line of reasoning is correct, it makes no difference to your probability of winning the prize whether you switch to the silver box or stay with your original choice of lead. But would this intuition be correct? To answer this we need to ask one simple question. Has any new information been introduced by my decision to open the gold box? This depends on whether I know which box contains the cheque when I open the box. To take an example, assume I know that the box with the prize is the silver box. When you choose the lead box, I now have no choice but to open the gold box. In effect, then, I am actually pointing out to you which box contains the prize. This is the case whenever the box you have chosen is empty.
There is a 2 in 3 chance of this as there are two empty boxes and only one containing the prize. There is a 1 in 3 chance that the box you have chosen is in fact the box containing the cheque. In this case, it doesn’t matter which box I as the quizmaster choose to open as they are both empty. To summarize this, there is a 2 in 3 chance that I am pointing out to you the winning box. It is the box which I chose not to open, in this example the silver box. There is a 1 in 3 chance that you chose the right box in the first place. So what’s your optimal strategy? This becomes clear once you realize that you only have a 1 in 3 chance of winning if you stick with your original box but a 2 in 3 chance if you switch to the box which I didn’t open. The key to the riddle is the new information I introduced by opening the box which I knew to be empty. By acting on this new information, you can improve your chance of correctly predicting which box will open to reveal the cheque from 1 in 3 to 2 in 3 – by switching boxes when given the chance.
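The three-box argument is easily checked by simulation. Here is a short Python sketch of the game, with the host always opening an empty box that the player did not choose:

```python
import random

def play(switch: bool, rng: random.Random) -> bool:
    """One round of the three-box game: returns True if the player wins."""
    boxes = ["gold", "silver", "lead"]
    prize = rng.choice(boxes)
    pick = rng.choice(boxes)
    # The host opens an empty box that the player did not pick.
    opened = next(b for b in boxes if b != pick and b != prize)
    if switch:
        # Switch to the one box that is neither the pick nor the opened box.
        pick = next(b for b in boxes if b != pick and b != opened)
    return pick == prize

rng = random.Random(0)
trials = 100_000
stick_wins = sum(play(switch=False, rng=rng) for _ in range(trials)) / trials
switch_wins = sum(play(switch=True, rng=rng) for _ in range(trials)) / trials
print(f"stick:  {stick_wins:.3f}")   # about 1/3
print(f"switch: {switch_wins:.3f}")  # about 2/3
```

Sticking wins about a third of the time, switching about two-thirds – matching the reasoning above.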
It is as commander of the British Eighth Army in the Western desert during World War 2, and later as commander of Allied ground forces in Operation Overlord, the invasion of Normandy, that Bernard (‘Monty’) Montgomery is perhaps best known. What is less well known is that he loved to wager on almost anything. One of the best documented examples was Monty’s bet with Walter Bedell (‘Beetle’) Smith, later Chief of Staff of US General Dwight (‘Ike’) Eisenhower, that the Eighth Army would capture the strategically important Tunisian port of Sfax by April 15th, 1943. The terms of the wager were that the Americans would, if he was successful, deliver him a B-17 Flying Fortress, complete with an American crew. Sfax fell on April 10 and Montgomery cabled Eisenhower immediately, demanding full and immediate payment. Montgomery got the Flying Fortress but not before receiving a stinging dressing down from the Chief of the Imperial General Staff, General Alan Brooke (later Viscount Alanbrooke). So we learn from Alanbrooke’s diary entry for June 3rd, 1943. Smith is reported to have later remarked to Monty: “You may be great to serve under, difficult to serve alongside, but you sure are hell to serve over.” Still, the days of the wager were not over, as evidenced by the £5 wager struck between Montgomery and Eisenhower about the timing of the end of the war. Eisenhower describes the bet in his memoirs as follows: “I was personally so confident that we could launch ‘Overlord’ strongly and promptly in the spring of 1944 that I bet Montgomery five pounds that we would end the war by Christmas of that year. I lost the bet.” Now one of the potential pitfalls of betting comes about when one party to the wager has inside information that the other does not, or when one of the parties can influence the outcome in a way the other is unaware of. So could this £5 bet have influenced the timing or perhaps even the outcome of the Second World War? 
Can a man be so obsessed with winning a wager that he would be prepared to lose a war? Of course not! We are talking the real world here. As Eisenhower himself relates, he lost the bet because he failed to take account of two things. “The first of these”, he writes, “was the late date of the assault across the Channel, and second, was that I did not conceive that Hitler would continue fighting after we had once lined up the Allied Armies on the banks of the Rhine.” No, a £5 wager had absolutely no impact whatsoever on the conclusion of the war. Now make it a Flying Fortress and we might be talking!
“October”, wrote Mark Twain, “is one of the peculiarly dangerous months to speculate in stocks. The others are July, January, September, April, November, May, March, June, December, August and February.”
Yes, speculation in stocks is always risky. But are there some times of the year when it is more advisable to buy or sell than others? Some people think so, and the classic saying, “Sell in May, go away, buy again on St. Leger Day”, is one of the more famous aphorisms encapsulating this advice.
For those totally unacquainted with affairs of the turf, ‘St. Leger Day’ is the day on which the final, oldest and longest classic race of the annual calendar (the St. Leger Stakes) is run, traditionally the second Saturday in September. A variant of this adage is the ‘Halloween indicator’, which holds the same advice with regard to selling in May, but advocates holding on for a few weeks after St. Leger Day, till about October 31st, before buying again.
The assumption underlying both strategies is that stocks tend to underperform during the summer months. But is this true? Ben Jacobsen and Sven Bouman, writing in the prestigious American Economic Review in 2002, certainly believe so – “… we find this inherited wisdom to be true in 36 of the 37 developed and emerging markets studied in our sample.
The ‘Sell in May’ effect tends to be particularly strong in European countries and robust over time. Sample evidence, for instance, shows that in the UK the effect has been noticeable since 1694”. Note that – since 1694! That was the year that the Bank of England was founded. Also the year (incidentally) that the great French philosopher, writer and dramatist Voltaire was born. So what about it? Is early autumn a good time to buy? Well, it all depends on what you take to be strong evidence, and the academic jury really is out on this one.
This doesn’t stop the financial press trotting out the famous dictum whenever events seem to support it. Take May 2006, for example, when the US S&P index declined by 3 per cent and the Japanese Nikkei 225 by nearly 9 per cent. Forbes magazine duly declared on June 6th that the “axiom ‘sell in May and go away’ worked like a charm”. The Financial Times noted on July 14th that “this year [2006] ‘sell in May and go away’ would have been a great strategy”. The Economist on May 25th went further, arguing that the ‘sell in May’ adage was “an explanation of why investors the world over have been selling shares since May 11th”.
So what would have happened if you had sold your shares on May 1st, 2009? On that date the FTSE closed at 4,243. By St. Leger week the FTSE had broken the 5,000 barrier and has edged up further since, to stand at over 5,100 by the end of September. So much for what you should, or should not have done in May!
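Using the index levels just quoted (and taking "over 5,100" as 5,100 exactly), the cost of having followed the adage in 2009 is a one-line calculation:

```python
# What 'sell in May' would have cost in 2009, using the index levels above.
ftse_may_1 = 4243
ftse_end_sept = 5100   # "over 5,100" - taken here as 5,100 exactly

missed_gain = ftse_end_sept / ftse_may_1 - 1
print(f"gain forgone by selling in May 2009: {missed_gain:.1%}")  # about 20%
```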
Is October still a good time to buy? Well, I think Mark Twain sort of had it right all along.
Links:
