The Risky Thinking Newsletter is free for professionals working in business continuity, disaster recovery, and risk management. To get
your own subscription, please visit www.RiskyThinking.com/newsletter.
As supply chains become increasingly globalized, the possibility of disruption due to sudden political change
increases. Recently we have seen some rapid changes in politics in both the UK (Brexit) and the US (Trump) which can be expected
to have knock-on effects on trade agreements, pricing, suppliers, and markets.
Monitoring political risk is hard. The CIA, which has thousands of people dedicated to the task, has conspicuously failed to anticipate key events on a number of occasions. From a business perspective, even with limited objectives, we can't expect to do much better.
In this issue we will be looking at two imperfect methods of assessing the probabilities of vote outcomes. The impact of each outcome, and the mitigations available, will obviously depend on how that outcome affects you.
On Brexit, Bookies, and the Wisdom of Crowds
I was, perhaps fortunately, not working with any client exposed to the Brexit risk. So when asked for an off-the-cuff opinion on which way the vote would go, I used a quick method of determining risk: I looked at the odds the bookies were offering. Here's why that went badly wrong...
Looking at the odds offered by bookies is often a good way of determining probabilities: the favorite generally wins. But that didn't work here. The final odds being offered on a British exit from the European Union were in the range 3/1 to 6/1, suggesting that the probability of the Brexit vote winning was in the range 14% to 25%. (If you're not familiar with odds being given in this form, there's a good explanation of the conversion of odds to probability here.) The final odds can be found here.
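The conversion from fractional odds to an implied probability is simple arithmetic. This sketch (mine, for illustration, not anything the bookmakers publish) shows the calculation behind the figures above:

```python
from fractions import Fraction

def odds_to_probability(numerator: int, denominator: int) -> Fraction:
    # Fractional odds of n/d mean a stake of d returns n in profit if the
    # event occurs, which implies it happens d times in every n + d trials.
    return Fraction(denominator, numerator + denominator)

# The final Brexit odds of 3/1 to 6/1 imply:
print(float(odds_to_probability(3, 1)))  # 0.25 -> 25%
print(float(odds_to_probability(6, 1)))  # ~0.14 -> roughly 14%
```

The same function applied to the Remain odds of 1/10 gives 10/11, or about 91%.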
Unlikely events do happen, but that wasn't the problem here. This was something else: if I'd thought about the issue more deeply, I would have realized that the betting odds might be a bad proxy for the outcome of the vote.
To understand what went wrong, we need to understand a little more about bookies and bookmaking…
[My apologies to professional bookmakers for this gross over-simplification. If I have made any major mistakes,
please let me know.]
The objective of a bookie is to make money. To do this, a bookie has to ensure that the amount he (or she) will pay out is less than the amount bet regardless of the outcome of the event being bet on. He can't always do this, but it's important to realize that this is the objective. The bookie knows he isn't smarter than everyone else. If he relied on the idea that his assessment of probabilities was better than that of his customers he would be a gambler, not a bookmaker.
So the objective of a bookmaker is to set up a situation as follows:
[Table: stakes on each team and the payouts if Team A or Team B wins, balanced so that the bookmaker pays out less than the total staked whichever team wins.]
In this ideal situation the bookie makes the same amount of profit whatever happens.
But there is a problem. Gamblers can place a bet on either side of the proposition. Suppose gamblers think that Team A has 25% chance of winning,
and Team B has 75%. With these probabilities, rational gamblers will immediately bet on Team B. This situation may quickly develop:
[Table: with bets piling up on Team B, the payout if Team B wins now exceeds the total staked.]
Clearly our bookmaker now has a problem. He is now gambling on the outcome of the event and will lose if Team B wins. He can either stop taking
bets on Team B or adjust the odds so that more people will bet on Team A.
Assuming he adjusts the odds, the situation should stabilize to something like this:
[Table: after the odds are adjusted, the stakes rebalance and the bookmaker again pays out less than the total staked whichever team wins.]
With this arrangement, a rational gambler sees neither proposition as a viable bet and the bookmaker makes a profit. There is
no incentive for anyone to change the status quo. Crunching the numbers, the implied "collective wisdom" of
the gamblers is that Team A will win between 17% and 28% of the time, and Team B will win between 72% and 83% of the time, which
is what we would expect.
And Then There Was Brexit…
For sporting events this can work reasonably well. The final odds are a reasonable proxy for the combined beliefs of professional gamblers.
So what went wrong with Brexit?
There were a large number of amateur gamblers. Amateur gamblers bet on the side they want to win (e.g. their favorite team), rather than making a rational choice about which side of a proposition to bet on.
People who had more money with which to bet (people who were doing well under the status quo) were more likely to favor Remain; people who were less well off (those not doing so well under the status quo) had less money to bet and were more likely to vote Leave.
There wasn't a great deal of communication between the two groups, so people who thought that Remain should win were likely to talk to a lot of other people who also thought Remain should win. Even a rational gambler can make a poor probability assessment in this situation, because the opinions they sample are not representative of the electorate as a whole.
So if those favoring Remain average a £100 bet, and those favoring Leave average a £10 flutter, what happens?
[Table: the payouts if Brexit wins or loses, dominated by the larger Remain stakes.]
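The effect can be sketched with a back-of-envelope calculation. The £100 and £10 average stakes are the figures used above; the assumption of 1,000 bettors on each side is mine, purely for illustration:

```python
# Why a balanced book can misread a roughly 50:50 vote.
# Assumed: 1,000 bettors on each side, staking the average amounts above.
remain_pool = 1_000 * 100   # £100,000 staked on Remain
leave_pool = 1_000 * 10     # £10,000 staked on Leave
total = remain_pool + leave_pool

# A bookmaker balancing payouts against money staked (not headcount)
# ends up quoting odds that imply these probabilities:
implied_remain = remain_pool / total   # ~0.91
implied_leave = leave_pool / total     # ~0.09
print(f"Implied: Remain {implied_remain:.0%}, Leave {implied_leave:.0%}")
# ...even though the bettors, like the voters, split 50:50 by headcount.
```

The implied 91% for Remain matches the final 1/10 odds almost exactly, despite the even split in numbers.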
The bookies adjusted their odds according to the bets. The final odds at Ladbrokes were 1/10 Remain, 6/1 Leave, suggesting incorrectly
that the probability of a Brexit was between 9% and 14%.
The actual proportion who voted for Brexit was 52%.
Bookies' odds are a poor proxy for probabilities when the bulk of gamblers are not likely to bet rationally, and when gamblers favoring one particular outcome can be expected to be more wealthy or more likely to bet than those favoring the other.
Was Brexit the right decision? It took 40 years for 52% of voters to decide on balance that it would be better to leave the EU. It may take another 40 years to decide if that was really the right decision.
Plan424 — The Case for a Mobile Plan
When there was a minor earthquake in Ottawa (a place that isn't on any major fault line), the staff in the downtown office
did not know what to do. So they did what their instincts told them, and tried to get out of the building as fast
as possible. This wasn't what they should have done. This wasn't what their business continuity plan said they should do. But it's unlikely most staff had ever read the business continuity plan or even knew where a copy was.
Fortunately it was a small quake: the building suffered minor damage, and nobody was hurt by falling glass or debris.
But this is a major problem. People don't know what they are supposed to do, and when something happens it takes them too long to find out.
This is why we developed Plan424, a business continuity plan system designed from the outset for
staff mobile phones. It puts the information people need where they can easily access it, and encourages a familiarity with the plan which
a paper or web-based system cannot match.
After the first debate in the US presidential contest, Donald Trump was able to point to many
online polls declaring him the winner. Had he really won? How well do polls help us to assess political risk?
As the air cleared after the first debate between US presidential candidates Donald Trump and Hillary Clinton, Trump was able to point to many online polls declaring him the winner.
This was quite a surprise to many who watched the debate. Was our judgement wrong? Had he really won the debate? Should we now assume a Trump presidency would be a certainty?
In a previous article,
I pointed out the problem of relying on bookmakers' odds to assess political risk. Surely polls must be a more reliable guide?
There are a number of dangers we should always consider before we rely on polls to assess risk:
Selection Bias
Self-Selection Bias (Non-Response Bias)
Sample Size
Manipulation by the Pollster
Manipulation by the Poll Subjects
Selection Bias
When we see a poll of viewers by Fox News or of readers by Socialist Worker, it's easy to recognize that the poll may represent the views of a distinct group but is unlikely to represent the views of the whole population. Unless the results of such polls are counter-intuitive ("Fox viewers think Bernie Sanders should be the next president") they are of little interest: "If you are the sort of person who supports Donald Trump, then you support Donald Trump."
But it's not necessarily that easy to spot selection bias. If you interviewed 1000 random people in Times Square, New York, what would that tell you? How about 1,000 Facebook users? 1,000 commuters? Clearly how you pick your selection of people to interview matters.
Even if you try and get as wide a sample as possible, things can go badly wrong. The 1936 Literary Digest Poll is the classic example.
The candidates in the US presidential election of 1936 were Alfred Landon, the governor of Kansas, and the incumbent, Franklin Roosevelt. The Literary Digest mailed out an "election ballot" to 10 million people to predict the result. That was a quarter of all voters. In terms of logistics, it was a massive achievement. They received 2.4 million responses. Their prediction: Landon 57%, Roosevelt 43%.
The fact that you haven't heard of President Landon suggests that something went badly wrong.
The actual results were Landon 38%, Roosevelt 62%. A landslide for Roosevelt.
The primary problem here was selection bias. The addresses used for the sample were culled from telephone directories and magazine subscription lists. In 1936 telephones were luxury items, and magazine subscribers tended to be richer than the average population. An unemployed person was unlikely to have either a telephone or a magazine subscription. At the time, there were 19 million unemployed, a significant fraction of voters. The people surveyed were much better off than average, at a time when economic issues and policies were of major concern to voters due to the Great Depression.
Self-Selection Bias (or Non-Response Bias)
Another criticism of the Literary Digest poll, although it's impossible to know its effects, was self-selection or non-response bias. Only 24% of those polled returned the "ballot paper". Were these disproportionately Landon or Roosevelt supporters? Were those supporting Roosevelt more likely to view the poll as worthless and not bother to respond? It's impossible to tell.
[We note as an aside that today such a response to a piece of direct mail would be viewed as an overwhelming success: a response rate of 2.4% would now be regarded as good.]
Self-selection or non-response is always a problem.
A charity (which I will not name) conducted a survey to determine the frequency of sexual assaults on female athletes. It reported that, according to their survey, over 75% of female athletes had been sexually assaulted. The problem: it was a mail survey with a 3% response rate. Athletes who had been sexually assaulted were more likely to reply than those who hadn't. All we can safely say is that at least 2.25% of surveyed female athletes reported they had been sexually assaulted. The actual number could have been anywhere between 2.25% and 99.25%.
"In a recent survey, 75% of Americans said they would vote for Clinton." It's perfectly true. Of the last four US voters I spoke to, three indicated that they would vote for Clinton and one for Trump.
So based on my survey should Trump supporters give up on the presidency as a lost cause? Obviously not.
A key question to ask of any survey is "what was the size of the sample?". In this case, the sample size was 4. The good news for Mr. Trump is that there is a 31% chance of getting this result or worse even if the actual chances are 50:50. The sample size is just too small to predict anything accurately. Legitimate polls will tell us the sample size ("1,000 voters"), how those voters were selected ("telephone subscribers in Seattle"), and the confidence interval of the result ("plus or minus three percent, 19 times out of 20").
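Both numbers in that paragraph can be checked with a little probability theory. The first is a standard binomial tail; the second uses the usual rough rule that the 95% margin of error is about 1/sqrt(n) for a sample of n:

```python
from math import comb, sqrt

def prob_at_least(k: int, n: int, p: float = 0.5) -> float:
    """Probability of k or more successes in n independent trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Chance of 3 or more Clinton voters in a sample of 4 when support is 50:50:
print(prob_at_least(3, 4))  # 0.3125, the "31% chance" in the text

# And the familiar "plus or minus three percent" for a 1,000-voter poll:
print(1 / sqrt(1000))  # ~0.032, i.e. about 3%
```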
Manipulation by the Pollster
In the brilliant political satire Yes, Prime Minister, there is a wonderful scene which explains one method of manipulating poll results if you are the person designing the poll. Faced with the British Prime Minister seeking to introduce National Service (conscription) because a poll suggests widespread support for the idea, the Permanent Secretary (Sir Humphrey) explains to the Prime Minister's Principal Secretary (Bernard) how to produce a contradictory poll result. If you've never seen it, I suggest you watch the YouTube clip now. Even though you know what is happening, I bet you'll find yourself mirroring Bernard's answers to the poll questions.
This is the least subtle form of manipulation, using context provided by previous questions to get the desired result.
However, there are more subtle forms of manipulation. Psychologists have repeatedly shown that people can easily be affected ("anchored") by irrelevant information (such as the last two digits of a social security number) presented to them just before a question is asked, as well as by what they believe
similar (or dissimilar) people have answered to the same question.
Manipulation by the Poll Subjects
It's not always the pollster doing the manipulation. Sometimes it's the people providing the answers:
In face-to-face or telephone polling, it is generally recognized that answers viewed as unpopular will be suppressed. People will lie to avoid the embarrassment of holding an unpopular view. Candidates or views which are unpopular tend to be under-represented as a result.
I have a confession to make. I once rigged an online poll. The poll was to choose the name of a new in-house magazine. It was a simple matter of modifying a web browser's cookies to allow me to vote multiple times. I probably overdid this a bit, as more people voted for my choice than worked for the company, but I still won. Vote early, as the saying goes, and vote often.
Special interest groups have been known to manipulate the results of online polls to fit their agenda. If one of their members identifies a poll relevant to the group's aims, it is circulated to all the members, who then register on the web site and take part in the poll. Members of the anarchic 4chan website are notorious for colluding to manipulate online polls, (mostly) for the fun of it. Notable manipulations include
manipulating Time Magazine's Top 100, and trying to get
Taylor Swift to play a concert at a school for the deaf. The 4chan (NSFW) and Reddit websites are widely credited with fixing the results of online polls after the presidential debates.
Polls can only be trusted if you know a lot about how they were conducted, and the measures put in place to prevent abuse either from the pollsters or the participants. Polls produced by partisan organizations or websites are easily recognized as worthless because of the opportunities and incentives for abuse, but even honest polls may go badly wrong if there is significant selection bias or participant manipulation in the results.
So when assessing political risk (e.g. the chances of Trump or Clinton winning the US presidential election), be careful about trusting polls too much. For US politics, the FiveThirtyEight website is a good place to start. (You can find a lot of additional information on their methods in Wikipedia.) For UK politics and policies, it's worth looking at the YouGov website. Their post-mortem on the Brexit poll results makes interesting reading.
And if you disregard all poll results as being meaningless, remember that our own assessments are being influenced by
confirmation bias and our personal
filter bubble. Polls may be bad, but in the absence of data
our own opinions may be badly distorted too.
The Risk Assessment Toolkit
A quick plug for our Risk Assessment Toolkit. If you've had to create or maintain a Risk Register or a Business Impact Analysis, you will know how
much work that can entail.
Our Risk Assessment Toolkit is designed to make those tasks easier, as well as providing you with quantitative reports
simulating the effects of disruptions, and calculating potential losses.
With recent news dominated by the UK's Brexit referendum, the Colombian FARC referendum, and the US presidential election, it's been easy to miss some other events in the world of interest to business continuity and risk management professionals:
As I wrote elsewhere, ransomware attacks are increasingly common. (I personally receive about 5 phishing emails a day, and some of them would be very plausible if I dealt with the organizations whose email addresses are forged.) It's a little out of date, but Barkly's blog has some interesting statistics. The main attack vectors are emailed links and attachments, and less than half of companies are apparently successful in recovering their data, even with backups. Less than 5% of companies consider paying the ransom. Ransomware typically attacks all data accessible to a user, and may also disable backup services to make it more likely the data encrypted is irreplaceable. This threat is definitely something that should be actively planned for by all IT departments.
And thanks to a virus of the other kind, a restaurant chain in England was forced to close several of its locations after an outbreak of norovirus caused many of its staff and customers to become ill. Norovirus is frequent during the winter (it passes from person to person as well as via food or food preparation surfaces), is exceptionally infective, and can be transmitted to food or other people in the 48 hours after symptoms have ended. Staff sickness policies in food preparation should take this into account. Consideration should also be given to the possibility of contamination in the supply of fruit, vegetables, and oysters.
Distributed denial of service (DDoS) attacks can be expected to increase in the coming months due to the widespread recognition of the ease with which Internet of Things devices (such as webcams) can be subverted and used for nefarious purposes. In September, a record 665 Gbps attack was launched against Brian Krebs' security blog, resulting in DDoS protection for his blog being dropped by Akamai, who felt they were unable to continue to provide services which could cope with this level of threat. (Krebs' blog is frequently targeted by internet criminals due to his coverage of their activities.) The attack was unusual in that a large number of Internet of Things devices were subverted for the attack. Subsequently the author(s) of the malware released the source code to the Mirai botnet used in the attack. Since then an even bigger attack has been launched against Dyn, the DNS service provider used by a number of high-profile sites such as Twitter and Reddit. This new attack used about 100,000 subverted devices. Given the availability of the attack code and the difficulty of patching consumer devices, such attacks can be expected to proliferate over the coming months. If the internet is critical to your business, you should plan for what you will do if you are impacted by a major attack, even if you are not its intended victim.
Product recalls require good public relations policies and practices to minimize damage. Samsung's handling of its Galaxy Note 7 recall was probably about as good as it could be in the circumstances, but it still had a significant effect on both consumer confidence and the company's share price. Timeliness in public communications is vital. If you produce a product which has a significant number of customers, then you should assess the risks and draw up a plan for what you will do if a recall is required.
Finally, terrorist attacks by individuals and small groups continue to present a threat in the West. 29 people were injured by a bomb in New York (https://en.wikipedia.org/wiki/2016_New_York_and_New_Jersey_bombings), and there were minor attacks against police by individuals in Belgium, France, Germany and Italy. France remains in a state of high alert following the Bastille Day attack in Nice, in which an individual killed 86 people and injured hundreds more simply by driving a truck into a crowd.
There is no perfect defense against all such attacks, but we can plan for what to do after an incident has occurred.