What the Manhattan Project and James Bond Have in Common – and Why Every Analyst Needs to Know It


Overview:

  • A powerful quantitative analysis method was created as a result of the Manhattan Project and named for an exotic casino popularized by the James Bond series
  • The tool is the most practical and efficient way of simulating thousands of scenarios and calculating the most likely outcomes
  • Unlike other methods, this tool incorporates randomness that is found in real-world decisions
  • Using this method doesn’t require sophisticated software or advanced training; any organization can learn how to use it

A nuclear physicist, a dashing British spy, and a quantitative analyst walk into a casino. This sounds like the opening of a bad joke, except that what all of these people have in common can be used to make better decisions in any field by leveraging the power of probability.

The link in question – that common thread – gets its name from an exotic locale on the Mediterranean, or, specifically, a casino. James Bond visited a venue inspired by it in Casino Royale, a book written by Ian Fleming, who – before he was a best-selling author – served in the British Naval Intelligence Division in World War II. While Fleming was crafting creative plans to steal intel from Nazi Germany, a group of nuclear physicists on the other side of the Atlantic were crafting plans of their own: to unleash the awesome destructive power of nuclear fission and create a war-ending bomb.

Trying to predict the most likely outcome of a theoretical nuclear fission reaction was difficult, to say the least, particularly with the primitive computing machinery of the era. To oversimplify the challenge, scientists had to be able to calculate whether or not the bomb they were building would explode – a calculation that required an integral equation to somehow predict the behavior of atoms in a chain reaction. Mathematicians Stanislaw Ulam and John von Neumann, both members of the Manhattan Project, created a way to calculate and model the sum of thousands of variables (achieved by literally placing a small army of smart women in a room and having them run countless calculations). When they wanted to put a name to this method, Ulam recommended the name of the casino where his uncle routinely gambled away large sums of money.[1]

That casino – the one Fleming’s James Bond would popularize and the one where Ulam’s uncle’s gambling addiction took hold – was in Monte Carlo, and thus the Monte Carlo simulation was born.

Now, the Monte Carlo simulation is one of the most powerful tools a quantitative analyst can use when incorporating the power of probabilistic thinking into decision models.

How a Monte Carlo Simulation Works – and Why We Need It To

In making decisions – from how to build a fission bomb to how much to wager at a casino table game – uncertainty abounds. Uncertainty abounds because, put simply, a lot of different things can happen. There can be almost countless scenarios for each decision, and the more variables and measurements are involved, the more complicated it becomes to calculate what’s most likely to happen.

If you can reduce possible outcomes to a range of probabilities, you can, in theory, make better decisions. The problem is that doing so is very difficult without the right tools. The Monte Carlo simulation was designed to address that problem and provide a way to calculate the probability of thousands of potential outcomes through sheer brute force.

Doug Hubbard provides a scenario in The Failure of Risk Management that explains how a Monte Carlo simulation works and can be applied to a business case (in this context, figuring out the ROI of a new piece of equipment). Assume that you’re a manager considering the potential value of a new widget-making machine. You perform a basic cost-benefit analysis and estimate that the new machine will make one million widgets, delivering $2 of profit per unit. The machine can make up to 1.25 million, but you’re being conservative and think it’ll operate at 80% capacity on average. We don’t know the exact amount of demand. We could be off by as much as 750,000 widgets per year, above or below.

We can conceptualize the uncertainty we have like so:

  • Demand: 250,000 to 1.75 million widgets per year
  • Profit per widget: $1.50 to $2.50

We’ll say these numbers fall into a 90% confidence interval with a normal distribution. There are a lot of possible outcomes, to put it mildly (and this is a pretty simple business case). Which are the most likely? In the book, Doug ran a Monte Carlo simulation of 10,000 scenarios and tallied the results, with each scenario representing some combination of demand and profit per widget that produces a gain or a loss. The results are described by two figures: a histogram of outcomes (Figure 1) and a cumulative probability chart (Figure 2).[2]

Figure 1: Histogram of Outcomes

Figure 2: Cumulative Probability Chart
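
To make the mechanics concrete, here is a minimal Python sketch of this kind of simulation. It illustrates the approach rather than reproducing the exact model from the book: the 90%-confidence-interval-to-normal conversion, the capacity cap, and the $1M threshold in the last line are assumptions layered on the figures given above.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # number of scenarios, matching the example above

def normal_from_90ci(lower, upper):
    """Convert a 90% confidence interval into normal-distribution parameters.
    A 90% CI spans the mean +/- 1.645 standard deviations, so the interval
    width covers about 3.29 standard deviations."""
    mean = (lower + upper) / 2
    sigma = (upper - lower) / 3.29
    return mean, sigma

# 90% CIs from the widget scenario above
demand_mu, demand_sigma = normal_from_90ci(250_000, 1_750_000)
profit_mu, profit_sigma = normal_from_90ci(1.50, 2.50)

# One random draw per scenario for each uncertain variable
demand = rng.normal(demand_mu, demand_sigma, N)
units_sold = np.clip(demand, 0, 1_250_000)  # demand can't go negative; the machine tops out at 1.25M
profit_per_widget = rng.normal(profit_mu, profit_sigma, N)

# Tally the outcome of every scenario
annual_profit = units_sold * profit_per_widget

print(f"Median outcome:   ${np.median(annual_profit):,.0f}")
print(f"5th percentile:   ${np.percentile(annual_profit, 5):,.0f}")
print(f"95th percentile:  ${np.percentile(annual_profit, 95):,.0f}")
print(f"P(profit < $1M):  {np.mean(annual_profit < 1_000_000):.0%}")
```

A histogram of annual_profit is the analogue of Figure 1, and sorting the outcomes and plotting the running percentile gives the cumulative probability chart in Figure 2.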

You, the manager, would ideally then calculate your risk tolerance and use this data to create a loss exceedance curve, but that’s another story for another day. As Doug explains, using the MC simulation allowed you to gain critical insight that otherwise would’ve been difficult, if not impossible, to obtain:

Without this simulation, it would have been very difficult for anyone other than mathematical savants to assess the risk in probabilistic terms. Imagine how difficult it would be in a more realistically complex situation.

The diverse benefits of incorporating MC simulations into decision models were perhaps best summed up by a group of researchers in an article titled “Why the Monte Carlo method is so important today”:[3]

  • Easy and Efficient. Monte Carlo algorithms tend to be simple, flexible, and scalable.
  • Randomness as a Strength. The inherent randomness of the MCM is not only essential for the simulation of real-life random systems, it is also of great benefit for deterministic numerical computation.
  • Insight into Randomness. The MCM has great didactic value as a vehicle for exploring and understanding the behavior of random systems and data. Indeed we feel that an essential ingredient for properly understanding probability and statistics is to actually carry out random experiments on a computer and observe the outcomes of these experiments — that is, to use Monte Carlo simulation.
  • Theoretical Justification. There is a vast (and rapidly growing) body of mathematical and statistical knowledge underpinning Monte Carlo techniques, allowing, for example, precise statements on the accuracy of a given Monte Carlo estimator (for example, square-root convergence) or the efficiency of Monte Carlo algorithms.

Summarized, Monte Carlo simulations are easy to use; they not only help you more closely replicate real-life randomness but also help you understand randomness itself; and they are backed by research and evidence showing how they make decision models more accurate. We need them to work because any significant real-world decision comes with a staggering amount of uncertainty, complicated by thousands of potential outcomes created by myriad combinations of variables and distributions – all with an eminently frustrating amount of randomness mixed throughout.

How to Best Use a Monte Carlo Simulation

Of course, knowing that an MC simulation tool is important – even necessary – is one thing. Putting it into practice is another.

The bad news is that merely using the tool doesn’t insulate you from a veritable rogues’ gallery of factors that lead to bad decisions: overconfidence, uncalibrated subjective estimates, logical fallacies, and soft-scoring methods, risk matrices, and other pseudo-quantitative methods that perform no better than chance and frequently worse.

The good news is that all of those barriers to better decisions can be overcome. Another piece of good news: you don’t need sophisticated software to run a Monte Carlo simulation. You don’t even need specialized training – many of the clients we train in our quantitative methodology start with neither. You can build a functional MC simulation in native Microsoft Excel, and even a basic version can help by giving you more insight than you have now – another proven way to glean actionable knowledge from your data.

On its own, though, an MC simulation isn’t enough. The best use of the Monte Carlo method is to incorporate it into a decision model. The best decision models employ proven quantitative methods – including but not limited to Monte Carlo simulations – to follow the process below (Figure 3):


Figure 3: HDR Decision Analysis Process

The outputs of a Monte Carlo simulation are typically shown in that last step, when the model’s outputs can be used to “determine optimal choice,” or, figure out the best thing to do. And again, you don’t need specialized software to produce a working decision model; Microsoft Excel is all you need.

You may not be creating a fearsome weapon, or out-scheming villains at the baccarat table, but your decisions are important enough to make using the best scientific methods available. Incorporate simulations into your model and you’ll make better decisions than you did before – decisions a nuclear physicist or a secret agent would admire.


Recommended Products:

Learn how to create Monte Carlo simulations in Excel with our Basic Simulations in Excel webinar. Register today to get 15% off with coupon code EXCEL15.

References

1. Metropolis, N. (1987). The Beginning of the Monte Carlo Method. Los Alamos Science, 125-130. Retrieved from https://permalink.lanl.gov/object/tr?what=info:lanl-repo/lareport/LA-UR-88-9067
2. Hubbard, D. W. (2009). The Failure of Risk Management: Why It’s Broken and How to Fix It. Hoboken, NJ: J. Wiley & Sons.
3. Kroese, D. P., Brereton, T., Taimre, T., & Botev, Z. I. (2014). Why the Monte Carlo method is so important today. Wiley Interdisciplinary Reviews: Computational Statistics, 6(6), 386–392.

Are We Already In a Recession?


Overview:

  • Fears of a recession are rising as experts attempt to predict when a recession will officially occur
  • Forecasting a recession, for most practical purposes, is irrelevant to decision-makers
  • Decision-makers need to ask the right questions that will help them mitigate the risk a recession poses

A Google search of “risk of recession” uncovers a treasure trove of prognostication, hand-wringing, and dire predictions – or sneering dismissals – involving whether or not the U.S. economy will soon take a nosedive.

“Global Recession Will Come In 9 Months if Trump Takes This One Step, Morgan Stanley Argues,” from MarketWatch; “Global Recession Risks Are Up, and Central Banks Aren’t Ready,” from The New York Times; “Current U.S. Recession Odds Are the Same as During ‘The Big Short’ Heyday,” from Forbes – all of these articles reflect one of the nation’s most pressing fears: that the economic wave we’ve ridden for the past eight years will soon crest, then crash onto the beach.

It’s surely a worrisome time. Even though the economy appears to be going strong – unemployment is still low, credit spreads are stable, etc. – there’s a tremendous amount of uncertainty about what the economy will do next. If we knew when the recession would hit, we’d presumably be able to do something about it – although “do something” is vague, means different things to different people, and, frankly, is something we as a nation aren’t particularly good at identifying, let alone doing.

Throw in the fact that the formal announcement of a recession always lags well behind the recession’s actual start, and our desire to predict the expected downturn only grows.

But two things are very possible, maybe even probable:

  1. The recession has already begun; and
  2. Asking when the recession will happen is completely irrelevant.

It Doesn’t Matter If We’re In a Recession Right Now

If a time traveler came to you from ten years from now and told you that this day marked the official beginning of the Great Recession Part 2: Judgment Day (or whatever clever name economic historians will bestow on it), would it make a difference?

Probably not, because it would be too late to take actions to avoid the recession, since it’s already here.

But even if the time traveler instead said that the recession would start three months from now, or six months, or 12 months, would that make a difference? Possibly – but it’s also very possible that the economic risks that collectively cause and make up a “recession” have already started impacting your business.

And if you knew that the recession was six months down the road, maybe you put off taking the actions that you need to take today (or needed to take X months ago) in order to mitigate the damage your organization could incur.

No matter how you slice it, asking “When will we be in recession?” or “Are we already in a recession?” is not only mostly irrelevant, but also largely counterproductive because it takes our focus off what we should already be doing: asking the right questions.

Questions we should ask instead are:

  1. What impact will a recession actually have on my organization?
  2. What specific economic risks are most likely for me?
  3. When would these risks start impacting my organization? Can I tell if they already have?
  4. What can I do today to mitigate these risks as much as possible?

All of these questions can be answered without knowing when a recession will officially happen. Remember, what constitutes a “recession” is largely arbitrary: it’s one broad term for dozens of individual risks that tend to happen in clusters during recessionary periods but may begin or end at wildly different times and with different severities.

Developing answers to the above questions is far more productive than trying to discern when the recession will happen by reading news articles, watching percentages go up and down on TV, or hiring shamans to study chicken entrails. If you can find those answers, you’ll be far ahead of the curve and increase your chances of being in the minority of organizations that not only weather economic downturns, but actually grow during them.



Going Beyond the Usual Suspects in Commercial Real Estate Modeling: Finding Better Variables and Methods


Overview:

  • Quantitative commercial real estate modeling is becoming more widespread, but is still limited in several crucial ways
  • You may be measuring variables unlikely to improve the decision while ignoring more critical variables
  • Some assessment methods can create more error than they remove
  • A sound quantitative model can significantly boost your investment ROI

Quantitative commercial real estate (CRE) modeling, once the province of only high-end CRE firms on the coasts, has become more widespread – for good reason. CRE is all about making good investment decisions, after all, and research has repeatedly shown that even a basic statistical algorithm outperforms human judgment.[1][2]

Statistical modeling in CRE, though, is still limited for a few different reasons, which we’ll cover below. Many of these limitations actually result in more error (one common misconception is that merely having a model improves accuracy; sadly, that’s not the case). Even a few percentage points of error can result in significant losses, as any investor who has suffered through a bad investment knows all too well. Better quantitative modeling, then, can decrease the chance of failure.

Here’s how to start.

The Usual Suspects: Common Variables Used Today

Variables are what you’re using to create probability estimates – and, really, any other estimate or calculation. If we can pick the right variables, and figure out the right way to measure them (more on that later), we can build a statistical model that has more accuracy and less error.

Most commercial real estate models – quantitative or otherwise – make use of the same general variables. The CCIM Institute, in its 1Q19 Commercial Real Estate Insights Report, discusses several, including:

  • Employment and job growth
  • Gross domestic product (GDP)
  • Small business activity
  • Stock market indexes
  • Government bond yields
  • Commodity prices
  • Small business sentiment and confidence
  • Capital spending

Data for these variables is readily available. For example, you can go to CalculatedRiskBlog.com and check out their Weekly Schedule for a list of all upcoming reports, like the Dallas Fed Survey of Manufacturing Activity, or the Durable Goods Orders report from the Census Bureau.

The problem, though, is twofold:

  1. Not all measurements matter equally, and some don’t matter at all.
  2. It’s difficult to gain a competitive advantage if you’re using the same data in the same way as everyone else.

Learning How to Measure What Matters in Commercial Real Estate

In How to Measure Anything: Finding the Value of Intangibles in Business, Doug Hubbard explains a key theme of the research and practical experience he and others have amassed over the decades: not everything you can measure matters.

When we say “matters,” we’re basically saying that the variable has predictive power. For example, Figure 1 shows real cases where the variables our clients were initially measuring had little to no predictive power, while variables they were ignoring turned out to be far more predictive. This is called measurement inversion.

Figure 1: Real Examples of Measurement Inversion

The same principle applies in CRE. Why does measurement inversion exist? There are a few reasons:

  • Variables are often chosen based on intuition, conventional wisdom, or experience rather than statistical analysis or testing;
  • Decision-makers often assume industries are more monolithic than they really are when it comes to data and trends (i.e., that all businesses are sufficiently similar for broad data to be good enough);
  • Intangibles that should be measured are viewed as “impossible” to measure; and/or
  • Looking into other, “non-traditional” variables comes with risk that some aren’t willing to take.

(See Figure 2 below.)

Figure 2: Solving the Measurement Inversion Problem

The best way to begin overcoming measurement inversion is to get precise with what you’re trying to measure. Why, for example, do CRE investors want to know about employment? Because if too many people in a given market don’t have jobs, then that affects vacancy rates for multi-family units and, indirectly, vacancy rates for office space. That’s pretty straightforward.

So, when we’re talking about employment, we’re really trying to measure vacancy rates. Investors really want to know the likelihood that vacancy rates will increase or decrease over a given time period, and by how much. Employment trends can start you down that path, but by themselves they aren’t enough. You need more predictive power.

Picking and defining variables is where a well-built quantitative CRE model really shines. You can use data to test variables and tease out not only their predictive power in isolation (through decomposition and regression), but also discover relationships through multivariate analysis, as sketched below. Then you can incorporate simulations and start determining probabilities.
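
As a simple illustration of what testing predictive power can look like (the variable names, data, and relationships below are entirely hypothetical, not drawn from any client model), the sketch below ranks candidate variables by how well each one alone explains out-of-sample variation in the quantity you actually care about, such as a vacancy rate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical quarterly data for one market. In practice these columns would
# come from your own data sources; the names and relationships here are made up.
n = 120
employment_growth = rng.normal(0.010, 0.020, n)
gdp_growth = rng.normal(0.005, 0.010, n)
search_interest = rng.normal(0.0, 1.0, n)  # e.g., a Google Trends-style index
vacancy_rate = (0.08 - 1.5 * employment_growth
                + 0.010 * search_interest
                + rng.normal(0, 0.01, n))

candidates = {
    "employment_growth": employment_growth,
    "gdp_growth": gdp_growth,
    "search_interest": search_interest,
}

def holdout_r2(x, y, split=0.7):
    """Fit y = a + b*x on the first 70% of the data, then score R^2 on the rest."""
    cut = int(len(x) * split)
    X_train = np.column_stack([np.ones(cut), x[:cut]])
    coef, *_ = np.linalg.lstsq(X_train, y[:cut], rcond=None)
    X_test = np.column_stack([np.ones(len(x) - cut), x[cut:]])
    pred = X_test @ coef
    ss_res = np.sum((y[cut:] - pred) ** 2)
    ss_tot = np.sum((y[cut:] - y[cut:].mean()) ** 2)
    return 1 - ss_res / ss_tot

# Rank candidate variables by how well each one alone predicts the vacancy rate
for name, x in candidates.items():
    print(f"{name:>18}: holdout R^2 = {holdout_r2(x, vacancy_rate):.2f}")
```

In a real engagement this single-variable screen would only be a first pass; multivariate regression and value-of-information calculations would follow.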

For example, research has shown[3] that “sentiment” – the overall mood or feeling of investors in a market – shouldn’t be dismissed just because it’s hard to measure in any meaningful way. Traditional ways to measure market sentiment can be dramatically improved by incorporating tools we’ve used in the past, like Google Trends. (Here’s a tool we use to demonstrate a more predictive “nowcast” of employment using publicly available Google Trends data.)

To illustrate this, consider the following. We were engaged by a CRE firm located in New York City to develop quantitative models to help them make better recommendations to their clients in a field that is full of complexity and uncertainty. Long story short, they wanted to know something every CRE firm wants to know: what variables matter the most, and how can we measure them?

We conducted research and gathered estimates from CRE professionals involving over 100 variables. By conducting value of information calculations and Monte Carlo simulations, along with using other methods, we came to a conclusion that surprised our client but naturally didn’t surprise us: many of the variables had very little predictive power – and some had far more predictive power than anyone thought.

One of the latter variables wound up reducing uncertainty in price by 46% for up to a year in advance, meaning the firm could more accurately predict price changes – giving them a serious competitive advantage.

Knowing what to measure and what data to gather can give you a competitive advantage as well. However, one common source of data – inputs from subject-matter experts, agents, and analysts – is fraught with error if you’re not careful. Unfortunately, most organizations aren’t.

How to Convert Your Professional Estimates From a Weakness to a Strength

We mentioned earlier how algorithms can outperform human judgment. The reasons are numerous, and we talk about some of them in our free guide, Calibrated Probability Assessments: An Introduction.

The bottom line is that there are plenty of innate cognitive biases that even knowledgeable and experienced professionals fall victim to. These biases introduce potentially disastrous amounts of error that, when left uncorrected, can wreak havoc even with a sophisticated quantitative model. (In The Quants, Scott Patterson’s best-selling chronicle of quantitative wizards who helped engineer the 2008 collapse, the author explains how overly-optimistic, inaccurate, and at-times arrogant subjective estimates undermined the entire system – to disastrous results.)

The biggest threat is overconfidence, and unfortunately, the more experience a subject-matter expert has, the more overconfident he/she tends to be. It’s a catch-22 situation.

You need expert insight, though, so what do you do? First, understand that human judgments are like anything else: variables that need to be properly defined, measured, and incorporated into the model.

Second, these individuals need to be taught how to control for their innate biases and develop more accuracy with making probability assessments. In other words, they need to be calibrated.

Research has shown how calibration training often results in measurable improvements in accuracy and predictive power when it comes to probability assessments from humans. (And, at the end of the day, every decision is informed by probability assessments whether we realize it or not.) Thus, with calibration training, CRE analysts and experts can not only use their experience and wisdom, but quantify it and turn it into a more useful variable. (Click here for more information on Calibration Training.)

Including calibrated estimates can take one of the biggest weaknesses firms face and turn it into a key, valuable strength.

Putting It All Together: Producing an ROI-Boosting Commercial Real Estate Model

How do you put all of this into practice? Unfortunately, there’s no magic button or off-the-shelf piece of software that will do it for you. A well-built CRE model – one that incorporates the right measurements and a few basic statistical concepts based on probabilistic assessments – is what will improve your chances of generating more ROI and avoiding the costly pitfalls that routinely befall other firms.

The good news is that CRE investors don’t need an overly-complicated monster of a model to make better investment decisions. Over the years we’ve taught companies how incorporating just a few basic statistical methods can improve decision-making over what they were doing at the time. Calibrating experts, incorporating probabilities into the equation, and conducting simulations can, just by themselves, create meaningful improvements.

Eventually, a CRE firm should get to the point where it has a custom, fully-developed commercial real estate model built around its specific needs, like the model mentioned previously that we built for our NYC client.

There are a few different ways to get to that point, but the ultimate goal is to be able to deliver actionable insights like “Investment A is 35% more likely than Investment B to achieve X% ROI over the next six months,” or something to that effect (see the sketch below).
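
As a hedged sketch of how a statement like that gets computed once a model exists (the ROI distributions below are invented for illustration; a real model would produce them from its own simulation step), the comparison reduces to counting simulated scenarios:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000
target_roi = 0.10  # the "X% ROI" threshold in the statement above

# Illustrative simulated six-month ROI distributions for two candidate deals.
# In a real model these arrays would be the output of the Monte Carlo step,
# driven by calibrated estimates and the variables that tested as predictive.
roi_a = rng.normal(0.12, 0.06, N)
roi_b = rng.normal(0.09, 0.04, N)

p_a = np.mean(roi_a >= target_roi)
p_b = np.mean(roi_b >= target_roi)

print(f"P(Investment A achieves >= {target_roi:.0%} ROI): {p_a:.1%}")
print(f"P(Investment B achieves >= {target_roi:.0%} ROI): {p_b:.1%}")
print(f"A is {(p_a / p_b - 1):.0%} more likely than B to hit the target")
```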

It just takes going beyond the usual suspects: ill-fitting variables, uncalibrated human judgment, and doing what everyone else is doing because that’s just how it’s done.

 




References

1. Dawes, R. M., Faust, D., & Meehl, P. E. (n.d.). Statistical Prediction versus Clinical Prediction: Improving What Works. Retrieved from http://meehl.umn.edu/sites/g/files/pua1696/f/155dfm1993_0.pdf
2. N. Nohria, W. Joyce, and B. Roberson, “What Really Works,” Harvard Business Review, July 2003
3. Heinig, S., Nanda, A., & Tsolacos, S. (2016). Which Sentiment Indicators Matter? An Analysis of the European Commercial Real Estate Market. ICMA Centre, University of Reading

Applied Information Economics in ISACA Journal Feature on Quantitative Risk Management


Risk management methodology, until very recently, was based mostly on pseudo-quantitative tools like risk matrices. Research has shown that these tools actually introduce more error into decision-making than they remove, and organizations are steadily coming around to more scientific quantitative methods – like Applied Information Economics (AIE). AIE is prominently cited in a new piece in the ISACA Journal, the publication of ISACA, a nonprofit, independent association that advocates for professionals involved in information security, assurance, risk management, and governance.

The feature piece doesn’t bury the lede. The author says, in the opening paragraph, what Doug has been preaching for years: that “risk matrices do not really work. Worse, they lead to a false sense of security.” This was one of the main themes of The Failure of Risk Management, the second edition of which is due for publication this year. The article sums up several of the main reasons Doug and others have given for why risk matrices don’t work, ranging from a lack of clear definitions to a failure to assign meaningful probabilities and cognitive biases that lead to poor assessments of probability and risk.

After moving through explanations of various aspects of effective risk models – i.e. tools like decomposition, Monte Carlo simulations, and the like – the author of the piece concludes with a simple statement that sums up the gist of what Applied Information Economics is designed to do: “There are better alternatives to risk matrices, and, with a little time and effort, it is possible to manage risk using terminology and methods that everyone can, at least intuitively, understand.”

The entire piece can be found here. For an introduction into AIE and how our methods are based on sound scientific research and offer measurable improvement over other systems, check out our Intro to Applied Information Economics webinar.

Doug Hubbard to Give Public Lecture on “How to Measure Anything” at University of Bonn June 28, 2019

Many things seem impossible to measure – so-called “intangibles” like employee engagement, innovation, customer satisfaction, transparency, and more – but with the right mindset and approach, you can measure anything. That’s the lesson of Doug’s book How to Measure Anything: Finding the Value of Intangibles in Business, and that’s the focus of his public lecture at the University of Bonn in Bonn, Germany on June 28, 2019.

In this lecture, Doug will discuss:

  • Three misconceptions that keep people from measuring what they should measure, and how to overcome them;
  • Why some common “quantitative” methods – including many based on subjective expert judgment – are ineffective;
  • How an organization can use practical statistical methods shown by scientific research to be more effective than anything else.

The lecture is hosted by the university’s Institute of Crop Science and Resource Conservation, which studies how to improve agriculture practices in Germany and around the world. Doug previously worked in this area when he helped the United Nations Environmental Program (UNEP) determine how to measure the impact of and modify restoration efforts in the Mongolian desert. You can view that report here. You can also view the official page for the lecture on the university’s website here.

Top Under-the-Radar Cybersecurity Threats You May Not See Coming


In every industry, the risk of cyber attack is growing.

In 2015, a team of researchers forecasted that the maximum number of records that could be exposed in breaches – 200 million – would increase by 50% by 2020. According to the Identity Theft Resource Center, the number of records exposed in 2018 alone was nearly 447 million – an increase of well over 50%. By 2021, damages from cybersecurity breaches are projected to cost organizations $6 trillion a year. In 2017, breaches cost global companies an average of $3.6 million, according to the Ponemon Institute.

It’s clear that this threat is sufficiently large to rank as one of an organization’s most prominent risks. To this end, corporations have entire cybersecurity risk programs in place to attempt to identify and mitigate as much risk as possible.

The foundation of accurate cybersecurity risk analysis begins with knowing what is out there. If you can’t identify the threats, you can’t assess their probabilities – and if you can’t assess their probabilities, your organization may be exposed by a critical vulnerability that won’t make itself known until it’s too late.

The specific cybersecurity threats vary from entity to entity, but several common dangers may be flying under the radar – including some you may not have seen coming until now.

A Company’s Frontline Defense Isn’t Keeping Pace

Technology is advancing at a more rapid rate than at any other point in human history: cloud computing, machine learning, artificial intelligence, and the Internet of Things (IoT) provide unprecedented advantages, but they also introduce distinct vulnerabilities.

This rapid pace requires cybersecurity technicians to stay up to speed on the latest threats and mitigation techniques, but this often doesn’t happen. In a recent survey of IT professionals conducted by (ISC)², 43% indicated that their organization fails to provide adequate ongoing security training.

Unfortunately, leadership in companies large and small has traditionally been reluctant to invest in security training. The reason is mainly psychological: decision-makers tend to view IT investment in general as an expense to be limited as much as possible, rather than as a hedge against the greater cost of failure.

Part of the reason this phenomenon exists is how budgets are structured. IT investment adds to operational cost, and decision-makers – especially those in the MBA generation – are trained to reduce operational costs as much as possible in the name of greater efficiency and higher short-term profit margins. This mindset can keep executives from seeing IT investments for what they are: the price of mitigating greater costs.

IT security budget increases also aren’t pegged to increases in a company’s exposure, which isn’t static but fluctuates (and, in today’s world of increasingly sophisticated threats, tends to grow).

The truth is, of course, that while investing in cybersecurity may not make a company more money – a myopic way to frame it – it can keep a company from losing far more.

Another closely related threat is how decision-makers tend to view probabilities. Research shows that decision-makers often overlook the potential cost of a negative event – like a data breach – because of its relatively low probability (i.e., “It hasn’t happened before, and it probably won’t happen, so we don’t have to worry as much about it”). These are called tail risks: risks whose costs are disproportionate to their probabilities. They may not happen frequently, but when they do, the consequences are often catastrophic.
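
A few lines of simulation illustrate the point (the 5% annual breach probability and the lognormal loss distribution below are purely hypothetical numbers chosen for illustration, not estimates for any real organization): a low-probability event with a heavy-tailed loss can still carry a meaningful expected cost and a non-trivial chance of a catastrophic year.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000  # simulated years

# Hypothetical annual breach model: a 5% chance of a major breach in any given
# year, with a heavy-tailed (lognormal) loss when one occurs.
p_breach = 0.05
breach_occurs = rng.random(N) < p_breach
loss_if_breach = rng.lognormal(mean=np.log(2_000_000), sigma=1.2, size=N)
annual_loss = np.where(breach_occurs, loss_if_breach, 0.0)

print(f"Expected annual loss:     ${annual_loss.mean():,.0f}")
print(f"P(any loss in a year):    {np.mean(annual_loss > 0):.1%}")
print(f"P(loss > $10M in a year): {np.mean(annual_loss > 10_000_000):.2%}")
```

Extending this kind of tally across many risks and loss thresholds is essentially how a loss exceedance curve is constructed.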

There’s also a significant shortfall in cybersecurity professionals, which introduces more vulnerability into organizations that are already stressed to their maximum capacity. Across the globe, there are 2.93 million fewer cybersecurity workers than are needed. In North America, that shortfall in 2018 was just under 500,000.

Nearly a quarter of respondents in the aforementioned (ISC)² survey said they had a “significant shortage” of cybersecurity staff. Only 3% said they had “too many” workers. Overall, 63% of companies reported having fewer workers than they needed, and 59% said they were at “extreme or moderate risk” due to the shortage. (Yet 43% said they were either not going to hire any new workers or were even going to reduce the number of security personnel on their rosters.)

Less training, inadequate budgets, and fewer workers combine into a major security threat that many organizations fail to appreciate.

Threats from Beyond Borders Are Difficult to Assess – and Are Increasing

Many cybersecurity professionals correctly identify autonomous individuals and entities as a key threat – the stereotypical hacker or a team within a criminal organization. However, one significant and overlooked vector is the threat posed by other nations and foreign non-state actors.

China, Russia, and Iran are at the forefront of countries that leverage hacking in a state-endorsed effort to gain access to proprietary technology and data. In 2017, China implemented a law requiring any firm operating in China to store their data on servers physically located within the country, creating a significant risk of the information being accessed inappropriately. China also takes advantage of academic partnerships that American universities enjoy with numerous companies to access confidential data, tainting what should be the purest area of technological sharing and innovation.  

In recent years, Russia has noticeably increased its demand to review the source code for any foreign technology being sold or used within its borders. Finally, Iran contains numerous dedicated hacking groups with defined targets, such as the aerospace industry, energy companies, and defense firms.

More disturbing than the source of these attacks are the pathways they use to acquire this data – including one surprising method. A Romanian source recently revealed to Business Insider that when large companies sell outdated (but still functional) servers, the information isn’t always completely wiped. The source in question explained that he’d been able to procure an almost complete database from a Dutch public health insurance system; all of the codes, software, and procedures for traffic lights and railway signaling for several European cities; and an up-to-date employee directory (including access codes and passwords) for a major European aerospace manufacturer from salvaged equipment.

A common technique used by foreign actors in general, whether private or state-sponsored, is to use legitimate front companies to purchase or partner with other businesses and exploit the access afforded by these relationships. Software supply chain attacks have significantly increased in recent years, with seven significant events occurring in 2017, compared to only four between 2014 and 2016. FedEx and Maersk suffered approximately $600 million in losses from a single such attack.

The threat from across borders can be particularly difficult to assess due to distance, language barriers, a lack of knowledge about the local environment, and other factors. It is, nonetheless, something that has to be taken into consideration by a cybersecurity program – and yet often isn’t.

The Biggest Under-the-Radar Risk Is How You Assess Risks

While identifying risks is the foundation of cybersecurity, appropriately analyzing them is arguably more important. Many commonly used methods of risk analysis can actually obscure and increase risk rather than expose and mitigate it. In other words, many organizations are vulnerable to the biggest under-the-radar threat of them all: a broken risk management system.

Qualitative and pseudo-quantitative methods often create what Doug Hubbard calls the “analysis placebo effect,” where tactics are perceived to be improvements but offer no tangible benefits. This can increase vulnerabilities by instilling a false sense of confidence, and psychologists have shown that it can occur even when the tactics themselves increase estimation error. Two months before a massive cyber attack rocked Atlanta in 2018, a risk assessment revealed various vulnerabilities, but the fixes meant to address them fell short of actually resolving the city’s exposure – although officials were confident they had adequately addressed the risk.

Techniques such as heat maps, risk matrices, and soft scoring often fail to tell an organization which risks it should address and how. Experts indicate that “risk matrices should not be used for decisions of any consequence”[1] and that they can be even “worse than useless.”[2] Studies have repeatedly shown, in numerous venues, that collecting too much data, collaborating beyond a certain point, and relying on structured, qualitative decision analyses consistently produce worse results than if these actions had been avoided.

It’s easy to assume that many aspects of cybersecurity are inestimable, but we believe that anything can be measured. If it can be measured, it can be assessed and addressed appropriately. A quantitative model that circumvents overconfidence commonly seen with qualitative measures, uses properly-calibrated expert assessments, knows what information is most valuable and what isn’t, and is built on a comprehensive, multi-disciplinary framework can provide actionable data to guide appropriate decisions.

Bottom line: not all cybersecurity threats are readily apparent, and the most dangerous ones can easily be the ones you underestimate, or don’t see coming at all. Knowing which factors to measure and how to quantify them will help you identify the most pressing vulnerabilities – the cornerstone of effective cybersecurity practice.

For more information on how to create a more effective cybersecurity program based on quantitative methods, check out our How to Measure Anything in Cybersecurity Risk webinar.

References

1. Thomas, Philip & Bratvold, Reidar & Bickel, J. (2013). The Risk of Using Risk Matrices. SPE Economics & Management. 6. 10.2118/166269-MS.
2. Anthony (Tony) Cox, L. (2008), What’s Wrong with Risk Matrices?. Risk Analysis, 28: 497-512. doi:10.1111/j.1539-6924.2008.01030.x
