The risks of uncertainty

Relevant to Papers P4 and P5

In this second article on the risks of uncertainty, we build upon the basics of risk and uncertainty addressed in the first article published in the April 2009 issue of Student Accountant to examine more advanced aspects of incorporating risk into decision making.

In particular, we return to the use of expected values and examine the potential impact of the availability of additional information regarding the decision under consideration. Initially, we examine a somewhat artificial scenario, where it is possible to obtain perfect information regarding the future outcome of an uncertain variable (such as the state of the economy or the weather), and calculate the potential value of such information. The analysis is then revisited for the more realistic case of imperfect information, with the initial probabilities adjusted using Bayesian analysis.

Some decision scenarios may involve two uncertain variables, each with its own associated probabilities. In such cases, the use of data/decision tables may prove helpful: joint probabilities are calculated for the possible combinations of the two uncertain variables, and these joint probabilities, along with the payoffs, can then be used to answer pertinent questions such as: what is the probability of a profit or a loss occurring?

The other main topic covered in the article is that of Value-at-Risk (VaR), which has been referred to as ‘the new science of risk management’. The principles underlying VaR will be discussed along with an illustration of its potential uses.

EXPECTED VALUES AND INFORMATION

To illustrate the potential value of additional information regarding the likely outcomes resulting from a decision, we return to the example given in the first article, of the ice cream seller who is deciding how much ice cream to order but is unsure about the weather. We now add probabilities to the original information regarding whether the weather will be cold, warm or hot, as shown in Table 1.

TABLE 1: ASSIGNING PROBABILITIES TO WEATHER

Weather    Probability    Small order    Medium order    Large order
Cold       0.2            $250           $200            $100
Warm       0.5            $200           $500            $300
Hot        0.3            $150           $300            $750

We are now in a position to be able to calculate the expected values associated with the three sizes of order, as follows:

Expected value (small) = 0.2 ($250) + 0.5 ($200) + 0.3 ($150) = $195

Expected value (medium) = 0.2 ($200) + 0.5 ($500) + 0.3 ($300) = $380

Expected value (large) = 0.2 ($100) + 0.5 ($300) + 0.3 ($750) = $395

On the basis of these expected values, the optimal decision would be to order a large amount of ice cream with an expected value of $395. However, it may be possible to improve upon this value if better information regarding the weather could be obtained. Exam questions often make the assumption that it is possible to obtain perfect information, ie to predict exactly what the outcome of the uncertain variable will be.
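To make the arithmetic explicit, the following short Python sketch reproduces the expected value calculation; the dictionary layout is simply one possible way of organising the payoffs and probabilities from Table 1.

```python
# Probabilities and payoffs ($) from Table 1
probabilities = {"cold": 0.2, "warm": 0.5, "hot": 0.3}
payoffs = {
    "small":  {"cold": 250, "warm": 200, "hot": 150},
    "medium": {"cold": 200, "warm": 500, "hot": 300},
    "large":  {"cold": 100, "warm": 300, "hot": 750},
}

# Expected value of each order size = sum of probability x payoff
expected_values = {
    order: sum(probabilities[w] * pay for w, pay in row.items())
    for order, row in payoffs.items()
}

print(expected_values)
# {'small': 195.0, 'medium': 380.0, 'large': 395.0}
print(max(expected_values, key=expected_values.get))
# 'large' - the optimal order on an expected value basis
```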

THE VALUE OF PERFECT INFORMATION

In the case of the ice cream seller, perfect information would be certainty regarding the outcome of the weather.

If this was the case, then the ice cream seller would purchase the size of order which gave the highest payoff for each weather outcome – in other words, purchasing a small order if the weather was forecast to be cold, a medium order if it was forecast to be warm, and a large order if the forecast was for hot weather. The resulting expected value would then be:

Expected value  =  0.2 ($250) + 0.5 ($500) + 0.3 ($750) = $525

The value of the perfect information is the difference between the expected values with and without the information, ie

Value of information =  $525 - $395  =  $130

Exam questions are often phrased in terms of the maximum amount that the decision maker would be prepared to pay for the information, which again is the difference between the expected values with and without the information.
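Building on the same data, a minimal sketch of the expected value with perfect information, and hence of the maximum amount worth paying for it, might look as follows (the probability and payoff dictionaries are repeated so that the sketch stands on its own).

```python
probabilities = {"cold": 0.2, "warm": 0.5, "hot": 0.3}
payoffs = {
    "small":  {"cold": 250, "warm": 200, "hot": 150},
    "medium": {"cold": 200, "warm": 500, "hot": 300},
    "large":  {"cold": 100, "warm": 300, "hot": 750},
}

# Without information: commit to the single order size with the best expected value
ev_without = max(
    sum(probabilities[w] * pay for w, pay in row.items())
    for row in payoffs.values()
)

# With perfect information: for each weather outcome take the best available payoff,
# weighted by the probability of that outcome occurring
ev_with = sum(
    probabilities[w] * max(row[w] for row in payoffs.values())
    for w in probabilities
)

print(ev_without, ev_with, ev_with - ev_without)
# 395.0 525.0 130.0 - the value of perfect information is $130
```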

However, the concept of perfect information is somewhat artificial since, in the real world, such perfect certainty rarely, if ever, exists. Future outcomes, irrespective of the variable in question, are not perfectly predictable. Weather forecasts or economic predictions may exhibit varying degrees of accuracy, which leads us to the concept of imperfect information.

THE VALUE OF IMPERFECT INFORMATION

With imperfect information we do not enjoy the benefit of perfect foresight. Nevertheless, such information can be used to enhance the accuracy of the probabilities of the possible outcomes and therefore has value. The ice cream seller may examine previous weather forecasts and, on that basis, estimate probabilities of future forecasts being accurate. For example, it could be that when hot weather is forecast past experience has suggested the following probabilities:

P (forecast hot but weather cold) = 0.3
P (forecast hot but weather warm) = 0.4
P (forecast hot and weather hot) = 0.7

The probabilities given do not add up to 1 and so, for example, P (forecast hot but weather cold) cannot mean P (weather cold given that forecast was hot), but must mean P (forecast was hot given that weather turned out to be cold).

We can use a table to determine the required probabilities.  Suppose that the weather was recorded on 100 days. Using our original probabilities, we would expect 20 days to be cold, 50 days to be warm, and 30 days to be hot. The information from our forecast is then used to estimate the number of days that each of the outcomes is likely to occur given the forecast (see Table 2):

TABLE 2: LIKELY WEATHER OUTCOMES

Actual weather    Days expected*    Days on which hot weather forecast**
Cold              20                6
Warm              50                20
Hot               30                21
Total             100               47

* From past data, cold weather occurs with a probability of 0.2, ie on 0.2 of the 100 days in the sample = 20 days. The figures for warm and hot weather are derived from past data in the same way.

** If the actual weather is cold, there is a 0.3 probability that hot weather had been forecast. This will occur on 0.3 of the 20 days on which the weather was cold = 6 days (0.3 x 20). Similarly, 20 = 0.4 x 50 and 21 = 0.7 x 30.

The revised probabilities, if the forecast is hot, are therefore:

P(Cold) = 6/47 = 0.128
P(Warm) = 20/47 = 0.425
P(Hot) = 21/47 = 0.447

The expected values can then be recalculated as:

Expected value (small) = 0.128 ($250) + 0.425 ($200) + 0.447 ($150) = $184

Expected value (medium) = 0.128 ($200) + 0.425 ($500) + 0.447 ($300) = $372

Expected value (large) = 0.128 ($100) + 0.425 ($300) + 0.447 ($750) = $476

Value of imperfect information = $476 - $395 = $81

The estimated value for imperfect information appears reasonable, given that the value we had previously calculated for perfect information was $130.
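The frequency-table reasoning above can also be expressed as a short sketch; the payoffs are repeated so that the code is self-contained, and the printed figures are rounded to match those above.

```python
payoffs = {
    "small":  {"cold": 250, "warm": 200, "hot": 150},
    "medium": {"cold": 200, "warm": 500, "hot": 300},
    "large":  {"cold": 100, "warm": 300, "hot": 750},
}
prior = {"cold": 0.2, "warm": 0.5, "hot": 0.3}
# P(hot weather forecast | actual weather), from past forecasting records
p_forecast_hot = {"cold": 0.3, "warm": 0.4, "hot": 0.7}

# Expected number of 'forecast hot' days in a 100-day sample (as in Table 2)
forecast_hot_days = {w: 100 * prior[w] * p_forecast_hot[w] for w in prior}
total = sum(forecast_hot_days.values())          # 6 + 20 + 21 = 47 days

# Revised probabilities: cold 6/47, warm 20/47, hot 21/47
revised = {w: forecast_hot_days[w] / total for w in prior}

# Recalculate the expected values using the revised probabilities
ev_revised = {
    order: sum(revised[w] * pay for w, pay in row.items())
    for order, row in payoffs.items()
}
print({order: round(ev) for order, ev in ev_revised.items()})
# {'small': 184, 'medium': 372, 'large': 476}
print(round(ev_revised["large"] - 395))
# 81 - the value of the imperfect information
```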

BAYES’ RULE

Bayes’ rule is perhaps the preferred method for estimating revised (posterior) probabilities when imperfect information is available. An intuitive introduction to Bayes’ rule was provided in The Economist, 30 September 2000:

‘The essence of the Bayesian approach is to provide a mathematical rule explaining how you should change your existing beliefs in the light of new evidence. In other words, it allows scientists to combine new data with their existing knowledge or expertise. The canonical example is to imagine that a precocious newborn observes his first sunset, and wonders whether the sun will rise again or not. He assigns equal prior probabilities to both possible outcomes, and represents this by placing one white and one black marble into a bag. The following day, when the sun rises, the child places another white marble in the bag. The probability that a marble plucked randomly from the bag will be white (ie the child’s degree of belief in future sunrises) has thus gone from a half to two-thirds. After sunrise the next day, the child adds another white marble, and the probability (and thus the degree of belief) goes from two-thirds to three-quarters. And so on. Gradually, the initial belief that the sun is just as likely as not to rise each morning is modified to become a near-certainty that the sun will always rise.’

In mathematical terms, Bayes’ rule can be stated as:

P(A|B) = [P(B|A) x P(A)] / P(B)

where P(A|B) is the probability of event A occurring given that event B has already occurred.

For example, consider a medical test for a particular disease which is 90% accurate, ie if you have the disease there is a 90% probability that the test will detect it, while 10% of people who do not have the disease will nevertheless test positive. If we further assume that 3% of the population actually have this disease, then the probability of having the disease (given that you have tested positive) is shown by:

P(disease | positive test) = (0.90 x 0.03) / [(0.90 x 0.03) + (0.10 x 0.97)] = 0.027 / 0.124 = 0.22 (approximately)

This result suggests that you have only a 22% probability of having the disease, given that you tested positive. This may seem a low probability, but only 3% of the population actually have the disease, and 90% of those will test positive. However, 10% of the people who do not have the disease will also test positive. Therefore, of every 100 people tested, approximately 13 will test positive, but only around three of them will actually have the disease.
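Expressed as a short sketch, with the 90% detection rate, 10% false positive rate and 3% prevalence assumed above:

```python
p_disease = 0.03               # 3% of the population have the disease
p_pos_if_disease = 0.90        # the test detects 90% of true cases
p_pos_if_healthy = 0.10        # 10% of healthy people wrongly test positive

# Total probability of a positive result (the denominator of Bayes' rule)
p_positive = p_pos_if_disease * p_disease + p_pos_if_healthy * (1 - p_disease)

# Bayes' rule: P(disease | positive) = P(positive | disease) x P(disease) / P(positive)
p_disease_if_positive = p_pos_if_disease * p_disease / p_positive

print(round(p_positive, 3))             # 0.124, ie roughly 12-13 positives per 100 people tested
print(round(p_disease_if_positive, 2))  # 0.22, ie a 22% chance of actually having the disease
```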

Bayes’ rule has been used in a practical context for classifying email as spam on the basis of certain key words appearing in the text.

DATA TABLES

Data tables show the expected values resulting from combinations of uncertain variables, along with their associated joint probabilities. These expected values and probabilities can then be used to estimate, for example, the probability of a profit or a loss.

To illustrate, assume that a concert promoter is trying to predict the outcome of two uncertain variables, namely:

  1. The number of people attending the concert, which could be 300, 400, or 600, with estimated probabilities of 0.4, 0.4, and 0.2 respectively.
  2. The profit made on drinks and confectionery from each person attending, which could be $2, $4, or $6, with estimated probabilities of 0.3, 0.4, and 0.3 respectively.

As each of the two uncertain variables can take three values, a 3 x 3 data table can be constructed. We shall assume that the expected values have already been calculated as follows:

 

The probabilities can be used to calculate joint probabilities as follows:

                  Profit per person
Attendance        $2 (0.3)      $4 (0.4)      $6 (0.3)
300 (0.4)         0.12          0.16          0.12
400 (0.4)         0.12          0.16          0.12
600 (0.2)         0.06          0.08          0.06
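The same grid can be generated mechanically, as in the sketch below. Because the table of expected payoffs is not reproduced here, the loss calculation is indicated only as a commented pattern using a hypothetical payoff lookup.

```python
# Probabilities for the two uncertain variables
attendance_probs = {300: 0.4, 400: 0.4, 600: 0.2}
margin_probs = {2: 0.3, 4: 0.4, 6: 0.3}   # profit per person on drinks and confectionery ($)

# Joint probability of each (attendance, margin) combination
joint = {
    (n, m): p_n * p_m
    for n, p_n in attendance_probs.items()
    for m, p_m in margin_probs.items()
}
print(round(joint[(300, 2)], 2), round(joint[(400, 4)], 2), round(joint[(600, 6)], 2))
# 0.12 0.16 0.06
print(round(sum(joint.values()), 2))   # 1.0 - the joint probabilities sum to one

# Given a payoff table keyed by the same combinations, the probability of a loss
# would simply be the sum of the joint probabilities of the loss-making cells, eg:
# p_loss = sum(p for combo, p in joint.items() if payoff[combo] < 0)
```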

The two tables could then be used to answer questions such as:

  1. What is the probability of making a loss? 0.12 + 0.12 + 0.16 = 0.40
  2. What is the probability of making a profit of more than $3,500? 0.08 + 0.12 + 0.06 = 0.26

VALUE-AT-RISK (VaR)

Although financial risk management has been a concern of regulators and financial executives for a long time, Value-at-Risk (VaR) did not emerge as a distinct concept until the late 1980s. The triggering event was the stock market crash of 1987, which was so unlikely, given standard statistical models, that it called the entire basis of quantitative finance into question.

VaR is a widely used measure of the risk of loss on a specific portfolio of financial assets. For a given portfolio, probability, and time horizon, VaR is defined as a threshold value such that the probability that the mark-to-market loss on the portfolio over the given time horizon exceeds this value (assuming normal markets and no trading) is the given probability level. Such information can be used to answer questions such as ‘What is the maximum amount that I can expect to lose over the next month with 95%/99% probability?’.

For example, large investors, interested in the risk associated with the FT100 index, may have gathered information regarding actual returns for the past 100 trading days. VaR can then be calculated in three different ways:

  1. The historical method
    This method simply ranks the actual historical returns in order from worst to best, and relies on the assumption that history will repeat itself. With 100 observations, the fifth worst return marks the threshold for the maximum loss with 5% probability, while the single worst return approximates the threshold with 1% probability (a brief sketch of this approach follows the list).
  2. The variance-covariance method
    This relies upon the assumption that the index returns are normally distributed, and uses historical data to estimate an expected value and a standard deviation. It is then a straightforward task to identify the worst 5% or 1% of outcomes, using the standard deviation and the relevant cut-off points of the normal distribution, ie –1.65 and –2.33 standard deviations respectively.
  3. Monte Carlo simulation
    While the historical and variance-covariance methods rely primarily upon historical data, the simulation method develops a model for future returns based on randomly generated trials. Admittedly, historical data is utilised in identifying possible returns but hypothetical, rather than actual, returns provide the data for the confidence levels.
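As a simple illustration of the historical method, the sketch below assumes that a list of (say) 100 actual daily returns has been collected; the function name and the ranking rule are illustrative rather than a standard implementation.

```python
def historical_var(daily_returns, confidence=0.95):
    """Loss threshold exceeded with probability (1 - confidence), from ranked history."""
    worst_first = sorted(daily_returns)                  # rank returns from worst to best
    n_tail = int(len(daily_returns) * (1 - confidence))  # eg 5 observations out of 100 at 95%
    return worst_first[max(n_tail - 1, 0)]               # eg the fifth worst return of 100

# Hypothetical usage, assuming `returns` holds 100 actual daily percentage returns:
# historical_var(returns, 0.95)   -> approximated by the fifth worst return
# historical_var(returns, 0.99)   -> approximated by the single worst return
```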

Of the three methods, the variance-covariance approach is probably the easiest to apply: the historical method requires large amounts of historical data to be processed, and Monte Carlo simulation is the most complex to implement.

VaR can also be adjusted for different time periods, since some users may be concerned about daily risk whereas others may be more interested in weekly, monthly, or even annual risk. To convert from one time period to another, we can rely on the fact that the standard deviation of returns tends to increase with the square root of time. For example, if we wished to convert a daily standard deviation to a monthly equivalent, the adjustment would be:

σ monthly = σ daily x √T, where T = 20 trading days

For example, assume that after applying the variance-covariance method we estimate that the daily standard deviation of the FT100 index is 2.5%, and we wish to estimate the maximum loss at the 95% and 99% confidence levels for daily, weekly, and monthly periods, assuming five trading days each week and four trading weeks each month:

95% confidence
Daily = –1.65 x 2.5% = –4.125%
Weekly = –1.65 x 2.5% x √5 = –9.22%
Monthly = –1.65 x 2.5% x √20 = –18.45%

99% confidence
Daily = –2.33 x 2.5% = –5.825%
Weekly = –2.33 x 2.5% x √5 = –13.03%
Monthly = –2.33 x 2.5% x √20 = –26.05%

Therefore we could say with 95% confidence that we would not lose more than 9.22% per week, or with 99% confidence that we would not lose more than 26.05% per month.
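The scaling calculations above can be reproduced in a few lines; the 2.5% daily standard deviation and the 1.65/2.33 cut-off points are simply the figures assumed in the worked example.

```python
daily_sigma = 2.5                      # daily standard deviation, in per cent
z_scores = {"95%": 1.65, "99%": 2.33}  # normal distribution cut-off points used above
periods = {"daily": 1, "weekly": 5, "monthly": 20}   # trading days per period

for confidence, z in z_scores.items():
    for label, days in periods.items():
        max_loss = z * daily_sigma * days ** 0.5     # square root of time scaling
        print(f"{confidence} {label}: maximum loss approx {max_loss:.2f}%")

# 95% weekly: 9.22%    95% monthly: 18.45%
# 99% weekly: 13.03%   99% monthly: 26.05%
```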

On a cautionary note, New York Times reporter Joe Nocera published an extensive piece entitled Risk Mismanagement on 4 January 2009, discussing the role VaR played in the ongoing financial crisis. After interviewing risk managers, the author suggests that VaR was very useful to risk experts, but nevertheless exacerbated the crisis by giving false security to bank executives and regulators. A powerful tool for professional risk managers, VaR is portrayed as both easy to misunderstand, and dangerous when misunderstood.

CONCLUSION

These two articles have provided an introduction to the treatment of risk in decision making, and to the techniques available for making appropriate adjustments to the information provided. Adjustments and allowances for risk also appear elsewhere in the ACCA syllabus, for example in sensitivity analysis and in risk-adjusted discount rates for investment appraisal decisions, where risk is probably at its most obvious. Moreover, in the current economic climate, discussion of risk management, stress testing and so on is an everyday occurrence.

Michael Pogue is assessor for Paper P5

REFERENCES

  • Jorion, P (2006), Value at Risk: The New Benchmark for Managing Financial Risk, 3rd edition, McGraw-Hill
  • Nocera, J (2009), 'Risk Mismanagement', The New York Times, 4 January 2009
  • Taleb, N (2007), The Black Swan, Random House