Thank you for your comment. I tried to capture the view that wolves have a net positive effect on the environment by allowing the value of a deer's life, given current circumstances, to be negative. My lower bound is that the life of a deer is worth negative $3. Another way to put this is that every time a wolf kills a deer, it provides a $3 service, or about $60,000 per year in total. The upper bound tries to capture the opposing view that one more deer killed by a wolf is one less deer that a hunter might kill (and benefit from the meat yield).
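As a minimal sketch of the bounds described above: the $60,000/year figure at $3 per deer implies roughly 20,000 wolf kills per year. The $50 upper-bound meat value below is a hypothetical placeholder, not a figure from the analysis.

```python
# Sketch of the deer-value bounds (illustrative numbers only).
# $60,000/yr service at $3/deer implies ~20,000 wolf kills/yr.
deer_killed_per_year = 60_000 / 3

value_per_deer_low = -3.0    # lower bound: a deer's life is worth -$3
value_per_deer_high = 50.0   # hypothetical upper bound: meat value to a hunter

# Net annual effect of wolf predation under each bound. A negative deer
# value means each kill is a net service to the environment.
service_low = -value_per_deer_low * deer_killed_per_year    # $60,000/yr benefit
service_high = -value_per_deer_high * deer_killed_per_year  # a net cost under the opposing view
print(f"Lower-bound view: ${service_low:,.0f}/yr benefit from wolf predation")
print(f"Upper-bound view: ${service_high:,.0f}/yr (negative = net cost)")
```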

I have read about successful wolf reintroduction and agree that there may be more positive pass-on effects – so perhaps the $3/deer figure is too low, and perhaps there need to be additional variables such as “Other ecosystem contributions” and “eco-tourism benefits.”

Note that my recommendations would remain the same – the population of wolves should be allowed to grow and the license cost should increase to maximize revenue and benefit.

As for the GIGO comment: this is true of any modeling approach. There is no perfect model, and a model will only be as good as the modeler and the subject matter experts involved. To me the more relevant question is: does AIE allow you to capture *more* variables than a standard analysis? It does. And is this analysis an improvement on what is currently being done to set wolf license quotas and prices? It is, and it provides a template for including any intangible either side cares to think up.

Thanks for your interest in my work. Actually, yes, I have at least done training for that kind of audience. And, as you’ve seen, I’ve also consulted for the insurance industry.

My writing schedule is pretty full but this publication does sound interesting. Feel free to contact me to discuss it.

Doug Hubbard

dwhubbard@hubbardresearch.com

I found you through a press release for ACORD and read with interest about your book, How To Measure Anything. I'm wondering if you've ever looked at my industry, financial services, and if you might want to contribute an article to an audience of financial planners and advisors. I'm not sure what you might have to say, but in looking at your resume, I see the potential for getting this audience to "think outside the box." I often go to tangential sources in search of fresh, off-the-beaten-path content.

In any event, I look forward to reading your book, and I hope you might consider contributing.

Thanks,

Peter Kelley, Managing Editor

LIFE&Health Advisor & L&HA e-newsLink

http://www.lifehealth.com


Thanks for your question.

Since you read my book, you know my position on these ordinal "1 to 10" scales. They are never really necessary once someone figures out what the real problem is. They simply gloss over the problem while making managers feel it has been solved in some way.

In the book example you cite, the 14% does not indicate a discrete "all or nothing" outcome. I wouldn't model a discrete, binary chance that an entire customer group would reject a given price. A more realistic model is that some uncertain percentage of that group will not purchase the product at a given price. For example: "At $200, our 90% CI is that 10% to 35% of customers will decline to purchase product X." This is, in fact, what all price optimization models are doing, directly or indirectly.
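A minimal Monte Carlo sketch of that kind of model, using the 90% CI quoted above ("10% to 35% of customers decline at $200"): the CI is converted to a normal distribution with mean at the midpoint and standard deviation of width/3.29, a common convention for calibrated estimates. The customer count of 5,000 is a hypothetical market size for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# 90% CI "10% to 35% of customers decline at $200" as a normal distribution
# (mean = midpoint, sd = width / 3.29).
lo, hi = 0.10, 0.35
decline = rng.normal((lo + hi) / 2, (hi - lo) / 3.29, 10_000)
decline = decline.clip(0, 1)  # a fraction cannot leave [0, 1]

price = 200
customers = 5_000  # hypothetical market size
revenue = price * customers * (1 - decline)

print(f"Mean revenue:   ${revenue.mean():,.0f}")
print(f"90% CI revenue: ${np.percentile(revenue, 5):,.0f} "
      f"to ${np.percentile(revenue, 95):,.0f}")
```

Running many such simulations at different prices is, in effect, what a price-optimization model does: it picks the price that maximizes expected revenue given the uncertainty.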

But recall another one of the maxims I mention in the book: no matter what you are measuring, assume it has been measured before. This certainly turns out to be true in this case. Not only is there a large body of academic work on estimating price elasticity and then computing optimal prices, there are well-established and proven tools available to you on the market now.

Is your business in the B2B sales area? If so, one of my current clients is Zilliant and your problem is exactly the sort their software addresses. Zilliant has a large number of very able “price scientists” who have developed the algorithms for price optimization for customers in many B2B situations. Their customers include many of the largest manufacturers and distributors you can think of. I would start by giving them a call (go to Zilliant.com). Then you can do real price science and drop the whole “1 to 10” activity.

Thanks again,

Doug Hubbard

Thanks for your comment, and sorry for the delayed response. Estimating the threshold and the loss rate is easier if all of this is put together in a spreadsheet that computes an NPV for the investment in question. Remember, the first step is defining and modeling the decision. Create a spreadsheet model to compute the NPV just as you normally would for a business case for this code-improvement project. The simplest answer is that the threshold is the value for a variable that makes the NPV = 0 while holding all other variables at their mean values (the means of the estimated ranges). There are some reasons why this might not always be a good estimate, but it's usually very close.

Then estimate your loss function by setting the variable in question equal to one unit below its threshold (or above, if the loss occurs when you are above the threshold). If you are one percentage point below the threshold and the NPV = -$10,000 (negative ten thousand dollars), then the loss rate is $10,000 per unit. (In this case, express a percentage-point range and threshold as whole numbers in the VOI spreadsheet, since it assumes a "unit" for the loss function is 1, not 0.01.)
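The two steps can be sketched in code: find the threshold (the value of one variable that makes NPV = 0 with the others held at their means), then the loss rate (the NPV one unit below that threshold). The NPV model and every number below are hypothetical placeholders standing in for your own business-case spreadsheet.

```python
def npv(pct_reduction, users=20, sims_per_user=100, runtime_hr=2.0,
        value_per_hr=50.0, cost=150_000, years=3, rate=0.10):
    """Toy NPV: annual hours saved, valued at $/hr, discounted, minus cost.
    pct_reduction is in whole percentage points (e.g. 15 means 15%)."""
    annual = users * sims_per_user * runtime_hr * (pct_reduction / 100) * value_per_hr
    return sum(annual / (1 + rate) ** t for t in range(1, years + 1)) - cost

# Threshold: bisect for the pct_reduction where NPV crosses zero,
# holding all other variables at their (assumed) mean values.
lo, hi = 0.0, 100.0
for _ in range(60):
    mid = (lo + hi) / 2
    if npv(mid) < 0:
        lo = mid
    else:
        hi = mid
threshold = (lo + hi) / 2

# Loss rate: the loss incurred one unit (one percentage point) below the threshold.
loss_rate = -npv(threshold - 1)

print(f"Threshold: {threshold:.2f} percentage points of runtime reduction")
print(f"Loss rate: ${loss_rate:,.0f} per percentage point below threshold")
```

Because this toy NPV is linear in the variable, the loss rate is constant per unit; in a nonlinear model you would check a few points below the threshold.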

Let me know if that answers your question and thanks for your interest!

Doug Hubbard

I have a value of information (VOI) question related to robinhfoster's. Here is a specific example from software. A project is supposed to improve the performance of our code. To estimate the positive impact this benefit will have, we decomposed it into the following pieces (using hours as the base unit):

Performance benefit (time savings per year) = (number of users) x (number of simulations/year/user) x (run-time of typical simulation) x (% reduction in run-time due to this project)
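The decomposition can be sampled directly as a Monte Carlo sketch. The [8, 35] users CI and [5%, 25%] reduction CI are the ones given in this comment; the other two CIs are assumed for illustration, and each 90% CI is converted to a normal distribution (mean = midpoint, sd = width/3.29).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

def from_ci(lo, hi, size=N):
    """Sample a normal distribution matching a calibrated 90% CI."""
    return rng.normal((lo + hi) / 2, (hi - lo) / 3.29, size)

users = from_ci(8, 35)           # from this comment
sims_user = from_ci(50, 200)     # simulations/year/user (assumed)
runtime_hr = from_ci(1.0, 4.0)   # run-time of a typical simulation (assumed)
reduction = from_ci(0.05, 0.25)  # % reduction in run-time, from this comment

hours_saved = users * sims_user * runtime_hr * reduction

print(f"Mean hours saved/year: {hours_saved.mean():,.0f}")
print(f"90% CI: {np.percentile(hours_saved, 5):,.0f} "
      f"to {np.percentile(hours_saved, 95):,.0f}")
```

The resulting distribution of hours saved is what feeds the NPV model; its spread is why the loss rate inherits uncertainty from the other pieces.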

Say we use a calibrated estimator and produce a 90% CI for each of these pieces. Then we use your spreadsheet to compute the VOI for each of them. The spreadsheet requires two additional pieces of information: a threshold for loss and a loss rate. In the example of "% reduction in runtime due to this project," say the calibrated 90% CI is [5%, 25%]. I could assign a threshold of 10% to say that this project should be canceled if the improvement is that small. Then I have to figure out the loss rate. How much do we lose (in hours) for each percentage point under 10%? I'm not sure how to estimate this. It clearly involves the other pieces that are multiplied together to get the total benefit from the performance improvements. I could use the mean values from the 90% CI estimates to compute a loss rate of (mean number of users) x (mean number of simulations/year/user) x (mean run-time of typical simulation). It seems to me that the loss rate should have its own 90% CI.

To further complicate things, how do I compute the threshold and loss rate for the "Number of users"? I have an estimate for the total number of users, [8, 35] with 90% confidence. Where do I get a threshold related to this project? If we have fewer than 10 users, then the project is not worth funding? And where do I get a loss rate if the actual number of users is below this threshold? The loss rate is related to how much I'm investing in this project, how many simulations the users run, and the run-time of a typical simulation, not to mention the anticipated performance improvement, all of which are currently estimated with 90% CIs.

I would appreciate any help you can offer.

I am thoroughly impressed with your book "How To Measure Anything," and I'm already using it to compute ROI with Monte Carlo simulations. It has changed our conversations from "force of opinions" to "is that the right list and breakdown of benefits," which is a huge improvement.

Thank you!

Todd