Post your general questions here
by Douglas Hubbard | Aug 25, 2009 | General Topics, News | 17 comments

Welcome. Feel free to enter your questions as comments.

17 Comments

admin on August 25, 2009 at 8:22 pm
Test

jkrupa213 on November 1, 2009 at 5:56 pm
Doug, I found the book fascinating. I would like to present on this at my next INCOSE Chapter meeting. We do a lot of risk analysis using Crystal Ball, so the concept of getting calibrated is important but unrecognized. I plan to pull a lot from your PPT presentation, with appropriate attribution of course.
Thanks,
Joe Krupa, PhD
Principal Technical Advisor, Savannah River Nuclear Solutions
Past President, CSRA INCOSE Chapter

dwhubbard on November 21, 2009 at 4:45 pm
Joe, thanks for the support, and sorry for the delayed response. It seems you advise on a very important issue, and I’m glad to hear I influence your advice in some small way. By the way, I’ve got plenty of PPT material, so let me know if you need anything. Feel free to keep my readers and me posted on the kinds of measurement issues you encounter. I’m sure they must be fascinating!
Doug Hubbard

Sanjay on December 8, 2009 at 11:13 pm
Doug, I wanted to bring this interesting piece of information from the Washington Post to your and your readers’ attention. How was the TARP amount derived?
“We have $11 trillion residential mortgages, $3 trillion commercial mortgages. Total $14 trillion. Five percent of that is $700 billion. A nice round number.”
“Seven hundred billion was a number out of the air,” Kashkari recalls, wheeling toward the hex nuts and the bolts. “It was a political calculus. I said, ‘We don’t know how much is enough. We need as much as we can get [from Congress]. What about a trillion?’ ‘No way,’ Hank shook his head. I said, ‘Okay, what about 700 billion?’ We didn’t know if it would work. We had to project confidence, hold up the world.
We couldn’t admit how scared we were, or how uncertain.”
The complete article is “The $700 Billion Man,” and the link is: http://www.washingtonpost.com/wp-dyn/content/article/2009/12/04/AR2009120402016_3.html?sid=ST2009120402037
Regards, Sanjay

dwhubbard on December 10, 2009 at 10:11 am
Thanks, Sanjay. Anytime I see a nice, round number like that, I assume it wasn’t precisely calculated. But it is an interesting problem. Did someone even attempt to model the alternatives? Did they think more thoughtful calculations would take too long? The inner workings of this decision will be discussed for much longer than the decision itself took.
Doug Hubbard

Namik on December 14, 2009 at 5:54 am
Regarding the source above for the TARP number: this is available, almost verbatim, in “Too Big to Fail” (Kindle location 8646), a book by A. R. Sorkin that contains much more on risk perception and awareness.

Namik on December 14, 2009 at 6:32 am
A very minor point concerning the quote about the difference between theory and practice. The Wikiquote page for Yogi Berra lists it among the unsourced quotes and adds that it “has also been attributed to van de Snepscheut and Einstein.” May I suggest just “ATTRIBUTED TO Yogi Berra” at the beginning of Chapter 8 of a future edition of this excellent book? (As an aside, when using the same quote in his 2009 book “Predictably Irrational,” Dan Ariely just mentioned the name of a friend from whom he heard it.)

dwhubbard on December 17, 2009 at 6:39 pm
Thanks for both of your notes. I’ll check out the source on that quote again. I actually was able to “debunk” a couple of very popular quotes that are routinely misattributed because I insisted on finding original sources for the quotation. But perhaps one still slipped through.
Doug Hubbard

Namik on December 20, 2009 at 6:40 am
Let me add that the hammer-and-nail problem (Chapter 7, HTMA) is attributed to Abraham Maslow. I really like the fine collection of quotes in both books.
dwhubbard on December 20, 2009 at 7:02 am
Actually, I do include that attribution in The Failure of Risk Management, and I update it in the upcoming second edition of How to Measure Anything. Thanks again.
Doug Hubbard

dwhubbard on December 20, 2009 at 7:15 am
Regarding the unidentified source of the quote at the beginning of your comment: the source is me, which I believe I make clear. In that part of that chapter I was explaining what is wrong with the claim “You can prove anything with statistics.” I explain that this claim is not literally meant to be true (given the absurd consequences) and then I offer the alternative connotation. I can’t disagree with anything John A. Paulos says. His earlier book, Innumeracy, was an inspiration to me. It was the first book I had read that directly discussed the rampant lack of mathematical literacy and its cultural roots in America. At one point he mentioned that the claim “I’m not a numbers person” only seems like self-promotion to Western ears. He points out that to someone from India this might sound like saying “I’m illiterate.” Exactly.
Doug Hubbard

Namik on December 20, 2009 at 6:43 am
Regarding a statement in Chapter 3 of HTMA that “…‘numbers can be used to confuse people, especially the gullible ones lacking basic skills with numbers.’ With this, I completely agree.” I hasten to add that I also completely agree, for my own part, with the author and, therefore, with the unidentified source. Let me suggest, as reinforcement reading for this amicable complete agreement, a book by J. A. Paulos, “A Mathematician Reads the Newspaper,” which is full of instructive examples. Especially relevant here is its chapter on advertising and numerical craftiness, invoking, e.g., an “ad that proclaims that you’ll pay 50% more at a competitor’s store rather than announce [a] less impressive 33% sale” (p. 87).

sujoymitra17 on August 1, 2011 at 8:37 am
Doug, this is important, urgent, and I am stuck :-(.
I am working on a project wherein I am supposed to evaluate various product vendors on a set of parameters and select the one that fits the client’s functional and technical needs. The parameters on which the products will be evaluated are:
- Business Functionality
- Technical Functionality
- Pricing
- Financial Sustainability of the vendor
- Post-Implementation Service Capability of the vendors
I don’t want to go with the rudimentary method of assigning arbitrary weights (based on so-called judgement) to the parameters to get a weighted average score for each product. I want to have a certain logic behind the weights. Although I have used the concept of the Lens Model elsewhere, I have typically validated the model using historical data. In this case, I don’t have any historical data. What do you suggest should be the right approach to this problem? (I did see a discussion on SCDigest, which does quote you on the above problem, but I don’t see the resolution.) My apologies for hurrying you into the potential resolution (if any), but somehow I am bound by time. As I write this, I am also trying to find some historical data/secondary information on the parameters to figure out if I can use any credible info to compute the weights. However, as of now, I am stuck.

dwhubbard on August 1, 2011 at 4:57 pm
This is a curious situation, because the only time I use the Lens method is when I lack historical data. If I have historical data, I don’t use a Lens, because regression models based directly on historical data usually outperform Lens models (as I explain in HTMA). So there is never a need to “validate” a Lens model with historical data, because if you have historical data, you don’t need a Lens model. Lens models are specifically for those situations when you don’t have any historical data, so your current situation sounds like the ideal time to use one.
The validation of the model comes from a different set of historical data: the historical data from other studies showing that Lens methods outperform unaided human judgement on a variety of tasks (I cite some of these sources in HTMA). Your validation problem is similar to that of someone who believes they will die if they jump off a cliff when they have never jumped off a cliff before and no one has ever jumped off that specific cliff (i.e., there is no historical data for that person jumping off of that cliff). The historical data we can refer to is that other people who jumped off of other cliffs have met the same fate. So your historical data validating the Lens is the history of Lens models in a variety of other situations.
But there is one simple way to show an immediate improvement with the Lens over unaided human intuition. When you build your list of hypothetical vendors (a key step in the Lens), each with hypothetical parameters of the type you describe, be sure to include some duplicate pairs in the list. That is, if you make a list of 50 product vendors, make #7 and #43 on the list have the same value for each of the parameters. Do that again for, say, #3 and #49. By the end of the list of scenarios, most experts will forget that they answered an identical situation earlier in the list, and they will tend to give slightly different answers. Measure the difference in answers between duplicate pairs. This is the “inconsistency” of the experts, and this error would be completely removed by the Lens (since the Lens will always give the same answer in identical situations). Remember, “validation” simply needs to mean a measurable improvement over the alternative. Duplicate pairs will show this, especially if you have several experts each evaluating a long list of scenarios with two or more duplicate pairs.
Thanks for your input,
Doug Hubbard

sujoymitra17 on August 2, 2011 at 8:48 am
This is an absolute case of wrong concepts on my part.
Somehow I would always take expert judgements and try correcting them with historical data, and I referred to this entire exercise as a Lens Model. I need to go back to the text. Thanks a lot, Doug. By the way, I have been using various concepts of AIE in multiple client engagements, and let me tell you, the results have been amazing. This entire methodology of “Why Care” has somehow gone into my and my team’s system, and it definitely has changed the way we look at a problem. All in all, I am glued in. Thanks again as always.
Note: I did not get any email notification of your reply to my post. 🙁
Regards, Sujoy

gchesterton on December 9, 2015 at 10:33 am
A general question about the Lens Method approach. SMEs are asked to provide a bunch of different estimates (response data) as a function of various combinations of variables (independent variables), and then we conduct regression to derive their implicit weights. But why wouldn’t an SME simply recognize that the only way they can respond consistently would be to apply their own set of weights intrinsically? The Lens model suggests that the SMEs are good at providing a response variable and that we derive the weights. A more traditional multi-objective method would ask the SMEs to provide the weights, and then we produce the weighted sum of all the independent variables to produce the response. Just wondering when the Lens method is the more appropriate approach.
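[Editor’s note] The duplicate-pair check Doug describes in his reply to sujoymitra17 above can be sketched in a few lines of Python. Everything here is invented for illustration: the three parameter names, the simulated expert’s implicit weights, and the noise level standing in for the expert’s inconsistency; only the mechanism (identical scenarios hidden in the list, then measuring the gap between their scores) comes from the comment.

```python
import random

random.seed(1)

# 50 hypothetical vendor scenarios, each a dict of parameter scores
# (parameter names are illustrative, not from the original exchange).
scenarios = [
    {"functionality": random.randint(1, 10),
     "pricing": random.randint(1, 10),
     "sustainability": random.randint(1, 10)}
    for _ in range(50)
]

# Hide two duplicate pairs in the list, as in Doug's #7/#43 and #3/#49
# example (0-based indices here).
duplicate_pairs = [(6, 42), (2, 48)]
for a, b in duplicate_pairs:
    scenarios[b] = dict(scenarios[a])

def expert_judgement(s):
    # Stand-in for the expert: an implicit weighting plus random noise,
    # i.e. the "inconsistency" a Lens model would remove entirely.
    score = (0.5 * s["functionality"]
             + 0.3 * s["pricing"]
             + 0.2 * s["sustainability"])
    return score + random.gauss(0, 0.5)

answers = [expert_judgement(s) for s in scenarios]

# The inconsistency estimate: average absolute gap between the answers
# the expert gave to identical scenarios.
inconsistency = sum(abs(answers[a] - answers[b])
                    for a, b in duplicate_pairs) / len(duplicate_pairs)
print(f"Average gap between identical scenarios: {inconsistency:.2f}")
```

A Lens model fit to the same answers would, by construction, score both members of each duplicate pair identically, so this gap is a direct measure of the error it removes.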
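[Editor’s note] The Lens-model step gchesterton describes, where SMEs score scenarios and regression recovers their implicit weights, can be sketched as follows. The attribute count, the simulated SME’s “true” weights, and the noise level are all assumptions made up for the example; the point is only that ordinary least squares on the SME’s scenario ratings yields the weights the SME was implicitly applying, with the judgement noise averaged out.

```python
import numpy as np

rng = np.random.default_rng(0)

# 50 hypothetical scenarios with 3 attributes each, scored 1-10
# (e.g. functionality, pricing, vendor sustainability).
X = rng.integers(1, 11, size=(50, 3)).astype(float)

# Simulated SME: an implicit weighting plus judgement noise.
implicit_weights = np.array([0.5, 0.3, 0.2])
y = X @ implicit_weights + rng.normal(0.0, 0.3, size=50)

# Ordinary least squares recovers the implicit weights from the
# SME's noisy scenario ratings.
recovered, *_ = np.linalg.lstsq(X, y, rcond=None)
print("Recovered weights:", np.round(recovered, 2))
```

This is the reverse of the “traditional multi-objective” route in the comment: instead of asking the SME to state weights and computing the weighted sum, the SME supplies the responses and the regression supplies the weights.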