The great IT risk measurement debate

CSO, 2/28/2011

Risk evaluation models in IT are broken, but we can do more with available data than you might think by correcting for known errors in risk perception. Those are a few of the conclusions Alex Hutton and Doug Hubbard came to in their dissection of risk management. CSO Senior Editor Bill Brenner sat in on the conversation. Here are some highlights. [view article]

Analysis Placebos: The Difference Between Perceived and Real Benefits of Risk Analysis and Decision Models

by Douglas Hubbard and Douglas Samuelson
Analytics, 10/28/2009

The article I coauthored with Doug Samuelson in Analytics Magazine just came out in the fall issue. “Analysis Placebos: The Difference Between Perceived and Real Benefits of Risk Analysis and Decision Models” explains why many popular analysis methods and models may have entirely illusory benefits. [view article]

Modeling Without Measurements: How the Decision Analysis Culture’s Lack of Empiricism Reduces Its Effectiveness

by Douglas Hubbard and Douglas Samuelson
OR/MS Today, 10/09/2009

In this article my coauthor and I point out a general lack of willingness to measure the actual effectiveness of many quantitative models. Just as doctors are often the worst patients, quants are often the last to measure their own performance or the performance of the models they create. We argue that this leads to the unquestioned and continued use of many models that are deeply flawed. We discuss several sources of those problems and what to do about them. [view article]

It’s All an Illusion

by Douglas Hubbard
Boston Society of Architects, Sept/Oct 2008

Doug Hubbard reviews his original “anything can be measured” concept for an audience of architects. The reasons why some things still seem intangible or immeasurable are refined and restated in this, the first article since the release of Hubbard’s seminal book, How to Measure Anything. [inactive link]