Abstract: To address the need for efficient and unbiased experimental testing of methods for modeling uncertainty that are used for decision making, we devise an approach for probing weaknesses of these methods by running numerical experiments on arbitrary data. We recommend using readily available data recorded in real-life activities, such as competitions, student design projects, medical procedures, or business decisions. Because the generating mechanism and the probability distribution of such data are often unknown, the approach adds dimensions, such as fitting errors and time dependencies of the data, that may be missing from tests conducted using computer simulations. As an illustration, we tested probabilistic and possibilistic methods using a database of results from a domino tower competition. The experiments yielded several surprising results. First, even though a probabilistic metric of success was used, there was no significant difference between the rates of success of the probabilistic and possibilistic models. Second, the common practice of inflating uncertainty when there is little data about the uncertain variables shifted the decision differently for the probabilistic and possibilistic models, with the shift in the possibilistic model being counter-intuitive. Finally, inflating uncertainty proved detrimental even when very little data was available.