April 2006 to August 2012
The development of game-theoretic models for homeland security has advanced quite rapidly, to the point where those models are almost ready for use in real-world decisions. The primary focus of the research at the University of Wisconsin-Madison has been to provide decision makers with practical yet rigorous ways to effectively and efficiently quantify and solve game-theoretic models of realistic size and complexity. In particular, this work addresses two of the most significant hurdles to making the economic tool of game theory applicable to homeland security practice; namely, the need to quantify uncertain adversary preferences using subject-matter experts, and the need for sufficiently powerful computational tools to solve realistic defender problems within acceptable time constraints and levels of accuracy. In previous years, we developed an elicitation process in which subject-matter experts are asked to give only ordinal judgments on the attractiveness of terrorist targets or strategies. Estimates of uncertain terrorist preferences (represented by a multi-attribute utility function) are then derived mathematically through the use of probabilistic inversion (PI). In the past year, our primary theoretical accomplishment was to adapt a related but different mathematical approach, Bayesian density estimation (BDE), to apply to ordinal judgments in a similarly rigorous manner. BDE complements PI; in particular, it extracts more information from the experts (especially regarding “schools of thought,” or subgroups of experts with positively correlated judgments). Hence, it is appropriate for use when there are sufficient experts for such correlations to be statistically meaningful, but may be less useful than PI when only small numbers of experts are available. We developed an efficient algorithm based on Gibbs sampling to estimate the BDE model, whose computation time grows only linearly with the size of the problem.
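The core idea of probabilistic inversion over ordinal judgments can be illustrated with a minimal accept/reject sketch (the attribute names, scores, and ranking below are hypothetical illustrations, not data or methods from the project): sample candidate attribute weights for a linear multi-attribute utility function, keep only those weight vectors that reproduce the expert's ordinal ranking, and treat the retained samples as an estimated distribution over the adversary's preferences.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target attributes (rows = targets; columns = attributes such
# as expected fatalities, economic damage, symbolic value), scaled to [0, 1].
targets = np.array([
    [0.9, 0.7, 0.3],
    [0.4, 0.9, 0.8],
    [0.2, 0.3, 0.9],
])

# The expert's ordinal judgment: target 0 most attractive, then 1, then 2.
expert_ranking = (0, 1, 2)

def induced_ranking(weights):
    """Ranking of targets induced by a linear multi-attribute utility."""
    utilities = targets @ weights
    return tuple(np.argsort(-utilities))

# Accept/reject probabilistic inversion: sample attribute weights uniformly
# on the simplex and retain those that reproduce the expert's ranking.
accepted = [w for w in rng.dirichlet(np.ones(3), size=20000)
            if induced_ranking(w) == expert_ranking]
accepted = np.array(accepted)

print(f"acceptance rate: {len(accepted) / 20000:.3f}")
print("mean weight vector consistent with the ranking:", accepted.mean(axis=0))
```

Production implementations of PI use more sophisticated schemes (e.g., iterative proportional fitting) rather than raw rejection sampling, but the sketch shows the essential inversion: ordinal judgments in, a distribution over utility-function parameters out.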
Moreover, we explored the theoretical relationship between PI and BDE, elucidating under which conditions they give identical versus different results. Finally, we showed how PI and BDE differ in handling expert consensus or disagreement. These comparisons provide useful guidance on when and whether to use PI or BDE in real-world elicitation practice. In additional analyses, we applied PI to ordinal judgments obtained from “proxy experts” (e.g., graduate students knowledgeable about terrorism, from parts of the world where support for terrorism is reasonably common), and found that the results compared reasonably well to those obtained by direct elicitation. We also found that PI gave realistic results when applied to partial rankings of only a subset of targets, rather than full rankings of all targets; this is important for the application of PI in practice, since experts are unlikely to be willing or able to give detailed rankings of large numbers of targets. We then developed two extensions to both PI and BDE, to enhance their applicability to real-world elicitation tasks. In particular, we extended both approaches to allow for negative attribute weights, to account for the possibility that experts may disagree about whether a larger value of a particular attribute yields higher or lower utility to the adversary (e.g., whether an adversary is seeking or is averse to high fatalities resulting from an attack). This is important, especially when the number of experts is large (e.g., when using on-line surveys), because it makes it possible to analyze even results from experts with opposing views in an automated manner. We also extended both PI and BDE to handle ties in ordinal judgments, since in practice, experts may believe that multiple targets or attack strategies are equally attractive to the adversary (or may be unable to distinguish between their attractiveness), and hence give tied rank orderings for them.
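The extensions for ties, partial rankings, and negative weights can be sketched by relaxing the consistency check in the same accept/reject style (again with hypothetical numbers, and simplified relative to the project's actual PI and BDE machinery): a tie within a group imposes no ordering constraint, an unranked target imposes no constraint at all, and weights are allowed to carry either sign.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical attribute scores for four targets (columns as before).
targets = np.array([
    [0.9, 0.7, 0.3],
    [0.4, 0.9, 0.8],
    [0.2, 0.3, 0.9],
    [0.6, 0.5, 0.5],
])

# A partial judgment with a tie: targets 0 and 1 judged equally attractive,
# both above target 2; target 3 is left unranked (unconstrained).
ordered_groups = [{0, 1}, {2}]

def consistent(weights):
    """True if the induced utilities respect the expert's partial order."""
    utilities = targets @ weights
    for gi, gj in itertools.combinations(range(len(ordered_groups)), 2):
        if any(utilities[a] <= utilities[b]
               for a in ordered_groups[gi] for b in ordered_groups[gj]):
            return False
    return True

# Negative weights: sample a magnitude vector on the simplex and an
# independent sign per attribute, so an expert who believes the adversary
# is averse to (say) high fatalities can still be represented.
samples = rng.dirichlet(np.ones(3), size=20000) * rng.choice(
    [-1.0, 1.0], size=(20000, 3))
accepted = np.array([w for w in samples if consistent(w)])

print(f"acceptance rate: {len(accepted) / 20000:.3f}")
```

Note how the partial order loosens the acceptance region rather than tightening it: fewer judgments mean more weight vectors survive, which is exactly why partial rankings remain statistically usable.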
Finally, we are working on models of attacker deterrence. Achieving effective attack deterrence generally requires decreasing the success probability of an attack (or, equivalently, increasing the cost of a successful attack) beyond the threshold considered tolerable by the prospective attacker. However, that threshold is not known. Therefore, we are using a new method known as target-oriented utility theory, which makes it possible to minimize the probability of failing to achieve an uncertain target (in our case, the deterrence threshold). In the past year, we have applied this method to problems of network security, taking it beyond simple series or parallel systems to more complex network structures of practical interest.
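The target-oriented deterrence objective can be sketched numerically (the response curve, parameter values, and threshold distribution below are illustrative assumptions, not estimates from the project): defensive effort drives the attack success probability down, the attacker's tolerable threshold is treated as a random variable, and the defender minimizes the probability of failing to deter, i.e., the probability that the achieved success probability still exceeds the attacker's threshold.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical response model: hardening effort x reduces the attack
# success probability as p(x) = p0 * exp(-k * x).
p0, k = 0.8, 0.5

def success_prob(x):
    return p0 * np.exp(-k * x)

# The attacker's tolerable success threshold T is uncertain; represent it
# by Monte Carlo draws from an assumed Beta-shaped prior.
T = rng.beta(2.0, 5.0, size=100_000)

def deterrence_failure_prob(x):
    """P(p(x) >= T): the attack still looks worthwhile to the attacker."""
    return float(np.mean(success_prob(x) >= T))

# Find the smallest effort level that keeps failure-to-deter below 10%.
for x in np.linspace(0.0, 10.0, 101):
    if deterrence_failure_prob(x) <= 0.10:
        print(f"effort {x:.1f}: P(fail to deter) = "
              f"{deterrence_failure_prob(x):.3f}")
        break
```

The same objective extends to networks by replacing the scalar p(x) with the system-level success probability induced by an allocation of effort across components, which is where the series/parallel structure, and more general network topologies, enter.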