Human Adversaries in Opportunistic Crime Security Games: Evaluating Competing Bounded Rationality Models

Yasaman Dehghani Abbasi
Martin Short
Arunesh Sinha
Milind Tambe
Nicole Sintov
Chao Zhang
There is a growing number of automated decision aids, based on game-theoretic algorithms, in daily use by security agencies to assist in allocating or scheduling their limited security resources. These applications of game theory, based on the “security games” paradigm, are leading to fundamental research challenges; one major challenge is modeling human bounded rationality. More specifically, the security agency, assisted by an automated decision aid, is assumed to act with perfect rationality against a human adversary; it is therefore important to investigate the bounded rationality of these human adversaries to improve the effectiveness of security resource allocation. This paper provides the first empirical investigation of adversary bounded rationality in opportunistic crime settings, where modeling bounded rationality is particularly crucial. We conduct extensive human subject experiments comparing ten different bounded rationality models and illustrate that: (a) while previous research proposed the stochastic choice “quantal response” model of the human adversary, this model is significantly outperformed by the more advanced “subjective utility quantal response” models; (b) combining the well-known prospect theory model with these advanced models yields even better performance in modeling human adversary behavior; and (c) while it is important to model the non-linear way humans weight probabilities, as advocated by prospect theory, our findings are the exact opposite of prospect theory in terms of how humans are seen to weight these probabilities.
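The models compared in the abstract have standard mathematical forms that can be sketched briefly. The sketch below is illustrative only: the quantal response (QR) model predicts attack probabilities proportional to exp(λ·U_i); the subjective utility quantal response (SUQR) model replaces the expected utility with a learned weighted sum of target features; and the Tversky–Kahneman probability weighting function is one common prospect theory form. All parameter values, feature choices, and weights here are hypothetical, not taken from the paper's experiments.

```python
import math

def quantal_response(utilities, lam=1.0):
    """QR model: probability of attacking target i is proportional
    to exp(lam * U_i). lam is a rationality parameter (lam -> 0
    gives uniform random choice; large lam approaches best response).
    The value lam=1.0 is an illustrative default."""
    exps = [math.exp(lam * u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

def suqr(features, weights):
    """SUQR model: each target's 'subjective utility' is a weighted
    linear combination of its features (e.g. defender coverage,
    reward, penalty), with weights fitted to human subject data.
    The feature set and weights here are hypothetical examples."""
    subjective = [sum(w * f for w, f in zip(weights, feat))
                  for feat in features]
    return quantal_response(subjective)

def pt_weight(p, gamma=0.61):
    """Tversky-Kahneman probability weighting function:
    w(p) = p^g / (p^g + (1-p)^g)^(1/g). With gamma < 1 it
    overweights small probabilities and underweights large ones;
    gamma=0.61 is an illustrative value from the PT literature."""
    num = p ** gamma
    return num / (num + (1 - p) ** gamma) ** (1 / gamma)

# Example: three targets described by (coverage, reward, penalty).
targets = [(0.5, 8, -4), (0.2, 5, -2), (0.8, 10, -6)]
weights = (-3.0, 0.6, 0.4)  # hypothetical SUQR weights
probs = suqr(targets, weights)
```

A PT-combined variant, as studied in the paper, would transform each coverage probability through a weighting function like `pt_weight` before computing the subjective utilities.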