Satisficing in multi-armed bandit problems
Paul Reverdy, Vaibhav Srivastava, and Naomi Ehrich Leonard
IEEE Transactions on Automatic Control, Vol. 62, No. 8, August 2017, pp. 3788-3803.
Satisficing is a relaxation of maximizing that allows for less risky decision-making
in the face of uncertainty.
We propose two sets of satisficing objectives for the multi-armed bandit problem, where the
objective is to achieve reward-based decision-making performance above a given threshold.
We show that these new problems are equivalent to various standard multi-armed bandit problems
with maximizing objectives and use the equivalence to find bounds on performance. The different
objectives can result in qualitatively different behavior; for example, agents explore their
options continually in one case and only a finite number of times in another. For the case of
Gaussian rewards, we show an additional equivalence between the two sets of satisficing objectives
that allows algorithms developed for one set to be applied to the other. We then develop variants
of the Upper Credible Limit (UCL) algorithm that solve the problems with satisficing objectives and
show that these modified UCL algorithms achieve efficient satisficing performance.
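The paper's UCL variants are not reproduced here, but the following Python sketch illustrates the general idea of a satisficing index policy for Gaussian rewards: an optimistic upper-credible-limit index truncated at a satisfaction threshold M, so that arms already credibly "good enough" need not be distinguished further. The quantile schedule, the truncation rule, and all names (satisficing_ucl, M, sigma, etc.) are illustrative assumptions, not the algorithm as published.

import numpy as np
from scipy.stats import norm

def satisficing_ucl(true_means, sigma, M, horizon, seed=0):
    """Illustrative threshold-truncated UCL-style index policy (not the paper's algorithm)."""
    rng = np.random.default_rng(seed)
    n_arms = len(true_means)
    counts = np.zeros(n_arms)   # number of pulls per arm
    sums = np.zeros(n_arms)     # cumulative reward per arm
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # initialization: pull each arm once
        else:
            mu_hat = sums / counts
            # optimistic index: empirical mean plus an upper credible margin
            # (quantile schedule here is an assumption for illustration)
            margin = sigma / np.sqrt(counts) * norm.ppf(1.0 - 1.0 / (np.e * t))
            # satisficing step: cap the index at the threshold M, so arms
            # credibly above M tie and exploration among them can stop
            index = np.minimum(mu_hat + margin, M)
            arm = int(np.argmax(index))
        reward = rng.normal(true_means[arm], sigma)
        counts[arm] += 1
        sums[arm] += reward
    return counts

# Example: with threshold M = 0.8 below the best arm's mean of 1.0, any arm
# whose truncated index reaches M is acceptable, unlike a maximizing policy.
pulls = satisficing_ucl([0.2, 0.5, 1.0], sigma=0.3, M=0.8, horizon=2000)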