Two-Armed Restless Bandits with Imperfect Information: Stochastic Control and Indexability

Published Online: https://doi.org/10.1287/moor.2017.0863

We present a two-armed bandit model of decision making under uncertainty in which the expected return to investing in the “risky” arm increases while that arm is chosen and decreases while the “safe” arm is chosen. These dynamics arise naturally in applications such as human capital development, job search, and occupational choice. Using new insights from stochastic control, together with a monotonicity condition on the payoff dynamics, we show that optimal strategies in our model are stopping rules characterized by an index that formally coincides with Gittins’ index. Our result implies the indexability of a new class of restless bandit models.
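The dynamics described in the abstract can be illustrated with a minimal simulation. This is a sketch, not the paper's model: the state variable `p` (the risky arm's success probability), the drift parameter `delta`, the known `safe_payoff`, and the threshold policy are all simplifying assumptions made here for illustration. The threshold rule mimics the index-type stopping rule the abstract refers to: play the risky arm while its state exceeds a cutoff, otherwise switch to the safe arm.

```python
import random

def simulate(threshold, p0=0.5, delta=0.05, safe_payoff=0.5,
             horizon=100, seed=0):
    """Toy two-armed restless bandit (illustrative, not the paper's model).

    The risky arm's success probability p drifts up when that arm is
    chosen and down when the safe arm is chosen, matching the restless
    dynamics described in the abstract.  The policy is a threshold
    (index-type) stopping rule.
    """
    rng = random.Random(seed)
    p = p0
    total = 0.0
    for _ in range(horizon):
        if p > threshold:
            # Index rule: pull the risky arm while its state is high.
            total += 1.0 if rng.random() < p else 0.0
            p = min(1.0, p + delta)   # choosing the risky arm improves it
        else:
            total += safe_payoff      # safe arm pays a known amount
            p = max(0.0, p - delta)   # unused risky arm deteriorates
    return total
```

For example, a threshold above 1 forces the safe arm at every step, so the total payoff is exactly `horizon * safe_payoff`; lower thresholds let the risky arm's self-reinforcing dynamics take over.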
