Marrying Stochastic Gradient Descent with Bandits: Learning Algorithms for Inventory Systems with Fixed Costs

Published Online: https://doi.org/10.1287/mnsc.2020.3799

We consider a periodic-review single-product inventory system with a fixed ordering cost under censored demand. Under full demand distributional information, it is well known that the celebrated (s, S) policy is optimal. In this paper, we assume the firm does not know the demand distribution a priori and makes adaptive inventory ordering decisions in each period based only on past sales (a.k.a. censored demand). Our performance measure is regret, which is the cost difference between a feasible learning algorithm and the clairvoyant (full-information) benchmark. Compared with prior literature, the key difficulty of this problem lies in the loss of joint convexity of the objective function caused by the presence of the fixed cost. We develop the first learning algorithm, termed the (δ, S) policy, that combines the power of stochastic gradient descent, bandit controls, and simulation-based methods in a seamless and nontrivial fashion. We prove that the cumulative regret is O(√T log T), which is provably tight up to a logarithmic factor. We also develop several technical results that are of independent interest. We believe that the developed framework could be widely applied to learning other important stochastic systems with partial convexity in the objectives.
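To make the setting concrete, the following is a minimal sketch of the full-information (s, S) benchmark dynamics described above: whenever inventory drops to or below the reorder point s, the firm pays the fixed cost and orders up to S. The simulator also records sales = min(demand, stock), the censored observation that a learning algorithm (unlike the clairvoyant benchmark) would see. All function names, parameters, and the uniform demand used below are illustrative assumptions, not the paper's construction.

```python
import random

def simulate_sS(s, S, T, fixed_cost, h, b, demand_sampler, seed=0):
    """Average per-period cost of an (s, S) policy over T periods.

    Illustrative sketch (not the paper's algorithm): h is the per-unit
    holding cost, b the per-unit shortage penalty, and demand beyond
    on-hand stock is censored -- a learner would observe only sales.
    """
    random.seed(seed)
    inventory = S
    total_cost = 0.0
    for _ in range(T):
        if inventory <= s:            # reorder point triggered
            total_cost += fixed_cost  # pay the fixed ordering cost
            inventory = S             # order up to S
        demand = demand_sampler()
        sales = min(demand, max(inventory, 0))  # censored observation
        inventory -= demand           # shortfall carried as backlog
        total_cost += h * max(inventory, 0) + b * max(-inventory, 0)
    return total_cost / T
```

For example, `simulate_sS(5, 20, 10_000, 10.0, 1.0, 4.0, lambda: random.randint(0, 10))` evaluates one candidate (s, S) pair under uniform demand; the learning problem studied in the paper is hard precisely because this cost is not jointly convex in (s, S) once the fixed cost is present.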

This paper was accepted by Chung Piaw Teo, optimization.
