Marrying Stochastic Gradient Descent with Bandits: Learning Algorithms for Inventory Systems with Fixed Costs
We consider a periodic-review, single-product inventory system with a fixed ordering cost under censored demand. Under full demand distributional information, it is well known that the celebrated (s, S) policy is optimal. In this paper, we assume the firm does not know the demand distribution a priori and makes adaptive inventory ordering decisions in each period based only on past sales data (a.k.a. censored demand). Our performance measure is regret, defined as the cumulative cost difference between a feasible learning algorithm and the clairvoyant (full-information) benchmark. Compared with prior literature, the key difficulty of this problem is that the presence of the fixed cost destroys the joint convexity of the objective function. We develop the first learning algorithm for this setting, which combines the power of stochastic gradient descent, bandit controls, and simulation-based methods in a seamless and nontrivial fashion. We prove that the cumulative regret over $T$ periods is $O(\sqrt{T}\log T)$, which is provably tight up to a logarithmic factor. We also develop several technical results that are of independent interest. We believe that the developed framework could be widely applied to learning in other important stochastic systems with only partial convexity in the objectives.
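To make the setup concrete, the following is a minimal simulation sketch (not the paper's algorithm) of the model described above: an (s, S) policy under a fixed ordering cost with censored (lost-sales) demand, a clairvoyant benchmark found by grid search under full demand knowledge, and the per-period regret of a mis-tuned policy against that benchmark. All cost parameters (fixed cost K, holding cost h, penalty p) and the demand distribution are hypothetical illustrative choices.

```python
import random

def simulate_sS(s, S, demands, K=10.0, h=1.0, p=4.0):
    # (s, S) policy: whenever on-hand inventory is at or below s,
    # pay fixed ordering cost K and order up to S (instant delivery).
    # Each period incurs holding cost h per leftover unit and penalty
    # p per unit of lost demand; the firm observes only sales, so the
    # true demand beyond on-hand stock is censored.
    inv, total = S, 0.0
    for d in demands:
        if inv <= s:
            total += K
            inv = S
        sold = min(inv, d)        # observed sales, not true demand
        inv -= sold
        total += h * inv + p * (d - sold)
    return total / len(demands)

# Clairvoyant (full-information) benchmark: pick the best (s, S) pair
# by grid search with full knowledge of the demand realizations.
random.seed(0)
demands = [random.randint(0, 20) for _ in range(5000)]
candidates = [(s, S) for s in range(15) for S in range(s + 1, 30)]
best_s, best_S = min(candidates, key=lambda c: simulate_sS(*c, demands))
best_cost = simulate_sS(best_s, best_S, demands)

# Regret of a fixed, mis-tuned policy relative to the benchmark;
# a learning algorithm aims to drive this gap to zero over time.
naive_cost = simulate_sS(0, 5, demands)
regret_per_period = naive_cost - best_cost
```

Note the source of the learning difficulty visible even in this sketch: the fixed cost K makes the average cost non-convex in (s, S) jointly, which is why simple gradient-based learning alone does not suffice and the paper combines it with bandit controls and simulation.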
This paper was accepted by Chung Piaw Teo, optimization.