Abstract

We study the classical newsvendor problem in which the decision-maker must trade off underage and overage costs. In contrast to the typical setting, we assume that the decision-maker does not know the underlying distribution driving uncertainty and has access only to historical data. In turn, the key questions are how to map existing data to a decision and what type of performance to expect as a function of the data size. We analyze the classical setting with access to past samples drawn from the distribution (e.g., past demand), focusing not only on asymptotic performance but also on what we call the transient of learning, i.e., performance for arbitrary data sizes. We evaluate the performance of any algorithm through its worst-case relative expected regret, compared to an oracle with knowledge of the distribution. We provide the first exact finite-sample analysis of the classical Sample Average Approximation (SAA) algorithm for this class of problems across all data sizes. This analysis uncovers novel fundamental insights on the value of data: it reveals that tens of samples are sufficient to perform very efficiently, but also that more data can lead to worse out-of-sample performance for SAA. We then focus on the general class of mappings from data to decisions, without any restriction on the set of policies, derive an optimal algorithm, and characterize its associated performance. This leads to significant improvements for limited data sizes and allows us to exactly quantify the value of historical information.
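To make the data-to-decision mapping concrete, the sketch below implements the standard SAA rule for the newsvendor, which reduces to an empirical quantile of the observed demand at the critical ratio b/(b+h). This is a minimal illustration only; the cost parameters, sample size, and demand distribution used in the example are hypothetical and not drawn from the paper.

```python
# Minimal sketch of the SAA mapping from historical demand data to an order quantity.
# All numeric values below are illustrative assumptions, not taken from the paper.
import numpy as np

def saa_newsvendor(demand_samples, b, h):
    """Return the SAA order quantity for underage cost b and overage cost h.

    The SAA decision minimizes the average newsvendor cost over the observed
    samples, which is attained at the ceil(n * b/(b+h))-th smallest sample,
    i.e., an empirical quantile at the critical ratio b/(b+h).
    """
    samples = np.sort(np.asarray(demand_samples, dtype=float))
    n = len(samples)
    critical_ratio = b / (b + h)
    # Smallest 1-indexed order statistic k with k/n >= critical ratio.
    k = int(np.ceil(critical_ratio * n))
    return samples[max(k - 1, 0)]

# Illustrative usage with synthetic data (hypothetical parameters).
rng = np.random.default_rng(0)
past_demand = rng.exponential(scale=100.0, size=20)  # n = 20 historical samples
q_hat = saa_newsvendor(past_demand, b=3.0, h=1.0)    # critical ratio 0.75
print(f"SAA order quantity: {q_hat:.1f}")
```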

Authors
Omar Besbes and Omar Mouchtaki
Format
Working Paper
Publication Date
March 15, 2021
Full Citation

Besbes, Omar and Omar Mouchtaki. How Big Should Your Data Really Be? Data-Driven Newsvendor and the Transient of Learning. March 15, 2021.