Abstract

Motivated by applications in financial services, we consider a seller who offers prices sequentially to a stream of potential customers, observing either success or failure in each sales attempt. The parameters of the underlying demand model are initially unknown, so each price decision involves a trade-off between learning and earning. Attention is restricted to the simplest kind of model uncertainty, where one of two demand models is known to apply, and we focus initially on the performance of the myopic Bayesian policy (MBP), variants of which are commonly used in practice. Because learning is passive under the MBP (that is, learning only takes place as a by-product of actions that have a different purpose), it can lead to what we call an indeterminate equilibrium, where learning ceases prematurely and profit performance is poor. However, two variants of the myopic policy are shown to have the following strong theoretical virtue: The expected performance gap relative to a clairvoyant who knows the underlying demand model is bounded by a constant as the number of sales attempts becomes large. These modifications of the MBP perform so well in simulation experiments that the pursuit of an exactly optimal policy appears pointless for all practical purposes.
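To make the setup concrete, the sketch below simulates a myopic Bayesian policy under a binary prior. It is a minimal illustration, not the paper's model: the two logistic demand curves, the price grid, the horizon, and all function names are assumptions introduced here. At each sales attempt the seller posts the price that maximizes one-step expected revenue under the current posterior, observes purchase or non-purchase, and updates the posterior by Bayes' rule.

```python
import numpy as np

# Illustrative sketch of a myopic Bayesian policy (MBP) under a binary prior.
# The two logistic demand curves, the price grid, and the horizon below are
# assumptions made for this example, not the paper's specification.

def d1(price):
    """Purchase probability under candidate demand model 1 (assumed)."""
    return 1.0 / (1.0 + np.exp(-(2.0 - price)))

def d2(price):
    """Purchase probability under candidate demand model 2 (assumed)."""
    return 1.0 / (1.0 + np.exp(-(4.0 - 2.0 * price)))

def mbp_price(q, price_grid):
    """Myopic step: maximize one-period expected revenue under the posterior q."""
    expected_revenue = price_grid * (q * d1(price_grid) + (1.0 - q) * d2(price_grid))
    return price_grid[np.argmax(expected_revenue)]

def bayes_update(q, price, sale):
    """Posterior probability of model 1 after observing a sale (True) or not."""
    like1 = d1(price) if sale else 1.0 - d1(price)
    like2 = d2(price) if sale else 1.0 - d2(price)
    return q * like1 / (q * like1 + (1.0 - q) * like2)

rng = np.random.default_rng(0)
price_grid = np.linspace(0.1, 5.0, 200)
q = 0.5            # prior probability that model 1 is the true demand model
revenue = 0.0
for t in range(1000):
    p = mbp_price(q, price_grid)
    sale = rng.random() < d1(p)   # outcomes generated by model 1 (the "truth" here)
    revenue += p * sale
    q = bayes_update(q, p, sale)

print(f"posterior on model 1: {q:.3f}, cumulative revenue: {revenue:.1f}")
```

In a simulation of this kind, if the posterior drifts to a belief whose myopic price makes the two candidate demand curves nearly indistinguishable, the binary observations become uninformative and the posterior effectively stops moving; this is the sort of premature end to learning that the abstract attributes to the MBP's passive learning.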

Authors
Assaf Zeevi, N. Bora Keskin, and J. Michael Harrison
Format
Working Paper
Publication Date
January 14, 2010
Full Citation

Zeevi, Assaf, N. Bora Keskin, and J. Michael Harrison. Bayesian dynamic pricing policies: Learning and earning under a binary prior distribution. January 14, 2010.