Abstract

We consider a bandit problem which involves sequential sampling from two populations (arms). Each arm produces a noisy reward realization which depends on an observable random covariate. The goal is to maximize cumulative expected reward. We derive general lower bounds on the performance of any admissible policy, and develop an algorithm whose performance achieves the order of said lower bound up to logarithmic terms. This is done by decomposing the global problem into suitably "localized" bandit problems. Proofs blend ideas from nonparametric statistics and traditional methods used in the bandit literature.

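The "localized" decomposition mentioned in the abstract can be pictured as binning the covariate space and running a standard index policy independently within each bin. The sketch below is a minimal Python illustration of that idea for two arms and a scalar covariate in [0, 1]; the class name BinnedUCB, the bin count, the UCB1-style index, and the toy reward functions are assumptions made for illustration, not the paper's exact algorithm.

```python
import numpy as np

class BinnedUCB:
    """Two-armed bandit with covariates, via localized UCB.

    The covariate space [0, 1] is split into equal-width bins, and an
    independent UCB index policy runs inside each bin, one way to realize
    a decomposition into "localized" bandit problems. The bin count and
    the index formula here are illustrative choices.
    """

    def __init__(self, n_bins=8, n_arms=2):
        self.n_bins = n_bins
        self.counts = np.zeros((n_bins, n_arms))  # pulls per (bin, arm)
        self.sums = np.zeros((n_bins, n_arms))    # reward totals per (bin, arm)

    def _bin(self, x):
        # Map a covariate x in [0, 1] to its bin index.
        return min(int(x * self.n_bins), self.n_bins - 1)

    def select_arm(self, x):
        b = self._bin(x)
        counts = self.counts[b]
        # Pull every arm once in a fresh bin before trusting estimates.
        for arm in range(len(counts)):
            if counts[arm] == 0:
                return arm
        t = counts.sum()  # local "time": pulls observed in this bin so far
        means = self.sums[b] / counts
        index = means + np.sqrt(2.0 * np.log(t) / counts)  # UCB1-style index
        return int(np.argmax(index))

    def update(self, x, arm, reward):
        b = self._bin(x)
        self.counts[b, arm] += 1
        self.sums[b, arm] += reward


# Toy run: the arm means cross at x = 0.5, so neither arm dominates
# globally and the optimal arm depends on the observed covariate.
rng = np.random.default_rng(0)
policy = BinnedUCB()
for _ in range(10_000):
    x = rng.random()                       # observed covariate
    arm = policy.select_arm(x)
    mean = x if arm == 0 else 1.0 - x      # hypothetical regression functions
    policy.update(x, arm, mean + rng.normal(scale=0.1))
```

Because each bin only sees covariates falling inside it, the policy trades off approximation error (coarse bins blur where the arms' reward functions cross) against estimation error (fine bins receive few samples), which is the tension the paper's lower and upper bounds quantify.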
Authors
Assaf Zeevi and Philippe Rigollet
Format
Chapter
Publication Date
2010
Book
Proceedings of the 23rd Conference on Learning Theory (COLT)

Full Citation

Zeevi, Assaf, and Philippe Rigollet. “Nonparametric bandits with covariates.” In Proceedings of the 23rd Conference on Learning Theory (COLT), edited by A. T. Kalai and M. Mohri, 54-66. New York: Association for Computing Machinery, 2010.