Over the course of a repeated game, players often exhibit learning in how they select their best response. Research in economics and marketing has identified two key types of learning rules: belief learning and reinforcement learning. It has been shown that players use either one of these rules or a combination of the two, as in the Experience-Weighted Attraction (EWA) model. Accounting for such learning may help in understanding and predicting the outcomes of games. In this research, we demonstrate that players not only employ learning rules to determine which actions to choose based on past choices and outcomes, but also change their learning rules over the course of the game. We investigate the degree of state dependence in learning and uncover the latent learning rules and learning paths that players follow. We build a non-homogeneous hidden Markov mixture-of-experts model that captures shifts between different learning rules over the course of a repeated game. Transitions between the learning-rule states can be affected by the players' experiences in the previous round of the game. We empirically validate our model using data from six games previously studied in the literature. We demonstrate that accounting for the latent dynamics in learning rules yields a richer understanding of how different learning rules shape the observed strategy choices of players. In addition, we show that such an approach can improve our ability to predict observed choices in games.
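To make the non-homogeneous transition mechanism concrete, the following is a minimal illustrative sketch, not the paper's estimated model: a player's latent learning rule (belief vs. reinforcement) evolves as a two-state hidden Markov chain whose probability of remaining in the current state depends on the payoff received in the previous round. The coefficients `gamma0` and `gamma1`, the logistic link, and the function names are assumptions for illustration only.

```python
import math
import random

def stay_prob(prev_payoff, gamma0=0.0, gamma1=1.0):
    # Probability of remaining in the current learning-rule state,
    # modeled here as a logistic function of the previous round's
    # payoff. gamma0 and gamma1 are illustrative coefficients, not
    # estimates from the paper.
    return 1.0 / (1.0 + math.exp(-(gamma0 + gamma1 * prev_payoff)))

def simulate_learning_path(payoffs, seed=0):
    # Simulate a latent sequence of learning rules over a repeated
    # game. The player starts in the belief state; after each round,
    # the chance of staying in the current state depends on the
    # payoff just received, making the Markov chain non-homogeneous.
    rng = random.Random(seed)
    state = "belief"
    path = [state]
    for payoff in payoffs:
        if rng.random() >= stay_prob(payoff):
            state = "reinforcement" if state == "belief" else "belief"
        path.append(state)
    return path
```

Under this sketch, rounds with high payoffs make the player more likely to keep the learning rule currently in use, while low payoffs make a switch more likely.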
Article reprinted with permission from Quantitative Marketing and Economics, published by Springer, by Asim Ansari, Ricardo Montoya, and Oded Netzer, volume 10, no. 4 (December 2012): 475-503.