We explore the dynamic cognitive hierarchy (CH) theory proposed by Lin and Palfrey (2022) in the setting of multi-stage games of incomplete information. In this environment, players learn other players’ payoff-relevant types and levels of sophistication simultaneously as the history unfolds. For a class of two-person dirty faces games, we fully characterize the dynamic CH solution, which predicts that lower-level players figure out their face types in later periods than higher-level players. Finally, we re-analyze the dirty faces game experimental data from Bayer and Chan (2007) and find that the dynamic CH solution explains the data better than the static CH solution.
The cognitive hierarchy (CH) approach posits that players in a game are heterogeneous with respect to levels of strategic sophistication. A level-k player believes all other players in the game have lower levels of sophistication distributed from 0 to k-1, and these beliefs are the truncation of a “true” distribution of levels. We extend the CH framework to extensive form games, where these initial beliefs over lower levels are updated as the history of play unfolds, providing players with information about other players’ levels of sophistication. For a class of centipede games with a linearly increasing pie, we fully characterize the dynamic CH solution and show that it leads to the game terminating earlier than in the static CH solution for the centipede game in reduced normal form.
Recipient of the John O. Ledyard Prize for Graduate Research in Social Science '21.
Invited presentation at the 2022 annual SAET conference (solution concept session).
(Co-first Author; with Zhi Li, Si-Yuan Kong, Dongwu Wang, and John Duffy*)
We demonstrate the possibility of conducting synchronous, repeated, multi-game economic decision-making experiments with hundreds of subjects, in person or remotely with live streaming, using entirely mobile platforms. Our experiment provides an important proof of concept that such experiments are not only possible but yield recognizable results as well as new insights, blurring the line between laboratory and field experiments. Specifically, our findings from 8 different experimental economics games and tasks replicate existing results from traditional laboratory experiments, despite the fact that subjects play those games/tasks in a specific order, and regardless of whether the experiment was conducted in person or remotely. We further leverage our large subject population to study the effect of large (N = 100) versus small (N = 10) group sizes on behavior in three of the scalable games that we study. While our results are largely consistent with existing findings for small groups, increases in group size are shown to matter for the robustness of those findings.
Evidence of General Economic Principles of Bargaining and Trade from 2,000 Classroom Experiments (2020) Nature Human Behaviour
(First Author; with Alexander L. Brown, Taisuke Imai, Joseph Tao-yi Wang, Stephanie W. Wang, and Colin F. Camerer*)
Standardized classroom experiments provide evidence about how well scientific results reproduce when nearly identical methods are used. We use a sample of around 20,000 observations to test reproducibility of behaviour in trading and ultimatum bargaining. Double-auction results are highly reproducible and are close to equilibrium predictions about prices and quantities from economic theory. Our sample also shows robust correlations between individual surplus and trading order, and autocorrelation of successive price changes, which test different theories of price dynamics. In ultimatum bargaining, the large dataset provides sufficient power to identify that equal-split offers are accepted more often and more quickly than slightly unequal offers. Our results imply a general consistency of results across a variety of different countries and cultures in two of the most commonly used designs in experimental economics.
Artificial Intelligence, the Missing Piece of Online Education? (2018) IEEE Engineering Management Review
(with Andrew Wooders, Joseph Tao-yi Wang, and Walter M. Yuan*)
Despite the recent explosive growth of online education, it still suffers from suboptimal learning efficacy, as evidenced by low student completion rates. This deficiency can be attributed to the lack of facetime between teachers and students, and among students themselves. In this article, we use the teaching and learning of economics as a case study to illustrate the application of artificial intelligence (AI) based robotic players to help engage students in online, asynchronous environments, thereby potentially improving student learning outcomes. In particular, students could learn about competitive markets by joining a market full of automated trading robots that seize every opportunity to arbitrage. Alternatively, students could learn to play against other humans by playing against robotic players trained to mimic human behavior, such as anticipating spiteful rejections of unfair offers in the Ultimatum Game, where a proposer offers a particular way to split the pot that the responder can only accept or reject. By training robotic players with past data, possibly coming from different countries and regions, students can experience and learn how players in different cultures might make decisions in the same scenario. AI can thus help online educators bridge the last mile, incorporating the benefits of both online and in-person learning.
Using Machine Learning to Understand Bargaining Experiments (2022) in Bargaining: Current Research and Future Directions, edited by Kyle B. Hyndman and Emin Karagözoğlu
(with Colin F. Camerer, Hung-Ni Chen, Gideon Nave, Alec Smith*, and Joseph Tao-yi Wang)
We study dynamic unstructured bargaining with deadlines and one-sided private information about the amount available to share (the "pie size"). "Unstructured" means that players can make or withdraw any offers and demands they want at any time. Such paradigms, while lifelike, have been displaced in experimental studies by highly structured bargaining because they are hard to analyze. Machine learning comes to the rescue because the players' wide range of choices in unstructured bargaining can be taken as "features" used to predict behavior. Machine learning approaches can accommodate a large number of features and guard against overfitting using test samples and methods such as penalized LASSO regression. In previous research we found that LASSO could add predictive power to theoretical variables in predicting whether bargaining ended in disagreement. We replicate this work with higher stakes, subject experience, and special attention to gender differences, demonstrating the robustness of this approach.
Experiment 2 data and online appendix: https://osf.io/9j4cm/
This article was presented at the 2020 ASSA annual meeting session "Machine Learning in Experiments."
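The penalized-regression idea described in the abstract can be illustrated with a minimal sketch: an L1-penalized (LASSO-style) logistic regression that predicts a binary disagreement outcome from many bargaining-process features, evaluated on a held-out test sample. The data below are synthetic and the feature interpretation (offer counts, concession timing, etc.) is purely hypothetical; this is not the chapter's actual estimation code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical data: each row is one bargaining round, each column a process
# feature (e.g., number of offers, timing of first concession, ...).
n, p = 400, 30
X = rng.normal(size=(n, p))
# Synthetic outcome: disagreement depends on only a few features (sparse truth).
logits = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2]
y = (logits + rng.normal(size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The L1 penalty shrinks most coefficients to exactly zero, which guards
# against overfitting when the feature set is large.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X_train, y_train)

n_selected = int((model.coef_ != 0).sum())
accuracy = model.score(X_test, y_test)
print(f"features retained: {n_selected} of {p}")
print(f"held-out accuracy: {accuracy:.2f}")
```

The held-out test sample plays the role the abstract describes: out-of-sample accuracy, not in-sample fit, is the check against overfitting.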
Work in Progress
Measuring Higher-Order Rationality with Belief Control (2022)
(with Meng-Jhang Fong and Wei James Chen)
Using choice data to infer an individual's strategic reasoning ability is challenging, since a sophisticated player may form non-equilibrium beliefs about others and thus exhibit non-equilibrium behavior. We conduct an experiment to identify an individual's rationality bound by matching human subjects with computer players known to be fully rational. By introducing robot players, we can disentangle the effect of limited reasoning ability from belief formation and social preferences. Overall, we find that individual rationality levels are stable across games when subjects are matched with robots, which supports the validity of our robot approach.
An earlier version received the John O. Ledyard Prize for Graduate Research in Social Science '20.
Status: Draft preparation.