Experimental economics

Experimental economics is the application of experimental methods[1] to the study of economic questions. Data collected in experiments are used to estimate effect sizes, test the validity of economic theories, and illuminate market mechanisms. Economic experiments usually use cash to motivate subjects, in order to mimic real-world incentives. Experiments are used to help understand how and why markets and other exchange systems function as they do. Experimental economics has also expanded to the study of institutions and the law (experimental law and economics).[2]

A fundamental aspect of the subject is design of experiments. Experiments may be conducted in the field or in laboratory settings, whether of individual or group behavior.[3]

Variants of the subject outside such formal confines include natural and quasi-natural experiments.[4]

Experimental topics

Economic experiments can be loosely classified by topic; the sections below survey the main areas.

Within economics education, one application involves experiments used in the teaching of economics. An alternative approach with experimental dimensions is agent-based computational modeling. The potential and the limits of games for understanding rational behavior and resolving human conflict have been discussed since the field's early days.[7]

Coordination games

Coordination games are games with multiple pure-strategy Nash equilibria. Experimental economists typically ask two general sets of questions when examining such games: (1) Can laboratory subjects coordinate, or learn to coordinate, on one of multiple equilibria, and if so, are there general principles that can help predict which equilibrium is likely to be chosen? (2) Can laboratory subjects coordinate, or learn to coordinate, on the Pareto-best equilibrium, and if not, are there conditions or mechanisms that would help them do so? Deductive selection principles allow predictions based on the properties of the game alone; inductive selection principles allow predictions based on characterizations of the dynamics. Under some conditions, groups of experimental subjects can coordinate on even complex, non-obvious, asymmetric Pareto-best equilibria, even though all subjects decide simultaneously and independently without communication. How this happens is not yet fully understood.[8]
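
As a concrete illustration of the first question, the following minimal Python sketch (with hypothetical "stag hunt" payoffs, not taken from any particular experiment) enumerates the pure-strategy Nash equilibria of a simple two-player coordination game, showing that the Pareto-best outcome is only one of several equilibria subjects might settle on.

```python
# Minimal sketch: enumerate pure-strategy Nash equilibria of a 2x2
# coordination game (hypothetical "stag hunt" payoffs, for illustration only).

from itertools import product

# payoffs[(row_action, col_action)] = (row payoff, column payoff)
payoffs = {
    ("stag", "stag"): (4, 4),   # Pareto-best outcome
    ("stag", "hare"): (0, 3),
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),   # safe but Pareto-inferior
}
actions = ["stag", "hare"]

def is_nash(row_a, col_a):
    """Check that neither player gains by unilaterally deviating."""
    row_pay, col_pay = payoffs[(row_a, col_a)]
    best_row = all(payoffs[(alt, col_a)][0] <= row_pay for alt in actions)
    best_col = all(payoffs[(row_a, alt)][1] <= col_pay for alt in actions)
    return best_row and best_col

equilibria = [cell for cell in product(actions, actions) if is_nash(*cell)]
print(equilibria)  # [('stag', 'stag'), ('hare', 'hare')] -> two pure equilibria
```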

Learning experiments

Economic theories often assume that economic incentives can shape behavior even when individual agents have limited understanding of the environment. The relationship between economic incentives and outcomes may be indirect: The economic incentives determine the agents’ experience, and these experiences may then drive future actions.

Learning experiments can be classified as individual choice tasks or games, where games typically refer to strategic interactions of two or more players. Oftentimes, the general patterns of learning behavior can be best illustrated with individual choice tasks.[9]

In games of two players or more, the subjects often form beliefs about what actions the other subjects are taking and these beliefs are updated over time. This is known as belief learning. Subjects also tend to make the same decisions that have rewarded them with high payoffs in the past. This is known as reinforcement learning.

Until the 1990s, simple adaptive models, such as Cournot competition or fictitious play, were generally used. In the mid-1990s, Alvin E. Roth and Ido Erev demonstrated that reinforcement learning can make useful predictions in experimental games.[10] In 1999, Colin Camerer and Teck-Hua Ho introduced Experience-Weighted Attraction (EWA), a general model that incorporated reinforcement and belief learning and showed that fictitious play is mathematically equivalent to generalized reinforcement, provided that weights are placed on past history.
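
The basic logic of reinforcement learning in games can be illustrated with a minimal sketch in the spirit of the Roth-Erev model; the payoffs, initial propensities, and forgetting parameter below are hypothetical, chosen only to show the mechanism: each action carries a propensity, the chosen action's propensity is increased by the payoff received, and choice probabilities are proportional to propensities.

```python
import random

# Minimal sketch of Roth-Erev style reinforcement learning (illustrative
# payoffs and parameters, not calibrated to any particular experiment).

actions = ["A", "B"]
propensities = {a: 1.0 for a in actions}   # small initial propensities
payoff = {"A": 1.0, "B": 4.0}              # hypothetical stage-game payoffs
forgetting = 0.05                          # gradual decay of old reinforcements

def choose():
    """Pick an action with probability proportional to its propensity."""
    total = sum(propensities.values())
    r = random.uniform(0, total)
    cum = 0.0
    for a in actions:
        cum += propensities[a]
        if r <= cum:
            return a
    return actions[-1]

for t in range(200):
    a = choose()
    # decay all propensities, then reinforce the chosen action by its payoff
    for b in actions:
        propensities[b] *= (1 - forgetting)
    propensities[a] += payoff[a]

total = sum(propensities.values())
print({a: round(propensities[a] / total, 2) for a in actions})
# Choice probabilities drift toward the higher-payoff action "B".
```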

Criticisms of EWA include overfitting due to its many parameters, a possible lack of generality across games, and the difficulty of interpreting its parameters. Overfitting is addressed by estimating parameters on a subset of the experimental periods or subjects and forecasting behavior in the remaining sample (if a model overfits, these out-of-sample forecasts will be much less accurate than the in-sample fits, which in practice they generally are not). Generality across games is addressed by replacing fixed parameters with "self-tuning" functions of experience, allowing pseudo-parameters to change over the course of a game and to vary systematically across games.
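
The out-of-sample check described above can be sketched very simply: fit a model on the early periods of observed play, then score its forecasts on the held-out remainder and compare. The observed choices and the frequency-based "model" below are purely hypothetical stand-ins for a real learning model.

```python
import math

# Illustrative sketch of out-of-sample validation: estimate on early periods,
# forecast the remaining periods, and compare predictive accuracy.
# The choice data and the frequency-based "model" are hypothetical.

observed = ["A", "B", "B", "A", "B", "B", "B", "A", "B", "B",
            "B", "B", "A", "B", "B", "B", "B", "B", "A", "B"]
split = len(observed) // 2
train, test = observed[:split], observed[split:]

def fit_frequencies(sample, actions=("A", "B"), smoothing=1.0):
    """Estimate choice probabilities from observed frequencies (with smoothing)."""
    counts = {a: smoothing for a in actions}
    for choice in sample:
        counts[choice] += 1
    total = sum(counts.values())
    return {a: counts[a] / total for a in actions}

def mean_log_likelihood(probs, sample):
    return sum(math.log(probs[c]) for c in sample) / len(sample)

model = fit_frequencies(train)
print("in-sample fit:     ", round(mean_log_likelihood(model, train), 3))
print("out-of-sample fit: ", round(mean_log_likelihood(model, test), 3))
# A large gap between the two numbers would signal overfitting.
```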

Subsequent work has extended these learning models in several directions. Roberto Weber has raised issues of learning without feedback. David Cooper and John Kagel have investigated types of learning over similar strategies. Ido Erev and Greg Barron have looked at learning in cognitive strategies. Dale Stahl has characterized learning over decision-making rules. Charles A. Holt has studied logit learning in different kinds of games, including games with multiple equilibria. Wilfred Amaldoss has applied EWA to problems in marketing. Amnon Rapoport, Jim Parco, and Ryan Murphy have investigated reinforcement-based adaptive learning models in the centipede game, one of the most celebrated paradoxes in game theory.

Market games

Edward Chamberlin is thought to have conducted "not only the first market experiment, but also the first economic experiment of any kind."[11] Vernon Smith, drawing on Chamberlin's work but modifying it in key respects, conducted pioneering experiments on the convergence of prices and quantities to their theoretical competitive-equilibrium values in experimental markets.[11] Smith studied the behavior of "buyers" and "sellers", who are told how much they "value" a fictitious commodity and are then asked to competitively "bid" or "ask" for these commodities following the rules of various real-world market institutions (e.g., the double auction as well as the English and Dutch auctions). Smith found that in some forms of centralized trading, prices and quantities traded in such markets converge on the values predicted by the economic theory of perfect competition, even though the conditions do not meet many of the assumptions of perfect competition (large numbers of traders, perfect information).
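
The "induced values" logic behind these market experiments can be illustrated with a minimal sketch: given hypothetical buyer valuations and seller costs (the numbers below are made up), the competitive-equilibrium prediction is the quantity and price range at which supply meets demand, which is the benchmark against which observed double-auction prices are compared.

```python
# Minimal sketch: compute the competitive-equilibrium prediction from induced
# buyer values and seller costs (all numbers hypothetical).

buyer_values = sorted([10, 9, 8, 7, 6, 5], reverse=True)  # demand, high to low
seller_costs = sorted([3, 4, 5, 6, 7, 8])                 # supply, low to high

# Pair the k-th highest-value buyer with the k-th lowest-cost seller for as
# long as the buyer's value is at least the seller's cost.
quantity = 0
for value, cost in zip(buyer_values, seller_costs):
    if value >= cost:
        quantity += 1
    else:
        break

# The clearing price must keep all traded units willing and all excluded
# units unwilling to trade.
low = seller_costs[quantity - 1]
high = buyer_values[quantity - 1]
if quantity < len(buyer_values):
    low = max(low, buyer_values[quantity])
if quantity < len(seller_costs):
    high = min(high, seller_costs[quantity])

print(f"predicted quantity: {quantity}, price range: {low}-{high}")
```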

Over the years, Smith and his collaborators pioneered the use of controlled laboratory experiments in economics and established it as a legitimate tool in economics and related fields. Charles Plott of the California Institute of Technology collaborated with Smith in the 1970s and pioneered experiments in political science, as well as the use of experiments to inform economic design and engineering and to guide policy. In 2002, Smith was awarded (jointly with Daniel Kahneman) the Bank of Sweden Prize in Economic Sciences "for having established laboratory experiments as a tool in empirical economic analysis, especially in the study of alternative market mechanisms".

Finance

Experimental finance studies financial markets by creating different market settings and environments in which to observe and analyze agents' behavior and the resulting characteristics of trading flows, information diffusion and aggregation, price-setting mechanisms, and return processes. Researchers commonly use simulation software to conduct this work.

For instance, to study stock market bubbles, experiments have manipulated the degree of information asymmetry about the holding value of a bond or a share and examined its effect on pricing by the less-informed traders.

Social preferences

The term "social preferences" refers to the concern (or lack thereof) that people have for each other's well-being, and it encompasses altruism, spitefulness, tastes for equality, and tastes for reciprocity. Experiments on social preferences generally study economic games including the dictator game, the ultimatum game, the trust game, the gift-exchange game, the public goods game, and modifications to these canonical settings. As one example of results, ultimatum game experiments have shown that people are generally willing to sacrifice monetary rewards when offered low allocations, thus behaving inconsistently with simple models of self-interest. Economic experiments have measured how this deviation varies across cultures.

Contracts

Contract theory is concerned with providing incentives in situations in which some variables cannot be observed by all parties. Hence, contract theory is difficult to test in the field: If the researcher could verify the relevant variables, then the contractual parties could contract on these variables, hence any interesting contract-theoretic problem would disappear. Yet, in laboratory experiments it is possible to directly test contract-theoretic models. For instance, researchers have experimentally studied moral hazard theory,[12] adverse selection theory,[13] exclusive contracting,[14] deferred compensation,[15] the hold-up problem,[16][17] flexible versus rigid contracts,[18] and models with endogenous information structures.[19]

Agent-based computational modeling

Agent-based computational modeling is a relatively recent method in economics with experimental dimensions.[20] Here the focus is on economic processes, including whole economies, as dynamic systems of interacting agents, an application of the complex adaptive systems paradigm.[21] The "agent" refers to "computational objects modeled as interacting according to rules," not real people.[20] Agents can represent social and/or physical entities. Starting from initial conditions determined by the modeler, an ACE model develops forward through time driven solely by agent interactions.[22] Open issues include those common to experimental economics in general,[23] comparisons between agent-based models and human-subject experiments,[24] and the development of a common framework for empirical validation and for resolving open questions in agent-based modeling.[25]
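
A minimal sketch of the ACE approach is given below; the behavioral rule, the price-formation assumption, and all parameter values are hypothetical and serve only to illustrate the method. Agents with individual price expectations interact repeatedly, each agent adjusts its expectation toward the last realized price, and the aggregate price path emerges from these interactions alone, given the modeler's initial conditions.

```python
import random

# Minimal sketch of an agent-based computational model (all behavioral rules
# and parameters are hypothetical, for illustration of the approach only).

class Trader:
    def __init__(self, expected_price):
        self.expected_price = expected_price

    def adapt(self, realized_price, speed=0.2):
        # Adaptive expectation rule: move partway toward the realized price.
        self.expected_price += speed * (realized_price - self.expected_price)

random.seed(1)
agents = [Trader(random.uniform(5, 15)) for _ in range(50)]  # initial conditions

history = []
for period in range(30):
    # Assume the realized market price is the median expectation plus a
    # small idiosyncratic shock (a stand-in for a trading institution).
    expectations = sorted(a.expected_price for a in agents)
    realized = expectations[len(expectations) // 2] + random.gauss(0, 0.3)
    for a in agents:
        a.adapt(realized)
    history.append(round(realized, 2))

print(history)  # the price path emerges solely from agent interactions
```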

Methodology

Guidelines

Experimental economists generally adhere to the following methodological guidelines:

  • Incentivize subjects with real monetary payoffs.
  • Publish full experimental instructions.
  • Do not use deception.
  • Avoid introducing specific, concrete context.

Critiques

The above guidelines have developed in large part to address two central critiques. Specifically, economics experiments are often challenged on grounds of "internal validity" and "external validity": for example, that laboratory settings are not applicable models for many types of economic behavior, so that the experiments cannot produce useful answers. However, none of these critiques is specific to experimental methodology; they apply just as readily to theoretical or empirical approaches, or both.[26][citation needed]

Software tools

The best-known software for conducting experimental economics research is z-Tree, developed by Urs Fischbacher from 1998 onward.[27] As of February 2020, it had about 9,460 citations on Google Scholar.[28] The name stands for Zurich Toolbox for Readymade Economic Experiments, and the software was one of the reasons Fischbacher was awarded the Joachim Herz Research Prize for "Best research work" in December 2016.[29] z-Tree runs on a network of computers in a research lab:[30] one computer is used by the experimenter, and the others are used by the experimental subjects. The setup of an experiment is configurable and is defined in z-Tree's imperative programming language,[31] which allows the experimenter to set up a wide variety of experiments as well as additional surveys.

A number of alternative software packages also exist.[32] The following table presents a list of software tools for experimental economics:

Name Citation Year
z-Tree [27] 1998
FactorWiz [33] 2000
Wextor [34] 2002
EconPort [35] 2005
MIT Seaweed project [36] 2009
FRAMASI [37] 2009
MWERT [38] 2014
ConG [39] 2014
oTree [40] 2014
CLOSE project [41] 2015
Breadboard [42] 2016
nodeGame [32] 2016

Notes

  1. ^ Including statistical, econometric, and computational. On the latter see Alvin E. Roth, 2002. "The Economist as Engineer: Game Theory, Experimentation, and Computation as Tools for Design Economics," Econometrica, 70(4), pp. 1341–1378 Archived 2004-04-14 at the Wayback Machine.
  2. ^ See, e.g., Grechenig, K., Nicklisch, A., & Thöni, C. (2010). Punishment despite reasonable doubt—a public goods experiment with sanctions under uncertainty. Journal of Empirical Legal Studies, 7(4), 847–867 (link).
  3. ^ Vernon L. Smith, 2008a. "experimental methods in economics," The New Palgrave Dictionary of Economics, 2nd Edition, Abstract.
       • _____, 2008b. "experimental economics," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract.
       • Relevant subcategories are found at the Journal of Economic Literature classification codes at JEL: C9.
  4. ^ J. DiNardo, 2008. "natural experiments and quasi-natural experiments," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract.
  5. ^ • Vernon L. Smith, 1992. "Game Theory and Experimental Economics: Beginnings and Early Influences," in E. R. Weintraub, ed., Towards a History of Game Theory, pp. 241–282.
       • _____, 2001. "Experimental Economics," International Encyclopedia of the Social & Behavioral Sciences, pp. 5100–5108. Abstract per sect. 1.1 & 2.1.
       • Charles R. Plott and Vernon L. Smith, ed., 2008. Handbook of Experimental Economics Results, v. 1, Elsevier, Part 4, Games, ch. 45–66 preview links.
       • Vincent P. Crawford, 1997. "Theory and Experiment in the Analysis of Strategic Interaction," in Advances in Economics and Econometrics: Theory and Applications, pp. 206–242. Cambridge. Reprinted in Colin F. Camerer et al., ed. (2003). Advances in Behavioral Economics (1986–2003 papers), Princeton, ch. 12. Description, contents, and preview.
  6. ^ Martin Shubik, 2002. "Game Theory and Experimental Gaming," in Robert Aumann and Sergiu Hart, ed., Handbook of Game Theory with Economic Applications, Elsevier, v. 3, pp. 2327–2351. Abstract.
  7. ^ Rapoport, A. (1962). The use and misuse of game theory. Scientific American, 207(6), 108–119. http://www.jstor.org/stable/24936389
  8. ^ Gunnthorsdottir, Anna; Vragov, Roumen; Seifert, Stefan; McCabe, Kevin (2010). "Near-efficient equilibria in contribution-based competitive grouping". Journal of Public Economics, 94, pp. 987–994. [1]
  9. ^ Erev, Ido; Haruvy, Ernan (2016). "Learning and the economics of small decisions". Handbook of Experimental Economics. 2: 638–716.
  10. ^ "Predicting how people play games: Reinforcement learning in experimental games with unique, mixed strategy equilibria", Ido Erev, Alvin E Roth, The American Economic Review, September 1998, 848–881 JSTOR 117009
  11. ^ a b Ross Miller (2002). Paving Wall Street: experimental economics and the quest for the perfect market. New York: John Wiley & Sons. pp. 73–74. ISBN 978-0471121985.
  12. ^ Hoppe, Eva I.; Schmitz, Patrick W. (2018). "Hidden action and outcome contractibility: An experimental test of moral hazard theory". Games and Economic Behavior. 109: 544–564. doi:10.1016/j.geb.2018.02.006. ISSN 0899-8256.
  13. ^ Hoppe, Eva I.; Schmitz, Patrick W. (2015). "Do sellers offer menus of contracts to separate buyer types? An experimental test of adverse selection theory". Games and Economic Behavior. 89: 17–33. doi:10.1016/j.geb.2014.11.001. ISSN 0899-8256.
  14. ^ Landeo, Claudia M.; Spier, Kathryn E. (2016). "Stipulated Damages as a Rent-Extraction Mechanism: Experimental Evidence" (PDF). Journal of Institutional and Theoretical Economics. 172 (2): 235–273. doi:10.1628/093245616x14534707121162. ISSN 0932-4569.
  15. ^ Huck, Steffen; Seltzer, Andrew J; Wallace, Brian (2011). "Deferred Compensation in Multiperiod Labor Contracts: An Experimental Test of Lazear's Model". American Economic Review. 101 (2): 819–843. doi:10.1257/aer.101.2.819. ISSN 0002-8282. S2CID 16415006.
  16. ^ Hoppe, Eva I.; Schmitz, Patrick W. (2011). "Can contracts solve the hold-up problem? Experimental evidence". Games and Economic Behavior. 73 (1): 186–199. doi:10.1016/j.geb.2010.12.002. ISSN 0899-8256. S2CID 7430522.
  17. ^ Morita, Hodaka; Servátka, Maroš (2013). "Group identity and relation-specific investment: An experimental investigation". European Economic Review. 58: 95–109. CiteSeerX 10.1.1.189.3197. doi:10.1016/j.euroecorev.2012.11.006. ISSN 0014-2921.
  18. ^ Fehr, Ernst; Hart, Oliver; Zehnder, Christian (2014). "How do Informal Agreements and Revision Shape Contractual Reference Points" (PDF). Journal of the European Economic Association. 13 (1): 1–28. doi:10.1111/jeea.12098. ISSN 1542-4766. S2CID 39821177.
  19. ^ Hoppe, Eva I.; Schmitz, Patrick W. (2013). "Contracting under Incomplete Information and Social Preferences: An Experimental Study" (PDF). The Review of Economic Studies. 80 (4): 1516–1544. doi:10.1093/restud/rdt010. ISSN 0034-6527.
  20. ^ a b Scott E. Page, 2008. "agent-based models," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract.
  21. ^ Leigh Tesfatsion, 2003. "Agent-based Computational Economics: Modeling Economies as Complex Adaptive Systems," Information Sciences, 149(4), pp. 262–268. Abstract.
  22. ^ Leigh Tesfatsion, 2006. "Agent-Based Computational Economics: A Constructive Approach to Economic Theory," ch. 16, Handbook of Computational Economics, v. 2, pp. 831–880. Abstract/outline. 2005 prepublication version Archived 2017-08-11 at the Wayback Machine.
      • Kenneth Judd, 2006. "Computationally Intensive Analyses in Economics," Handbook of Computational Economics, v. 2, ch. 17, pp. 881–893.
      • Leigh Tesfatsion and Kenneth Judd, ed., 2006. Handbook of Computational Economics, v. 2. Description Archived 2012-03-06 at the Wayback Machine & and chapter-preview links.
  23. ^ Vernon L. Smith, 2008b. "experimental economics," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract.
  24. ^ John Duffy, 2006. "Agent-Based Models and Human Subject Experiments," ch. 19, Handbook of Computational Economics, v.2, pp. 949–1011. Abstract.
  25. ^ • Leigh Tesfatsion, 2006. "Agent-Based Computational Economics: A Constructive Approach to Economic Theory," ch. 16, Handbook of Computational Economics, v. 2, sect. 5. Abstract and pre-pub PDF Archived 2017-08-11 at the Wayback Machine.
       • Akira Namatame and Takao Terano (2002). "The Hare and the Tortoise: Cumulative Progress in Agent-based Simulation," in Agent-based Approaches in Economic and Social Complex Systems. pp. 3–14, IOS Press. Description Archived 2012-04-05 at the Wayback Machine.
       • Giorgio Fagiolo, Alessio Moneta, and Paul Windrum, 2007 "A Critical Guide to Empirical Validation of Agent-Based Models in Economics: Methodologies, Procedures, and Open Problems," Computational Economics, 30(3), pp. 195–226.
  26. ^ Camerer, Colin F. (2011-12-30). "The Promise and Success of Lab-Field Generalizability in Experimental Economics: A Critical Reply to Levitt and List". Working Paper Series.
  27. ^ a b "UZH – z-Tree – Zurich Toolbox for Readymade Economic Experiments". www.ztree.uzh.ch. Retrieved 8 February 2020.
  28. ^ "Google Scholar z-Tree Citations". scholar.google.com. Retrieved 8 February 2020.
  29. ^ "Southwest Press, in 8 December 2016". Archived from the original on 23 December 2016. Retrieved 17 February 2018.
  30. ^ Fischbacher, Urs. "z-Tree 4.1 Tutorial and Reference Manual" (PDF). www.ztree.uzh.ch. Retrieved 9 February 2020.
  31. ^ Altman, Morris (2015). Real-world decision making: an encyclopedia of behavioral economics. Bloomsbury Academic. p. 141. ISBN 978-1440828157.
  32. ^ a b Balietti, Stefano (18 November 2016). "nodeGame: Real-time, synchronous, online experiments in the browser". Behavior Research Methods. 49 (5): 1696–1715. doi:10.3758/s13428-016-0824-z. PMID 27864814.
  33. ^ "PsycNET". psycnet.apa.org. Retrieved 8 February 2020.
  34. ^ Reips, Ulf-Dietrich; Neuhaus, Christoph (May 2002). "WEXTOR: A Web-based tool for generating and visualizing experimental designs and procedures". Behavior Research Methods, Instruments, & Computers. 34 (2): 234–240. doi:10.3758/BF03195449. PMID 12109018.
  35. ^ Cox, James C.; Swarthout, J. Todd (2006). "Econport: Creating and Maintaining a Knowledge Commons". Andrew Young School of Policy Studies Research. SSRN 895546.
  36. ^ Chilton, Lydia B. (2009). Seaweed : a Web application for designing economic games (Thesis). Massachusetts Institute of Technology. hdl:1721.1/53094.
  37. ^ Tagiew, Rustam (2009). Filipe, Joaquim; Fred, Ana; Sharp, Bernadette (eds.). Towards a framework for management of strategic interaction [Proceedings of the International Conference on Agents and Artificial Intelligence] (PDF). Porto, Portugal. pp. 587–590. ISBN 978-9898111661.
  38. ^ Hawkins, Robert X. D. (1 October 2014). "Conducting real-time multiplayer experiments on the web". Behavior Research Methods. 47 (4): 966–976. doi:10.3758/s13428-014-0515-6. PMID 25271089. S2CID 41817757.
  39. ^ Pettit, James; Friedman, Daniel; Kephart, Curtis; Oprea, Ryan (8 January 2014). "Software for continuous game experiments". Experimental Economics. 17 (4): 631–648. doi:10.1007/s10683-013-9387-3. S2CID 17160579.
  40. ^ Chen, Daniel L.; Schonger, Martin; Wickens, Chris (March 2016). "oTree—An open-source platform for laboratory, online, and field experiments". Journal of Behavioral and Experimental Finance. 9: 88–97. doi:10.1016/j.jbef.2015.12.001. hdl:20.500.11850/111641.
  41. ^ Lakkaraju, Kiran; Medina, Brenda; Rogers, Alisa N.; Trumbo, Derek M.; Speed, Ann; McClain, Jonathan T. (2015). "The Controlled, Large Online Social Experimentation Platform (CLOSE)". Social Computing, Behavioral-Cultural Modeling, and Prediction. Lecture Notes in Computer Science. Vol. 9021. Springer International Publishing. pp. 339–344. doi:10.1007/978-3-319-16268-3_40. ISBN 978-3319162676. OSTI 1315021.
  42. ^ "Breadboard". breadboard.yale.edu.
