Agent computing in economics: a rough path towards policy applications

This is the first of two posts on agent computing in economics by Omar Guerrero.
You can read the second part here. 

Agent computing is a simulation tool that has been successfully adopted in many fields where policy interventions are critical. Economics, however, has failed to adopt it. Today, there are new opportunities for bringing agent computing into economic policy. In this post, I discuss why this technology has not been adopted for economic policy and point out these new opportunities.

Introduction 

Agent computing, popularly known in the social sciences as agent-based modeling, has a long history dating back to the 1940s. Broadly speaking, the principle behind it is to create software agents that interact with each other according to certain behavioral principles and the rules of their environment. In the social sciences, the idea of explicit interactions between heterogeneous agents is particularly salient due to the daunting challenge of understanding how collective behavior emerges from individual interactions. Some examples across the social sciences include why individuals give rise to political parties (political science); why social networks enhance or mitigate social stratification (sociology); why herding behavior amplifies the movement of stock prices (economics); why urban segregation emerges even within well-mixed populations (human geography); why individuals develop group identity (social psychology); and why a flourishing civilization suddenly collapses (anthropology and archaeology). All these questions are central to their disciplines and have been studied through many different theories and methods. Agent computing has been particularly useful for formalizing the social mechanisms underpinning such theories and for discriminating between alternative explanations.

Such granular resolution allows researchers to study micro-macro causal channels in great detail. Since agent-computing models do not require assumptions about the aggregate dynamics of the system, they can generate non-trivial phenomena that are difficult to study with other methods. Depending on the discipline, you may hear that agent-based models are good for studying things like feedback loops, evolutionary processes, collective dynamics, learning (individual and social), non-linear dynamics, punctuated equilibria, adaptive behavior, complex networks, out-of-equilibrium dynamics, criticality, scaling, etc. Regardless of the specific phenomena under study, users of agent computing broadly agree on its capability to produce emergence; in other words, to generate bottom-up aggregate behavior that cannot be inferred by decomposing the system into its individual parts.
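To make the idea of emergence concrete, here is a minimal sketch in Python, loosely in the spirit of Schelling's classic segregation model (which reappears below). All parameter names and values are my own illustrative choices, not taken from any model discussed in this post: agents with only a mild preference for similar neighbours end up producing strongly segregated neighbourhoods.

```python
import random

# A minimal Schelling-style segregation sketch (illustrative only).
# Two types of agents live on a wrap-around grid; an agent relocates to a
# random vacant cell whenever fewer than THRESHOLD of its occupied
# neighbouring cells hold agents of its own type.

SIZE = 20          # grid is SIZE x SIZE
VACANCY = 0.1      # share of cells left empty
THRESHOLD = 0.5    # minimum tolerated share of like-typed neighbours
STEPS = 10_000     # number of relocation attempts

random.seed(1)
grid = {(x, y): (random.choice([1, 2]) if random.random() > VACANCY else None)
        for x in range(SIZE) for y in range(SIZE)}

def like_share(cell):
    """Share of an agent's occupied neighbours that have its own type."""
    x, y = cell
    neighbours = [grid[((x + dx) % SIZE, (y + dy) % SIZE)]
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  if (dx, dy) != (0, 0)]
    occupied = [n for n in neighbours if n is not None]
    if not occupied:
        return 1.0  # no neighbours, nothing to be unhappy about
    return sum(n == grid[cell] for n in occupied) / len(occupied)

for _ in range(STEPS):
    cell = random.choice([c for c, agent in grid.items() if agent is not None])
    if like_share(cell) < THRESHOLD:  # unhappy agent moves to a vacant cell
        vacancy = random.choice([c for c, agent in grid.items() if agent is None])
        grid[vacancy], grid[cell] = grid[cell], None

# Emergence: mild individual preferences yield strong aggregate segregation.
scores = [like_share(c) for c, agent in grid.items() if agent is not None]
print(f"Average like-neighbour share: {sum(scores) / len(scores):.2f}")
```

Running this typically pushes the average like-neighbour share well above the 0.5 each agent demands: an aggregate pattern that could not be read off from the individual rule alone.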

The Early Years

From the perspective of a policymaker, emergence is an extremely useful concept. This is because policymaking is not only about aggregate relationships between variables, but also about details, causal mechanisms, unintended consequences, behavioral responses, etc. Therefore, any method capable of producing emergence should be very valuable. Yet, when we look into the toolkit of policymakers in the socioeconomic sphere, we will hardly find any of this technology in use. This is not the case, however, in fields like warfare studies, epidemiology or traffic analysis, where agent computing has become the protagonist of major policy interventions. So, why does a proven technology with obvious benefits to policymaking remain marginalized in niche academic journals and conferences? In order to answer this question, I will focus on the particular case of economics, a discipline whose mainstream has proven especially resistant to this method.

When the first agent-computing models appeared, two other computational methods were quickly adopted by some social scientists: system dynamics and microsimulation. The former was developed by Jay Forrester to study the non-linear dynamics of stock-and-flow systems. The latter was created by Guy Orcutt to simulate the evolution of multiple interacting variables subject to stochastic changes. In my view, the appearance of these techniques, in conjunction with the absence of object-oriented programming and the modest computational power of the time, dampened the early adoption of agent computing. In addition, agent-based modeling lacked a theoretical framework that could relate it to specific economic questions; this would only come a few decades later through the work of Herbert Simon (the father of the computational social sciences, CSS) and, later on, through the complexity sciences championed by the Santa Fe Institute. Although agent computing, system dynamics and microsimulation are fundamentally different, their boundaries have blurred today, so it is common to find mixtures of these approaches under the label of agent-based models.

Proof of Concept 

Fast-forward to the 1990s: game theory, behaviorism, object-oriented programming and personal computers had become relatively accessible to social scientists. During this decade, agent-based modeling had its spring, producing major breakthroughs in understanding the complexity of socioeconomic systems through the lens of computation. Some of these contributions include the prisoner’s dilemma tournaments of Axelrod, the Sugarscape of Epstein and Axtell, and the electricity market models of Tesfatsion. The ability to generate high-resolution models that produced rich and realistic bottom-up dynamics was unprecedented in the social sciences (with the exception of earlier notable works such as the Schelling segregation model). However, this spring was missing an important ingredient that would only come a decade later: big data. With a few exceptions, most agent-computing models of this generation were not ready to be used in concrete empirical applications, remaining largely theoretical or, as some would say, toy models. Nevertheless, the proof of concept of the method’s capabilities was path-breaking and defined a generation of social scientists. These pioneers gave CSS the shape that we know today.

Around the turn of the 21st century, agent computing became more accessible to social scientists due to the advent of big data, increased computational power, high-level programming languages like Java and Python, and agent-based modeling libraries such as NetLogo and MASON. This time, the challenge was moving from toy models to empirically calibrated, large-scale simulations. Overcoming this challenge involved obstacles such as building models that were realistic but not too complicated, developing frameworks for large-scale simulations, improving methods for estimating parameters, and rethinking issues such as verification and validation. In economics, two ambitious projects became the flagships of this generation: EURACE and CRISIS. EURACE aimed to model entire economies at the level of each household and firm in order to provide a policymaking tool. CRISIS, on the other hand, was motivated by the global financial crisis of 2008 and attempted to provide tools that would help foresee similar events in the future. Both projects were extremely ambitious, and both were milestones in bridging the gap between toy models and policymaking tools. From conversations with several economists, it seems that EURACE and CRISIS promised too much and delivered too little, since they were too complicated for the average consultant. These large-scale models are usually seen as black boxes, a criticism that, in my opinion, is quite weak, since there exist many methods for understanding complex simulations, such as those used by cosmologists and climatologists. While these projects might not have lived up to expectations, they advanced the adoption of agent computing in important ways, for example, by interacting with policymakers and demonstrating the empirical capabilities of the method.
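To give a flavour of what empirical calibration can mean for an agent-based model, below is a hypothetical, minimal sketch: simulate the model under candidate parameter values, compare a summary statistic (a moment) of the simulated output with the same moment measured in data, and keep the parameter that minimizes the gap. The toy herding model and every name in it are my own assumptions; this is not the procedure actually used by EURACE or CRISIS.

```python
import random

# Minimal moment-matching calibration sketch for a toy agent-based model
# (all names and values are illustrative assumptions).

def simulate_moves(herding, n_steps=2000, seed=42):
    """Toy market ABM: each step, the marginal trader copies the previous
    price move with probability `herding`, otherwise moves at random."""
    rng = random.Random(seed)
    moves, last = [], 1
    for _ in range(n_steps):
        last = last if rng.random() < herding else rng.choice([-1, 1])
        moves.append(last)
    return moves

def lag1_autocorr(xs):
    """Lag-1 autocorrelation: the moment matched against the data.
    In this toy model it is approximately equal to `herding`."""
    mean = sum(xs) / len(xs)
    num = sum((a - mean) * (b - mean) for a, b in zip(xs, xs[1:]))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

observed_moment = 0.4  # stand-in for a moment estimated from real data

# Grid search over candidate herding strengths 0.00, 0.05, ..., 0.95.
candidates = [p / 20 for p in range(20)]
best = min(candidates,
           key=lambda h: abs(lag1_autocorr(simulate_moves(h)) - observed_moment))
print(f"Calibrated herding strength: {best:.2f}")
```

Real projects replace the grid search with more sophisticated estimation and match many moments at once, but the logic of confronting simulated output with data is the same.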

A new opportunity for agent computing 

Currently, a new window of opportunity for disseminating agent computing in the economic policy world has opened. This opportunity has emerged from a confluence of factors: the growth of big data and data science has made the limitations of traditional methods more evident; the new generation of social scientists is more savvy about computing and programming; the industrial benefits of exploiting data and computation have become obvious; and the computational social sciences are becoming a well-established field. In fact, after George Mason University created the first graduate program in CSS more than a decade ago, several other universities around the world have followed its example. In addition to these factors, there is an extra element that makes this window different from past ones: governments and policymakers are more critical of traditional methods and more open to new alternatives.

For example, the UK has created the Alan Turing Institute, the national institute for data science and AI. Its Research Fellows are the core of the Institute’s research, developing independent agendas where computing and data science are used to tackle major societal challenges. Another example is the Bank of England, which has been openly supportive of the need for agent computing in economic policymaking and has already conducted a few projects using this method. In some developing countries, meanwhile, there is growing dissatisfaction with the type of policy advice that economists usually provide, so governments have started to inquire about alternative methodologies to inform policy. One example is the Mexican National Laboratory of Public Policies, which has the goal of informing policymaking through data science, agent-based modeling and experimental methods.

In summary, the adoption of agent computing in economic policymaking has faced significant challenges. Some have had to do with ideologies, others with immature computational technologies, and still others with data scarcity. Today, however, we are living through an unprecedented growth of data-driven methods and sciences. To take advantage of this opportunity, economic research that uses agent-based models has to move beyond proofs of concept and complicated large-scale models, and demonstrate that the method can clearly overcome specific limitations of traditional analytic tools. Just as important, governments and policymaking agencies have to grow their investment in building this capability, just as many central banks did in the past when they bet on general equilibrium models. The path of disseminating agent computing through academic discussions is far more difficult, and it will not change until entire generations of scholars are replaced. Hence, capitalizing on this opportunity will be largely determined by the openness and commitment of policymakers: the ones who can create a demand for this technology.

Read the second part of this blog post here

About

Web: oguerr.com

Omar A. Guerrero has a PhD in Computational Social Science from George Mason University. He is a senior research fellow in the departments of Economics and STEaPP at University College London, and at the Alan Turing Institute. Previously, he was a fellow at the Oxford Martin School, the Saïd Business School and the Institute for New Economic Thinking at the University of Oxford.

Find out more about Policy Priority Inference, one of the latest research projects from the Alan Turing Institute.
