At last! Agent computing for economics policy

This is the second post on agent computing in economics by Omar Guerrero. You can read the first part here.

Today, there is a new window of opportunity to adopt agent computing as a mainstream analytic tool in economics. Here, I discuss four major ways in which this technology can improve economic policymaking: causality and detail, scalability and response, unobservability and counterfactuals, and separating design from implementation. In addition, I highlight the crucial role that policy agencies and research funders can play in this endeavor by supporting a new generation of computationally enabled social scientists.

In my previous post, I discussed some of the reasons why agent computing (also known as agent-based modeling) has not become a mainstream analytic tool for economic policy and pointed out a new opportunity for it to happen. Part of the challenge that agent-based modelers in economics face today is convincing policymakers through very clear, relatively simple, and well-defined empirical applications. As I mentioned in my previous contribution, the last generation of agent-based models that got close to policymakers consisted of large-scale simulations that were too complicated for this purpose. While relatively complicated models will eventually become the norm for realistic policy advice, they are not ideal for capitalizing on the current window of opportunity. In my opinion, the best way to take advantage of it is through simple and extremely well-defined projects where the benefits of agent computing are self-evident. Achieving this, of course, is subject to a system of incentives and research funding. Hence, research funding agencies will also play an important part in this process by supporting a new generation of younger and more innovative social scientists. In this post, I would like to elaborate on how agent-computing projects can become more convincing in the eyes of a policymaker. In my opinion, there are four major aspects that agent-based modelers can exploit for this purpose: (1) causality and detail, (2) scalability and response, (3) unobservability and counterfactuals, and (4) separating policy design from implementation.

Causality and detail 

Policymaking is about causality and detail. Successful interventions are those made at the right moment, at the right pace, for the right target population, and through the right channels. Inevitably, detail and an understanding of causal mechanisms are fundamental to success. In the absence of experimental data, causality can be inferred from observational sources through different statistical methods. However, as a policymaker, how comfortable are you with statistical tools that are unable to specify the socioeconomic processes that make up such detailed causal channels? Statisticians might think about this problem in terms of incorporating prior knowledge or expert advice into their models. Explicit process modeling, however, is more than that. It involves specifying the data-generating mechanisms in terms of social behaviors and interactions that are backed by empirically grounded theory. Explicitly modeling social processes has several virtues that help in testing causal mechanisms. One of them is that, if a policy fails, we can always go back to the model and identify the specific mechanism that was missing or incorrectly specified. Another is that, if the environment has changed, the ‘training’ dataset will not contain updated structural information; in this case, statistical models that measure aggregate relationships will fail. In economics, a similar idea was discussed in 1976 by Robert Lucas, who argued that policy announcements affect the expectations of the target population; hence, statistical macroeconomic models were doomed to fail because they could not capture explicit agent-level economic behavior. A more general version of this problem was treated in 1962 by Ludwig von Mises, who argued that human behavior can hardly be captured in models, making public policy an elusive endeavor. Of course, Mises did not live to see the growth of computer science, behavioral science, and experimental economics.
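To make the idea of explicit process modeling concrete, here is a deliberately minimal, hypothetical sketch in Python (purely illustrative, not any particular published model): the data-generating mechanism is written as an agent-level behavioral rule, so a policy change propagates through the agents’ behavior rather than through a re-estimated aggregate coefficient.

```python
# A minimal, hypothetical sketch of explicit process modeling: the
# data-generating mechanism is an agent-level behavioral rule, so a change in
# policy alters outcomes through that rule rather than through a coefficient
# fitted on pre-intervention data. All numbers here are illustrative assumptions.
import random

random.seed(42)

class Household:
    def __init__(self, income):
        self.income = income

    def consumption(self, tax_rate):
        # Behavioral rule (assumption): spend a fixed fraction of disposable
        # income. The aggregate consumption-income relationship therefore
        # shifts automatically when the tax policy changes.
        return 0.8 * self.income * (1.0 - tax_rate)

def aggregate_consumption(households, tax_rate):
    return sum(h.consumption(tax_rate) for h in households)

households = [Household(income=random.uniform(20_000, 80_000)) for _ in range(1_000)]

for tax_rate in (0.20, 0.35):  # a hypothetical policy intervention
    total = aggregate_consumption(households, tax_rate)
    print(f"tax rate {tax_rate:.2f} -> aggregate consumption {total:,.0f}")
```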

For many decades, economists have tried to overcome Lucas’ critique by capturing the structural aspects of social systems through models in the form of systems of equations with closed-form solutions. Like every mathematical model, this comes with caveats regarding the assumptions that we need to make in order to obtain solutions. Often, these assumptions impose agent-level and aggregate behaviors (e.g., agent-level rationality and general equilibrium) that destroy crucial information about the policymaking process. For example, until 2008, most macroeconomic models lacked realistic details about the financial sector. The US housing market bubble that eventually triggered the financial crisis was the result of risk misperception due to the very specific structure of financial products that were interconnected across the financial system. Clearly, the cause of the problem could not be understood without a detailed picture. Traditional macroeconomic models cannot capture these details, so it is not surprising that the US government lacked relevant tools at the time of the crash. To demonstrate this flaw, Axtell et al. built a large-scale agent-computing model of the housing bubble in Washington DC. They took advantage of the granularity of a large dataset from financial and real estate companies. By specifying highly detailed causal mechanisms, their model was able to grow the bubble from the bottom up and became a pioneering tool for providing counterfactual estimates of different policy interventions.

Scalability and response 

Do we need agent computing when we can run an experiment? Without any doubt, experimental methods offer an invaluable tool for understanding causality in social systems. While field experiments and randomized controlled trials can be extremely insightful about highly specific situations, they have important limitations when it comes to scaling and response. By scaling I mean that it is economically unviable to perform country-wide experiments. Furthermore, even if we could resort to excellent sampling procedures, it would be unfeasible to conduct experiments for every single policy that a government needs to implement. On the other hand, there are situations that demand a quick response from the policymaker. In these cases, experimentation is not possible, since setting up and conducting these kinds of studies can take a couple of years. To provide a fast response to socioeconomic problems, the most common tool is still the experience and intuition of policymakers, while a few more sophisticated decision makers might use already-built statistical models lacking explicit processes. Perhaps a timely example of the importance of scalability and response is Brexit. Since the moment the UK government formalized the process of exiting the EU, British policymakers have been under pressure to cope with the complex and highly uncertain economic outcomes that Brexit might bring. Clearly, experimental studies are not feasible, while statistical models do not have an adequate training dataset for this event. Ideally, economic decision making would be supported by highly resolved computational models, in the same way that policy responses to epidemiological outbreaks are backed by large-scale computational simulations. Perhaps if research funding agencies such as the ESRC had been more supportive of this line of research over the past two decades, the UK would have better tools available today. However, it is not too late to make this change, and I hope that, with the new window of opportunity, research funders will become more proactive in filling this gap.

Unobservability and counterfactuals 

The third area of opportunity for agent computing in economic policymaking has to do with phenomena where socioeconomic behavior is not observable. In economics and other social sciences, there is a broad class of problems where interactions and decisions are intentionally hidden, for example tax evasion, corruption, money laundering, informal economic activities, and human trafficking. When direct measurement through surveys is not possible, several statistical methods exist to measure these phenomena: tax evasion can be approximated through residuals of national accounts; informal economic activity can be inferred through night-light satellite imagery; and money laundering can be estimated from anomalies in financial-transaction data. However, if constructing detailed counterfactuals from observational data is already difficult, indirect measurements of hidden behavior impose an additional layer of complexity.

One of the reasons why microsimulation (see my previous post) became popular was its ability to produce counterfactuals via synthetic data, especially in problems related to taxation and the distribution of income. Today, however, policymakers face problems that involve highly complex objects such as networks and fine-grained agent-level behavior. For example, going back to the Brexit case, how will unemployment peak and percolate throughout the UK if specific firms move to Europe? Building a counterfactual for this question requires a great level of detail: understanding how labor flows from firm to firm, how different companies adapt their hiring behavior in the presence of a shock, how fast this adaptation takes place, how the mobility structure of the labor market generates bottlenecks in certain regions or industries, and how quickly aggregate variables (e.g., unemployment) respond to localized interventions. Considering these elements in conjunction with the relevant socioeconomic mechanisms can only be achieved through agent computing, at least if we want to provide detailed enough counterfactuals. In my previous research, I developed the method of labour flow networks to address this type of question. For example, the interactive computational framework laborSim allows the user to introduce highly specific shocks in a network of firms in order to produce granular counterfactuals about the evolution and dispersion of unemployment. This technology is similar to the one used to produce counterfactuals of epidemiological outbreaks.
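To give a flavor of this approach, below is a minimal, hypothetical Python sketch of a labour-flow-network simulation. It is not laborSim itself, and every number in it (network size, separation rate, vacancies) is an assumption chosen only for illustration: workers search for jobs along the edges of a firm network, so a localized shock spreads unemployment through the network’s structure.

```python
# Minimal, hypothetical labour-flow-network sketch (not laborSim): workers
# separate from firms at some rate and can only search for jobs at firms
# connected to their current/last employer, so a localized shock propagates
# unemployment through the network structure.
import random
import networkx as nx

random.seed(0)

# Firm network: nodes are firms with vacancies; edges encode labour mobility
# between pairs of firms (all parameters here are assumptions).
G = nx.erdos_renyi_graph(n=50, p=0.1, seed=0)
for f in G.nodes:
    G.nodes[f]["open"] = True
    G.nodes[f]["vacancies"] = random.randint(0, 5)

workers = [{"firm": random.choice(list(G.nodes)), "employed": True} for _ in range(500)]

def shock(closed_firms):
    """Close a set of firms: their vacancies vanish and their workers lose jobs."""
    for f in closed_firms:
        G.nodes[f]["open"] = False
        G.nodes[f]["vacancies"] = 0
    for w in workers:
        if w["firm"] in closed_firms:
            w["employed"] = False

def step(separation_rate=0.02):
    """One period: random separations, then job search restricted to the network."""
    for w in workers:
        if w["employed"] and random.random() < separation_rate:
            w["employed"] = False
        elif not w["employed"]:
            targets = [f for f in G.neighbors(w["firm"])
                       if G.nodes[f]["open"] and G.nodes[f]["vacancies"] > 0]
            if targets:
                new_firm = random.choice(targets)
                G.nodes[new_firm]["vacancies"] -= 1
                w["firm"], w["employed"] = new_firm, True

shock(closed_firms=[0, 1, 2])  # a hypothetical localized shock (e.g., firm exits)
for t in range(20):
    step()
    u = sum(not w["employed"] for w in workers) / len(workers)
    print(f"t={t:02d}  unemployment rate = {u:.2%}")
```

Even in a toy version like this, the unemployment path depends on which firms are hit and how well connected they are, which is precisely the kind of granularity a detailed counterfactual needs.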

An important element of any quantitative policy recommendation that relates to counterfactuals is the null hypothesis. Broadly speaking, a null hypothesis is a description of the world in which the phenomenon of interest has no effect. In a linear regression, this usually takes the form of a zero coefficient; in a network, it could be a randomly rewired graph. These approaches, however, rely on having data where the phenomenon of interest is observed or identified. So, what do we do when we suspect our phenomenon is in a dataset but we cannot identify it? This is one of the challenges of my research program on vote-trading networks. Here, legislators agree to exchange votes while keeping these deals secret. Extensive studies have documented logrolling in the US Congress via anonymous interviews. In empirical terms, this activity must be captured by publicly available roll-call data, but it remains hidden since no existing dataset identifies specific trades in a systematic way. Agent computing has helped us formalize different economic and political theories about how legislators vote and cooperate, in order to generate synthetic datasets where we can effectively identify and control vote trading. This allows us to construct synthetic data without vote trading (the null hypothesis) and to measure the performance of our estimation strategy. This approach is extremely helpful in problems where we understand human behavior and social interaction, but no training dataset exists.
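The logic of a synthetic null can be sketched in a few lines. The following Python example is hypothetical and is not the actual vote-trading model: it generates roll-call votes from an explicit voting rule, once without vote trading (the null world) and once with a simple trading rule, so that a detection statistic can be benchmarked against data where the answer is known.

```python
# Hypothetical sketch of the "synthetic null" idea (not the author's actual
# model): simulate roll-call votes with and without an explicit vote-trading
# rule, then check whether a simple statistic separates the two worlds.
import random

random.seed(1)
N_LEGISLATORS, N_BILLS = 40, 200

def simulate_roll_call(vote_trading=False):
    ideal = [random.uniform(-1, 1) for _ in range(N_LEGISLATORS)]
    bills = [random.uniform(-1, 1) for _ in range(N_BILLS)]
    # Sincere voting rule (assumption): vote yes when bill and ideal point align.
    votes = [[1 if ideal[i] * bills[b] > 0 else 0 for b in range(N_BILLS)]
             for i in range(N_LEGISLATORS)]
    if vote_trading:
        # Hypothetical trading rule: random pairs swap support on bills where
        # their own stakes are low, hiding the deal inside ordinary votes.
        for _ in range(300):
            i, j = random.sample(range(N_LEGISLATORS), 2)
            b1, b2 = random.sample(range(N_BILLS), 2)
            if abs(ideal[i] * bills[b1]) < 0.1 and abs(ideal[j] * bills[b2]) < 0.1:
                votes[i][b1], votes[j][b2] = 1, 1
    return votes

def mean_pairwise_agreement(votes):
    pairs = agree = 0
    for i in range(N_LEGISLATORS):
        for j in range(i + 1, N_LEGISLATORS):
            agree += sum(votes[i][b] == votes[j][b] for b in range(N_BILLS))
            pairs += N_BILLS
    return agree / pairs

null_world = simulate_roll_call(vote_trading=False)
trading_world = simulate_roll_call(vote_trading=True)
print("agreement under the null:   ", round(mean_pairwise_agreement(null_world), 3))
print("agreement with vote trading:", round(mean_pairwise_agreement(trading_world), 3))
```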

Policy design versus implementation

The fourth aspect in which agent computing can significantly improve policymaking is separating policy design from implementation. The distinction between these two processes has long been discussed by researchers in public policy and social intervention, but it is hardly acknowledged by economists. This is because traditional economic models (econometric and structural) do not specify the adaptive behavior of the government, its multi-agent nature, or the inefficiencies that arise when implementing a policy. The relevance of separating design from implementation becomes evident in the context of economic development, where policy implementation is often hindered by corruption through the misuse of public funds. Any respectable development economist would agree that part of the slow progress of developing countries has to do with poor rule of law and weak monitoring mechanisms, yet we do not have many analytic methods to take this into account in an explicit and quantitative way. An initiative to address this problem is a research program that I am developing at the Alan Turing Institute: Policy Priority Inference (PPI). PPI is a quantitative framework to advise governments on how to prioritize public policies. It achieves this through an agent-computing model of the interactions between the policy designer (the central authority) and the implementers (the public servants or agencies in charge of the policies). Furthermore, it captures the recently discussed interdependencies between the Sustainable Development Goals through a spillover network. PPI allows us to estimate the unobservable priorities that governments establish when trying to reach development goals, as well as the inefficiencies that emerge during the implementation process. Just as in the context of development, there are other socioeconomic problems where the separation between design and implementation is crucial for improving policy advice, and agent computing is the ideal method to model it explicitly.
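As a rough illustration of what separating design from implementation looks like in an agent-computing model, here is a minimal Python sketch loosely inspired by the setup described above. It is not the actual PPI model; the allocation rule, diversion rates, and spillover weights are all assumptions made purely for the example.

```python
# Hypothetical sketch (not the actual PPI model) of separating design from
# implementation: a central authority allocates a budget across policy
# indicators, implementing agents divert part of it, and indicators also
# benefit from spillovers between related policies.
import random

random.seed(7)
N_POLICIES = 6

# Design side: development goals and current indicator levels.
goals = [1.0] * N_POLICIES
indicators = [random.uniform(0.2, 0.5) for _ in range(N_POLICIES)]

# Implementation side: each policy's implementer diverts a hidden share of
# the funds it receives (an assumption for illustration).
diversion = [random.uniform(0.0, 0.4) for _ in range(N_POLICIES)]

# Spillover network between policies (assumed sparse, small weights).
spillover = [[0.05 if i != j and random.random() < 0.3 else 0.0
              for j in range(N_POLICIES)] for i in range(N_POLICIES)]

def step(budget=1.0):
    gaps = [max(goals[i] - indicators[i], 0.0) for i in range(N_POLICIES)]
    total_gap = sum(gaps) or 1.0
    allocation = [budget * g / total_gap for g in gaps]                  # design
    effective = [a * (1.0 - d) for a, d in zip(allocation, diversion)]   # implementation
    for i in range(N_POLICIES):
        spill = sum(spillover[i][j] * indicators[j] for j in range(N_POLICIES))
        indicators[i] = min(goals[i], indicators[i] + 0.5 * effective[i] + 0.1 * spill)

for t in range(10):
    step()
print("indicators after 10 periods:", [round(x, 2) for x in indicators])
```

Separating the allocation rule from the diversion step is what lets a model like this distinguish a badly designed policy mix from a well-designed one that is poorly implemented.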

To conclude

In summary, we are living in an unprecedented era of growth for data-driven methods and sciences. Together with a computationally literate generation of researchers and decision makers, agent-based models have a new and more promising opportunity to become a mainstream tool for economic policymaking. Nevertheless, in order to capitalize on this opportunity, new research initiatives have to focus on smaller, well-defined problems instead of grandiose large-scale ones. This is necessary to demonstrate how agent computing can overcome specific limitations that conventional models cannot, and to provide clear empirical applications. The openness and vision of research funding agencies will also play a central role in whether this opportunity is seized. Perhaps, instead of giving colossal grants to a few senior researchers, they could support a larger and younger community of social scientists who have well-defined and more adventurous ideas. Hopefully, the never-before-seen support of major policy agencies around the world can create the necessary momentum to, at last, incorporate agent computing into economic policy.

Find out more about Policy Priority Inference, one of the latest research projects from the Alan Turing Institute.

About

Web: oguerr.com

Omar A. Guerrero has a PhD in Computational Social Science from George Mason University. He is a senior research fellow in the departments of Economics and STEaPP at University College London, and at the Alan Turing Institute. Previously, he was a fellow at the Oxford Martin School, the Saïd Business School, and the Institute for New Economic Thinking at the University of Oxford.
