Many of us who work on environmental issues have faced the challenge of conveying their importance to nonspecialists. Perhaps we have found ourselves defending the robustness of climate science to a contrarian uncle, or extolling the importance of biodiversity to a dubious aunt. Our relations and friends are often educated people, skilled in their own professions, who have formed strong opinions about the science underlying environmental problems, their potential consequences for their own lives, and the appropriate policy response to them. If pushed, they will present arguments to support their views, which are often influenced by their exposure to the media and the beliefs of friends, public commentators, and religious or secular authority figures.
Many of us have also experienced another common response to environmental issues: indifference. Surveys of public opinion show a low level of concern about environmental problems. The International Social Survey Program, a project of the independent research organization NORC at the University of Chicago, found that 3.6 percent of Americans ranked the environment as their chief policy concern, behind the economy (25 percent), health care (22.2 percent), education (16 percent), poverty (11.6 percent), and crime (8.6 percent) (Smith 2013). Moreover, people’s beliefs about specific environmental problems are often heterogeneous and not stable over time. The proportion of Americans who believe that the global climate has been warming over the past few decades has fluctuated from a high of 78 percent in July 2006 to a low of 57 percent in October 2009 (Shapiro 2014). Between 50 percent (in 2010) and 61 percent (in 2003 and 2007) of Americans believe that any perceived warming is due to human activity (Saad 2013). These large swings in public opinion have occurred despite no comparable fluctuations in our basic scientific understanding of climate change.
The beliefs of the general public are “ground zero” for the battle to implement environmental policies. Politicians are unlikely to be able to support policy choices that conflict with the worldview held by a majority of their constituents. Moreover, policymakers are themselves overwhelmingly drawn from the ranks of the lay public.1 Most politicians are not research scientists, and although they may have access to more information than the average citizen, their opinions on environmental issues are shaped by many of the same forces that determine the beliefs of the average member of the public.
Historically, environmental economics has not focused on how the public arrives at its beliefs about environmental problems, whether social forces and democratic institutions aggregate information in an efficient manner, or the implications of belief heterogeneity for the policy process. In this article we argue that the process of social belief formation, and its consequences for the political economy of environmental policy, should be an integral part of the positive study of environmental regulation. The issues raised by this line of inquiry precede many of the questions our profession has traditionally focused on. Whereas most economic analysis takes the level of demand for regulation of environmental externalities as given, we ask how the demand for intervention arises. Will this demand, and the policies it ultimately gives rise to, reflect unbiased knowledge or can we expect systematic under- or overregulation of environmental externalities? A rich literature in political economics and social learning has recently emerged, yielding important insights into these questions. Our aim is to provide an overview of this literature and demonstrate its relevance for understanding how beliefs might influence environmental policy choices.
Several excellent articles have reviewed the literature on the political dimensions of environmental regulation. Oates and Portney (2003) provide perhaps the best coverage of this literature. They examine both theoretical and empirical studies of the regulatory process, highlighting the role of special interests, and the consequences of environmental federalism for policy choices. Hahn and Stavins (1992) survey political obstacles to the implementation of market-based instruments, and Anthoff and Hahn (2010) examine inefficiencies in existing environmental regulations. Similarly, Stavins, Keohane, and Revesz (1998) investigate why the practice of policy selection diverges from the normative first best. Detailed case studies of specific regulations include Stavins (1998), Joskow and Schmalensee (1998), and Ellerman et al. (2000), who examine the political economy of the U.S. acid rain program, and Hahn (2009) and Ellerman and Buchner (2007), who examine greenhouse gas regulations. Our approach, while related to the issues raised by these authors, is both narrower and more conceptual. We are concerned exclusively with the informational aspects of the positive theory of environmental regulation—how are beliefs formed and how do political institutions aggregate them? Our survey of the literature will show that although progress has been made in understanding the mechanisms that influence public beliefs and policy choices, more work is needed for us to be able to provide a satisfactory answer to these questions.
The article is structured as follows. We begin by discussing characteristics of global environmental problems that make them particularly susceptible to misunderstanding. We then discuss the mechanisms of belief formation. On the demand side, we consider the processes of individual inference and social learning, and on the supply side we consider the role of the media. Each of these mechanisms can introduce biases or errors into both individual and collective beliefs. Finally, we ask how the political process might moderate or exacerbate biases in beliefs. We conclude with a discussion of key issues for future research.
(Mis)Understanding Global Environmental Problems
Many global environmental problems have characteristics that make them inherently difficult to understand.2 A primary reason for this is that very few of us have first-hand sensory experience of their consequences (Weber and Stern 2011; Myers et al. 2013). Consider the following issues: climate change, biodiversity loss, and the decline in world fisheries. How many of us have actually “seen” any of these problems with our own eyes? They are not localized events, but rather long-run trends, or slow changes in the distribution of events, and thus removed from our experience. Compare this to the immediacy of an oil spill, whose consequences are easy to capture on film. Oil spills are also immediately detectable. This is in stark contrast to our three global problems, each of which plays out over a long time scale, on the order of a human lifetime. This long lag between damaging actions and their consequences requires us to think abstractly and to project the consequences of current behaviors into the distant future in order to appreciate their magnitude.
A second way in which oil spills and other industrial accidents have an advantage over global environmental problems in the competition for public concern is that they are causally “focused.” A small number of easily identified parties (e.g., the rig or tanker operators and their parent company) does harm to innocent bystanders. The global environmental problems we listed earlier are all causally “diffuse”—they arise from the cumulative actions of many parties (not least ourselves) and all of us are affected. There is no clear victim, and no clear villain.
A further challenging aspect of global environmental problems is that one often needs to follow long chains of causal reasoning to understand their consequences. Their effects on the things most people care about are indirect, rather than direct. Consider the following example: industrial toxins and pesticides have been shown to reduce the size of bee populations, which in turn affects pollination rates, crop yields, and ultimately food prices. What most of us care about is the price of food, not bees. However, we need to understand the role of bees in the food production process in order to appreciate how industrial activity may be affecting our budgets. Similarly complicated chains of reasoning are required to understand how current greenhouse gas emissions may increase future political conflicts (Hsiang, Meng, and Cane 2011) or how reduced biodiversity may make us more vulnerable to infectious diseases (Keesing et al. 2010).
The upshot of these characteristics—remoteness from first-hand experience, slow changes in trends and distributions of events, diffused causality, and logical complexity—is that understanding many of the world’s major environmental problems requires considerable cognitive effort. Few people invest the time necessary to absorb all of this complexity, as the costs of becoming informed far outweigh their potential benefits at the ballot box (Downs 1957). A large literature on risk perception and public understanding of science provides empirical support for this claim. For example, Bostrom et al. (1994), Read et al. (1994), and Reynolds et al. (2010) document a series of public misunderstandings about the science and causes of climate change, including widespread confusion about the distinction between weather and climate. Herrnstadt and Muehlegger (2014) show that Internet searches for “climate change” and “global warming” increase when the weather is unusually warm, and suggest that members of the U.S. Congress are more likely to vote for proenvironment measures when their home state has recently experienced unusual weather. This suggests that voters assign too much significance to short-run fluctuations in local weather patterns when forming their beliefs about climate change (see also Zaval et al. 2014). Moreover, these changes in voters’ perceptions may affect the legislative process.
However, the fact that people may not understand environmental problems does not necessarily mean that their overall perceptions of the severity of these problems are inaccurate or that society will make regulatory decisions based on misguided beliefs. Even though people may not fully grasp the conceptual basis for concern, they could nevertheless arrive at broadly accurate beliefs by aggregating disparate information sources. This could occur at the individual or societal level. In order to investigate whether this is likely to occur, we need to examine the belief formation process itself and the “rules of the game” that determine how beliefs influence policy choices.
The Belief Formation Process: Why Do We Believe What We Believe?
In this section we discuss three factors that determine the public’s beliefs: individual inference, social learning, and the media. Individual inference determines how our beliefs are updated by signals from the media, peers, or nature itself. We discuss both rational and behavioral models of inference and identify the potential for biased individual beliefs to arise. Social learning concerns how the interactions between groups of people affect beliefs. We examine whether such social interactions can be expected to lead to unbiased collective beliefs. Finally, we examine how the media shapes the public’s informational landscape.
The classical economic model of belief formation assumes that economic agents process new information in a Bayesian fashion. In the Bayesian model, agents have some subjective probabilistic beliefs about which of a set of hypotheses is likely to be true. With each new observation of an informative “signal,” these beliefs are updated in accordance with Bayes’ rule.3 For example, an observation of a snow-free northern winter might cause a Bayesian agent to increase her subjective probability on the hypothesis “Climate change is real” and decrease her subjective probability on the hypothesis “Climate change is not real.” This would occur only if the agent believes snow-free winters are more likely in a world in which climate change is real than they are in a world in which it is not.
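This updating rule is easy to make concrete. The sketch below applies Bayes’ rule to the two-hypothesis example above; the prior and the likelihoods of observing a snow-free winter under each hypothesis are illustrative assumptions, not empirical estimates.

```python
def bayes_update(prior, p_signal_if_true, p_signal_if_false):
    """Posterior probability of the hypothesis after observing the signal."""
    numerator = prior * p_signal_if_true
    return numerator / (numerator + (1 - prior) * p_signal_if_false)

# An agnostic agent (prior 0.5) observes a snow-free winter, which she
# believes is twice as likely if climate change is real (0.2 vs. 0.1).
posterior = bayes_update(prior=0.5, p_signal_if_true=0.2, p_signal_if_false=0.1)
print(round(posterior, 3))  # 0.667: belief shifts toward "climate change is real"
```

Note that if the agent thought snow-free winters were equally likely under both hypotheses, the likelihood ratio would be one and her beliefs would not move at all.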
Bayesian learning has many attractive normative properties. One of its implications is that, after enough observations, all heterogeneity in people’s prior beliefs will disappear and everyone will hold the same beliefs (Blackwell and Dubins 1962). Moreover, if one of the hypotheses that agents evaluate happens to be correct, people’s beliefs will eventually converge to the truth (Doob 1948).4
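A small simulation illustrates this convergence result. Two agents with sharply different priors observe the same stream of binary signals, drawn from a world in which the hypothesis is true; the signal precision of 0.7 is an illustrative assumption.

```python
import random

random.seed(0)

def update(prior, signal, p_if_true=0.7, p_if_false=0.3):
    # One Bayesian update on a binary signal (True = evidence for the hypothesis).
    like_true = p_if_true if signal else 1 - p_if_true
    like_false = p_if_false if signal else 1 - p_if_false
    numerator = prior * like_true
    return numerator / (numerator + (1 - prior) * like_false)

optimist, skeptic = 0.9, 0.1  # very different prior beliefs
for _ in range(200):
    signal = random.random() < 0.7  # the hypothesis is in fact true
    optimist = update(optimist, signal)
    skeptic = update(skeptic, signal)

# After 200 signals both posteriors sit essentially at 1: the priors wash
# out and beliefs converge to the truth, as Blackwell-Dubins and Doob suggest.
print(round(optimist, 4), round(skeptic, 4))
```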
Although this sanguine view of belief formation dominates much economic modeling, there is reason to doubt the descriptive power of Bayes’ rule. Psychologists have identified numerous biases in human judgment under uncertainty (Kahneman, Slovic, and Tversky 1982), many of which affect the belief formation process. We focus on a few that seem especially relevant.
Overreacting to information
The first set of biases leads people to overreact to information. Base-rate neglect (Kahneman and Tversky 1973) suggests that most people do not obey Bayes’ rule when reasoning about conditional probabilities. For example, the philosopher Michael Sandel has observed that a high proportion of Harvard students are the first-born children in their families, and he uses this observation to suggest that first-born children are more likely to go to Harvard than their siblings (Millner and Calel 2012). This argument neglects the “base rate” of being born first: a child picked at random is more likely to be first born than to have any other rank in the birth order. People who are subject to base-rate neglect overreact to new information because their inferences are not moderated by prior beliefs. Probability neglect is another common behavioral bias. People who are subject to this bias express a willingness to pay to avoid emotionally impactful events (e.g., nuclear disasters) that is not sensitive to the likelihood of the event occurring (Sunstein and Zeckhauser 2011). This is also a form of overreaction, as assessments of the severity of an issue are not moderated by the likelihood of its occurrence. Conversely, the availability heuristic suggests that people’s estimates of the risk of a given activity are overly sensitive to rare high-impact events (Tversky and Kahneman 1973). For example, the public’s estimates of the risks of air travel are likely to be overly sensitive to infrequent airline disasters. Viscusi (1998) comprehensively documents how these biases distort the public’s perceptions of the risks of accidents, diseases, and environmental contamination.
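The birth-order example shows why the base rate matters. The family-size distribution below is an illustrative assumption, but the logic is general: every family contributes exactly one first-born, while only larger families contribute later birth ranks, so a randomly chosen child is more likely to be first-born than to hold any other rank, even if admissions are completely blind to birth order.

```python
# Illustrative (assumed) family-size distribution: 1, 2, or 3 children.
size_probs = {1: 0.3, 2: 0.5, 3: 0.2}
expected_children = sum(n * p for n, p in size_probs.items())  # 1.9 per family

def share_of_rank(k):
    # Fraction of all children whose birth rank is k: only families with
    # at least k children contribute a child of rank k.
    return sum(p for n, p in size_probs.items() if n >= k) / expected_children

print([round(share_of_rank(k), 3) for k in (1, 2, 3)])  # [0.526, 0.368, 0.105]
```

A majority of students at any university would therefore be first-born even under rank-blind admissions; Sandel’s inference neglects exactly this base rate.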
Selective responses to new information
A second set of biases causes people’s responses to new information to differ depending on whether the information conflicts with their prior beliefs or their values. This may lead to either under- or overreaction to new information. Confirmation bias is perhaps the best known of these phenomena. People subject to this bias tend to put too much weight on information that confirms their prior beliefs and too little weight on information that conflicts with them (Nickerson 1998). Rabin and Schrag (1999) show that this tendency leads to overconfidence (people believe in their favored hypothesis more strongly than they should) and that false beliefs can persist even after exposure to an infinite amount of information. A diverse literature also suggests that people’s values influence their interpretation of information. Such “motivated reasoning” (Kunda 1990), so-called because beliefs are constructed to fulfill a desire or support an identity, is likely to play an important role in explaining public attitudes toward global environmental problems, as they are highly emotive issues. For example, Kahan, Jenkins-Smith, and Braman (2011) show that people’s perceptions of the degree of scientific consensus on climate change are strongly correlated with their political values.
Inattention to information
Finally, we may simply not pay attention to all the information at our disposal. As our world has become increasingly informationally complex, we often do not have the mental capacity to consider every new piece of information. The psychology and economics literature confirms that “attention is a limited resource” (DellaVigna 2009). In the environmental psychology literature, this idea is sometimes referred to as the “finite pool of worry” hypothesis, which has been used to help explain people’s lack of concern about global climate change (Weber 2006).
These biases in belief formation suggest that individuals may not accurately assess environmental risks, even if their informational inputs are unbiased. Given that individuals’ beliefs are subject to biases, we next examine whether social interactions between people can help to moderate biases in individuals’ perceptions.
Human beings are social animals—our social and family ties largely determine our informational environment. Most of us inherit our political and religious values from our parents, and we are more likely to make new social connections with those who share our attitudes and beliefs, a phenomenon known as homophily (McPherson, Smith-Lovin, and Cook 2001). This perspective is summed up by North (2010): “Much of what passes for rational choice is not so much individual cogitation as the embeddedness of the thought process in the larger social and institutional context.” The question is, will the social aggregate reach better judgments than its constituent parts?
A baseline model of social beliefs
Recent popular books, including The Wisdom of Crowds (Surowiecki 2004) and The Difference (Page 2007), put forward the optimistic view that collectives have more accurate beliefs on average, and make better decisions, than individuals. The theoretical structure that underlies this view originates with the French political theorist Marquis de Condorcet and the English polymath Francis Galton. We illustrate their argument using Galton’s own example. Suppose that a group of people is trying to guess the weight of a cow at a livestock fair. Each person submits a guess, and the closest guess wins the cow. The law of large numbers implies that as long as people’s guesses do not share common biases and are statistically independent, the mean guess will be an increasingly accurate predictor of the cow’s weight as the number of guesses increases. Results of this kind are known as Condorcet’s Jury Theorem (CJT). They have been argued to legitimate the very idea of democracy (List and Goodin 2001).
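The statistical logic is simple enough to simulate. In the sketch below the guesses are independent and unbiased by construction (the true weight and the noise level are illustrative assumptions), so the error of the mean guess shrinks roughly as one over the square root of the crowd’s size.

```python
import random

random.seed(42)
TRUE_WEIGHT = 550  # kg (illustrative)

def crowd_error(n_guessers, noise_sd=50):
    # Independent, unbiased guesses: the truth plus zero-mean noise.
    guesses = [TRUE_WEIGHT + random.gauss(0, noise_sd) for _ in range(n_guessers)]
    return abs(sum(guesses) / n_guessers - TRUE_WEIGHT)

for n in (10, 100, 10_000):
    print(n, round(crowd_error(n), 2))  # the error falls as the crowd grows
```

With common biases or correlated guesses, this guarantee disappears, which is exactly the issue taken up next.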
Challenges to the baseline model of social learning
The CJT makes several assumptions: people’s information is statistically independent, they do not share biases, they reveal their information truthfully and simultaneously, and they do not engage in strategic thinking. The results of the theorem are overturned when these assumptions are relaxed. To illustrate, let us consider the assumption of statistical independence of information. In reality, people’s beliefs are often determined by their social environment—they communicate with their friends and families and their beliefs may reflect some aggregate of the information they glean from these social interactions. Thus, rather than being statistically independent, everyone’s beliefs actually depend on the beliefs of everyone else. How will this communication and dependence between people affect the accuracy of the group’s beliefs?
Golub and Jackson (2010) study this question by extending a classic model (DeGroot 1974) of social learning on networks. In their model, all agents in a social network receive an independent signal about an event, and then each individual communicates with her neighbors and updates her beliefs by forming a weighted sum of her neighbors’ information. One can then ask whether the group’s beliefs will converge to the truth. Golub and Jackson (2010) show that if the “influence” of each agent goes to zero as the size of the network grows, then the group will converge on true beliefs. Influence here is a measure of how important an agent’s initial beliefs are in determining everyone’s final beliefs. The condition they identify thus requires that as the size of the network grows, no single agent can be an opinion leader. If this condition fails, the group may end up converging on false beliefs.
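A three-agent version of the DeGroot model makes the mechanism concrete. The trust weights below are illustrative; each row sums to one, and agent 2 places most of its weight on itself, giving it high influence in Golub and Jackson’s sense.

```python
# Row-stochastic trust matrix: W[i][j] is the weight agent i puts on agent j.
W = [
    [0.6, 0.2, 0.2],
    [0.3, 0.4, 0.3],
    [0.1, 0.1, 0.8],  # agent 2 mostly listens to itself: an "opinion leader"
]
beliefs = [0.9, 0.5, 0.1]  # each agent's initial independent signal

for _ in range(100):
    beliefs = [sum(W[i][j] * beliefs[j] for j in range(3)) for i in range(3)]

print([round(b, 3) for b in beliefs])  # [0.391, 0.391, 0.391]
```

The group reaches consensus, but at a weighted average (about 0.391) that overweights agent 2’s initial signal of 0.1; the unweighted mean of the three signals is 0.5. When one agent’s influence does not vanish, the consensus need not track the truth.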
It is also worth examining what happens when agents’ actions are sequential and strategic, rather than simultaneous and nonstrategic as the CJT assumes. Consider a simplified version of Galton’s guessing game, in which the cow can have two weights (“high” and “low”), and agents receive idiosyncratic private signals about whether the cow has a high or a low weight. Suppose that agents now reveal their guesses sequentially rather than simultaneously. When an agent’s turn to guess arrives, she has access to the history of past guesses and will make the best guess she can, given her private information signal and the observed sequence of guesses by others. Versions of this setup have been studied by Bikhchandani, Hirshleifer, and Welch (1992) and Banerjee (1992), who show that a strong form of path dependence occurs in this situation. Fully rational agents find it optimal to neglect their private signals and simply copy the guesses of those who have gone before them, an effect known as herding.5 Once again, relaxing the assumptions of the default model of social belief aggregation leads us to reject its optimistic findings.
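A simulation of this sequential guessing game shows how quickly herds form. The signal precision of 0.7 is an illustrative assumption. Each agent guesses the state her information makes most likely; once the observed lead for one answer reaches two guesses, that lead outweighs any single private signal, so all subsequent agents rationally copy the majority.

```python
import random

random.seed(3)
P = 0.7  # probability a private signal matches the true state (assumed)

def run_sequence(n_agents, true_state=1):
    guesses = []
    for _ in range(n_agents):
        signal = true_state if random.random() < P else 1 - true_state
        lead = guesses.count(1) - guesses.count(0)
        if lead >= 2:
            guesses.append(1)       # cascade: history swamps the private signal
        elif lead <= -2:
            guesses.append(0)
        else:
            guesses.append(signal)  # history roughly balanced: follow own signal
    return guesses

runs = 10_000
wrong_herds = sum(run_sequence(30)[-1] != 1 for _ in range(runs))
print(f"runs ending in a herd on the wrong answer: {wrong_herds / runs:.1%}")
```

Even though every signal is individually informative, roughly 15 percent of runs under these assumptions lock into the wrong answer: a couple of early misleading signals start a cascade, and everyone afterwards ignores their own information.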
Behavioral effects in social learning
Behavioral evidence highlights further biases due to group interactions. Eyster and Rabin (2010) study a behavioral variation on the herding phenomenon. In their model, agents consider the fact that their predecessors’ choices reflect their private information, but ignore the fact that these choices are themselves based on inferences from the actions of even earlier agents. This results in “naive herding,” whereby agents can become extremely confident in incorrect beliefs, to the point where they would have been better off if they had not observed anyone else’s choices.
A related effect, known as group polarization, also shows that imperfect inferences about the actions of others can distort social beliefs. Schkade, Sunstein, and Hastie (2007) demonstrate this phenomenon in an experimental study of two groups of subjects from Colorado, one group from Boulder (a liberal town) and the other from Colorado Springs (a conservative town). The subjects were asked to discuss contentious political issues, including climate change, affirmative action, and same-sex partnerships. The study found that the groups’ beliefs about these issues became more extreme after deliberation (i.e., liberals became more liberal and conservatives became more conservative). Glaeser and Sunstein (2009) emphasize that confidence and polarization increase even when the deliberation process results in very little new information. The authors suggest that people are “credulous Bayesians,” treating the contribution of each member of the group as if it were a truthful revelation of an independent private signal. Thus people insufficiently adjust for the fact that information sources in the group are dependent, that the group may not be a representative sample of the population, and that people may strategically manipulate the messages they send to the group. This phenomenon is also known as correlation neglect (Ortoleva and Snowberg 2015).
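The effect of correlation neglect is easy to quantify. In the sketch below, ten group members all repeat a single shared signal; a credulous Bayesian who treats the ten reports as independent multiplies the likelihood ratio ten times over and becomes far too confident. The signal precision of 0.6 is an illustrative assumption.

```python
P = 0.6      # probability a single signal matches the truth (assumed)
PRIOR = 0.5  # agnostic prior

def posterior(n_signals_treated_as_independent):
    # Posterior after n supportive signals, each treated as independent.
    likelihood_ratio = (P / (1 - P)) ** n_signals_treated_as_independent
    odds = (PRIOR / (1 - PRIOR)) * likelihood_ratio
    return odds / (1 + odds)

print(round(posterior(1), 2))   # 0.60: correct, one shared signal
print(round(posterior(10), 2))  # 0.98: credulous, same signal counted ten times
```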
Thus far we have examined how individuals and groups respond to informational inputs. Here we consider the supply side of the information market—the media—and ask whether it is likely to provide an accurate picture of environmental risks. There can be no doubt that the media have a big impact on both people’s beliefs and the political process.6 However, the media are subject to economic incentives, competitive pressures, and norms of best practice, all of which influence which information they report and how they report it. It is thus important to understand how these factors might affect the quality of reporting about environmental risks.
Supply-driven reporting biases
Economists have studied how supply-driven biases in reporting may arise in models of media capture by governments or special interests (Besley and Prat 2006) and in models of the economic incentives and ideological preferences of journalists (Baron 2006). Both of these effects can lead to persistent biases in coverage. However, competition in media markets can help to alleviate these problems (Gentzkow and Shapiro 2006). Competition between media firms makes it more difficult for vested interests to suppress stories (they would need to “buy off” many firms simultaneously), and gives firms incentives to invest in news quality, thus increasing the costs of bias. On the other hand, Cagé (2014) shows that an increase in the number of newspapers could actually decrease the quality and quantity of news provided. She suggests that the more competitive the news market, the greater is the media’s incentive to entertain rather than inform.
In addition to these purely economic motivations, the norm of journalistic balance may lead to slanted coverage of environmental issues. For example, Boykoff and Boykoff (2004) argue that the adoption of this norm in the U.S. print media from 1988 to 2002 led to a misrepresentation of the scientific consensus on climate change, which in turn contributed to the disconnect between popular opinion and the state of scientific knowledge.
The demand side of the media market may induce its own distortions. For example, confirmation bias—the fact that people prefer to receive information that confirms their prior beliefs— has been observed directly in news markets (Gentzkow and Shapiro 2010). The implications of confirmation bias for media markets have been studied by Mullainathan and Shleifer (2005). They show that if consumers prefer to hear news that confirms their prior beliefs, and have diverse beliefs about a given topic, competitive media firms will slant their coverage towards extreme positions. More optimistically, if the number of market participants in their model is very large, it is possible that even though individual outlets are biased, an individual who reads all sources may nevertheless be able to piece together accurate information. Given that most people consume a small sample of media, however, it seems unlikely that individuals will receive accurate information about complex environmental problems.
Gentzkow and Shapiro (2006) consider a related model of demand-driven media bias and show that even rational Bayesian consumers will believe that information that confirms their prior beliefs is of high quality. Thus the media have incentives to pander to the beliefs of consumers in order to demonstrate their quality. Gentzkow and Shapiro (2006) also show that media bias can be ameliorated if it is possible to explicitly verify a story after the fact. They emphasize, however, that this is much more likely to be feasible for short-run events (sports outcomes, weather forecasts) than complex long-run issues.
This discussion suggests that although competitive media markets may provide checks on bias for some issues, it is unlikely that these checks will be effective at ruling out informational distortions for complex environmental problems. Since the media play such an important part in deciding who gets elected, and which policies governments are likely to implement, this is a telling finding.
Beliefs and Politics
The previous section highlighted the facts that (1) individuals are unlikely to process the information they receive in a Bayesian manner, (2) group interactions can reinforce individual biases, and (3) the most important source of information on environmental problems—the media—is subject to bias. These biases only matter if they translate into inadequate policy choices. In order to understand how this might occur, we need to understand how public decision-making is affected by the distribution of beliefs in society.
Beliefs influence policy through many channels in modern democracies. First, we investigate how political parties’ electoral platforms might relate to the distribution of voters’ beliefs.7 Elections are, however, only one element of democratic decision-making. We usually delegate decision-making power to our elected representatives, and their beliefs and political incentives will have a major impact on which policies are implemented. We examine these incentives next, focusing on whether policy choices by parliaments and congresses are likely to aggregate information efficiently, and on governments’ incentives to distort policy choices because of the heterogeneity in voters’ beliefs. Finally, we examine the supply side of the information market in the political process—experts and lobbies. Politicians are influenced by these persuasive actors, and we ask whether competition between opposing viewpoints is likely to result in unbiased information being provided to policymakers.
Belief Aggregation in Elections
One of the advantages of democratic systems of government over more autocratic alternatives is that they provide a mechanism for bringing information that is dispersed across the population into the political process. The argument goes that since the public knows more about how a policy will affect them than distant government officials, centralized decision-making is informationally inefficient (Hayek 1945). Thus people’s votes provide valuable information about their preferences over public policies, and elections provide a means for collecting that information. But how exactly are people’s beliefs represented by electoral outcomes?
The median voter theorem
The median voter theorem (Black 1948; Downs 1957) provides a “base-case” model of how preferences and beliefs might be aggregated in elections. Suppose that beliefs about the severity of an environmental problem can be mapped into preferences over a one-dimensional policy variable (e.g., the level of a carbon tax), that people’s preferences over this variable are “single peaked,”8 and that two office-seeking parties compete to win a majoritarian election over which policy to implement. The theorem states that the unique equilibrium of this electoral game is for both parties to announce that they will implement the preferred policy of the median voter.9 In the case of a carbon tax, this means that both candidates would propose a policy for which exactly half the population would prefer a higher tax and exactly half the population would prefer a lower tax.
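The theorem’s logic can be checked directly for a small electorate. The ideal carbon-tax levels below are illustrative; with single-peaked (here, distance-based) preferences, the median ideal point defeats every rival platform in a pairwise majority vote, which is why both office-seeking parties converge to it.

```python
ideal_taxes = [0, 5, 10, 20, 25, 40, 60]  # each voter's ideal tax, $/ton (assumed)
median = sorted(ideal_taxes)[len(ideal_taxes) // 2]  # 20

def votes_for(a, b):
    # Single-peaked preferences: each voter strictly prefers the closer platform.
    return sum(abs(a - t) < abs(b - t) for t in ideal_taxes)

# The median platform beats any alternative in a pairwise majority vote.
print(median, all(votes_for(median, rival) > len(ideal_taxes) // 2
                  for rival in range(61) if rival != median))  # 20 True
```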
Let us take the median voter result at face value for the moment and ask what it means for the belief aggregation properties of elections. The result provides an optimistic view of the ability of elections to balance out opposing extreme viewpoints. As long as there are equal numbers of people with opposing biases (some overestimating and some underestimating environmental risks), their beliefs will have no effect on the electoral outcome. If, however, there is a tendency for biases to be asymmetric—for example, for more people to underestimate risks than to overestimate them—the median voter’s preferences will also be biased. Thus the median voter theorem provides only a partial antidote to voter bias—we need biases to be symmetrically distributed if they are to cancel out. This symmetry assumption is strong. Voluminous behavioral evidence, some of which we have discussed here, suggests that people are prone to common biases in assessing and understanding environmental risks (Margolis 1996; Weber and Stern 2011).
Belief biases and electoral outcomes
In the standard setup of the median voter result, voters’ preferences do not depend explicitly on their beliefs about some uncertain “state of the world,” for example, whether the climate is changing or not. This means that we cannot say whether the heterogeneity in their preferences derives from differing tastes (e.g., political ideologies), differing beliefs, or both. In order to separate tastes from beliefs, we need to extend voters’ preferences to depend explicitly on the “state of the world.” For example, a voter might prefer a higher carbon tax if climate change is a serious problem, and a lower tax if it is not. In such models, voters’ beliefs about the state of the world are explicit determinants of parties’ electoral incentives and can be separated from their political ideologies.
Levy and Razin (2014) use this type of model to analyze how biases in voters’ information processing may influence information aggregation in elections. Surprisingly, they find that elections can aggregate information more effectively, and lead to better policy choices, when voters are subject to correlation neglect10 than when they are strict Bayesians. This is because biased voters overreact to information, causing their policy preferences to be more dependent on their information than on their ideological preferences. This can be beneficial to society if voters’ ideological preferences are sufficiently at odds with optimal policy choices. Ashworth and Bueno de Mesquita (2014) also emphasize that in order to understand how behavioral biases affect democratic outcomes, we must understand how these biases affect the behavior of strategic political parties.
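To see why correlation neglect produces overreaction, consider a toy Bayesian calculation (a stylized illustration, not Levy and Razin's model): a voter who treats several copies of the same news report as independent pieces of evidence updates far more strongly than the single underlying signal warrants.

```python
# A toy illustration of correlation neglect: n identical, perfectly correlated
# reports are mistakenly treated as n independent signals. All numbers are
# illustrative assumptions.
def posterior(prior, accuracy, n_signals):
    """Bayesian posterior that the risk is real, after n_signals favorable
    reports, each treated as independently correct with the given accuracy."""
    num = prior * accuracy ** n_signals
    den = num + (1 - prior) * (1 - accuracy) ** n_signals
    return num / den

prior, accuracy = 0.5, 0.7
correct = posterior(prior, accuracy, 1)  # one underlying signal
naive = posterior(prior, accuracy, 5)    # five correlated copies, counted as five
print(correct, naive)
```

The correct posterior stays at the signal's accuracy (0.7), while the naive voter becomes nearly certain; in Levy and Razin's setting, this excess responsiveness to information is precisely what can dominate ideological motives.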
Signaling in elections
Political parties often possess their own information about the benefits of policies. Strategic political parties know that voters might be able to infer the party’s private information from the election platform it chooses—the party’s electoral platform signals its private information. If voters are able to infer parties’ information from their platforms, this will cause them to update their beliefs and will thus also change their policy preferences. Anticipating that this will occur changes the electoral incentives of strategic parties, as they know that any platform they choose will convey information to the voter, thus changing the nature of the electoral contest.
A growing literature examines such electoral signaling games. For example, Heidhues and Lagerlöf (2003) show that office-motivated parties will not propose policies that reflect their private information; rather, they choose platforms that pander to the electorate’s beliefs. Kartik et al. (2015) generalize and refine these results, showing that there is an equilibrium in which parties choose platforms that are more extreme than justified by their private information; that is, they anti-pander.11 Moreover, they show that in any equilibrium of the signaling game between voters and parties, voters are no better off than if they had ignored the information of one of the parties. This suggests that office-seeking behavior by political parties that have private information can result in a substantial loss of information in equilibrium.
Strategy and Information in Legislatures and Governments
The findings we have just discussed present a mixed picture of the ability of elections to aggregate voters’ or parties’ information about which policies will “work.” In the real world, however, people rarely vote directly on policies (except in occasional referenda and ballot propositions). Rather, they vote for representatives to a legislative body who then decide on policies on their behalf. Representatives who wish to be reelected will be constrained by their constituents’ beliefs, but they also have their own political objectives and may act strategically in order to further these goals. In the discussion that follows, we investigate how political outcomes might be affected by such behaviors.
Strategic voting in legislatures
First, let us consider a very simple model of a legislative assembly voting on an environmental policy initiative. Each member has her own beliefs about whether the policy is harmful or beneficial and they simply vote “yes” or “no” on the policy. For the sake of simplicity, let us assume (optimistically) that all members have the same beneficent objectives, that is, they will vote “yes” if the policy is beneficial, and “no” if it is harmful, but they differ in their beliefs about the policy’s consequences. Will the legislature make the right choice about whether to implement the policy?
We have already discussed one approach, the CJT, which suggests that groups of individuals with heterogeneous beliefs will make more accurate choices on average than any single individual. We noted, however, that strategic behavior could alter the conclusions of the CJT if votes are cast sequentially rather than simultaneously. This issue may be of less concern for votes in parliaments or congresses, which are nearly simultaneous. Nevertheless, strategic behavior can strongly influence electoral outcomes even in simultaneous voting contexts. Austen-Smith and Banks (1996) show that even if everyone has the same objectives, truthful revelation of private information by all voters in a majority rule contest is not necessarily an equilibrium outcome. They go so far as to show that accounting for strategic behavior can in fact lead to group decisions being less accurate on average than simply allowing a single individual to decide, thus overturning the CJT results.12
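The sincere-voting benchmark of the CJT is easy to verify numerically. The sketch below assumes independent voters with a common competence p (the value chosen here is arbitrary); it is this benchmark, not the strategic analysis of Austen-Smith and Banks, that the code reproduces:

```python
# A minimal numerical check of the Condorcet Jury Theorem's sincere-voting
# baseline: if each of n voters is independently correct with probability
# p > 1/2, the majority is more accurate than any individual, and accuracy
# grows with n.
from math import comb

def majority_accuracy(n, p):
    """Probability that a majority of n independent voters is correct (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p = 0.6  # assumed individual competence
print(majority_accuracy(3, p))
print(majority_accuracy(101, p))
```

Even a three-member group beats a lone decision maker (0.648 versus 0.6), and a large assembly is almost always right; the strategic results discussed above show how this guarantee can fail once voters condition on being pivotal.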
Another feature of information aggregation in legislative bodies is the ability of representatives to abstain from voting on policy initiatives. The option to simply remove oneself from the policy decision clearly has consequences for information aggregation. Feddersen and Pesendorfer (1996) show that when representatives are informed to different degrees, those who believe themselves to be less informed than others will strategically abstain from voting on policy measures. The intuition for this is that strategic representatives realize that their vote will only matter if it is “pivotal,” that is, it tips the vote one way or the other. This means that a representative who believes herself to be relatively uninformed will prefer to abstain and let the vote be determined by her more informed colleagues than to have a tie broken by her own poorly informed vote; this is known as the swing voter’s curse.13 Although excluding less informed opinions may sound like a good thing, this is not necessarily the case. Representatives with well-thought-out beliefs, who nevertheless are more uncertain about policy consequences than their more confident peers, may choose to abstain. Strategic abstention may thus moderate the effectiveness of voting as an information aggregation mechanism.
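A small numerical example (with assumed competence levels, not drawn from Feddersen and Pesendorfer) illustrates why abstention by the uninformed can raise group accuracy:

```python
# A back-of-the-envelope illustration of the swing voter's curse, under
# assumed numbers: 3 informed voters (each correct with probability 0.8)
# and 2 uninformed voters (no better than a coin flip).
from math import comb

def binom_pmf(n, k, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

p_informed = 0.8  # assumed competence of informed voters

# Uninformed abstain: outcome is the majority of the 3 informed votes.
abstain = sum(binom_pmf(3, k, p_informed) for k in (2, 3))

# Uninformed flip coins: outcome is the majority of all 5 votes.
vote = sum(
    binom_pmf(3, k, p_informed) * binom_pmf(2, u, 0.5)
    for k in range(4) for u in range(3)
    if k + u >= 3
)
print(abstain, vote)
```

In this example the group is right 89.6 percent of the time when the uninformed abstain, but only 82.4 percent of the time when they vote at random, so each uninformed voter strictly prefers to abstain.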
This discussion shows that even if we assume that elected representatives act for the common good, strategic behavior can disturb the information aggregation properties of the CJT. This conclusion is not based solely on theoretical models. The empirical literature suggests that people do indeed act strategically when voting (Guarnaschelli et al. 2000; Battaglini et al. 2010).
Strategic policy selection by political parties
Thus far we have assumed that the policies voted on by legislatures are exogenously given. However, policy proposals are generally endogenous outcomes of the political process and are thus also subject to strategic effects and informational distortions. This is especially true of “long-run” policy issues, such as global environmental problems. Long-run policymaking in democracies requires incumbents to deal with a time-inconsistency problem: a party with different beliefs or values may replace them in the future. This lack of control over future policy choices creates a strategic incentive for current governments to choose policies that influence both who gets elected in the future and the policies future governments will implement.
These strategic policy manipulation effects have traditionally been studied by assuming that different parties have common beliefs, but heterogeneous objectives (e.g., Persson and Svensson 1989). However, heterogeneous beliefs also give rise to strategic incentives for policy manipulation. Millner, Ollivier, and Simon (2014) demonstrate that when parties have good faith disagreements about the consequences of “long-run” policies, incumbents have an incentive to “overexperiment”—that is, to do more than they would like in order to reduce uncertainty about policy consequences in the future. The intuition behind this is simple: experimentation reveals common information about the benefits of alternative policies, which brings opposing parties’ beliefs about which policies are likely to be successful closer together. Incumbents prefer to face opponents with beliefs closer to their own in future political contests, as this reduces the time-inconsistency problem. They thus have an incentive to use current policy choices to reduce the disagreements between parties, hence they overexperiment. To illustrate this mechanism, consider the case of fracking, which provides short-run economic benefits, but uncertain long-run environmental costs. These could arise due to groundwater contamination from the chemicals used in the fracking process. These costs depend on the chemical mix in the fracturing fluid and the geology of the site, and are difficult to predict ex ante. The only sure way to resolve uncertainty about costs is to observe them ex post. The strategic overexperimentation effect predicts that if parties disagree on the likely magnitude of these costs, even well-intentioned incumbents will have an incentive to regulate fracking less stringently than they would prefer. Having less stringent regulation will reveal more information about costs, thus allowing incumbents to avoid future political contests against opponents whose beliefs about the costs of fracking are very different from their own.14
Persuasion: Experts and Lobbies
Where do policymakers get their information? Although communities of experts such as the National Academy of Sciences or the Intergovernmental Panel on Climate Change provide the scientific background for policy debates on environmental issues, politicians are also strongly influenced by lobby groups. Unfortunately, the boundaries between “experts” and “lobbies” can be blurred, with some politicians viewing prominent scientific groups as “lobbies” and scientists hired by organized commercial interests as “experts” (Oreskes and Conway 2012).
The difficulty for the politician is that she knows that everyone wants something from her. Given this, how can she believe that the information being fed to her is not distorted to serve the provider’s interests? A large literature in economics addresses precisely this kind of strategic information transmission problem. Crawford and Sobel (1982) introduced the classic model of “cheap talk,” in which a more informed sender costlessly transmits a signal to a less informed receiver, who then takes an action that affects both parties’ (nonidentical) payoffs. Although the receiver knows that the sender aims to manipulate her actions, some information can still be revealed in equilibrium. Precisely how much information is revealed depends on the alignment between the sender’s and the receiver’s objectives—the more closely they are aligned, the more information is revealed. This classic result suggests that we need not be entirely pessimistic about the possibility of information transmission between strategic parties with different objectives, provided those objectives are not too different. More optimistically, Krishna and Morgan (2001) show that when the decision maker can sequentially consult multiple informed experts with opposing objectives, she may be able to extract all their information. Battaglini (2002) produces a similar result when policies are multidimensional.15
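The comparative static in Crawford and Sobel's model can be illustrated with their well-known uniform-quadratic example: the state is uniform on [0, 1], the sender's preferred action exceeds the receiver's by a bias b, and equilibrium communication partitions the state space into at most N(b) intervals. The partition formulas below follow that example; the specific bias values are arbitrary.

```python
# Crawford-Sobel uniform-quadratic cheap talk: the most informative
# equilibrium partitions [0, 1] into N(b) cells. Smaller bias b means a
# finer partition, i.e., more information revealed in equilibrium.
from math import ceil, sqrt

def max_intervals(b):
    """Largest number of partition cells in an equilibrium with bias b > 0."""
    return ceil(-0.5 + 0.5 * sqrt(1 + 2 / b))

def boundaries(b):
    """Cell boundaries a_0 < ... < a_N of the most informative equilibrium."""
    n = max_intervals(b)
    return [i / n + 2 * b * i * (i - n) for i in range(n + 1)]

print(max_intervals(0.25))  # large bias: only the uninformative equilibrium
print(max_intervals(0.05))  # smaller bias: a three-cell partition
print([round(a, 3) for a in boundaries(0.05)])
```

With a large bias only babbling survives, while a small bias supports several distinct messages; this is the formal sense in which aligned objectives permit more information transmission.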
Competing special interests
These results assume that experts have reliable information that the planner would actually find useful. Unfortunately, some “experts” may have more mercenary motives, adapting their policy message to the interests of their employer. Alternatively, they may have fringe views based on dubious science but which policymakers are unable to distinguish from scientifically sound opinions. Shapiro (2014) examines these effects in a model in which competing special interest groups seek to influence opinion through the media. Special interests can create the illusion of scientific disagreement where none exists by hiring contrarian “experts.” Their incentive to do so is greater when the scientific consensus is strong and the economic stakes are high. This means that the public and policymakers may remain uninformed, despite growing scientific agreement about a policy issue. Empirical evidence suggests this has occurred in the case of climate change. Shapiro (2014) shows that people who consume more news are not more likely to believe that there is “solid evidence that the earth is getting warmer,” but are substantially more likely to be informed about other current affairs issues. This suggests that special interests have successfully planted doubt in the public’s mind, although little exists in the scientific community. These effects are stronger in countries and states where journalists believe that objectivity means “expressing fairly the position of each side in a political dispute,” regardless of the veracity of different viewpoints.
The legal scholar Cass Sunstein has observed that “the government currently allocates its limited resources poorly, and it does so partly because it is responsive to ordinary judgments about the magnitude of risks” (Sunstein 2000). There are two components to this claim. First, that the public’s beliefs about risks are likely to be inaccurate and, second, that these inaccuracies filter through the political system to affect government decision-making. While both of these components may seem uncontroversial, neither of them is self-evident. Perhaps social interactions can ameliorate individual biases, and perhaps political institutions can aggregate opposing viewpoints, resulting in policy choices that reflect a balanced consensus?
Our survey of the literature suggests that behavioral biases, social interactions, and the influence of the media are indeed likely to lead both individuals and groups to misestimate environmental risks. In addition, although democratic institutions can provide some checks on the influence of biased beliefs, they can also introduce their own informational distortions into the policy process. Biases in risk perception and informational distortions to the policy process are issues that are relevant to many policy issues, but they are especially important for global environmental problems. We have argued that these problems have a peculiar set of characteristics that make them unlikely to be accurately assessed by nonspecialists, perhaps including policymakers themselves.
While we have sketched several mechanisms that may distort the informational inputs to environmental policymaking, many open questions remain. What determines whether we are likely to under- or overreact to a given environmental threat? Are some governance systems more likely than others to adopt policies informed by the best available scientific information? Which behavioral biases are the most important determinants of misperception of environmental problems? How important are belief distortions as an explanation of suboptimal regulation relative to more familiar explanatory factors such as international free-riding problems and domestic special interest politics?
Finally, while we have argued that understanding the factors that shape beliefs is a central task of the positive theory of environmental regulation, what we are ultimately interested in is what can be done to guard against the influence of both flawed “ordinary judgments” and willful misinformation. Ensuring that policymakers and the public have access to a basic scientific education is a necessary first step towards a more measured response to risks, but it is unlikely to be sufficient. Kahan et al. (2012) show that a high level of numeracy is no guarantee that people’s assessments of environmental risks will be closer to the scientific consensus.
A more effective approach will likely require reforms in the way the media and the government engage with scientific information. We would argue that the journalistic norm of equal treatment of opposing viewpoints has done considerable damage to the public’s understanding of complex issues such as climate change. Fair reporting does not mean allotting an equal number of column inches to every viewpoint, but rather a careful assessment of the merits of different arguments. It is unclear, however, how this norm can be changed, except by the media itself. It is also notable that while central banks have apolitical influence over monetary policy, and social security programs help to counteract savers’ self-control problems, there are currently no comparably powerful institutions to ensure that environmental policies are informed by the best available scientific knowledge and that behavioral biases do not overly influence policy choices.16 It remains to be seen whether institutions such as the National Academy of Sciences and the Royal Society will be able to play a more formal role in distilling scientific consensus and providing the informational basis for environmental policy decisions.
Beliefs, Politics, and Environmental Policy