Courage to Ask: A Resolution to Counteract Hyperpartisanship

A long-form essay exploring an unexpected answer to the question of how to counteract hyperpartisanship
essay
Author

Zach Duey

Published

January 3, 2021

I originally published this essay on Medium. While the framing (2021 resolutions) is now outdated, the issue remains, as do the original conclusions.

Introduction

There is a problem with the state of political discourse in the United States. While intra-party discourse can and should be improved, what has me more concerned, what keeps me awake at night, and what should trouble us all is how the current level of political polarization has all but destroyed inter-party discourse. From the family dinner table to Washington, DC, productive political dialogue is rare, basic factual statements are not recognized as such, and, even in the face of compelling evidence, we are generally unwilling to change our minds. While I recognize that we are faced with many important issues today — from COVID-19 to climate change to racial and economic injustice — I am particularly concerned about hyperpartisanship because I believe it is a foundational issue that precludes effective solutions to other critically important problems.

I expect that not everyone shares my view that hyperpartisanship is a foundational problem; however, I hope we can agree that it is a problem worthy of our attention. If we start on this common ground, then we can collectively address both hyperpartisanship and the many other challenges we face. I stress the collective aspect because I believe that hyperpartisanship neither originated with nor can be solved by a single group. Instead, I contend that political polarization arose organically from the information overload we experience as part of the modern world — a world defined by nearly instantaneous communication and an endless barrage of information. While I recognize that systemic forces also play an important role, fully addressing them is beyond the scope of this essay. Instead, I want to focus on what we, as individuals, can do to combat hyperpartisanship.1

My proposed solution involves three steps. First, each of us must alleviate information overload by judiciously but dramatically restricting the information we access. Only then will we be in a position to effectively process new information. The second step is to engage in honest self-reflection about how we form opinions and then take corrective action to improve that process. These two changes will help us to take the third step, which is to approach others with empathy. In the sections that follow, I will explain why these three steps are necessary and how they can help us to move forward.

In the first section below, I explain how information overload has led to a self-reinforcing cycle of political polarization, culminating in the widespread hyperpartisanship we experience today. I suggest that this cycle exists because we often fail to think critically about the information we consume. Thus, we must define what it means to think critically. Because critical thinking is a nebulous term, I spend the majority of this essay exploring a framework taken from Bayesian statistics that guides us in this effort. Before I lose you, let me be clear that I am not claiming that statistics, in itself, is the solution. Instead, if we take an abstract, perhaps philosophical view of the principles underlying Bayesian statistics, we can repurpose them to develop a model of how we form opinions. This framework provides important guideposts for becoming better critical thinkers and, just as importantly, illuminates the multitude of ways in which individuals who are thinking critically can arrive at different conclusions. With a clear understanding of what we mean by critical thinking, in the final section I propose a plan that seeks to counteract political polarization by addressing its underlying causes.

I recognize that this essay is rather lengthy, so if you only have a few minutes, I encourage you to skip to the final section. There, I give concrete recommendations about what you can do to help counteract hyperpartisanship. In order to effect the kind of change that is required to make a dent in this issue, I need a critical mass of people willing to join me in this effort. In this time of reflection and planning for the new year, I challenge you to join me in adding “The Countdown” to your 2021 resolutions.

Information Overload and Hyperpartisanship

The full history of how hyperpartisanship developed in the United States is well beyond the scope of this essay. Yet, if we can identify at least one significant contributing factor, then we can start to address the problem by focusing our attention there. In this section, we will see how information overload causes all of us, to varying degrees, to rely on time-saving heuristics that perpetuate political polarization.

In order to have well-grounded opinions about a topic, we should aim to incorporate all relevant information. However, this ideal is unattainable because we have limited time to acquire and critically assess that information. To critically evaluate information from any given source, we must have access to it, have the expertise to evaluate it, and believe that the source is trustworthy. Since most people in the United States have nearly instantaneous access to historically unprecedented amounts of information, “access” more precisely refers to what information we choose to access in our limited time. Because there are so many news sources, we all have our favorites; yet, these preferences can become problematic, given how news sources present the same information in different ways.2 In order to critically evaluate information, we must also have some amount of background knowledge, or expertise. For example, the vast majority of us are not climate scientists, and therefore we cannot fully and independently evaluate the existing data about climate change and come to our own conclusions. As a result, we are forced either to develop that expertise or to trust the evaluations provided by the news sources that we consume. Intuitively, our level of trust in a given source should correlate with the quality of the source’s evaluations. However, in order to calibrate that trust, we need another source that is universally trusted to use as a benchmark, thereby leading us into a vicious cycle of seeking out new trusted sources to establish an appropriate level of trust in each preceding source. Alternatively, we can assess a source’s trustworthiness by looking at how that source presents information that we do have the expertise to independently evaluate. This approach effectively turns each of us into our own “universally-trusted” source, which leads to two problems. First, we do not always evaluate information correctly; second, we are rarely willing to accept others as universally-trusted sources, so it is unreasonable to assume that we will be accepted as such.

The components required to critically assess new information — access, expertise, and trust — are affected by the amount of time we have available. Because time is a limited resource, it also limits our ability to think critically about important issues; the more time-constrained we are, the further we move from the ideal of incorporating all available information. While time has always been a factor, it has become a limiting factor as the amount of available information has exploded. In response to this information overload, we are forced to resort to time-saving heuristics. In the political realm, we often adopt the view held by the political party that most closely aligns with our beliefs. We also selectively consume information from sources that reinforce our beliefs. Any time we engage one of these heuristics, we fail to think critically because we are effectively ignoring new information. As a result, we become dangerously overconfident in our views, incorrectly assuming that we have critically considered the information that informs them.

Information overload alone is insufficient to cause hyperpartisanship. Yet when combined with a small number of dominant political ideologies and biased sources of information, it leads to political polarization and ultimately hyperpartisanship. In the United States, we have exactly this set of conditions. The two-party system gives us a limited number of divergent political ideologies; both parties must differentiate themselves from one another in order to expand their respective bases. In addition, the majority of our news sources lean towards one or the other of these political ideologies. As time-constrained individuals, we all use one or both of the time-saving heuristics mentioned earlier at least some of the time, causing us to drift towards one of these two ideological extremes. This ideological drift underlies political polarization and, because it is rooted in information overload, is a self-sustaining process so long as information overload remains a problem.

The solution to hyperpartisanship hinges on being able to break out of this vicious cycle. As individuals, we must find an alternative “heuristic” that alleviates information overload but does not cause us to ignore information. One approach is to restrict the number of topics towards which we direct our attention. Even though we still have a limited amount of time, we can at least incorporate the information related to this restricted set of topics. In the final section of this essay, I provide some additional details about a resolution based on this approach. However, we must first address the glaring problem with this analysis; I have been discussing the importance of “critically evaluating” information but have not provided any details about how to do so. We will arrive at these details in an unexpected way, by delving into Bayesian statistics.

Bayesian Statistics

In order to understand how I arrived at my proposed plan for counteracting hyperpartisanship and why I think it is a viable solution, we must first review the key components of Bayesian statistics.3 I suspect that most of us have encountered word problems like this one: “There is a bag with 1 blue marble and 3 white marbles. Jill reaches into the bag and pulls out 1 marble. What is the probability that Jill selects a blue marble?” The problem tells us the number of marbles of each color in the bag, so the probability that Jill selects a blue marble is 1/4 or 25%. Now consider a different scenario in which we are told that the bag contains 4 marbles that are either blue or white. In this scenario, we are tasked with determining the number of marbles of each color in the bag, and we are allowed to do so by picking one marble from the bag at a time, observing its color, and putting it back. Without picking any marbles, we already know that the bag must contain one of the following five combinations: 4 blue, 3 blue and 1 white, 2 blue and 2 white, 1 blue and 3 white, or 4 white. Because the bag’s contents are unknown, we refer to each of these possibilities as a conjecture. In this new problem, our goal is to determine which of these conjectures is the most plausible.

In order to determine the most plausible conjecture, we need data—the first key component of Bayesian statistics. We collect data by following the procedure outlined previously: select a marble, observe its color, put it back, and repeat. To solve the problem, we count all of the ways that our observed sequence of picked marbles (our data) could have occurred if we assume, in turn, that each one of the five conjectures were true. For example, assume that we have observed two blue marbles and one white marble as our data. If we start with the conjecture that the bag contains all white marbles, then we can say that there are zero ways that this sequence could have occurred. We then repeat this exercise for the remaining four conjectures, and the conjecture with the most possibilities is the most plausible. This approach of using data to learn about something unknown is a core idea in Bayesian statistics.
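This counting procedure is easy to express in code. Below is a minimal sketch in Python (the essay itself contains no code, and the function and variable names are my own choices for illustration) that counts the ways each of the five conjectures could have produced the observed sequence.

```python
from math import prod

def ways(n_blue, sequence, n_marbles=4):
    """Count the ways `sequence` could occur (drawing with replacement)
    if the bag held `n_blue` blue marbles out of `n_marbles` total."""
    counts = {"blue": n_blue, "white": n_marbles - n_blue}
    return prod(counts[color] for color in sequence)

data = ["blue", "blue", "white"]
for n_blue in range(5):  # the five conjectures: 0 to 4 blue marbles
    print(f"{n_blue} blue, {4 - n_blue} white: {ways(n_blue, data)} ways")
# 0 blue: 0 ways; 1 blue: 3; 2 blue: 8; 3 blue: 9; 4 blue: 0
```

The conjecture with the most ways of producing this data (3 blue and 1 white, with 9) is the most plausible.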

To make things clearer, we can take this example further. While we are allowed to repeat the marble-picking process any number of times, assume that we only do it three times. Since the marbles are replaced each time and there are four of them, we know that there are 4 x 4 x 4 = 64 possible sequences. In general, if we repeat the marble-picking process n times, there are 4^n possible sequences. Here, we must introduce the second component of Bayesian statistics: the prior distribution (prior). The prior is how we encode our preexisting knowledge about how likely each possible bag configuration is before, or prior to, observing any data. We do not have any such prior information in our scenario, so we should consider each conjecture to be equally likely. This type of prior is called a flat prior, a term we will explore later. As an additional note, we can imagine a variation of this scenario in which we do have prior information. For example, if we are told that the marble bag manufacturer produces twice as many bags with equal numbers of blue and white marbles as every other kind of bag, then it no longer makes sense to treat each bag configuration as equally likely. Again, we will explore this idea later in the essay. For now, we will consider the scenario as originally stated, with no prior information.

After we encode our belief that each bag configuration is equally likely, we will update this belief based on the data. Assume again that when we were picking marbles, we selected two blue marbles, followed by a white marble. Our goal is to count how many times this particular sequence (blue, blue, white) could have occurred for each of the five possible bag configurations. Once we have this information, we can rank the possible bag configurations from least plausible to most plausible. From there, we can make informed statements about the contents of the bag.

Bag (Conjecture)              Count (Likelihood)    Posterior Probability
Blue, Blue, Blue, Blue        0                     0
Blue, Blue, Blue, White       9                     9 / 20 = 45%
Blue, Blue, White, White      8                     8 / 20 = 40%
Blue, White, White, White     3                     3 / 20 = 15%
White, White, White, White    0                     0

The first column in the table shows the five possible bag configurations. The second column contains the number of different ways that the Blue, Blue, White sequence could have occurred. These counts represent the likelihood of each bag configuration—the third key component of Bayesian statistics. After adding the counts in the second column, we see that there are a total of 20 different ways that the sequence could have occurred. The last column turns the counts from the second column into probabilities. Whereas the prior probabilities (20% each) were set before observing the data, the probabilities in the third column were calculated after, and they are therefore aptly named posterior probabilities. Even though we still do not know exactly how many marbles of each color are in the bag, we can use these posterior probabilities to make more informed statements about the contents of the bag. These posterior probabilities become our new prior, replacing the flat prior we used initially. Using this new prior, we could further reduce our uncertainty about the bag’s contents by picking more marbles and following the same procedure to calculate updated posterior probabilities. This process of selecting a prior, collecting data, combining the two through a likelihood, and computing posterior probabilities is Bayesian statistics in a nutshell. We are now in a position to explore in more detail how the abstracted view of this process, which I call the Bayesian framework, relates to critical thinking, and by extension, hyperpartisanship.
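Before moving on, here is a short sketch of the complete cycle just described (prior, likelihood, posterior, and the posterior reused as the next prior), again in Python with illustrative names of my own choosing.

```python
from math import prod

def ways(n_blue, sequence, n_marbles=4):
    counts = {"blue": n_blue, "white": n_marbles - n_blue}
    return prod(counts[c] for c in sequence)

def posterior(prior, sequence):
    """Combine a prior over the five conjectures (indexed by the number of
    blue marbles, 0 through 4) with the likelihood counts, then normalize."""
    unnormalized = [p * ways(n, sequence) for n, p in enumerate(prior)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

flat_prior = [0.2] * 5  # no prior information: every conjecture equally likely
post = posterior(flat_prior, ["blue", "blue", "white"])
print([round(p, 2) for p in post])  # [0.0, 0.15, 0.4, 0.45, 0.0]

# The posterior becomes the new prior for any future draws:
post2 = posterior(post, ["white"])  # suppose one additional draw comes up white
print([round(p, 2) for p in post2])  # [0.0, 0.26, 0.47, 0.26, 0.0]
```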

Bayesian Framework

When our opinions differ, we often assume that others are either not thinking carefully about the issue or not considering the available information. As a result, one commonly proposed solution is to improve how we teach critical thinking skills so that the next generation is better equipped to evaluate issues. I agree that this task is important, but it suffers from two limitations. First, it only applies to school-aged individuals while neglecting the rest of the population. Second, it leaves “critical thinking” undefined, and we need to agree on what it means to think critically if we want to teach this skill to others. I suggest that Bayesian statistics can help us solve these problems by leading us to a definition that applies to all individuals. In this section, I will outline the connection between Bayesian statistics and a framework for understanding how people form opinions—the Bayesian framework, or simply the framework. This framework both allows us to more accurately reflect on our own and others’ critical thinking skills and illuminates the many factors that can drive differences in opinion, even in a world in which all individuals are thinking critically.

In the previous section, we saw how the three key components of Bayesian statistics—prior, likelihood, and data—interact to form a posterior distribution, which can then be used to make informed decisions. If we define critical thinking as the process of analyzing new information in order to make a judgment, then we can already see how it relates to Bayesian statistics. Our preexisting beliefs, which are shaped by past experience and knowledge, are the prior. We update these beliefs in response to new information: our data. The likelihood is how we process this information; it is the ideology through which we combine our existing beliefs and new information. The result is an updated set of beliefs that we use to make decisions until new information arrives and the process repeats.

Prior

In Bayesian statistics, a prior is a way of encoding existing knowledge before observing data. The corresponding notion in the framework is quite similar. Our preexisting beliefs inevitably come into play when we assess new pieces of information. Importantly, these beliefs consist of not only our background knowledge but also our past experiences and biases. In our marble-picking example, we had no background knowledge about the bag’s contents and therefore selected a flat prior, which treated all bag configurations as equally likely. The underlying principle was that we wanted to select a prior that was consistent with what we knew about the bag’s contents. This same principle applies to the framework; we want to incorporate what we know about a topic when assessing new information. In Bayesian statistics, the prior is something that we explicitly choose. Similarly, we have some influence over the role that our preexisting beliefs play when assessing new information.

In Bayesian statistics, priors fall along a spectrum from non-informative to informative. A non-informative prior is loosely defined as one that has a minimal impact on posterior inferences (updated beliefs in the framework). As a result, these posterior inferences are largely driven by data and the likelihood. A flat prior is at the furthest end of the non-informative side of the spectrum. In the framework, a flat prior is analogous to having no preexisting beliefs, such that our updated beliefs are formed exclusively by new information in conjunction with our ideologies. This approach may seem reasonable when we form opinions about topics with which we are unfamiliar. However, in Bayesian statistics—as in opinion formation—completely non-informative priors can lead to posterior distributions that are not useful when making decisions. This situation arises for one of two reasons: either there is little data available, or the data that is available is highly variable. As an example of the first case, consider what would happen in the marble example if we were only allowed to select a single marble. We know that it would be blue or white, which would help to rule out either the all-white or all-blue options; other than that, we would have learned little about what the bag contains. The second situation, having highly variable data, is one that most parents have probably encountered. There is a plethora of competing parenting information available, such that reading all the available information is unlikely to result in any clear conclusions. In Bayesian statistics, one way to avoid these pitfalls is to replace the flat, non-informative prior with a more informative one.
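To make the single-draw case concrete, here is what the counting sketch from earlier produces with a flat prior and one observed blue marble (my own illustration):

```python
from math import prod

def ways(n_blue, sequence, n_marbles=4):
    counts = {"blue": n_blue, "white": n_marbles - n_blue}
    return prod(counts[c] for c in sequence)

# One blue draw under a flat prior: the counts alone determine the posterior.
counts = [ways(n, ["blue"]) for n in range(5)]  # [0, 1, 2, 3, 4]
total = sum(counts)
print([c / total for c in counts])  # [0.0, 0.1, 0.2, 0.3, 0.4]
```

The all-white bag is ruled out, but the remaining probability stays spread across the other four conjectures; with so little data and a completely non-informative prior, we have learned very little.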

A weakly informative prior plays a supporting role in influencing the posterior distribution. In the framework, this type of prior is analogous to allowing our background knowledge to have some influence on our updated beliefs. In the marble scenario, imagine that we are told that bags with equal numbers of blue and white marbles are produced more frequently than other bags. This information is useful, albeit limited. We could ignore it, but then we would be no better off than we were before we had it. A weakly informative prior in this situation is one that assigns, for example, a probability 4 percentage points higher to the 2 blue and 2 white marbles bag (24% rather than the flat 20%) and therefore a probability 1 percentage point lower to each of the remaining four options (19% each). This example exposes an important question that arises in Bayesian statistics: how informative should the prior be? The corresponding question in the framework is: how much should we allow our preexisting beliefs to influence our opinions? Before we try to answer this question directly, we will first consider what happens if we select a prior that is further on the informative end of the spectrum.

In the framework, an informative prior is one that has an outsized impact on how our opinion changes (or does not change) in the presence of new information. The analogue in Bayesian statistics is a prior that assigns a high degree of probability to a narrow range of possible values. Imagine that we assigned a 60% probability to the bag with 2 blue and 2 white marbles and a 10% probability to the remaining four options. Although the data (blue, blue, white) remains the same, the resulting posterior probabilities here will be much closer to our prior probabilities than they were when we used a flat prior. In this situation, we assigned the prior probabilities somewhat arbitrarily based on vague information from the manufacturer. However, this informative prior would be reasonable if the manufacturer had said, “More than half of all bags produced have equal numbers of blue and white marbles,” rather than that these types of bags are produced “more frequently.” Regardless of the phrasing, it would take an incredible amount of marble-picking to diverge from these prior beliefs. Likewise, in the framework, if we have strong preexisting beliefs, it will take an incredible amount of new information to change our minds. There is nothing inherently wrong with allowing our preexisting beliefs to influence our updated beliefs, but it is necessary for us to accurately assess the degree to which these beliefs are informed by our past experiences and biases rather than our prior knowledge.
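A sketch comparing the three priors discussed above (flat, weakly informative, and informative), all updated with the same blue, blue, white data; the specific prior probabilities follow the examples in the text:

```python
from math import prod

def ways(n_blue, sequence, n_marbles=4):
    counts = {"blue": n_blue, "white": n_marbles - n_blue}
    return prod(counts[c] for c in sequence)

def posterior(prior, sequence):
    unnormalized = [p * ways(n, sequence) for n, p in enumerate(prior)]
    total = sum(unnormalized)
    return [round(u / total, 2) for u in unnormalized]

data = ["blue", "blue", "white"]
flat = [0.20, 0.20, 0.20, 0.20, 0.20]
weak = [0.19, 0.19, 0.24, 0.19, 0.19]    # slight bump for the 2 blue, 2 white bag
strong = [0.10, 0.10, 0.60, 0.10, 0.10]  # heavy weight on the 2 blue, 2 white bag

for name, prior in [("flat", flat), ("weak", weak), ("strong", strong)]:
    print(name, posterior(prior, data))
# flat   [0.0, 0.15, 0.4, 0.45, 0.0]
# weak   [0.0, 0.14, 0.46, 0.41, 0.0]
# strong [0.0, 0.05, 0.8, 0.15, 0.0]
```

With the informative prior, the posterior stays anchored near the prior's 60% on the 2 blue and 2 white bag, even though the data, on its own, favors the 3 blue and 1 white bag.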

There can be more than one valid way to choose a prior in both Bayesian statistics and the framework. All priors fall somewhere along the non-informative/informative spectrum and there are many reasonable options. As a general principle, we can say that the less arbitrary the prior, the better. Practically speaking, applying this principle in the framework means taking the time to first understand where along the spectrum our preexisting beliefs fall and what is driving them. Are we so open to new information that we ignore what we already know? Or conversely, do we hold our beliefs so strongly that no amount of new information can change our minds? If we answer “yes” to the second question, then we must dig deeper to understand where these beliefs originated. In going through this introspective exercise, it will likely become clear how difficult it is to consistently avoid both of these extremes. Armed with this self-knowledge, we should feel a stronger sense of empathy for others who are undoubtedly struggling to strike the same balance. If we are committed to fixing the problem of hyperpartisanship, we must all be willing to do the hard work of both practicing self-reflection to understand and correct our own priors and approaching others with empathy.

Likelihood

In Bayesian statistics, the likelihood is a function that tells us how likely we are to observe some data given a particular set of values for the unknown parameters. In other words, it is the mechanism through which data influences the posterior distribution. In the marble-picking example, the unknown parameter is the bag’s contents and the likelihood is the count of the number of ways the observed data could have occurred if a particular conjecture about the bag’s contents was true. The analogue in the Bayesian framework is an ideology, or belief system, through which we interpret new information. Just as the likelihood is fundamental to Bayesian statistics, so are the ideologies that we all, either implicitly or explicitly, bring to the table. Our ideologies may not be something that we actively ponder, but they nonetheless mediate how we interpret information.4 Ideologies come in many forms, are not mutually exclusive, and are not necessarily political in nature. For the purpose of this essay, if it ends in “-ism,” it is probably an ideology; liberalism, conservatism, libertarianism, federalism, stoicism, capitalism, and fascism are all ideologies.

Two individuals thinking critically about the same issue may come to wildly different conclusions because of their differing ideologies. This observation is grounded in Bayesian statistics; if the priors and data are identical, but are passed through two different likelihoods, the posterior distributions will be different. Similarly, two individuals with identical preexisting beliefs who are exposed to the same new information will come to different conclusions if they have different ideologies. Ideologies are an integral part of the process of opinion formation, and they are not inherently good or bad. Instead, just as an informative prior is only problematic when it overshadows relevant information, ideology only becomes a problem when it plays an outsized role in how we update our opinions. In Bayesian statistics, the likelihood is the dominant factor when there is a large amount of data. Analogously, our ideologies have a much greater influence on our opinions when we process a lot of information. This observation suggests that in a world where all individuals are incorporating all available information, we can expect our opinions to diverge along the same lines as our underlying ideologies. If we are aware of this potential outcome, we should hesitate to attribute someone’s conflicting view to that person’s failure to accurately assess the available information. Ultimately, it is important for us to be transparent about our ideological differences so that when we inevitably reach different conclusions, we can determine the root cause.
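One way to see this in the marble example (my own extension of it) is to change the likelihood itself: if we instead assumed the marbles were drawn without replacement, the same flat prior and the same blue, blue, white data would yield a different posterior.

```python
from math import prod

def ways_with_replacement(n_blue, sequence, n_marbles=4):
    counts = {"blue": n_blue, "white": n_marbles - n_blue}
    return prod(counts[c] for c in sequence)

def ways_without_replacement(n_blue, sequence, n_marbles=4):
    counts = {"blue": n_blue, "white": n_marbles - n_blue}
    total = 1
    for color in sequence:
        total *= max(counts[color], 0)
        counts[color] -= 1  # the drawn marble is not returned to the bag
    return total

data = ["blue", "blue", "white"]
for ways in (ways_with_replacement, ways_without_replacement):
    counts = [ways(n, data) for n in range(5)]
    total = sum(counts)
    print(ways.__name__, [round(c / total, 2) for c in counts])
# ways_with_replacement    [0.0, 0.15, 0.4, 0.45, 0.0]
# ways_without_replacement [0.0, 0.0, 0.4, 0.6, 0.0]
```

Same prior and same data, but a different likelihood, and therefore different conclusions: exactly the pattern the framework predicts for differing ideologies.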

Data

The role that data plays in Bayesian statistics is equivalent to the role of new information in the framework. Just as information is an integral part of every decision we make, any statistical analysis requires data. If there is no data, then the likelihood becomes unusable, and we are left with only the prior. The prior is sufficient for making inferences, but these inferences would be much better if they were informed by data. While ignoring all data may sound extreme, we run into the same problems when we only incorporate some data. To make things concrete, consider how the posterior probabilities would change in our marble example if we decided to ignore the third draw (blue, blue, rather than blue, blue, white). We would no longer assign zero probability to the conjecture that the bag contains no white marbles. As a result, there is more uncertainty about the bag’s contents because we must consider the possibility that the bag contains all blue marbles. A nearly identical parallel exists in the framework. Without new information, we must rely entirely on our preexisting beliefs to make decisions; if we only incorporate some relevant information, our updated opinions still rely heavily on our preexisting beliefs.
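Concretely, dropping the third draw changes the posterior as follows (a sketch under the same flat-prior setup as before):

```python
from math import prod

def ways(n_blue, sequence, n_marbles=4):
    counts = {"blue": n_blue, "white": n_marbles - n_blue}
    return prod(counts[c] for c in sequence)

for data in (["blue", "blue", "white"], ["blue", "blue"]):
    counts = [ways(n, data) for n in range(5)]
    total = sum(counts)
    print(data, [round(c / total, 2) for c in counts])
# ['blue', 'blue', 'white'] [0.0, 0.15, 0.4, 0.45, 0.0]
# ['blue', 'blue']          [0.0, 0.03, 0.13, 0.3, 0.53]
```

With the white draw ignored, the all-blue bag not only becomes possible but ends up the most probable conjecture; discarding part of the data materially changes the conclusion.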

Here we encounter another problem. When multiple individuals choose to incorporate only a subset of the available data, they will often come to different conclusions. In Bayesian statistics, if the priors and likelihoods from two analyses are the same, the posterior distributions are different only if the data is different. Leveraging the marble-picking example again, consider what might happen if a different individual—call him Jack—who has the same background knowledge as Jill also draws three marbles from the bag. If Jack draws three white marbles in a row, he will come to a different conclusion about what the bag likely contains, despite the fact that both he and Jill drew the same number of marbles from the same bag. A particularly stark contrast would develop between their beliefs about whether or not the bag contains all white marbles. Having seen two blue marbles, Jill would vehemently deny the possibility that the bag contains all white marbles, whereas Jack would consider it to be a plausible option. Of course, one obvious solution to this problem would be for Jack and Jill to share their information with each other. We must be similarly open to sharing information when we find ourselves at odds with one another, because even if we are making every effort to critically evaluate the information we access, the mere fact that we access different sources of information can lead us to have divergent viewpoints. Unless we take the time to discuss, we can only make assumptions about the sources of our disagreements and we are left with an incomplete understanding of the situation.
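Jack and Jill's divergence is easy to reproduce with the same sort of sketch (same flat prior and procedure, different draws):

```python
from math import prod

def ways(n_blue, sequence, n_marbles=4):
    counts = {"blue": n_blue, "white": n_marbles - n_blue}
    return prod(counts[c] for c in sequence)

def flat_posterior(sequence):  # flat prior, so the counts alone determine it
    counts = [ways(n, sequence) for n in range(5)]
    total = sum(counts)
    return [round(c / total, 2) for c in counts]

print("Jill:", flat_posterior(["blue", "blue", "white"]))
print("Jack:", flat_posterior(["white", "white", "white"]))
# Jill: [0.0, 0.15, 0.4, 0.45, 0.0]   (all-white bag: impossible)
# Jack: [0.64, 0.27, 0.08, 0.01, 0.0] (all-white bag: most plausible)
```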

At this point, we have seen how changes to any of the three components of Bayesian statistics (prior, likelihood, and data) result in different posterior distributions. Similarly, when individuals form opinions, differences in preexisting beliefs, ideologies, or information result in divergent beliefs. More importantly, in each instance, there is a reasonable explanation for why two individuals who are thinking critically can come to different conclusions. In order to resolve, or at least understand, these differences, we need to do two things: honestly reflect on what is driving our own opinions and deliberately approach others with empathy in order to understand what is driving their opinions.

A Path Forward

One of my greatest fears in our current political moment is that we will recognize the importance of the issue of hyperpartisanship but neglect to do the work necessary to combat it. I fear that we will instead retreat to the warmth of our social and ideological bubbles. I fear that after the inauguration, there will be widespread complacency among those who supported President-elect Biden’s presidential campaign and despondency, frustration, and anger among those who did not. More importantly, I fear that for the next four years, there will be minimal productive dialogue between individuals from these two groups. Productive dialogues are absolutely necessary if we hope to make any lasting progress towards solving the many problems that we currently face, as well as those that will arise in the future. For most of us, these dialogues will not happen automatically, and therefore we must seek them out. Despite these fears, I also have hope that the 2020 election cycle was enough to shake us out of our stupor and that it will lead us to fully recognize the perilous nature of the current political climate. To those who have placed their hope in the upcoming exchange of power as the event that will “get us on the right track,” I must say that I respectfully disagree. While the president plays a major role in setting the tone for the nation, he alone does not have the capacity to effect the change that is required to solve the problem of hyperpartisanship—the thorn in the side of our democracy.

In order to counteract hyperpartisanship, we must first revisit the conditions that caused it to develop. In the first section, I explained how attempting to incorporate all of the information that is at our fingertips forces us to use time-saving heuristics. These heuristics that mitigate information overload are only problematic when they are inconsistent with thinking critically. We saw that the limited number of political ideologies in the United States, together with ideologically-biased sources of information, prompts us to use heuristics that are not only inconsistent with thinking critically but also lead to political polarization. While we, as individuals, are unable to restructure the two-party system or make news sources ideologically neutral, we do have control over the heuristics we employ in response to information overload. Rather than resorting to time-saving heuristics that cause us to drift towards ideological extremes, we need a new approach that both mitigates information overload and is consistent with the model of critical thinking provided by the Bayesian framework.

With this goal in mind, I propose that each of us choose to focus on one policy area, such as economic, environmental, or education policy. By limiting our focus to a single topic, we put ourselves in a better position to have enough time to survey the available information, evaluate it, and incorporate it into our beliefs. From there, I suggest that we each identify two individuals who hold different ideological beliefs from our own, with whom we will discuss political topics. Friends or family who do not share our beliefs are the perfect place to start. Then, perhaps with the input of these two individuals, we should seek out three new sources of information to help prevent the effects of only accessing information that aligns with our existing beliefs. To promote feelings of empathy towards those with differing political ideologies, I suggest that we compare our views with those of the two dominant political parties; identify two positions held by our political party (or its figurehead) that we find questionable and two positions held by the opposing political party with which we at least partially agree. These four positions do not necessarily have to fall into the single area that we selected initially, although that may be helpful. Finally, since selecting a single policy area may be insufficient to mitigate information overload, I suggest selecting, at most, five specific issues within that area to investigate more deeply. Since every plan needs a name, I am calling this plan “The Countdown”: five specific issues, four positions, three new sources of information, two people, and one policy area.

At a time when many of us pause to reflect on the past year and make resolutions for the next, I hope you will join me in adding “The Countdown” to your list of 2021 resolutions. I suspect that if enough of us take these steps, we will discover both greater common ground and more nuance among our respective ideologies than are captured by the false dichotomy of liberalism versus conservatism. Regardless of whether or not you choose to adopt this resolution, I hope that you will engage others with empathy and, through honest self-reflection, have the courage to ask yourself where you fail to think critically.

Footnotes

  1. I have limited my scope in this way based on the premise that each of us, to varying degrees, has the ability to choose how we respond to the incentives and rules promulgated by the institutions in which we participate and the systems that they compose. If we accept this premise, then it follows that many of the properties of our environments are emergent phenomena; therefore, we do ourselves a disservice by operating as though “the system” has agency. I will not explore this idea further in this essay, which I expect to be immensely unsatisfying for some. However, I hope that the ideas that I present will be sufficiently thought-provoking that it is worth the time investment to continue reading, even if you reject this premise.↩︎

  2. There is much more that can and should be said about the role of the media in increasing polarization, but, again, this topic is beyond the scope of this essay.↩︎

  3. In this section, I lean heavily on Richard McElreath’s excellent book, Statistical Rethinking, which provides a great introduction to this topic.↩︎

  4. The irony is that this statement is itself ideologically-grounded and therefore not one that I expect to be shared by every reader. The underlying premise is that complete objectivity is an unattainable ideal, and it is therefore better to aim to be aware of our ideological baggage rather than to eliminate it.↩︎