Take a good look at these parallel lines:
Which one appears longer? If you are a normal human being, then the one on the left appears longer even though the lines are of equal length. This is the famous Müller-Lyer Illusion. If you pay attention to what you’re experiencing right now as you look at these lines, you may feel something like a divided mindset. In this example, two cognitive systems are in tension. On the one hand, your immediate perceptual “gut feeling” says the line on the left is longer. On the other hand, your explicit reasoning judgment immediately corrects that gut feeling: this is an illusory experience. It’s just an appearance, not the reality.
One helpful way to understand what is occurring in this perceptual experience is to think of cognition as always involving a dual process.1 There are two distinct cognitive systems delivering judgments to the agent. Psychologists Keith Stanovich and Richard West have named these System 1 and System 2. Daniel Kahneman has raised this theory of mind to the level of common knowledge.2
System 1 is almost always on auto-pilot, accompanied neither by sensations of agency nor by effort. The kinds of automatic actions attributed to System 1 include detecting the distance of objects in a visual field, spontaneously orienting the body toward a loud noise, detecting threats, and spontaneous memory retrieval (e.g., completing the phrase “2 + 2 = ?” or “The capital of the USA is...”). These are only examples, but they showcase a range of cognitive reflexes, many of them innate products of evolution that we share with other species.
Others exemplify distinctively cultural knowledge acquired through experience, practice, and the stability of past associations that drive memory retrieval. The important point is that these characteristic activities of System 1 are involuntary, automatic, and very fast in their delivery of information to the agent.
System 2 operations are diverse but unified by one feature: they are attention-intensive and thus typically associated with voluntary activity. These activities are slower and more methodical than System 1 operations because they require sustained effort that is easily disrupted by distraction.
Here are some typical System 2 activities in order of ascending complexity: bracing for a starter pistol in a race, scanning a large crowd to locate a woman with a blue hat, sifting through one’s memory to match the song on the radio to its proper name, self-monitoring one’s social behavior at an office party to make sure no lines of propriety are crossed, parallel parking on a crowded city block, comparing the pros and cons of an expensive purchase, and assessing the cogency of inferential relations in a complex argument.
Each of these scenarios requires the agent to focus attention and effort; this enables the agent to monitor the mental operation in order to maximize the chances of a beneficial outcome.
Let’s revisit the Müller-Lyer Illusion.
In the Müller-Lyer Illusion, System 1 draws on visual stereotypes about the way lines and angles have been associated in our vast past experience of visual fields. On the basis of these stereotypes, System 1 delivers the wrong perceptual experience. The illusion exploits these typical visual stereotypes to bias our perceptual mechanisms and deliver a very fast, intuitive, natural-feeling judgment that is totally incorrect. It is a judgment that System 2 immediately disavows, but the left line still looks longer, even if we don’t believe it actually is.
Every instance of cognitive judgment is a product of both System 1 and System 2 processes—hence the name of the cognitive model, “dual process.” For human beings, one area of cognition that is critical for survival is the social world. Just as System 1 and System 2 are operative in our perceptual systems, they are also operative in what’s called “social cognition.”
For example, when I step onto a city bus, I immediately start sorting objects—especially people—into socially and evolutionarily salient valence categories: friend, foe, stranger, familiar, ally, threat, etc. This activity is a part of our nature.
Nature, however, is often tutored by history and social conditioning. Social categories are not just descriptive. They are also often simultaneously valenced with emotional force of attraction or repulsion (associated with evaluations of higher and lower value, quality, worth, etc.).
For example, think of the set of automatic, intuitive, and immediate associations that come to mind when you dwell on the following store name: Walmart.
For a few seconds, allow the ideas and images to flow freely through your mind. As you do so, System 1 is simply associating the ideas, images, and thoughts that have historically been grouped together in your past experience with Walmart in order to deliver an actionable representation.
Now, let’s try another store name: Nordstrom. I wager that Nordstrom elicits a very different set of associations than Walmart does. This example shows one part of what we do when we engage in social cognition: we pair our social categories with evaluative valences.
Racial categories are among the most explosive social categories that history has given to us. Historically, racial designations have their origin in ranking and branding systems. In the past, the various racial designations were thought to reflect valence differences as obvious as Walmart versus Nordstrom. This history of semantic usage has left a cognitive and socially collective residue in System 1 and System 2 perceptions of race.
In the 1990s, social psychologists Mahzarin Banaji and Anthony Greenwald coined the phrase “implicit social cognition” and later generated a psychological test called the Implicit Association Test (IAT). This test is aimed at measuring the relationship between implicit, automatic associations of System 1 and slow, deliberative self-reports of System 2 with respect to racial perception.3 The IAT has many versions, but they all have in common the detection of System 1 implicit, attitudinal associations. Perhaps the most famous versions are ones that measure racialized bias.
The Race IAT is essentially a combined sorting and pairing assessment that measures whether test takers more readily and easily pair a racial category with a particular valence. For example, is a test taker faster at pairing a positive valence (e.g., “pleasant”) with white faces than with black faces? If so, the possible implication is that the test taker implicitly associates negative words or concepts with black faces more readily than with white ones.
In conducting the IAT, Banaji and Greenwald discovered something similar to the Müller-Lyer Illusion, in that there is a conflict between System 1 and System 2 in racial perception. But there is also a striking difference between the Müller-Lyer Illusion and the IAT. In the Müller-Lyer Illusion, there is a pervasive, explicit, conscious variance between how the lines appear intuitively and immediately to the perceiver and the perceiver’s measured, deliberative judgment. In other words, System 1 and System 2 are explicitly in conflict with one another so that the perceiver is aware of this inner conflict. The perceiver can experience the cognitive dissonance of simultaneously seeing the line in two incompatible ways.
However, in the case of System 1 and System 2 in racial perception, the variance—even if pervasive—goes underground and becomes implicit. In other words, typical participants would self-report that they do not have any racial biases or racialized beliefs. But their test results reveal unconscious, automatic racialized preferences that invisibly guide their social cognition and behavior. Part of the explanation is that because of the damaging history of racial categories and their historical meanings, the very use of these terms—though now explicitly unhooked from ranking and branding purposes—nevertheless drags some of that baggage into present collective consciousness.
So while in the Müller-Lyer Illusion there is a pervasive, explicit, conscious variance between System 1 and System 2, in social cognition about racial categorization there is a pervasive, implicit, unconscious variance between the two systems. This implicit, unconscious variance makes it possible for a person to possess racialized biases while also sincerely disavowing them. Such persons are not lying; rather, they are dissociated on this issue. They are not even at the level of cognitive dissonance, but neither are they necessarily in willful denial, because the conflict is implicit, below the level of self-awareness. This is why Banaji and Greenwald call this implicit social cognition. This term and the literature surrounding the IAT have grown into a vast academic and practical discussion of how racial social cognition functions across the strata of our social and political world.
This implicit social cognition also illustrates perhaps one of the most unnerving and challenging aspects of cognitive vice. The reason these vices are so stubborn is that they hide in plain sight. Even when System 2 is deployed in self-critical analysis, cognitive vices are camouflaged even from the very mind that seeks to eliminate them. This is true for a whole range of cognitive vices, not just ones pertaining to racialized stereotypes and biases. It’s like that famous quotation of Charles Baudelaire: “The finest trick of the devil is to persuade you that he does not exist.” That’s how cognitive vices flourish.
Dan Yim is Professor of Philosophy at Bethel University. He writes and teaches early modern philosophy, the intersections of race, gender, and sexuality, the philosophy of popular culture, and the epistemology of self-deception.
1. Agnes Moors and Jan De Houwer, “Automaticity: A Conceptual and Theoretical Analysis,” Psychological Bulletin 132 (March 2006): 297–326; Peter Carruthers, “An Architecture for Dual Reasoning,” in In Two Minds: Dual Processes and Beyond, ed. Jonathan Evans and Keith Frankish (New York: Oxford University Press, 2009): 109–27.
2. Keith E. Stanovich and Richard F. West, “Individual Differences in Reasoning: Implications for the Rationality Debate?” Behavioral and Brain Sciences 23 (2000): 645–726; Daniel Kahneman, “A Perspective on Judgment and Choice: Mapping Bounded Rationality,” American Psychologist 58 (September 2003): 697–720; Daniel Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011), 89–96.
3. Mahzarin R. Banaji and Anthony G. Greenwald, “Implicit Social Cognition: Attitudes, Self-Esteem, and Stereotypes,” Psychological Review 102 (1995): 4–27; Mahzarin R. Banaji and Anthony G. Greenwald, Blindspot: Hidden Biases of Good People (New York: Delacorte Press, 2013), 32–52.
The views and opinions of the authors and contributors represented in The Table do not necessarily represent the beliefs of Biola University or the Biola University Center for Christian Thought.