Research on “implicit bias” suggests that people can act on the basis of prejudice and stereotypes without intending to do so. While psychologists in the field of “implicit social cognition” study consumer products, self-esteem, food, alcohol, political values, and more, the most striking and well-known research has focused on implicit biases toward members of socially stigmatized groups, such as African-Americans, women, and the LGBTQ community.[1] For example, imagine Frank, who explicitly believes that women and men are equally suited for careers outside the home. Despite his explicitly egalitarian belief, Frank might nevertheless behave in any number of biased ways, from distrusting feedback from female co-workers to hiring equally qualified men over women. Part of the reason for Frank’s discriminatory behavior might be an implicit gender bias. Psychological research on implicit bias has grown steadily (§1), raising metaphysical (§2), epistemological (§3), and ethical questions (§4).[2]
While Allport’s (1954) The Nature of Prejudice remains a touchstone for psychological research on prejudice, the study of implicit social cognition has two distinct and more recent sets of roots.[3] The first stems from the distinction between “controlled” and “automatic” information processing made by cognitive psychologists in the 1970s (e.g., Shiffrin & Schneider 1977). While controlled processing was thought to be voluntary, attention-demanding, and of limited capacity, automatic processing was thought to unfold without attention, to have nearly unlimited capacity, and to be hard to suppress voluntarily (Payne & Gawronski 2010; see also Bargh 1994). In important early work on implicit cognition, Fazio and colleagues showed that attitudes can be understood as activated by either controlled or automatic processes. In Fazio’s (1995) “sequential priming” task, for example, following exposure to social group labels (e.g., “black”, “women”, etc.), subjects’ reaction times (or “response latencies”) to stereotypic words (e.g., “lazy” or “nurturing”) are measured. People respond more quickly to concepts closely linked together in memory, and most subjects in the sequential priming task are quicker to respond to words like “lazy” following exposure to “black” than “white”. Researchers standardly take this pattern to indicate a prejudiced automatic association between semantic concepts. The broader notion embedded in this research was that subjects’ automatic responses were thought to be “uncontaminated” by controlled or strategic responses (Amodio & Devine 2009).
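The logic of a sequential priming analysis can be sketched in a few lines of code (a minimal illustration with invented trial data; the function and variable names are mine, not from the literature): an automatic association is inferred when responses to a stereotypic target word are reliably faster after one group prime than after another.

```python
# Illustrative sketch of a sequential-priming analysis (invented data).
# An association is inferred when mean reaction times (ms) to a target
# word are smaller after one prime than after another.

from statistics import mean

def mean_latency(trials, prime, target):
    """Average reaction time (ms) for a given prime-target pair."""
    times = [t["rt"] for t in trials
             if t["prime"] == prime and t["target"] == target]
    return mean(times)

# Hypothetical trials: prime shown, target word, reaction time in ms.
trials = [
    {"prime": "black", "target": "lazy", "rt": 520},
    {"prime": "black", "target": "lazy", "rt": 540},
    {"prime": "white", "target": "lazy", "rt": 600},
    {"prime": "white", "target": "lazy", "rt": 580},
]

# Positive difference: faster responses to "lazy" after "black" than
# after "white" -- the pattern standardly read as a prejudiced
# automatic association between semantic concepts.
priming_effect = (mean_latency(trials, "white", "lazy")
                  - mean_latency(trials, "black", "lazy"))
print(priming_effect)
```

Real studies, of course, aggregate over many trials and subjects and apply inferential statistics; the sketch only shows the direction of the comparison.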
While this first stream of research focused on automaticity, a second stream focused on (un)consciousness. Many studies demonstrated that awareness of stereotypes can affect social judgment and behavior in relative independence from subjects’ reported attitudes (Devine 1989; Devine & Monteith 1999; Dovidio & Gaertner 2004; Greenwald & Banaji 1995; Banaji et al. 1993). These studies were influenced by theories of implicit memory (e.g., Jacoby & Dallas 1981; Schacter 1987), leading to Greenwald & Banaji’s original definition of “implicit attitudes” as
introspectively unidentified (or inaccurately identified) traces of past experience that mediate favorable or unfavorable feeling, thought, or action toward social objects. (1995: 8)
The guiding idea here, as Dovidio and Gaertner (1986) put it, is that in the modern world prejudice has been “driven underground,” that is, out of conscious awareness. This idea has led to the common view that what makes a bias implicit is that a person is unwilling or unable to report it. Recent findings have challenged this view, however (§3.1).
What a person says is not necessarily a good representation of the whole of what she feels and thinks, nor of how she will behave. Arguably, the central advance of research on implicit social cognition is the ability to assess people’s thoughts, feelings, and behavior without having to ask them directly, “what do you think/feel about X?” or “what would you do in X situation?”
Implicit measures, then, might be thought of as instruments that assess people’s thoughts, feelings, and behavior indirectly, that is, without relying on “self-report.” This is too quick, however. For example, a survey that asks “what do you think of black people” is explicit and direct, in the sense that the subject’s judgment is both explicitly reported and the subject is being directly asked about the topic of interest to the researchers. However, a survey that asks “what do you think about Darnell” (i.e., a person with a stereotypically black name) is explicit and indirect, because the subject’s judgment is explicitly reported but the content of what is being judged (i.e., the subject’s attitudes toward race) is inferred by the researcher. The distinction between direct and indirect measures is also relative rather than absolute. Even in some direct measures, such as personality inventories, subjects may not be completely aware of what is being studied.
In the literature, “implicit” is used to refer to at least four distinct things (Gawronski & Brannon 2017): (1) a distinctive psychological construct, such as an “implicit attitude,” which is assessed by a variety of instruments; (2) a family of instruments, called “implicit measures,” that assess people’s thoughts and feelings in a specific way (e.g., in a way that minimizes subjects’ reliance on introspection and their ability to respond strategically); (3) a set of cognitive and affective processes—“implicit processes”—that affect responses on a variety of measures; and (4) a kind of evaluative behavior—e.g., a categorization judgment—elicited by specific circumstances, such as cognitive load. In this entry, I will use “implicit” in the senses of (2) and (4), unless otherwise noted. One virtue of this approach is that it allows one to remain agnostic about the nature of the phenomena implicit measures assess.[4] Consider Frank again. His implicit gender bias may be assessed by several different instruments, such as sequential priming or the “Implicit Association Test” (IAT; Greenwald et al. 1998). The IAT—the most well-known implicit test—is a reaction time measure. In a standard IAT, the subject attempts to sort words or pictures into categories as fast as possible while making as few errors as possible. In the images below, the correct answers would be left, right, left, right.
[Image: the word ‘Michelle’ in white at the center of a black box; ‘Female or Family’ at the top left, ‘Male or Career’ at the top right]
Image 1
[Image: the word ‘Michelle’ in white at the center of a black box; ‘Male or Family’ at the top left, ‘Female or Career’ at the top right]
Image 2
[Image: the word ‘Business’ in green at the center of a black box; ‘Male or Career’ at the top left, ‘Female or Family’ at the top right]
Image 3
An IAT score is computed by comparing speed and error rates on the “blocks” (or trials) in which the pairing of concepts is consistent with common stereotypes (images 1 and 3) to the blocks in which the pairing of the concepts is inconsistent with common stereotypes (images 2 and 4). If he is typical of most subjects, Frank will be faster and make fewer errors on stereotype-consistent trials than stereotype-inconsistent trials. While this “gender-career” IAT pairs concepts (e.g., “male” and “career”), other IATs, such as the “race-evaluation” IAT, pair a concept to an evaluation (e.g., “black” and “bad”). Other IATs assess body image, age, sexual orientation, and so on. As of 2019, approximately 26 million IATs have been taken (although it is unclear if this number represents 26 million unique participants or 26 million tests taken or started; Lai p.c.). One review (Nosek et al. 2007), which tested over 700,000 subjects on the race-evaluation IAT, found that over 70% of white participants more easily associated black faces with negative words (e.g., war, bad) and white faces with positive words (e.g., peace, good). The researchers consider this an implicit preference for white faces over black faces.[5]
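The scoring logic can be illustrated with a small sketch (a simplified, hypothetical example loosely modeled on the idea of dividing the latency difference by a pooled standard deviation; actual IAT scoring algorithms involve further steps, such as error penalties and trial exclusions, which are omitted here):

```python
# Simplified, hypothetical sketch of IAT-style scoring: compare mean
# response latencies on stereotype-consistent vs. -inconsistent blocks,
# scaled by the pooled standard deviation of all latencies.

from statistics import mean, stdev

def iat_score(consistent_rts, inconsistent_rts):
    """Positive score: slower responding on stereotype-inconsistent
    pairings than on stereotype-consistent ones."""
    pooled_sd = stdev(consistent_rts + inconsistent_rts)
    return (mean(inconsistent_rts) - mean(consistent_rts)) / pooled_sd

# Invented latencies (ms) for one subject.
consistent = [650, 700, 680, 720]    # e.g., male+career / female+family
inconsistent = [820, 860, 900, 840]  # e.g., male+family / female+career

score = iat_score(consistent, inconsistent)
print(round(score, 2))
```

A score near zero would indicate comparable speed across the two pairings; this invented subject, like the typical subject described above, is markedly slower on the stereotype-inconsistent block.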
Although the IAT remains the most popular implicit measure, it is far from the only one. Other prominent implicit measures, many of which are derivations of sequential priming, are semantic priming (Banaji & Hardin 1996) and the Affect Misattribution Procedure (AMP; Payne et al. 2005). Also, a “second generation” of categorization-based measures (like the IAT) has been developed. For example, the Go/No-go Association Task (GNAT; Nosek & Banaji 2001) presents subjects with one target object rather than two in order to determine whether preferences or aversions are primarily responsible for scores on the standard IAT (i.e., the ease of pairing good words with white faces and bad words with black faces, or the difficulty of pairing good words with black faces and bad words with white faces; Brewer 1999).
A notable advance in the psychometrics of implicit bias has been the advent of multinomial (or formal process) models, which identify distinct processes contributing to performance on implicit measures. For example, elderly people tend to show greater bias on the race-evaluation IAT compared with younger people, but this may be due to their having stronger preferences for whites or having weaker control over their biased responding (Nosek et al. 2011). Multinomial models, like the Quadruple Process Model (Conrey et al. 2005), are used to tease apart these possibilities. The Quad model identifies four distinct processes that contribute to responses: (1) the automatic activation of an association; (2) the subject’s ability to determine a correct response (i.e., a response that reflects one’s subjective assessment of truth); (3) the ability to override automatic associations; and (4) general response biases (e.g., favoring right-handed responses). Multinomial modeling has made clear that implicit measures are not “process pure,” i.e., they do not tap into a single unified psychological process.
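The processing-tree logic behind such models can be illustrated with a toy sketch (the parameter names track the four processes listed above, but the values and the exact tree structure here are hypothetical simplifications for illustration, not the published Quad model equations): each response probability is the sum of the probabilities of the branches leading to it.

```python
# Toy multinomial processing tree in the spirit of the Quad model.
# The probability of a correct response is the sum of the
# probabilities of all tree branches that end in a correct response.
# All parameter values below are hypothetical.

def p_correct(ac, d, ob, g, compatible):
    """ac: automatic activation of an association
    d:  detection of the correct response
    ob: overcoming the activated bias
    g:  general guessing bias toward the correct key
    compatible: True if the associative response happens to be correct."""
    assoc_wins = 1.0 if compatible else 0.0
    p = 0.0
    p += ac * d * ob                       # detection wins -> correct
    p += ac * d * (1 - ob) * assoc_wins    # association drives response
    p += ac * (1 - d) * assoc_wins         # no detection: association drives
    p += (1 - ac) * d                      # no association, detected -> correct
    p += (1 - ac) * (1 - d) * g            # neither: guess
    return p

# With a strong association (ac=.8) and weak overriding (ob=.3),
# accuracy drops sharply on incompatible trials.
compat = p_correct(ac=0.8, d=0.6, ob=0.3, g=0.5, compatible=True)
incompat = p_correct(ac=0.8, d=0.6, ob=0.3, g=0.5, compatible=False)
print(round(compat, 3), round(incompat, 3))
```

Fitting a real multinomial model runs this logic in reverse: observed error rates on compatible and incompatible trials are used to estimate the latent parameters, which is how the model separates, e.g., stronger associations from weaker control.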
While there is not consensus about what implicit measures capture (§2), it is clear that they provide at least three kinds of information (Gawronski & Hahn 2019). The first is information about dissociation with more explicit, direct measures. Correlations between implicit and explicit measures tend to be relatively low (r = .2–.25; Hofmann et al. 2005; Cameron et al. 2012), although these relations are significantly affected by methodological practices, such as comparing non-corresponding implicit and explicit measures (e.g., an implicit measure of gender stereotypes and an explicit “feelings thermometer” toward women). It is important to note the breadth of research in this vein; dissociations between implicit and explicit measures are found in the study of personality (e.g., Vianello et al. 2010), attitudes toward alcohol (e.g., de Houwer et al. 2004), phobias (Teachman & Woody 2003), and more. Second, implicit measures can be used as dependent variables in experiments. Theories about the formation and change of attitudes, for example, have focused on differential effects of manipulations, such as counter-attitudinal information, on implicit and explicit measures (e.g., Gawronski & Bodenhausen 2006; Petty 2006). Third, implicit measures are used to predict behavior. Philosophers have been especially interested in the relationship between implicit bias and discriminatory behavior, particularly when the discriminatory behavior conflicts with a person’s reported beliefs (as in the “Frank” case above). Studies report relationships between implicit bias and behavior in a huge variety of social contexts, from hiring to policing to medicine to teaching and more (for an incomplete list see Table 1 in Jost et al. 2009). There is also voluminous, varied, and on-going discussion about how well implicit measures predict behavior, along with several related critical assessments of the information implicit measures provide (§5).
“Implicit bias” is a term of art, used in a variety of ways. In this entry, the term is used to refer to the family of evaluative judgments and behavior assessed by implicit measures (e.g., categorization judgments on an IAT). These measures mimic some relevant aspects of judgment and decision-making outside the lab (e.g., time pressure). But what do these measures measure? With some blurry boundaries, philosophical and psychological theories can be divided into five groups. Implicit measures might provide information about attitudes (§2.1), implicit processes (§2.2), beliefs (§2.3), traits (§2.4), or situations (§2.5).
The idea that people’s attitudes are the cause of implicit bias is pervasive. The term “attitudes” tends to be used differently in psychology and philosophy, however. In psychology, attitudes are akin to preferences (i.e., likings and dislikings); the term does not refer to propositional states per se (i.e., mental states that are thought to bear a relationship to a proposition), as it does in philosophy. Most attitudinal theories of implicit bias use the term in the psychologist’s sense, although variations will be noted below.
2.1.1 Dual Attitudes in Psychology
Early and influential theories posited that people hold two distinct attitudes in mind toward the same object, one implicit and the other explicit (Greenwald & Banaji 1995; Wilson et al. 2000). “Explicit attitudes” are commonly identified with verbally reported attitudes, in this vein, while “implicit attitudes” are those that a person is unwilling or unable to report. Evidence for theories of dual attitudes stems largely from two sources. The first are anecdotal reports of surprise and consternation that people sometimes express after being informed of their performance on an implicit measure (e.g., Banaji 2011; Krickel 2018). These experiences suggest that people discover their putative implicit attitudes by taking the relevant tests, just like one learns about one’s cholesterol by taking the relevant tests. The second source of evidence for dual-attitude views are dissociations between implicit and explicit measures (§1.2). These suggest that implicit and explicit measures may be tapping into distinct representations of the same attitude-object (e.g., “the elderly”).
A central challenge for theories of this sort is whether people truly are unaware of their implicit biases, and if so, in what way (e.g., if people are unaware of the source, content, or behavioral effects of their attitudes; §3.1). There may be reasons to posit unconscious representations in the human mind independent of whether people are or are not aware of their implicit biases, of course. But if people are aware of their implicit biases, then implicit measures are most likely not assessing unconscious “dual” attitudes.
2.1.2 Dual Attitudes in Philosophy
Some philosophers have proposed that implicit measures assess a distinct kind of “action-oriented” attitude, which is different from ordinary attitudes, but not necessarily in terms of being unconscious. The core idea here is that implicit attitudes link representations with behavioral impulses.[6] Gendler’s (2008a,b, 2011, 2012) account of “alief,” a sui generis mental state comprised of tightly woven co-activating representational (R), affective (A), and behavioral (B) components, is emblematic of this approach. Gendler argues that the R-A-B components of alief are “bundled” together or “cluster” in such a way that when an implicitly biased person sees a black face in a particular context, for example, the agent’s representation will automatically activate particular feelings and behaviors (i.e., an R–A–B cluster). This is in contrast to the “combinatoric” nature of ordinary beliefs and desires, that is, that any belief could, in principle, be combined with any desire. So while the belief that “that is a black man” is not fixed to any particular feelings or behavior, an alief will have content like, “Black man! Scary! Avoid!”
“To have an alief”, Gendler writes, is
to a reasonable approximation, to have an innate or habitual propensity to respond to an apparent stimulus in a particular way. It is to be in a mental state that is…associative, automatic and arational. As a class, aliefs are states that we share with non-human animals; they are developmentally and conceptually antecedent to other cognitive attitudes that the creature may go on to develop. Typically, they are also affect-laden and action-generating. (2008b: 557, original emphasis; see also 2008a: 641)
According to Gendler, aliefs explain a wide array of otherwise puzzling cases of belief-behavior discordance, including not only implicit bias, but also phobias, fictional emotions, and bad habits (2008b: 554). In fact, Gendler suggests (2008a: 663) that aliefs are causally responsible for much of the “moment-by-moment management” of human behavior, whether that behavior is belief-concordant or not.
Critics have raised a number of concerns about this approach, in particular whether putative aliefs form a unified kind (Egan 2011; Currie & Ichino 2012; Doggett 2012; Nagel 2012; Mandelbaum 2013). Others have proposed alternate conceptions of action-oriented dual attitudes. Brownstein and Madva (2012a,b; see also Madva and Brownstein 2018 and Brownstein 2018), for example, propose that implicit attitudes are comprised of F-T-B-A components: the perception of a salient Feature triggers automatic low-level feelings of affective Tension, which are associated in turn with specific Behavioral responses, which either do or do not Alleviate the agent’s felt tension. This approach shares with Gendler’s the idea that aliefs/implicit attitudes differ in kind from beliefs/explicit attitudes. Moreover, the difference between these putative kinds of states is not necessarily the agent’s introspective access to them. Gendler proposes that while paradigmatic beliefs update when the agent acquires new relevant information, paradigmatic aliefs don’t. In contrast, Brownstein and Madva argue that implicit attitudes do update in the face of new information—this is the feed-forward function of “alleviation”—and thus can automatically yet flexibly modify and improve over time. Thus, for Brownstein and Madva, implicit attitudes are implicated not only in bias and prejudice, but also in skillful, intelligent, and even ethical action.[7] But while implicit attitudes aren’t ballistic, information-insensitive reflexes, on Brownstein and Madva’s view, they also don’t update in the same way as ordinary attitudes. Brownstein and Madva draw the distinction in terms of two key features. First, implicit attitudes are paradigmatically insensitive to the logical form in which information is presented. For example, subjects have been shown to form equivalent implicit attitudes on the basis of information and the negation of that information (e.g., Gawronski et al. 2008).
Second, implicit attitudes fail to respond to the semantic contents of other mental states in a systematic way; they appear to be “inferentially impoverished.” For example, implicit attitudes are implicated in behaviors for which it is difficult to give an inferential explanation (e.g., Dovidio et al. 1997) and implicit attitudes change in response to irrelevant information (e.g., Gregg et al. 2006; Han et al. 2006). Levy (2012, 2015)—who argues that implicit attitudes are “patchy endorsements”—makes similar claims about the ways in which implicit attitudes do and do not update, although he does not argue that these kinds of states are “action-oriented” in the way that Gendler and Brownstein and Madva do. Debate about these findings is ongoing (§2.3).
2.1.3 Single Attitudes
Some theories posit the existence of a singular representation of attitude-objects. According to MODE (“Motivation and Opportunity as Determinants”; Fazio 1990; Fazio & Towles-Schwen 1999; Olson & Fazio 2009) and the related MCM (“Meta-Cognitive Model”; Petty 2006; Petty et al. 2007), attitudes are associations between objects and “evaluative knowledge” of those objects. MODE posits one singular representation underlying the behavioral effects measured by implicit and explicit tests. Thus, MODE denies the distinction between implicit and explicit attitudes. The difference between implicit and explicit measures, then, reflects a difference in the control that subjects have over the measured behavior. Control is understood in terms of motivation and opportunity to deliberate. When an agent has low motivation or opportunity to engage in deliberative thought, her automatically activated attitudes—which might be thought of as her “true” attitudes—will guide her behavior and judgment. Implicit measures manufacture this situation (of low control due to low motivation and/or opportunity to deliberate). Explicit measures, by contrast, increase non-attitudinal contributions to test performance. MODE therefore provides empirically-testable predictions about the conditions under which a person’s performance on implicit and explicit measures will converge and diverge, as well as predictions about the conditions under which implicit and explicit measures will and will not predict behavior (see Gawronski & Brannon 2017 for review).
Influenced by dual process theories of mind, RIM (“Reflective-Impulsive Model”; Strack & Deutsch 2004) and APE (“Associative-Propositional Evaluation”; Gawronski & Bodenhausen 2006, 2011) suggest that implicit measures assess distinctive cognitive processes. The central distinction at the heart of both RIM and APE is between “associative” and “propositional” processes. Associative processes are said to underlie an impulsive system that functions according to classic associationist principles of similarity and contiguity. Implicit measures are thought of as assessing the momentary accessibility of elements or nodes of a network of associations. This network produces spontaneous evaluative responses to stimuli. Propositional processes, on the other hand, underlie a reflective system that validates the information provided by activated associations. Explicit measures are thought to capture this process of validation, which is said to operate according to agents’ syllogistic reasoning and judgments of logical consistency. In sum, the key distinction between associative and propositional processes according to RIM and APE is that propositional processing alone depends on an agent’s assessment of the truth of a given representation.[8] APE in particular aims to explain the interactions between and mutual influences of associative and propositional processes in judgment and behavior.
RIM and APE bear resemblance to the dual attitudes theories in philosophy discussed above. Indeed, Bodenhausen & Gawronski (2014: 957) write that the “distinction between associative and propositional evaluations is analogous to the distinction between ‘alief’ and belief in recent philosophy of epistemology.” It is important to keep in mind, however, that RIM and APE are not attitudinal theories. APE, for example, posits two distinct kinds of process—associative and propositional processes—that give rise to two kinds of evaluative responses to stimuli—implicit and explicit. It does not posit the existence of two distinct attitudes or two distinct co-existing representations of the same entity. It is also important to note that the distinction between associative and propositional processes can be understood in at least three distinct senses: as applying to the way in which information is learned, stored, or expressed (Gawronski et al. 2017). At present, evidence is mixed for dissociation between associative and propositional processing in the learning and storage of information, while it is stronger for dissociation in the behavioral expression of stored information (Brownstein et al. 2019).
Some have argued that familiar notions of belief, desire, and pretense can in fact explain what neologisms like “implicit attitudes” are meant to elucidate (Egan 2011; Kwong 2012; Mandelbaum 2013). Most defend some version of what Schwitzgebel (2010) calls Contradictory Belief (Egan 2008, 2011; Huebner 2009; Gertler 2011; Huddleston 2012; Muller & Bashour 2011; Mandelbaum 2013, 2014, forthcoming).[9] Drawing upon theories of the “fragmentation” of the mind (Lewis 1982; Stalnaker 1984), Contradictory Belief holds that implicit and explicit measures both reflect what a person believes, and that these different sets of beliefs may be causally responsible for different behavior in different contexts (Egan 2008). In short, if a person behaves in a manner consistent with the belief that black men are dangerous, it is because they believe that black men are dangerous (notwithstanding what they say they believe).
In the psychological literature, De Houwer and colleagues defend a view that can be thought of as supporting Contradictory Belief (Mitchell et al. 2009; Hughes et al. 2011; De Houwer 2014). On this model, propositions[10] have three defining features: (1) propositions are statements about the world that specify the nature of the relation between concepts (e.g., “I am good” and “I want to be good” are propositions that involve the same two concepts—“me” and “good”—but differ in the way that the concepts are related); (2) propositions can be formed rapidly on the basis of instructions or inferences; and (3) subjects are conscious of propositions (De Houwer 2014). On the basis of data consistent with these criteria—for example, responses on implicit measures are affected by one-shot instruction—De Houwer (2014) argues that implicit measures capture propositional states (i.e., beliefs).[11] This claim represents an application of Mitchell and colleagues’ (2009) broader argument that all learning is propositional (i.e., there is no case in which learning is the result of the automatic associative linking of mental representations). One reason philosophers have been interested in this view is due to its resonance with classic debates in the philosophy of mind between empiricists and rationalists, behaviorists and cognitivists, and so on.
Another belief-based approach argues that implicit biases should be understood as cognitive “schemas.” Schemas are clusters of culturally shared concepts and beliefs. More precisely, schemas are abstract knowledge structures that specify the defining features and attributes of a target (Fiske & Linville 1980). The term “mother”, for example, invokes a schema that attributes a collection of attributes to the person so labelled (Haslanger 2015). On some accounts, schemas are “coldly” cognitive (Valian 2005), and so in the psychologist’s sense, they are not attitudes. Rather, schemas are tools for social categorization, and while schemas may help to organize and interpret feelings and motivations, they are themselves affectless. One advantage of focusing on schemas is that doing so emphasizes that implicit bias is not a matter of straightforward antipathy toward members of socially stigmatized groups.
A separate version of the generic belief approach stems from recent work in the philosophy of language. This approach focuses on stereotypes that involve generalizing extreme or horrific behavior from a few individuals to groups. Such generalizations, such as “pit bulls maul children” or “Muslims are terrorists”, can be thought of as a particular kind of generic statement, which Leslie (2017) calls a “striking property generic”. This subclass of generics is defined by having predicates that express properties that people typically have a strong interest in avoiding. Building on earlier work on the cognitive structure and semantics of generics (Leslie 2007, 2008), Leslie notes a particularly insidious feature of social stereotyping: even if just a few members of what is perceived to be an essential kind (e.g., pit bulls, Muslims) exhibit a harmful or dangerous property, then a generic that attributes the property to the kind likely will be judged to be true. This is only the case with striking properties, however. As Leslie (2017) points out, it takes far fewer instances of murder for one to be considered a murderer than it does instances of anxiety to be considered a worrier. Striking property generics may thus illuminate some social stereotypes (e.g., “black men are rapists”) better than others (e.g., “black men are athletic”). Beeghly (2014), however, construes generics as expressions of cognitive schemas, which may broaden the scope of explanation by way of generic statements. In all of these cases, generics involve an array of doxastic properties. Generics involve inferences to dispositions, for example (Leslie 2017). That is, generic statements about striking properties will usually be judged true if and only if some members of the kind possess the property and other members of the kind are judged to be disposed to possess it.
The most explicit defense of Contradictory Belief has been via a theory of “Spinozan Belief Fixation” (SBF; Gilbert 1991; Egan 2008, 2011; Huebner 2009; Mandelbaum 2011, 2013, 2014, 2016). Proponents of SBF are inspired by Spinoza’s rejection of the concept of the will as a cause of free action (Huebner 2009: 68), an idea which is embodied in what they call the theory of “Cartesian Belief Fixation” (CBF). CBF holds that ordinary agents are capable of evaluating the truth of an idea (or representation, or proposition) delivered to the mind (via sensation or imagination) before believing or disbelieving it. Agents can choose to believe or disbelieve P, according to CBF, in other words, via deliberation or judgment. SBF, on the other hand, holds that as soon as an idea is presented to the mind, it is believed. Beliefs on this view are understood to be unconscious propositional attitudes that are formed automatically as soon as an agent registers or tokens their content. For example, one cannot entertain or consider or imagine the proposition that “dogs are made out of paper” without immediately and unavoidably believing that dogs are made out of paper, according to SBF (Mandelbaum 2014). More pointedly, one cannot entertain or imagine the stereotype that “women are bad at math” without believing that women are bad at math. As Mandelbaum (2014) puts it, the automaticity of believing according to SBF explains why people are likely to have many contradictory beliefs; in order to reject P, one must already believe P.[12]
SBF is strongly revisionist with respect to the ordinary concept of belief (but see Helton (forthcoming) for a similarly spirited but less revisionist view).[13] Notwithstanding this, the central line of debate about SBF’s account of implicit bias—as well as about belief-based accounts of implicit social cognition generally—focuses on the fact that people’s performance on implicit measures is sometimes unresponsive to the kinds of reinforcement-learning-based interventions that ought to affect associative processes and/or states; meanwhile, performance on implicit measures sometimes appears to be responsive to the kinds of logical and persuasion-based interventions thought to affect doxastic states (e.g., de Houwer 2009, 2014; Hu et al. 2017; Mann & Ferguson 2017; Van Dessel et al. 2018; for additional discussion see Mandelbaum 2013, 2016; Gawronski et al. 2017; Brownstein et al. 2019). Caution is needed in drawing strong conclusions about cognitive structure from these behavioral data, however (Levy 2015; Madva 2016c; Byrd forthcoming; Brownstein et al. 2019). As noted above (§1.2), implicit measures are not process-pure. Modeling techniques for disentangling the multiple causal contributions to performance on implicit measures may help to move these debates forward (e.g., Conrey et al. 2005; Hütter & Sweldens 2018).
As is the case with terms like “attitude” and “propositional,” psychologists and philosophers tend to use the term “trait” in different ways. In psychology, trait-like constructs are stable over time and across situations. If you have always disliked eating pork, and never eat it no matter the context, then your feelings toward pork are trait-like. If you sometimes decline to eat pork but sometimes indulge, depending on the company or your mood, then your feelings are more “state”-like. In the psychologist’s sense, significant evidence suggests that implicit bias is more state-like than trait-like. Multiple longitudinal studies have found that individuals’ scores on implicit measures vary significantly over days, weeks, and months, much more so than individuals’ scores on corresponding explicit measures (Cooley & Payne 2017; Cunningham et al. 2001; Devine et al. 2012; Gawronski et al. 2017). Of course, the significance of this depends on one’s theory of implicit bias. If implicit measures are theorized to capture spontaneous affective reactions (as APE suggests; §2.2), then contextual and temporal variability in performance should be predicted (because, for example, one’s immediate reactions to images of women leaders will likely be different after watching a documentary about Ruth Bader Ginsburg than after watching Clueless). However, if implicit measures are meant to “diagnose” stable features of individuals like political party affiliation, then far less variation should be expected. Another possibility is that measurement error contributes significantly to the instability of scores on implicit measures. The fact that methodological improvements have in some cases improved the temporal stability of participants’ performance supports this idea (e.g., Cooley and Payne 2017).
In philosophy, “trait” is used more often in the context of anti-representationalist, dispositional theories of mind. While representationalists define concepts like “belief” in terms of internal, representational structures of the mind, dispositionalists define concepts like “belief” in terms of tendencies to behave in certain ways (and perhaps also to feel and think in certain ways). Building upon Ryle (1949/2009), Schwitzgebel (2006/2010, 2010, 2013) advances a dispositional theory of attitudes (in the philosophical sense, that is, a theory that claims that beliefs, desires, hopes, etc. are dispositions). On his view, attitudes have a broad (or “multitrack”) profile, including dispositions to feel, think, and speak in specific ways. The dispositional profile of a given attitude is determined by the folk-psychological stereotype for having that attitude, not by what’s inside the agent’s metaphoric “belief box.” For example, to establish that Jordan believes that women make good philosophers, one would look to what Jordan says about women philosophers, to her judgments about which philosophers are good and which aren’t, to her hiring practices, her gut feelings around men and women philosophers, etc. Agents with implicit biases pose an interesting challenge to dispositionalists, since these agents often match only part of the relevant folk-psychological stereotypes. For example, Jordan might say that she believes that women make good philosophers but fail to read any women philosophers (or, recall Frank; §1). On Schwitzgebel’s “gradualist dispositionalism,” Jordan and Frank would be “in-between believers,” agents who partly match the relevant folk-psychological stereotypes for the attitudes in question.
A related trait-based approach treats the results of indirect measures as reflective of elements of attitudes, rather than as assessing attitudes or biases themselves (Machery 2016, 2017). On Machery’s view, attitudes (in the psychologist’s sense, that is, preferences) are dispositions composed of various bases, including feelings, associations, behavioral impulses, and propositional states like beliefs. (In contrast to Schwitzgebel, Machery holds a representationalist view of belief, but a dispositionalist view of attitudes.) To have a racist attitude, on this picture, is to be disposed to display the relevant mix of these bases, that is, to display the feelings, associations, etc. that together comprise the attitude. Implicit measures, then, are said to capture one of the psychological bases (e.g., the agent’s associations between concepts) of the agent’s overall attitude. Explicit questionnaire measures capture another psychological basis of the agent’s attitude, behavioral measures yet another basis, and so on. Implicit measures, then, do not assess “implicit attitudes,” and indeed, Machery denies that attitudes divide into implicit and explicit kinds. Rather, implicit measures quantify elements of attitudes. In part, this proposal is meant to explain some of the key psychometric properties of implicit measures, such as their instability over time and the fact that some implicit measures correlate poorly with each other (§5). These findings are consistent with the notion that different implicit measures quantify different psychological bases of attitudes, Machery argues.
One advantage of thinking of implicit biases as traits is that it is consistent with the way in which personality attributions readily admit of vague cases. Just as we might say that Frank is partly agreeable if he extols the virtues of compassion yet sometimes treats strangers rudely, we might say that Frank is partly prejudiced. Dispositional theories capture this intuition. On the other hand, trait-based theories of implicit bias face long-standing challenges to dispositionalism in the philosophy of mind. One such challenge is that traits are explanatory as generalizations, not as token causes of judgment and behavior (Carruthers 2013). Another is the specter of circularity arising from the simultaneous use of an agent’s behavior to both define her disposition and to point to what her disposition predicts (Bandura 1971; Cervone et al. 2015; Mischel 1968; Payne et al. 2017). In both cases, the question for dispositionalism is whether it truly helps to explain the data, or merely repackages outwardly observed patterns in new terms.
The most common way people think and write about implicit biases is as attributes of persons. Another possibility, though, is that implicit biases are attributes of situations. Although psychologists have been debating person-based and situation-based explanations throughout the history of implicit social cognition research (Payne & Gawronski 2010; Murphy & Walton 2013; Murphy et al. 2018), the situationist approach has gained steam due to Payne and colleagues’ (2017) “bias of crowds” model. Borrowing from the concept of the “wisdom of crowds,” this approach suggests that differences between situations, rather than differences between individuals, explain the variance in scores on implicit measures. A helpful metaphor used by Payne and colleagues is doing “the wave” at a baseball game. Where a person is sitting in the bleachers, in combination with where the wave is at a given time, is likely to outperform most individual differences (e.g., implicit or explicit feelings about the wave) in predicting whether a person sits or stands. Likewise, what predicts implicit bias are features of people’s situations, not features of their personality. For example, living in a highly residentially segregated neighborhood might be expected to predict racial implicit bias better than individual-level factors, such as beliefs and personality.
The bias of crowds model aims to jointly explain five features of implicit bias that are otherwise difficult to reconcile, namely: (1) average group-level scores of implicit bias are very robust and stable; (2) children’s average scores of implicit bias are nearly identical to adults’ average scores; (3) aggregate levels of implicit bias at the population level (e.g., regions, states, and countries) are both highly stable and strongly associated with discriminatory outcomes and group-based disparities; yet, (4) individual differences in implicit bias have small-to-medium zero-order correlations with discriminatory behavior; and (5) individual test-retest reliability is low over weeks and months. (See Payne et al. 2017 for references.) Another advantage of the bias of crowds model is that it coalesces well with calls in philosophy for focusing more on “structural” or “systemic” bias, rather than on the biases in the heads of individuals (§5).
A challenge for the bias of crowds model, however, is explaining how systemic biases interact with and affect the minds of individuals. Payne and colleagues appeal to the idea of the “accessibility” of concepts in individuals’ minds, that is, the “likelihood that a thought, evaluation, stereotype, trait, or other piece of information” becomes activated and poised to influence behavior. The lion’s share of evidence, they argue, suggests that the concepts related to implicit bias are activated due to situational causes. This may be, but it does not explain (a) how situations activate concepts in individuals’ minds (Payne and colleagues are explicitly agnostic about the format of the cognitive representations that underlie implicit bias); and (b) how situational factors interact with individual factors to give rise to biased actions (Gawronski & Bodenhausen 2017; Brownstein et al. 2019).
Philosophical work on the epistemology of implicit bias has focused on three related questions.[14] First, do we have knowledge of our own implicit biases, and if so, how? Second, do the emerging data on implicit bias demand that we become skeptics about our perceptual beliefs or our overall status as epistemic agents? And third, are we faced with a dilemma between our epistemic and ethical values due to the pervasive nature of implicit bias?
Implicit bias is typically thought of as unconscious (§2.1.1), but what exactly does this mean? There are several possibilities: there might be no phenomenology associated with the relevant mental states or dispositions; agents might be unaware of the content of the representations underlying their performance on implicit measures, or they might be unaware of the source of their implicit biases or the effects those biases have on their behavior; agents might be unaware of the relations between their relevant states (e.g., that their implicit and explicit evaluations of a given target conflict); and agents might have different modes of awareness of their own minds (e.g., “access” vs. “phenomenal” awareness; Block 1995). Gawronski and colleagues (2006) argue that agents typically lack “source” and “impact” awareness of their implicit biases, but typically have “content” awareness.[15] Evidence for content awareness stems from “bogus pipeline” experiments (e.g., Nier 2005) in which participants are led to believe that inaccurate self-reports will be detected by the experimenter. In these experiments, participants’ scores on implicit and explicit measures come to be more closely correlated, suggesting that participants are aware of the content of those judgments detected by implicit measures and shift their reports when they believe that the experimenter will notice discrepancies. Additional evidence for content awareness is found in studies in which experimenters bring implicit measures and self-reports into conceptual alignment (e.g., Banse et al. 2001) and studies in which agents are asked to predict their own implicit biases (Hahn et al. 2014). Indeed, Hahn and colleagues (2014) and Hahn and Gawronski (2019) have found that people are good at predicting their own IAT scores regardless of how the test is described, how much experience they have taking the test, and how much explanation they are given about the test before taking it. Moreover, people have unique insight into how they will do on the test, insight which is not explained by their beliefs about how people in general will perform.
Hahn and colleagues’ data do not determine, however, whether agents come to be aware of the content of their implicit biases through introspection, by drawing inferences from their own behavior, or from some other source (see Berger forthcoming for discussion). This is important for determining whether the awareness agents have of their implicit biases constitutes self-knowledge. If our awareness of the content of our implicit biases derives from inferences we make based on (for example) our behavior, then the question is whether these inferences are justified, assuming knowledge entails justified true belief. Some have suggested that the facts about implicit bias warrant a “global” skepticism toward our capacities as epistemic agents (Saul 2012; see §3.2.2). If this is right, then we ought to worry that our inferences about the content of our implicit biases, from all the ways we behave on a day-to-day basis, are likely to be unjustified. Others, however, have argued that people are typically very good interpreters of their own minds (e.g., Carruthers 2009; Levy 2012), in which case it may be more likely that our inferences about the content of our implicit biases are well-justified. But whether the inferences we make about our own minds are well-justified would be moot if it were shown that we have direct introspective access to our biases.
One sort of skeptical worry stems from research on the effects of implicit bias on perception (§3.2.1). This leads to a worry about the status of our perceptual beliefs. A second kind of skeptical worry focuses on what implicit bias may tell us about our capacities as epistemic agents in general (§3.2.2).
Compared with participants who were first shown pictures of white faces, those who were primed with black faces in Payne (2001) were faster to identify pictures of guns as guns and were more likely to misidentify pictures of tools as guns. This finding has been directly and conceptually replicated (e.g., Payne et al. 2002; Conrey et al. 2005) and is an instance of a broader set of findings about the effects of attitudes and beliefs on perception (e.g., Barrick et al. 2002; Proffitt 2006). Payne’s findings are particularly chilling in light of police shootings of unarmed black men in recent years, such as Amadou Diallo and Oscar Grant. The findings suggest that agents’ implicit associations between “black men” and “guns” may affect their judgment and behavior by affecting what they see. In addition to the moral implications, this may be cause for a particular kind of epistemic concern. As Siegel (2012, 2017, forthcoming) puts it, the worry is that implicit bias introduces a circular structure into belief formation. If an agent believes that black men are more likely than white men to have or use guns, and this belief causes the agent to more readily see ambiguous objects in the hands of black men as guns, then when the agent relies upon visual perception as evidence to confirm her beliefs, she will have moved in a vicious circle.
Whether implicit biases are cause for this sort of epistemic concern depends on what sort of causal influence social attitudes have on visual perception. Payne’s weapons bias findings would be a case of “cognitive penetration” if the black primes make the images of tools look like images of guns, via an effect on perceptual experience itself (Siegel 2012, 2017, forthcoming). This would certainly introduce a circular structure in belief formation. Other scenarios raise the possibility of illicit belief formation without genuine cognitive penetration. Consider what Siegel calls “perceptual bypass”: the black primes do not cause the tools to look like guns (i.e., the prime does not cause a change in perceptual experience), yet some state in the agent, such as a heightened state of anxiety, is affected by the black prime and causes the agent to make a classification error. This will count as a case of illicit belief formation inasmuch as the agent’s social attitudes cause her to be insensitive to her visual stimuli in a way that confirms her antecedent attitudes (Siegel 2012). Other scenarios might allay the worry about illicit belief formation. For example, what Siegel calls “disowned behavior” proposes the same route to the classification error as “perceptual bypass,” except that the agent antecedently regards her error as an error. Empirical evidence can help to sort through these possibilities, though perhaps not settle between them conclusively (e.g., Correll et al. 2015).
A broader worry is that research on implicit bias should cause agents to mistrust their knowledge-seeking faculties in general. “Bias-related doubt” (Saul 2012) is stronger than traditional forms of skepticism (e.g., external world skepticism) in the sense that it suggests that our epistemic judgments are not just possibly but often likely mistaken. Implicit biases are likely to degrade our judgments across many domains, e.g., professors’ judgments about student grades, journal submissions, and job candidates.[16] Moreover, as Fricker (2007) points out, the testimony of members of stigmatized groups is likely to be discounted due to implicit bias, which, Saul suggests, can magnify these epistemic failures as well as create others, such as failing to recognize certain questions as relevant for inquiry (Hookway 2010). The key point about these examples is that our judgments are likely to be affected by implicit biases even when “we think we’re making judgments of scientific or argumentative merit” (Saul 2012: 249; see also Welpinghus forthcoming). Moreover, unlike errors of probabilistic reasoning, these effects generalize across many areas of day-to-day life. We should be worried, Saul argues,
whenever we consider a claim, an argument, a suggestion, a question, etc. from a person whose apparent social group we’re in a position to recognize. (Saul 2012: 250)
Bias-related doubt may be diminished if successful interventions can be developed to correct for epistemic errors caused by implicit bias. In some cases, the fix may be simple, such as anonymous review of job candidate dossiers. But other contexts will certainly be more challenging.[17] More generally, Saul’s account of bias-related doubt takes a strongly pessimistic stance toward the normativity of our unreflective habits. “It is difficult to see”, she writes, “how we could ever properly trust [our habits] again once we have reflected on implicit bias” (2012: 254). Others, however, have stressed the ways in which unreflective habits can have epistemic virtues (e.g., Arpaly 2004; Railton 2014; Brownstein & Madva 2012a,b; Nagel 2012; Antony 2016). Squaring the reasons for pessimism about the epistemic status of our habits with these streams of thought will be important in future research.
Gendler (2011) and Egan (2011) argue that implicit bias creates a conflict between our ethical and epistemic aims. Concern about ethical/epistemic dilemmas is at least as old as Pascal, as Egan points out, but is also incarnated in contemporary research on the value of positive illusions (i.e., beliefs like “I am brilliant!” which may promote well-being despite being false; e.g., Taylor & Brown 1988). The dilemma surrounding implicit bias stems from the apparent unavoidability of stereotyping, which Gendler traces to the way in which social categorization is fundamental to our cognitive capacities.[18] For agents who disavow common social stereotypes for ethical reasons, this creates a conflict between what we know and what we value. As Gendler puts it,
if you live in a society structured by racial categories that you disavow, either you must pay the epistemic cost of failing to encode certain sorts of base-rate or background information about cultural categories, or you must expend epistemic energy regulating the inevitable associations to which that information—encoded in ways to guarantee availability—gives rise. (2011: 37)
Gendler considers forbidden base rates, for example, which are useful statistical generalizations that utilize problematic social knowledge. People who are asked to set insurance premiums for hypothetical neighborhoods will accept actuarial risk as a justification for setting higher premiums for particular neighborhoods but will not do so if they are told that actuarial risk is correlated with the racial composition of those neighborhoods (Tetlock et al. 2000). This “epistemic self-censorship on non-epistemic grounds” makes it putatively impossible for agents to be both rational and equitable (Gendler 2011: 55, 57).
Egan (2011) raises problems for intuitive ways of defusing this dilemma, settling instead on the idea that making epistemic sacrifices for our ethical values may simply be worth it. Others have been unwilling to accept that implicit bias does in fact create an unavoidable ethical-epistemic dilemma (Mugg 2013; Beeghly 2014; Madva 2016b; Lassiter & Ballantyne 2017; Puddifoot 2017). One way of defusing the dilemma, for example, is to suggest that it is not social knowledge per se that has costs, but rather that the accessibility of social knowledge in the wrong circumstances has cognitive costs (Madva 2016b). The solution to the dilemma, then, is not ignorance, but the situation-specific regulation of stereotype accessibility. For example, the accessibility of social knowledge can be regulated by agents’ goals and habits (Moskowitz & Li 2011). Readers interested in ethical-epistemic dilemmas due to implicit bias should also consider related scholarship on “moral encroachment” (e.g., Basu & Schroeder 2018; Gardiner 2018).
Most philosophical writing on the ethics of implicit bias has focused on two distinct (but related) questions. First, are agents morally responsible for their implicit biases (§4.1)? Second, can agents change their implicit biases or control their effects on their judgments and behavior (§4.2)?
Researchers working on moral responsibility for implicit bias often make two key distinctions. First, they distinguish responsibility for attitudes from responsibility for judgments and behavior. One can, that is, ask whether agents are responsible for their putative (§2) implicit attitudes as such, or whether agents are responsible for the effects of their implicit attitudes on their judgments and behavior. Most have focused on the latter question, as will I. A second important distinction is between being responsible and holding responsible. This distinction can be glossed in a number of different but related ways. It can be glossed as a distinction between blameworthiness and actual expressions of blame; between backward- and forward-looking responsibility (i.e., responsibility for things one has done in the past versus responsibility for doing certain things in the future); and between responsibility as a form of judgment versus responsibility as a form of sanction. Most have focused on the former of these disjuncts (being responsible, blameworthiness, etc.) via three kinds of approaches: arguments from the importance of awareness or knowledge of one’s implicit biases (§4.1.1); arguments from the importance of control over the impact of one’s implicit biases on one’s judgment and behavior (§4.1.2); and arguments from “attributionist” and “Deep Self” considerations (§4.1.3; see Holroyd et al. 2017 for a more in-depth review of theories of moral responsibility and implicit bias).
It is plausible that conscious awareness of our implicit biases is a necessary condition for moral responsibility for those biases. Saul articulates the intuitive idea, suggesting that we
abandon the view that all biases against stigmatised groups are blameworthy … [because a] person should not be blamed for an implicit bias that they are completely unaware of, which results solely from the fact that they live in a sexist culture. (2013: 55, emphasis in original)
Saul’s claim appears to be in keeping with folk psychological attitudes about blameworthiness and implicit bias. Cameron and colleagues (2010) found that subjects were considerably more willing to ascribe moral responsibility to “John” when he was described as acting in discriminatory ways against black people despite “thinking that people should be treated equally, regardless of race” compared to when he was described as acting in discriminatory ways despite having a “sub-conscious dislike for African Americans” that he is “unaware of having”.
Recalling the evidence that people often do have awareness of their implicit biases (§3.1), it would seem that typical agents are responsible for those biases on the basis of the argument from awareness. However, if the question is whether agents are blameworthy for behaviors affected by implicit biases (rather than for having biases themselves), then perhaps impact awareness is what matters most (Holroyd 2012). That said, lacking impact awareness of the effects of implicit bias on our behavior may not exculpate agents from responsibility even in principle. One possibility is that implicit biases are analogous to moods in the sense that being in an introspectively unnoticed bad mood can cause one to act badly (Madva 2018). There is debate about whether unnoticed moods are exculpatory (e.g., Korsgaard 1997; Levy 2011). It may be that bad moods and implicit biases both diminish blameworthiness, but do not undermine it as such. This claim depends in part on moral responsibility admitting of degrees.
One problem with focusing on impact awareness, however, as Holroyd (2012) points out, is that we may be unaware of the impact of a great many cognitive states on our behavior. The focus on impact awareness may lead to a global skepticism about moral responsibility, in other words. This suggests that impact awareness may not serve as a good criterion for distinguishing responsibility for implicit biases from responsibility for other cognitive states, regardless of whether global skepticism about moral responsibility is defensible.
A second way to unpack the argument from awareness is to focus on what agents ought to know about implicit bias, rather than what they do know. This approach indexes moral responsibility to one’s social and epistemic environment. For example, Kelly & Roedder (2008) argue that a “savvy grader” is responsible for adjusting her grades to compensate for her likely biases because she ought to be aware of and compelled by research on implicit bias. In a similar spirit, Washington & Kelly (2016) compare two hypothetical egalitarians with equivalent psychological profiles, the only difference between them being that the “Old School Egalitarian” is evaluating résumés in 1980 and the “New Egalitarian” is doing so in 2014. While neither has heard of implicit bias, Washington & Kelly argue that the New Egalitarian is morally culpable in a way that the Old School Egalitarian isn’t. Only the New Egalitarian could have, and ought to have, known about his likely implicit biases, given the comparative states of the art of psychological research in 1980 and 2014. The underlying intuition here is that assessments of responsibility change with changes in an agent’s social and epistemic environment.
A third way of unpacking the argument from awareness is to focus on the way in which an attitude does or does not integrate with a variety of the agent’s other attitudes once it becomes conscious (Levy 2012; see §2.1). On this view, attitudes that cause responsible behavior are available to a broad range of cognitive systems. For example, in cognitive dissonance experiments (e.g., Festinger 1956), agents attribute confabulatory reasons to themselves and then tend to act in accord with those self-attributed reasons. The self-attribution of reasons in this case, according to Levy (2012), has an integrating effect on behavior, and thus can be thought of as underwriting the sort of agency required for moral responsibility. Crucially, it is when the agent becomes conscious of her self-attributed reasons that they have this integrating effect. This provides grounds for claiming that attitudes for which agents are responsible are those that integrate behavior when the agent becomes aware of the content of those attitudes. Implicit attitudes are not like this, according to Levy. What’s morally important is that
awareness of the content of our implicit attitudes fails to integrate them into our person-level concerns in the manner required for direct moral responsibility. (Levy 2012: 9)
The fact that implicit processes are often defined in contrast to “controlled” cognitive processes (§2.2) implies that they may affect behavior in a way that bypasses a person’s agential capacities. The fact that implicit biases seem to “rebound” in response to intentional efforts to suppress them supports this interpretation (Huebner 2009; Follenfant & Ric 2010). Early research suggesting that implicit biases reflect mere awareness of stereotypes, rather than personal attitudes, also implies that these states reflect processes that “happen to” agents. More recently, however, philosophers have questioned the ramifications of these and other data for the notion of control relevant to moral responsibility.
Perhaps the most familiar way of understanding control in the responsibility literature is in terms of a psychological mechanism that would allow an agent to act differently than she otherwise would act when there is sufficient reason to do so (Fischer & Ravizza 2000). The question facing this sort of reasons-responsiveness view of control is whether automatized behaviors—which unfold in the absence of explicit reasoning—should be thought of as under an agent’s control. Some have argued that automaticity and control are not mutually exclusive. Holroyd & Kelly (2016) advance a notion of “ecological control”, and Suhler and Churchland (2009) offer an account of nonconscious control that underwrites automaticity itself, yet is ostensibly sufficient for underwriting responsibility. Others have distinguished between automaticity and automatisms (e.g., sleepwalking); in this sense, the relevant moral distinction might be drawn in terms of agents’ ability to “pre-program” their automatic actions (but not automatistic actions) via previous controlled choices (e.g., Wigley 2007); it might be drawn in terms of agents’ ability to consciously monitor their automatic actions (e.g., Levy & Bayne 2004); or it might simply be the case that putative implicit attitudes are not automatic because they are readily changeable (e.g., Buckwalter forthcoming).[19] Others still have distinguished between “indirect” and “direct” control over one’s attitudes or behavior (e.g., Holroyd 2012; Levy & Mandelbaum 2014; Sie & Voorst Vader-Bours 2016). Holroyd (2012) argues that there are many things over which we do not hold direct and immediate control, yet for which we are commonly held responsible, such as learning a skill, speaking a foreign language, and even holding certain beliefs. None of these abilities or states can be had by fiat of will; rather, they take time and effort to obtain. This suggests that we can be held responsible for attitudes or behaviors over which we only have indirect long-range control. The question, then, of course, is whether agents can exercise indirect long-range control over their implicit biases. Mounting evidence suggests that we can (§4.2).
“Attributionist” and Deep Self theories of moral responsibility represent an alternative to arguments from awareness and control. According to these theories, for an agent to be responsible for an action is for that action to “reflect upon” the agent “herself”. A common way of speaking is to say that responsibility-bearing actions are attributable to agents in virtue of reflecting upon the agent’s “deep self”, where the deep self represents the person’s fundamental evaluative stance (Sripada 2016). Although there is much disagreement in the literature about what the deep self really is, as well as what it means for an attitude or action to reflect upon it, attributionists agree that people can be morally responsible for actions that are non-conscious (e.g., “failure to notice” cases), non-voluntary (e.g., actions stemming from strong emotional reactions), or otherwise divergent from an agent’s will (Frankfurt 1971; Watson 1975, 1996; Scanlon 1998; A. Smith 2005, 2008, 2012; Hieronymi 2008; Sher 2009; and H. Smith 2011).
One influential view developed in recent years is that agents are responsible for just those actions or attitudes that stem from, or are susceptible to modification by, the agent’s “evaluative” or “rational” judgments, which are judgments for which it is appropriate (in principle) to ask the agent her reasons (in a justifying sense) for holding (Scanlon 1998; A. Smith 2005, 2008, 2012). A. Smith suggests that implicit biases stem from rational judgments, because
a person’s explicitly avowed beliefs do not settle the question of what she regards as a justifying consideration. (2012: 581–582, fn 10)
An alternative approach sees the source of the “deep self” in an agent’s “cares” rather than in her rational judgments (Shoemaker 2003, 2011; Jaworska 2007; Sripada 2016). Cares have been described in different ways, but in this context are thought of as psychological states with motivational, affective, and evaluative dispositional properties. It is an open question whether implicit biases are reflective of an agent’s cares (Brownstein 2016a, 2018). It is also possible that even in cases in which an implicit bias is not attributable to an agent’s deep self, it may still be appropriate to hold the agent responsible for violating some duty or obligation she holds due to her implicit biases (Zheng 2016). Glasgow (2016) similarly argues for responsibility for implicit biases that may not be attributable to agents. His view unfolds in terms of responsibility for actions from which agents are nevertheless alienated. Glasgow defends this view on the basis of “Content-Sensitive Variantism” and “Harm-Sensitive Variantism”, a pair of views according to which alienation exculpates depending on extra-agential features of an action, such as the content of the action or the kind of harm it creates. These variantist views are fairly strongly revisionist with respect to traditional conceptions of responsibility in the 20th-century philosophical literature. Some have argued that research on implicit bias calls for revisionism of this sort (Vargas 2005; Faucher 2016).
Researchers working in applied ethics may be less concerned with questions about in-principle culpability and more concerned with investigating how to change or control our implicit biases. Of course, anyone committed to fighting against prejudice and discrimination will likely share this interest. Policymakers and workplace managers may also be concerned with finding effective interventions, given that they are already directing tremendous public and private resources toward anti-discrimination programs in workplaces, universities, and other domains affected by intergroup conflict. Yet as Paluck and Green (2009) suggest, the effectiveness of many of the strategies commonly used remains unclear. Most studies on prejudice reduction are non-experimental (lacking random assignment), are performed without control groups, focus on self-report surveys, and gather primarily qualitative (rather than quantitative) data.
An emerging body of laboratory-based research suggests, however, that strategies are available for regulating implicit biases. One way to classify these strategies is in terms of those that purport to change the apparent associations underlying agents’ implicit biases, compared with those that purport to leave implicit associations intact but enable agents to control the effects of their biases on their judgment and behavior (Stewart & Payne 2008; Mendoza et al. 2010; Lai et al. 2013). For example, a “change-based” strategy might reduce individuals’ automatic associations of “white” with “good”, while a “control-based” strategy might enable individuals to prevent that association from affecting their behavior. Below, I briefly describe some of these interventions. For comparison of the data on their effectiveness, see Lai and colleagues (2014, 2016), and for discussion of their significance for theories of the metaphysics of implicit bias, including a helpful appendix listing “debiasing” experiments, see Byrd (forthcoming).
Intergroup contact (Aberson et al. 2008; Dasgupta & Rivera 2008; see Anderson 2010 for discussion): long studied for its effects on explicit prejudice (e.g., Allport 1954; Pettigrew & Tropp 2006), interaction between members of different social groups appears to diminish implicit bias as well, albeit under some moderating conditions (e.g., equal-status interaction) and not under others.
Approach training (Kawakami et al. 2007, 2008; Phills et al. 2011): participants repeatedly “negate” stereotypes and “affirm” counter-stereotypes by pressing a button labelled “NO!” when they see stereotype-consistent images (e.g., of a black face paired with the word “athletic”) or “YES!” when they see stereotype-inconsistent images (e.g., of a white face paired with the word “athletic”). Other experimental scenarios have had participants push a joystick away from themselves to “negate” stereotypes and pull the joystick toward themselves to “affirm” counter-stereotypes.
Evaluative conditioning (Olson & Fazio 2006; De Houwer 2011): a widely used technique whereby an attitude object (e.g., a picture of a black face) is paired with another valenced attitude object (e.g., the word “genius”), which shifts the valence of the first object in the direction of the second.
Counter-stereotype exposure (Blair et al. 2001; Dasgupta & Greenwald 2001): increasing individuals’ exposure to images, film clips, or even mental imagery depicting members of stigmatized groups acting in stereotype-discordant ways (e.g., images of female scientists).
Implementation intentions (Gollwitzer & Sheeran 2006; Stewart & Payne 2008; Mendoza et al. 2010; Webb et al. 2012): “if-then” plans that specify a goal-directed response that an individual plans to perform on encountering an anticipated cue. For example, in a “Shooter Bias” test, where participants are given the goal to “shoot” all and only those individuals shown holding guns in a computer simulation, participants may be asked to adopt the plan, “if I see a black face, I will think ‘safe!’”[20]
“Cues for control” (Monteith 1993; Monteith et al. 2002): techniques for noticing prejudiced responses, in particular the affective discomfort caused by the inconsistency of those responses with participants’ egalitarian goals.
Priming goals, moods, and motivations (Huntsinger et al. 2010; Moskowitz & Li 2011; Mann & Kawakami 2012): priming egalitarian goals, multicultural ideologies, or particular moods can lower scores of prejudice on implicit measures.
There is some doubt about this way of categorizing interventions, as some control-based interventions may also change agents’ underlying associations and some association-based interventions may also promote control (Stewart & Payne 2008; Mendoza et al. 2010). More significant, though, are concerns about the efficacy of these interventions over time (Lai et al. 2016), their practical feasibility (Bargh 1999; Schneider 2004), and the possibility that they may distract from broader problems of economic and institutional forms of injustice (Anderson 2010; Dixon et al. 2012; see §5). Of course, most of the research on interventions like these is recent, so it is simply not clear yet which strategies, or combinations of strategies (Devine et al. 2012), will or won’t be effective. Some have voiced optimism about the role lab-based interventions like these can play as elements of broader efforts to combat prejudice and discrimination (e.g., Kelly et al. 2010a; Madva 2017).
Research on implicit bias has been criticized in several ways. Below are brief descriptions of, and discussion about, prominent lines of critique.[21] I leave aside critical assessments of specific implicit measures.
Research on implicit bias has received a lot of attention, not only in philosophy and psychology, but in politics, journalism, jurisprudence, business, and medicine as well. Some have worried that this attention is excessive, such that the explanatory power of research on implicit bias has been overstated (e.g., Singal 2017; Jussim 2018 (Other Internet Resources); Blanton & Ikizer 2019).
While the difficulty of public science communication is pervasive (i.e., not limited to implicit bias research), and the most egregious cases are found in the popular press, it is true that some researchers have overhyped the importance of implicit bias for explaining social phenomena. Hype can have disastrous consequences, such as creating public distrust in science. One important point to bear in mind, however, is that the challenges facing science communication and the challenges facing a body of research are distinct. That is, one question is whether the science is strong, and it is a separate question whether the strength of the science, such as it is, is accurately communicated to the public. Overhyped research may create incentives for scientists to do flashy but weak work—and this is a problem—but problems with hype are nevertheless distinct from problems with the science itself.
Some have argued that explicit bias can explain much of what implicit bias purports to explain (e.g., Hermanson 2017a,b, 2018 (Other Internet Resources); Singal 2017; Buckwalter 2018). Jesse Singal (2017), for example, denies that implicit bias is more important than explicit bias, pointing to the United States Department of Justice’s findings about intentional race-based discrimination in Ferguson, MO and to the fact that the United States elected a relatively explicitly racist President in 2016.
Singal and others are surely right that explicit bias and outright prejudice are persistent and, in some places, pervasive. It is, however, unclear who, if anyone, thinks that implicit bias is more important than explicit bias. Philosophers in particular have been interested in implicit bias because, despite the persistence and pervasiveness of explicit bias, there are many people—presumably many of those reading this article—who aim to think and act in unprejudiced ways, and yet are susceptible to the kinds of biased behavior implicit bias researchers have studied. This is not only an important phenomenon in its own right, but also may contribute causally to the mainstream complacence toward the very outrageous instances of bigotry Singal discusses. Implicit bias may also contribute causally to explicit bias, particularly in environments suffused with prejudiced norms (Madva 2019).
A related worry is that there is no agreement in the literature about what “implicit” means. Arguably the most common understanding is that “implicit” means “unconscious”. But whatever is assessed by implicit measures is arguably not unconscious (§3.1).
It is true that there is no widespread agreement about the meaning of “implicit”, and it is also true that no theory of implicit social cognition is consistent with all the current data. To what extent this is a problem depends on background theories about how science progresses. It is also crucial to recognize that implicit measures are not high-fidelity assessments of any one distinct “part” of the mind. They are not process pure (§1.2). This means that they capture a mix of various cognitive and affective processes. Included in this mix are people’s beliefs and explicit attitudes. Indeed, researchers have known for some time that the best way to predict a person’s scores on an implicit measure like the IAT is to ask them their opinions about the IAT’s targets. This does not mean that implicit measures lack “discriminant validity”, however (i.e., that they are redundant with existing measures). By analogy, people who say that cilantro is disgusting are likely to have aversive reactions to it, but this doesn’t mean that their aversive reactions are an invalid construct. Indeed, one of the leading theories of the dynamics and processes of implicit social cognition since 2006—APE (§2.2)—is based on a set of predictions about this process impurity (i.e., about the interactions of implicit and explicit evaluative processes).
Several meta-analyses have found that, according to standard conventions, the correlation between implicit measures and behavior is small to medium. Average correlations have ranged from approximately .14 to .37 (Cameron et al. 2012; Greenwald et al. 2009; Oswald et al. 2013; Kurdi et al. 2019). This variety is due to several factors, including the type of measure, the type of attitude measured (e.g., attitudes in general vs. intergroup attitudes in particular), inclusion criteria for meta-analyses, and statistical meta-analytic techniques. From these data, critics have concluded that implicit measures are poor predictors of behavior. Oswald and colleagues write, “the IAT provides little insight into who will discriminate against whom, and provides no more insight than explicit measures of bias” (2013, 18). Focusing on implicit bias research more broadly, Buckwalter suggests that a review of the evidence “casts doubt on the claim that implicit attitudes will be found to be significant causes of behavior” (2018, 11).
Several background questions must be considered in order to assess these claims. Should implicit measures be expected to have small, medium, or large unconditional (or “zero-order”) correlations with behavior? Zero-order correlations are those that obtain between two variables when no additional variable has been controlled for. Since the 1970s, research on self-reported attitudes has largely focused on when—under what conditions—attitudes predict behavior, not whether attitudes predict behavior just as such. For example, attitudes better predict behavior when there is clear correspondence between the attitude object and the behavior in question (Ajzen & Fishbein 1977). While generic attitudes toward the environment do not predict recycling behavior very well, for instance, specific attitudes toward recycling do (Oskamp et al. 1991). In the 1970s and 1980s, a consensus emerged that attitude-behavior relations depend in general on the particular behavior being measured (e.g., political judgments vs. racial judgments), the conditions under which the behavior is performed (e.g., under time pressure or not), and the person who is performing the behavior (e.g., personality; Zanna & Fazio 1982). A wealth of theoretical models of attitude-behavior relations take these facts into account to make principled predictions about when attitudes do and do not predict behavior (e.g., Fazio 1990). Similar work is underway focusing on implicit social cognition (for review see Gawronski & Hahn 2019 and Brownstein et al. 2020).
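For readers unfamiliar with the statistical terminology, the contrast between zero-order and controlled correlations can be stated formally. The variable names here are illustrative, not drawn from the studies cited: if X is a score on an implicit measure, Y a behavioral outcome, and Z a third variable (say, an explicit measure), the zero-order correlation is simply the Pearson correlation between X and Y, computed without reference to Z, whereas the standard partial correlation controlling for Z is

```latex
r_{XY \cdot Z} \;=\; \frac{r_{XY} - r_{XZ}\, r_{YZ}}{\sqrt{\left(1 - r_{XZ}^{2}\right)\left(1 - r_{YZ}^{2}\right)}}
```

The meta-analytic figures reported above are “unconditional” in this sense: no third variable has been partialed out or used to split the sample into conditions.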
In a related vein, it is also important to keep in mind that large zero-order correlations are rarely found in social science, let alone in attitude research. Large zero-order correlations should not be expected in implicit bias research, either (Gawronski, forthcoming). Indeed, the zero-order correlations between other familiar constructs and outcome measures are comparable to what has been found in meta-analyses of implicit measures: beliefs and stereotypes about outgroups and behavior (r = .12; Talaska et al. 2008); IQ and income (r = .2–.3; Strenze 2007); SAT scores and freshman grades in college (r = .24; Wolfe and Johnson 1995); parents’ and their children’s socioeconomic status (r = .2–.3; Strenze 2007). The fact that no meta-analysis of implicit measures has reported nonsignificant correlations close to zero or negative correlations with behavior further supports the conclusion that the relationship between implicit bias and behavior falls within the “zone” of the relationship between these more familiar constructs and relevant kinds of behavior. Whether this common pattern of findings in social science—of weak to moderate unconditional relations with behavior—is succor for supporters of implicit bias research or cause for concern about the social sciences in general is an important and open question (see, e.g., Greenwald et al. 2015; Oswald et al. 2015; Jost 2019; Gawronski forthcoming).[22] But note that the consistent finding of meta-analyses of implicit measures distinguishes this body of research from those that have been swept up in the social sciences’ ongoing “replication crisis”. That people, on average, display biases on implicit measures is one of the most stable and replicated findings in recent psychological science.[23] The debate described in this section pertains to interpreting the significance of this finding.
So-called “structuralist” critics (e.g., Banks & Ford 2009; Anderson 2010; Haslanger 2015; Ayala 2016, 2018; Mallon 2021) have argued that researchers ought to pay more attention to systemic and institutional causes of injustice—such as poverty, housing segregation, and economic inequality—rather than focusing on the biases inside the minds of individuals. One way to express the structuralist idea is that what happens in the minds of individuals, including their biases, is the product of social inequities rather than an explanation for them. Structuralists then tend to argue that our efforts to combat discrimination and inequity ought to focus on changing social structures themselves, rather than trying to change individuals’ biases directly. For example, Ayala argues that “agents’ mental states [are] … not necessary to understand and explain” when considering social injustice (2016, 9). Likewise, in her call to combat segregation in the contemporary United States, Anderson (2010) is critical of what she sees as a distracting focus on the psychology of bias.
A strong version of the structuralist critique—that research on the psychology of prejudice is entirely useless, distracting, or even dangerous—is hard to defend. Large-scale demographic research makes clear that psychological prejudice is a key driver of (for example) economic inequality (e.g., Chetty et al. 2018) and inequities in the criminal justice system (Goff et al. 2016). More broadly, no matter how autonomously certain social structures operate, people must choose to accept or reject those structures, to vote for politicians who speak for or against them, and so on. How people assess these options is at least in part a psychological question.
A weaker version of the structuralist critique calls for needed attention to the ways in which psychological and structural phenomena interact to produce and entrench discrimination and inequity. This “interactionism” seeks to understand how bias operates differently in different contexts. If you wanted to combat housing segregation, for example, you would want to consider not only problematic institutional practices, such as “redlining” certain neighborhoods within which banks will not give mortgage loans, and not only psychological factors, such as the propensity to perceive low-income people as untrustworthy, but the interaction of the two. A low-income person from a redlined neighborhood might not be perceived as untrustworthy when they are interviewing for a job as a nanny, but might be perceived as untrustworthy when they are interviewing for a loan. Adopting the view that bias and structure interact to produce unequal outcomes does not mean that researchers must always account for both. Sometimes it makes sense to emphasize one kind of cause or the other.
An interactionist version of structuralism can incorporate research on prejudice into a wider understanding of inequity, rather than eschew it. One way to do so is to identify ways in which psychological biases (whether implicit or explicit) might be key contributors to social-structural phenomena. For example, structuralists sometimes point to the drug laws and sentencing guidelines that contribute to the mass incarceration of black men in the USA as examples of systemic biases. Sometimes, however, when these laws and policies change, discrimination persists. While arrests have declined for all racial groups in states that have decriminalized marijuana, black people continue to be arrested for marijuana-related offenses at a rate of about 10 times that of white people (Drug Policy Alliance 2018). This suggests that psychological biases (belonging to officers, policy makers, or voters) are an ineliminable part of systemic inequity. Such interactionism is just one approach for blending individual and institutional approaches to intergroup discrimination (see, e.g., Madva 2016a, 2017; Davidson & Kelly forthcoming). Another idea is to incorporate research specifically on implicit bias into a wider understanding of the structural sources of inequity by using implicit measures to assess broad social patterns (rather than to assess the differences between individuals). The “Bias of Crowds” model (§2.5) argues that implicit bias is a feature of cultures and communities. For example, average scores on implicit measures of prejudice and stereotypes, when aggregated at the level of cities within the United States, predict racial disparities in shootings of citizens by police in those cities (Hehman et al. 2017). Thus, while it is certainly true that most of the relevant literature and discussion conceptualizes implicit bias as a way of differentiating between individuals, structuralists might utilize the data for differentiating regions, cultures, and so on.
Nosek and colleagues (2011) suggest that the second generation of research on implicit social cognition will come to be known as the “Age of Mechanism”. Several metaphysical questions fall under this label. One question crucial to the metaphysics of implicit bias is whether the relevant psychological constructs should be thought of as stable, trait-like features of a person’s identity or as momentary, state-like features of their current mindset or situation (§2.4). While current data suggest that implicit biases are more state-like than trait-like, methodological improvements may generate more stable, dispositional results on implicit measures. Ongoing research on additional psychometric properties of implicit measures—such as their discriminant validity and capacity to predict behavior—will also strengthen support for some theories of the metaphysics of implicit bias and weaken support for others. Another open metaphysical question is whether the mechanisms underlying different forms of implicit bias (e.g., implicit racial biases vs. implicit gender biases) are heterogeneous. Some have already begun to carve implicit social attitudes into kinds (Amodio & Devine 2006; Holroyd & Sweetman 2016; Del Pinal et al. 2017; Del Pinal & Spaulding 2018; Madva & Brownstein 2018). Future research on implicit bias in particular domains of social life may also help to illuminate this issue, such as research on implicit bias in legal practices (e.g., Lane et al. 2007; Kang 2009) and in medicine (e.g., Green et al. 2007; Penner et al. 2010), on the development of implicit bias in children (e.g., Dunham et al. 2013b), on implicit intergroup bias toward non-black racial minorities, such as Asians and Latinos (Dasgupta 2004), and cross-cultural research on implicit bias in non-Western countries (e.g., Dunham et al. 2013a).
Future research on epistemology and implicit bias may tackle a number of questions, for example: does the testimony of social and personality psychologists about statistical regularities justify believing that you are biased? What can developments in vision science tell us about illicit belief formation due to implicit bias? In what ways is implicit bias depicted and discussed outside academia (e.g., in stand-up comedy focusing on social attitudes)? Also germane are future methodological questions, such as how research on implicit social cognition may interface with large-scale correlational sociological studies on social attitudes and discrimination (Lee 2016). Another crucial methodological question is whether and how theories of implicit bias—and more generally psychological approaches to understanding social phenomena—can come to be integrated with broader social theories focusing on race, gender, class, disability, etc. Important discussions have begun (e.g., Valian 2005; Kelly & Roedder 2008; Faucher & Machery 2009; Anderson 2010; Machery et al. 2010; Madva 2017), but there is no doubt that more connections must be drawn to relevant work on identity (e.g., Appiah 2005), critical theory (e.g., Delgado & Stefancic 2012), feminist epistemology (Grasswick 2013), and race and political theory (e.g., Mills 1999).
As with all of the above, questions in theoretical ethics about moral responsibility for implicit bias will certainly be influenced by future empirical research. One noteworthy intersection of theoretical ethics with forthcoming empirical research will focus on the interpersonal effects of blaming and judgments about blameworthiness for implicit bias.[24] This research aims to have practical ramifications for mitigating intergroup conflict as well, of course. On this front, however, arguably the most pressing question is about the durability of psychological interventions once agents leave the lab. How long will shifts in biased responding last? Will individuals inevitably “relearn” their biases (cf. Madva 2017)? Is it possible to leverage the lessons of “situationism” in reverse, such that shifts in individuals’ attitudes create environments that provoke more egalitarian behaviors in others (Sarkissian 2010; Brownstein 2016b)? Moreover, what has (or has not) changed in people’s feelings, judgments, and actions now that research on implicit bias has received considerable public attention (e.g., Charlesworth & Banaji 2019)?
belief | cognitive science | feminist philosophy, interventions: moral psychology | feminist philosophy, interventions: social epistemology | moral responsibility | race | self-knowledge
Many thanks to Yarrow Dunham, Jules Holroyd, Bryce Huebner, Daniel Kelly, Calvin Lai, Carole Lee, Alex Madva, Eric Mandelbaum, Jennifer Saul, and Susanna Siegel for invaluable suggestions and feedback. Thanks also to the Leverhulme Trust for funding the “Implicit Bias and Philosophy” workshops at the University of Sheffield from 2011–2013, and to Jennifer Saul for running the workshops and making them a model of scholarship and collaboration at its best.
Copyright © 2019 by Michael Brownstein <msbrownstein@gmail.com>