Stanford Encyclopedia of Philosophy

Computational Philosophy

First published Mon Mar 16, 2020; substantive revision Mon May 13, 2024

Computational philosophy is the use of mechanized computational techniques to instantiate, extend, and amplify philosophical research. Computational philosophy is not philosophy of computers or computational techniques; it is rather philosophy using computers and computational techniques. The idea is simply to apply advances in computer technology and techniques to advance discovery, exploration and argument within any philosophical area.

After touching on historical precursors, this article discusses contemporary computational philosophy across a variety of fields: epistemology, metaphysics, philosophy of science, ethics and social philosophy, philosophy of language and philosophy of mind, often with examples of operating software. The treatment is far from exhaustive; the intention is rather to convey the spirit of each application through representative examples.


1. Introduction

Computational philosophy is not an area or subdiscipline of philosophy but a set of computational techniques applicable across many philosophical areas. The idea is simply to apply computational modeling and techniques to advance philosophical discovery, exploration and argument. One should not therefore expect a sharp break between computational and non-computational philosophy, nor a sharp break between computational philosophy and other computational disciplines.

The past half-century has seen impressive advances in raw computer power as well as theoretical advances in automated theorem proving, agent-based modeling, causal and system dynamics, neural networks, machine learning and data mining. What might contemporary computational technologies and techniques have to offer in advancing our understanding of issues in epistemology, ethics, social and political philosophy, philosophy of language, philosophy of mind, philosophy of science, or philosophy of religion?[1] Suggested by Leibniz and with important precursors in the history of formal logic, the idea is to apply new computational advances within long-standing areas of philosophical interest.

Computational philosophy is not the philosophy of computation, an area that asks about the nature of computation itself. Although applicable and informative regarding artificial intelligence, computational philosophy is not the philosophy of artificial intelligence. Nor is it an umbrella term for the questions about the social impact of computer use explored for example in philosophy of information, philosophy of technology, and computer ethics. More generally, there is no “of” that computational philosophy can be said to be the philosophy of. Computational philosophy represents not an isolated topic area but the widespread application of whatever computer techniques are available across the full range of philosophical topics. Techniques employed in computational philosophy may draw from standard computer programming and software engineering, including aspects of artificial intelligence, neural networks, systems science, complex adaptive systems, and a variety of computer modeling methods. As a growing set of methodologies, it includes the prospect of computational textual analysis, big data analysis, and other techniques as well. Its field of application is equally broad, unrestricted within the traditional discipline and domain of philosophy.

This article is an introduction to computational philosophy rather than anything like a complete survey. The goal is to offer a handful of suggestive examples across computational techniques and fields of philosophical application.

2. Anticipations in Leibniz

The only way to rectify our reasonings is to make them as tangible as those of the Mathematicians, so that we can find our error at a glance, and when there are disputes among persons, we can simply say: Let us calculate, without further ado, to see who is right. —Leibniz, The Art of Discovery (1685 [1951: 51])

Formalization of philosophical argument has a history as old as logic.[2] Logic is the historical source and foundation of contemporary computing.[3] Our topic here is more specific: the application of contemporary computing to a range of philosophical questions. But that too has a history, evident in Leibniz’s vision of the power of computation.

Leibniz is known for both the development of formal techniques in philosophy and the design and production of actual computational machinery. In 1642, the philosopher Blaise Pascal had invented the Pascaline, designed to add with carry and subtract. Between 1673 and 1720 Leibniz designed a series of calculating machines intended to instantiate multiplication and division as well: the stepped reckoner, employing what is still known as the Leibniz wheel (Martin 1925). The sole surviving Leibniz step reckoner was discovered in 1879 as workmen were fixing a leaking roof at the University of Göttingen. In correspondence, Leibniz alluded to a cryptographic encoder and decoder using the same mechanical principles. On the basis of those descriptions, Nicholas Rescher has produced a working conjectural reconstruction (Rescher 2012).

But Leibniz had visions for the power of computation far beyond mere arithmetic and cryptography. Leibniz’s 1666 Dissertatio De Arte Combinatoria trumpets the “art of combinations” as a method of producing novel ideas and inventions as well as analyzing complex ideas into simpler elements (Leibniz 1666 [1923]). Leibniz describes it as the “mother of inventions” that would lead to the “discovery of all things”, with applications in logic, law, medicine, and physics. The vision was of a set of formal methods applied within a perfect language of pure concepts which would make possible the general mechanization of reason (Gray 2016).[4]

The specifics of Leibniz’s combinatorial vision can be traced back to the mystical mechanisms of Raymond Llull circa 1308, combinatorial mechanisms lampooned in Jonathan Swift’s Gulliver’s Travels of 1726 as allowing one to

write books in philosophy, poetry, politics, mathematics, and theology, without the least assistance from genius or study. (Swift 1726: 174, Lem 1964 [2013: 359])

Combinatorial specifics aside, however, Leibniz’s vision of an application of computational methods to substantive questions remains. It is the vision of computational physics, computational biology, computational social science, and—in application to perennial questions within philosophy—of computational philosophy.

3. Computational Philosophy by Example

Despite Leibniz’s hopes for a single computational method that would serve as a universal key to discovery, computational philosophy today is characterized by a number of distinct computational approaches to a variety of philosophical questions. Particular questions and particular areas have simply seemed ripe for various models, methodologies, or techniques. Both attempts and results are therefore scattered across a range of different areas. In what follows we offer a survey of various explorations in computational philosophy.

3.1 Social Epistemology and Agent-Based Modeling

Computational philosophy is perhaps most easily introduced by focusing on applications of agent-based modeling to questions in social epistemology, social and political philosophy, philosophy of science, and philosophy of language. Sections 3.1 through 3.3 are therefore structured around examples of agent-based modeling in these areas. Other important computational approaches and other areas are discussed in 3.4 through 3.6.

Traditional epistemology—the epistemology of Plato, Hume, Descartes, and Kant—treats the acquisition and validation of knowledge on the individual level. The question for traditional epistemology was always how I as an individual can acquire knowledge of the objective world, when all I have to work with is my subjective experience. Perennial questions of individual epistemology remain, but the last few decades have seen the rise of a very different form of epistemology as well. Anticipated in early work by Alvin I. Goldman, Helen Longino, Philip Kitcher, and Miriam Solomon, social epistemology is now evident both within dedicated journals and across philosophy quite generally (Goldman 1987; Longino 1990; Kitcher 1993; Solomon 1994a, 1994b; Goldman & Whitcomb 2011; Goldman & O’Connor 2001 [2019]; Longino 2019). I acquire my knowledge of the world as a member of a social group: a group that includes those inquirers that constitute the scientific enterprise, for example. In order to understand the acquisition and validation of knowledge we have to go beyond the level of individual epistemology: we need to understand the social structure, dynamics, and process of scientific investigation. It is within this social turn in epistemology that the tools of computational modelling—agent-based modeling in particular—become particularly useful (Klein, Marx and Fischbach 2018).

The following two sections use computational work on belief change as an introduction to agent-based modeling in social epistemology. Closely related questions regarding scientific communication are left to sections 3.2.2 and 3.2.3.

3.1.1 Belief change and opinion polarization

How should we expect beliefs and opinions to change within a social group? How might they rationally change? The computational approach to these kinds of questions attempts to understand basic dynamics of the target phenomenon by building, running, and analyzing simulations. Simulations may start with a model of interactive dynamics and initial conditions, which might include, for example, the initial beliefs of individual agents and how prone those agents are to share information and listen to others. The computer calculates successive states of the model (“steps”) as a function (typically stochastic) of preceding stages. Researchers collect and analyze simulation outputs, which might include, for example, the distribution of beliefs in the simulated society after a certain number of rounds of communication. Because simulations typically involve many stochastic elements (which agents talk with which agents at what point in the simulation, what specific beliefs specific agents start with, etc.), data is usually collected and analyzed across a large number of simulation runs.

One model of belief change and opinion polarization that has been of wide interest is that of Hegselmann and Krause (2002, 2005, 2006), which offers a clear and simple example of the application of agent-based techniques.

Opinions in the Hegselmann-Krause model are mapped as numbers in the [0, 1] interval, with initial opinions spread uniformly at random in an artificial population. Individuals update their beliefs by taking an average of the opinions that are “close enough” to their own. As agents’ beliefs change, a different set of agents or a different set of values can be expected to influence further updating. A crucial parameter in the model is the threshold of what counts as “close enough” for actual influence.[5]
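The update rule just described is compact enough to sketch directly. The following is a minimal illustration rather than Hegselmann and Krause's own code; the population size, threshold, and seed are arbitrary choices.

```python
import random

def hk_step(opinions, eps):
    """One synchronous bounded-confidence update: each agent adopts the mean
    of all opinions (its own included) lying within eps of its current view."""
    return [sum(y for y in opinions if abs(y - x) <= eps) /
            sum(1 for y in opinions if abs(y - x) <= eps)
            for x in opinions]

def simulate(n=50, eps=0.25, steps=30, seed=0):
    """Run a population of n agents with uniformly random initial opinions."""
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n)]
    for _ in range(steps):
        opinions = hk_step(opinions, eps)
    return opinions
```

With a small eps, agents freeze into many local clusters; raising eps toward 0.25 typically yields the two-group and single-consensus outcomes discussed below.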

Figure 1 shows the changes in agent opinions over time in single runs with thresholds ε set at 0.01, 0.15, and 0.25 respectively. With a threshold of 0.01, individuals remain isolated in a large number of small local groups. With a threshold of 0.15, the agents form two permanent groups. With a threshold of 0.25, the groups fuse into a single consensus opinion. These are typical representative cases, and runs vary slightly. As might be expected, all results depend on both the number of individual agents and their initial random locations across the opinion space. See the interactive simulation of the Hegselmann and Krause bounded confidence model in the Other Internet Resources section below.

three graphs: link to extended description below

Figure 1: Example changes in opinion across time from single runs with different threshold values \(\varepsilon \in \{0.01, 0.15, 0.25\}\) in the Hegselmann and Krause (2002) model. [An extended description of figure 1 is in the supplement.]

An illustration of average outcomes for different threshold values appears as figure 2. What is represented here is not change over time but rather the final opinion positions given different threshold values. As the threshold value climbs from 0 to roughly 0.20, there is an increasing number of results with concentrations of agents at the outer edges of the distribution, which themselves are moving inward. Between 0.22 and 0.26 there is a quick transition from results with two final groups to results with a single final group. For values still higher, the two sides are sufficiently within reach that they coalesce on a central consensus, although the exact location of that final monolithic group changes from run to run, creating the fat central spike shown. Hegselmann and Krause describe the progression of outcomes with an increasing threshold as going through three phases: “from fragmentation (plurality) over polarisation (polarity) to consensus (conformity)” (2002: 11, authors’ italics).

a 3-d graph: link to extended description below

Figure 2: Frequency of equilibrium opinion positions for different threshold values in the Hegselmann and Krause model scaled to [0, 100] (as original with axes relabeled; Hegselmann and Krause 2002). [An extended description of figure 2 is in the supplement.]

A number of models further refine the “bounded confidence” mechanisms of the Hegselmann-Krause model. Deffuant et al., for example, replace the sharp cutoff of influence in Hegselmann-Krause with continuous influence values (Deffuant et al. 2002; Deffuant 2006; Meadows & Cliff 2012). Agents are again assigned both opinion values and threshold (“uncertainty”) ranges, but the extent to which the opinion of agent i is influential on agent j is proportional to the ratio of the overlap of their ranges (opinion plus or minus threshold) over i’s range. Opinion centers and threshold ranges are updated accordingly, resulting in the possibility of individuals with narrower and wider ranges. Given the updating algorithm, influence may also be asymmetric: individuals with a narrower range of tolerance, which Deffuant et al. interpret as higher confidence or lower uncertainty, will be more influential on individuals with a wider range than vice versa. The influence on polarization of “stubborn” individuals who do not change, and of agents on extremes, has also been studied, showing a clear impact on the dynamics of belief change in the group.[6]
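One way to read the overlap mechanism in code is the following sketch. It follows the verbal description above (influence proportional to range overlap relative to the influencing agent's range) rather than reproducing the exact published relative-agreement rule, and the mixing parameter mu is an arbitrary choice.

```python
def overlap_step(x, u, i, j, mu=0.5):
    """Agent i influences agent j: compute the overlap of their opinion
    ranges [x - u, x + u]; if they overlap, pull j's opinion center and
    uncertainty toward i's, weighted by overlap relative to i's range."""
    h = min(x[i] + u[i], x[j] + u[j]) - max(x[i] - u[i], x[j] - u[j])
    if h <= 0:
        return                    # ranges do not overlap: no influence
    w = mu * h / (2 * u[i])       # narrower u[i] -> stronger pull on j
    x[j] += w * (x[i] - x[j])
    u[j] += w * (u[i] - u[j])
```

Note the asymmetry the text describes: because the weight is normalized by i's range, a confident (narrow-range) agent pulls a wide-range agent more strongly than the reverse.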

Erik Olsson and Staffan Angere have developed a sophisticated program in which the interaction of agents is modelled within a Bayesian network of both information and trust (Olsson 2011). Their program, Laputa, has a wide range of applications, one of which is a model of polarization interpreted in terms of the Persuasive Argument Theory in psychology and which replicates an effect seen in empirical studies: the increasing divergence of polarized groups (Lord, Ross, & Lepper 1979; Isenberg 1986; Olsson 2013). Olsson raises the question of whether polarization may be epistemically rational, offering a positive answer. O’Connor and Weatherall (2018) and Singer et al. (2019) also argue that polarization can be rational, using different models and perhaps different senses of polarization (Bramson et al. 2017). Kevin Dorst uses simulation as part of an argument that polarization can be a predictable result if fully rational agents, while aiming for accuracy, selectively find flaws in evidence opposed to their current view. Initial divergences, he argues, can be the result of iterated Bayesian updating on ambiguous evidence (Dorst 2023).

The topic of polarization is anticipated in an earlier tradition of cellular automata models initiated by Robert Axelrod. The basic premise of Axelrod (1997) is that people tend to interact more with those like themselves and tend to become more like those with whom they interact. But if people come to share one another’s beliefs (or other cultural features) over time, why do we not observe complete cultural convergence? At the core of Axelrod’s model is a spatially instantiated imitative mechanism that produces cultural convergence within local groups but also results in progressive differentiation and cultural isolation between groups.

100 agents are arranged on a \(10 \times 10\) lattice such as that illustrated in Figure 3. Each agent is connected to four others: top, bottom, left, and right. The exceptions are those at the edges or corners of the array, connected to only three and two neighbors, respectively. Agents in the model have multiple cultural “features”, each of which carries one of multiple possible “traits”. One can think of the features as categorical variables and the traits as options or values within each category. For example, the first feature might represent culinary tradition, the second the style of dress, the third music, and so on. In the base configuration an agent’s “culture” is defined by five features \((F = 5)\) each having one of 10 traits \((q = 10),\) numbered 0 through 9. Agent x might have \(\langle 8, 7, 2, 5, 4\rangle\) as a cultural signature while agent y is characterized by \(\langle 1, 4, 4, 8, 4\rangle\). Agents are fixed in their lattice location and hence their interaction partners. Agent interaction and imitation rates are determined by neighbor similarity, where similarity is measured as the percentage of feature positions that carry identical traits. With five features, if a pair of agents share exactly one such element they are 20% similar; if two elements match then they are 40% similar, and so forth. In the example just given, agents x and y have a similarity of 20% because they share only one feature.


41846 09617 06227 73975 78196 98865 67856 39579 46292 39070
95667 34557 85463 49129 83446 31042 78640 70518 61745 96211
47298 86948 54261 75923 02665 97330 67790 69719 45520 37354
09575 72785 94991 70805 04952 52299 99741 12929 18932 81593
02029 94602 14852 94392 83121 84309 33260 44121 19166 73581
84484 93579 09052 12567 72371 08352 25212 39743 45785 55341
69263 94414 25246 68061 12208 44813 02717 90699 94938 05728
98129 44971 86427 26499 05885 45788 40317 08520 35527 73303
18261 18215 70977 15211 92822 74561 60786 34255 07420 42317
30487 23057 24656 03204 60418 56359 57759 01783 21967 84773

Figure 3: Typical initial set of “cultures” for a basic Axelrod-style model consisting of 100 agents on a \(10 \times 10\) lattice with five features and 10 possible traits per agent. Each five-digit group is one agent’s cultural signature. The marked site shares two of five traits with the site above it, giving it a cultural similarity score of 40% (Axelrod 1997).

For each iteration, the model picks at random an agent to be active and one of its neighbors. With probability equal to their cultural similarity, the two sites interact and the active agent changes one of its dissimilar elements to that of its neighbor. If agent \(i = \langle 8, 7, 2, 5, 4\rangle\) is chosen to be active and it is paired with its neighbor agent \(j = \langle 8, 4, 9, 5, 1\rangle,\) for example, the two will interact with a 40% probability because they have two elements in common. If the interaction does happen, agent i changes one of its mismatched elements to match that of j, becoming perhaps \(\langle 8, 7, 2, 5, 1\rangle.\) This change creates a similarity score of 60%, yielding an increased probability of future interaction between the two.
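The interaction step just described can be sketched as follows. This is an illustrative reconstruction, not Axelrod's original code, using the base parameters \(F = 5\), \(q = 10\) on a \(10 \times 10\) lattice.

```python
import random

F, Q, N = 5, 10, 10   # features, traits per feature, lattice side

def similarity(a, b):
    """Fraction of feature positions that carry identical traits."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def neighbors(r, c):
    """Lattice neighbors above, below, left, and right (fewer at edges)."""
    return [(r + dr, c + dc) for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
            if 0 <= r + dr < N and 0 <= c + dc < N]

def step(grid, rng):
    """One event: a random active site and a random neighbor interact with
    probability equal to their similarity; if they do, the active site
    copies one of its mismatched traits from the neighbor."""
    r, c = rng.randrange(N), rng.randrange(N)
    nr, nc = rng.choice(neighbors(r, c))
    a, b = grid[r][c], grid[nr][nc]
    if rng.random() < similarity(a, b):
        mismatches = [k for k in range(F) if a[k] != b[k]]
        if mismatches:
            k = rng.choice(mismatches)
            a[k] = b[k]

rng = random.Random(0)
grid = [[[rng.randrange(Q) for _ in range(F)] for _ in range(N)]
        for _ in range(N)]
for _ in range(10000):
    step(grid, rng)
```

Iterated long enough, runs of this kind settle into the locally convergent, globally separated patterns described below.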

In the course of approximately 80,000 iterations, Axelrod’s model produces large areas in which cultural features are identical: local convergence. It is also true, however, that arrays such as that illustrated do not typically move to full convergence. They instead tend to produce a small number of culturally isolated stable regions—groups of identical agents none of whom share features in common with adjacent groups and so cannot further interact. As an array develops, agents interact with increasing frequency with those with whom they become increasingly similar, interacting less frequently with the dissimilar agents. With only a mechanism of local convergence, small pockets of similar agents emerge that move toward their own homogeneity and away from that of other groups. With the parameters described above, Axelrod reports a median of three stable regions at equilibrium. It is this phenomenon of global separation that Axelrod refers to as “polarization”. See the interactive simulation of the Axelrod polarization model in the Other Internet Resources section below.

Axelrod notes a number of intriguing results from the model, many of which have been further explored in later work. Results are very sensitive to the number of features F and traits q used as parameters, for example. Changing numbers of features and traits changes the final number of stable regions in opposite directions: the number of stable regions correlates negatively with the number of features F but positively with the number of traits q (Klemm et al. 2003). In Axelrod’s base case with \(F = 5\) and \(q = 10\) on a \(10 \times 10\) lattice, the result is a median of three stable regions. When q is increased from 10 to 15, the number of final regions increases from three to 20; increasing the number of traits increases the number of stable groups dramatically. If the number of features F is increased to 15, in contrast, the average number of stable regions drops to only 1.2 (Axelrod 1997). Further explorations of parameters of population size, configuration, and dynamics, with measures of relative size of resultant groups, appear in Klemm et al. (2003a,b,c, 2005) and in Centola et al. (2007).

One result that computational modeling promises regarding a phenomenon such as opinion polarization is an understanding of the phenomenon itself: how real opinion polarization might happen, and how it might be avoided. Another and very different outcome, however, is created by the fact that computational modeling both offers and demands precision about concepts and measures that may otherwise be lacking in theory. Bramson et al. (2017), for example, argue that “polarization” has a range of possible meanings across the literature in which it appears, different aspects of which are captured by different computational models with different measures.

3.1.2 The social dynamics of argument

In general, the models of belief change reviewed above treat beliefs as items that spread by contact, much on the model of infection dynamics (Grim, Singer, Reade, & Fisher 2015, though Riegler & Douven 2009 can be seen as an exception). Other attempts have been made to model belief change in greater detail, motivated by reasons or arguments.

With gestures toward earlier work by Phan Minh Dung (1995), Gregor Betz constructs a model of belief change based on “dialectical structures” of linked arguments (Betz 2013). Sentences and their negations are represented as positive and negative integers, arguments as ordered sets of sentences, with two forms of links between arguments: an attack relation, in which a conclusion of one argument contradicts a premise of another, and a support relation, in which the conclusion of one argument is equivalent to a premise of another (Figure 4). A “position” on a dialectical structure, complete or partial, consists of an assignment of truth values T or F to the elements of the set of sentences involved. Consistent positions relative to a structure are those in which contradictory sentences are assigned opposite truth values and every argument in which all premises are assigned T has a conclusion which is assigned T as well. Betz then maps the space of coherent positions for a given dialectical structure as an undirected network, with links between positions that differ in the truth-value of just one sentence of the set.

a diagram: link to extended description below

Figure 4: A dialectical structure of propositions and their negations as positive and negative numbers, with two complete positions indicated by values of T and F. The left assignment is consistent; the right assignment is not (after Betz 2013). [An extended description of figure 4 is in the supplement.]
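Betz's consistency requirement lends itself to a direct check. In the following sketch (an illustration, not Betz's implementation), sentences are nonzero integers with \(-s\) the negation of \(s\), an argument is a pair of premises and a conclusion, and a complete position maps each sentence to a truth value.

```python
def consistent(position, arguments):
    """A complete position is consistent relative to a structure iff
    (i) contradictory sentences get opposite truth values, and
    (ii) every argument whose premises are all T has a conclusion valued T."""
    if any(position[s] == position[-s] for s in position if -s in position):
        return False
    return all(position[c] for premises, c in arguments
               if all(position[p] for p in premises))

# A two-argument structure: sentence 1 supports 2; sentence 3 supports -2
# (so the second argument attacks the first via its conclusion).
args = [((1,), 2), ((3,), -2)]
pos_ok = {1: True, -1: False, 2: True, -2: False, 3: False, -3: True}
pos_bad = {1: True, -1: False, 2: False, -2: True, 3: False, -3: True}
```

Here `pos_ok` is consistent, while `pos_bad` fails because the first argument's premise is T but its conclusion is F.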

In the simplest form of the model, two agents start with random assignments to a set of 20 sentences with consistent assignments to their negations. Arguments are added randomly, starting from a blank slate, and agents move to the coherent position closest to their previous position, with a random choice in the case of a draw. In variations on the basic structure, Betz considers (a) cases in which an initial background agreement is assumed, (b) cases of “controversial” argumentation, in which arguments are introduced which support a proponent’s position or attack an opponent’s, and (c) cases in which up to six agents are involved. In two series of simulations, he tracks both the consensus-conduciveness of different parameters and—with an assumption of a specific assignment as the “truth”—the truth-conduciveness of different parameters.

In individual runs, depending on initial positions and arguments introduced, Betz finds that argumentation of the sort modeled can either increase or decrease agreement, and can track the truth or lead astray. Averaging across many debates, however, Betz finds that controversial argumentation in particular is both consensus-conducive and better tracks the truth.[7]

3.2 Computational Philosophy of Science

Computational models have been used in philosophy of science in two very different respects: (a) as models of scientific theory, and (b) as models of the social interaction characteristic of collective scientific research. The next sections review some examples of each.

3.2.1 Network models of scientific theory

“Computational philosophy of science” is enshrined as a book title as early as Paul Thagard’s 1988. A central core of his work is the connectionist program ECHO, which constructs network structures of scientific explanation (Thagard 1992, 2012). From inputs of “explain”, “contradict”, “data”, and “analogous” for the status and relation of nodes, ECHO uses a set of principles of explanatory coherence to construct a network of undirected excitatory and inhibitory links between nodes which “cohere” and those which “incohere”, respectively. If \(p_1\) through \(p_m\) explain \(q\), for example, all of \(p_1\) through \(p_m\) cohere with \(q\) and with each other, though the weight of coherence is divided by the number of \(p_1\) through \(p_m\). If \(p_1\) contradicts \(p_2\), or \(p_1\) and \(p_2\) are parts of competing explanations for the same phenomenon, they “incohere”.

Starting with initial node activations close to zero, the nodes of the coherence network are synchronously updated in terms of their old activation and weighted input from linked nodes, with “data” nodes set as a constant input of 1. Once the network settles down to equilibrium, an explanatory hypothesis \(p_1\) is taken to defeat another \(p_2\) if its activation value is higher—at least generally, positive as opposed to negative (Figure 5).

E and P1 each have solid lines connecting them to Q1 and Q2. P2 has a dotted line connecting it to P1 and a solid line connecting it to Q2.

Figure 5: An ECHO network for hypotheses P1 and P2 and evidence units Q1 and Q2. Solid lines represent excitatory links, the dotted line an inhibitory link. Because Q1 and Q2 are evidence nodes, they take a constant excitatory value of 1 from E. Started from values of 0.01 and following Thagard’s updating, P1 dominates P2 once the network has settled down: a hypothesis that explains more dominates its alternative. Adapted from Thagard 1992.
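The small network of Figure 5 can be simulated directly. The sketch below is a reconstruction, not Thagard's ECHO code; the weights (0.04 excitatory, -0.06 inhibitory, 0.05 from the special unit E) and the 0.05 decay are commonly cited ECHO defaults but should be treated here as assumptions.

```python
EXCIT, INHIB, DATA, DECAY = 0.04, -0.06, 0.05, 0.05

# P1 explains Q1 and Q2; P2 explains only Q2; P1 and P2 compete.
links = {
    ("P1", "Q1"): EXCIT, ("P1", "Q2"): EXCIT,
    ("P2", "Q2"): EXCIT, ("P1", "P2"): INHIB,
    ("E", "Q1"): DATA, ("E", "Q2"): DATA,
}

def settle(steps=300):
    """Synchronously update activations until the network settles; the
    special unit E stays clamped at 1, all other nodes start at 0.01.
    Activations are bounded in [-1, 1] by the update rule."""
    a = {"E": 1.0, "P1": 0.01, "P2": 0.01, "Q1": 0.01, "Q2": 0.01}
    for _ in range(steps):
        net = {n: 0.0 for n in a}
        for (i, j), w in links.items():   # links are symmetric (undirected)
            net[i] += w * a[j]
            net[j] += w * a[i]
        for n in a:
            if n == "E":
                continue                  # clamped data source
            if net[n] > 0:
                a[n] = a[n] * (1 - DECAY) + net[n] * (1 - a[n])
            else:
                a[n] = a[n] * (1 - DECAY) + net[n] * (a[n] + 1)
    return a
```

Under these assumed parameters P1's activation settles at a positive value while P2's goes negative: the hypothesis that explains more evidence dominates, as the figure's caption describes.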

Thagard is able to show that such an algorithm effectively echoes a range of familiar observations regarding theory selection. Hypotheses that explain more defeat those that explain less, for example, and simpler hypotheses are to be preferred. In contrast to simple Popperian refutation, ECHO abandons a hypothesis only when a dominating hypothesis is available. Thagard uses the basic approach of explanatory coherence, instantiated in ECHO, in an analysis of a number of historical cases in the history of science, including the abandonment of phlogiston theory in favor of oxygen theory, the Darwinian revolution, and the eventual triumph of Wegener’s plate tectonics and continental drift.

The influence of Bayesian networks has been far more widespread, both across disciplines and in technological application—application made possible only with computers. Grounded in the work of Judea Pearl (1988, 2000; Pearl & Mackenzie 2018), Bayesian networks are directed acyclic graphs in which nodes represent variables that can be read as either probabilities or degrees of belief, and directed edges as conditional probabilities from “parent” to “child”. By the Markov convention, the value of a node is independent of all other nodes that are not its descendants, conditional on its parents. A standard textbook example is shown in Figure 6.

a diagram: link to extended description below

Figure 6: A standard example of a simple Bayesian net. [An extended description of figure 6 is in the supplement.]

Changes of values at the nodes of a Bayesian network (in response to evidence, for example) are updated through belief propagation algorithms applied at every node. The update of a response to input from a parent uses the conditional probabilities of the link. A parent’s response to input from a child uses the related likelihood ratio (see also the supplement on Bayesian networks in Bringsjord & Govindarajulu 2018 [2019]). Reading some variables as hypotheses and others as pieces of evidence, simple instances of core scientific concepts can easily be read off such a structure. Simple explanation amounts to showing how the value of a variable “downstream” depends on the pattern “upstream”. Simple confirmation amounts to an increase in the probability or degree of belief of a node h upstream given a piece of evidence e downstream. Evaluating competing hypotheses consists in calculating the comparative probability of different patterns upstream (Climenhaga 2020, 2023; Grim et al. 2022a). In tracing the dynamics of credence changes across Bayesian networks subjected to an ‘evidence barrage,’ it has been argued that a Kuhnian pattern of normal science punctuated with occasional radical shifts follows from Bayesian updating in networks alone (Grim et al. 2022b).
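The notion of confirmation as an upstream credence increase can be seen in the smallest possible case, a two-node network h → e. The numbers below are arbitrary illustrations, not drawn from any cited model.

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Bayes' rule for a two-node network h -> e: degree of belief in the
    upstream hypothesis h after observing the downstream evidence e."""
    joint_h = prior_h * p_e_given_h
    joint_not_h = (1 - prior_h) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# e confirms h whenever h makes e more likely than not-h does:
p = posterior(0.3, 0.9, 0.2)   # credence in h rises from 0.3 to about 0.66
```

If instead e is equally likely under h and not-h, the posterior equals the prior and e confirms nothing.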

As Pearl notes, a Bayesian network is nothing more than a graphical representation of a huge table of joint probabilities for the variables involved (Pearl & Mackenzie 2018: 129). Given any sizable number of variables, however, calculation becomes humanly unmanageable—hence the crucial use of computers. The fact that Bayesian networks are so computationally intensive is in fact a point that Thagard makes against using them as models of human cognitive processing (Thagard 1992: 201). But that is not an objection against other philosophical interpretations. One clear reading of networks is as causal graphs. Application to philosophical questions of causality in philosophy of science is detailed in Spirtes, Glymour, and Scheines (1993) and Sprenger and Hartmann (2019). Bayesian networks are now something of a standard in artificial intelligence, ubiquitous in applications, and powerful algorithms have been developed to extract causal networks from the massive amounts of data available.

3.2.2 Network models of scientific communication

It should be no surprise that the computational studies of belief change and opinion dynamics noted above blend smoothly into a range of computational studies in philosophy of science. Here a central motivating question has been one of optimal investigatory structure: what pattern of scientific communication and cooperation, between what kinds of investigators, is best positioned to advance science? There are two strands of computational philosophy of science that attempt to work toward an answer to this question. The first strand models the effect of communicative networks within groups. The second strand, left to the next section, models the effects of cognitive diversity within groups. This section outlines what makes modeling of both sorts promising, but also notes limitations and some failures as well.

One might think that access to more data by more investigators would inevitably optimize the truth-seeking goals of communities of investigators. On that intuition, faster and more complete communication—the contemporary science of the internet—would allow faster, more accurate, and more thorough exploration of nature. Surprisingly, however, this first strand of modeling offers robust arguments for the potential benefits of limited communication.

In the spirit of rational choice theory, much of this work was inspired by analytical work in economics on infinite populations by Venkatesh Bala and Sanjeev Goyal (1998), computationally implemented for small populations in a finite context and with an eye to philosophical implications by Kevin Zollman (2007, 2010a, 2010b). In Zollman’s model, Bayesian agents choose between a current method \(\phi_1\) and what is set as a better method \(\phi_2,\) starting with random beliefs and allowing agents to pursue the investigatory action with the highest subjective utility. Agents update their beliefs based on the results of their own tests—drawn from a distribution for that action—together with results from the other agents to which they are communicatively connected. A community is taken to have successfully learned when all agents converge on the better \(\phi_2.\)
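A minimal sketch of a model in this spirit might pit Beta-Bernoulli agents, each myopically choosing between two methods, against one another on a communication network. The success rates, trial counts, and updating details below are illustrative assumptions rather than Zollman’s exact parameters:

```python
# A hedged sketch of a Zollman-style network epistemology model. Agents
# hold Beta beliefs about two methods, myopically test the one they
# currently believe better, and update on their own results plus those
# of their network neighbors. All parameter values are illustrative.
import random

P_OLD, P_NEW = 0.5, 0.55        # phi_1 and the objectively better phi_2
TRIALS_PER_ROUND = 10

def run(neighbors, n_agents, rounds=300, seed=0):
    """Return True if every agent ends up favoring the better method."""
    rng = random.Random(seed)
    # Beta(alpha, beta) beliefs about each method's success rate.
    beliefs = [[[1 + rng.random() * 4, 1 + rng.random() * 4]
                for _ in range(2)] for _ in range(n_agents)]
    for _ in range(rounds):
        results = []
        for i in range(n_agents):
            means = [b[0] / (b[0] + b[1]) for b in beliefs[i]]
            arm = 0 if means[0] > means[1] else 1   # myopic choice
            p = P_OLD if arm == 0 else P_NEW
            succ = sum(rng.random() < p for _ in range(TRIALS_PER_ROUND))
            results.append((arm, succ))
        for i in range(n_agents):
            for j in [i] + neighbors(i):   # own result plus neighbors'
                arm, succ = results[j]
                beliefs[i][arm][0] += succ
                beliefs[i][arm][1] += TRIALS_PER_ROUND - succ
    return all(b[1][0] / sum(b[1]) > b[0][0] / sum(b[0]) for b in beliefs)

N = 10
ring = lambda i: [(i - 1) % N, (i + 1) % N]
complete = lambda i: [j for j in range(N) if j != i]
print(run(ring, N), run(complete, N))
```

Averaging success over many seeds for each network structure is what allows comparisons of the kind Zollman reports; a single run, as here, only illustrates the mechanics.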

Zollman’s results are shown in Figure 8 for the three simple networks shown in Figure 7. The communication network which performs best is not the fully connected network, in which all investigators have access to all results from all others, but the maximally distributed network represented by the ring. As Zollman also shows, this is also the configuration which takes the longest time to achieve convergence. See an interactive simulation of a simplified version of Zollman’s model in the Other Internet Resources section below.

[Figure: three 10-point networks: a ring, in which each point is linked to its two adjacent points; a wheel, which adds a link from each point to a central point; and a complete graph, in which every point is linked to every other point and to the center.]

Figure 7: A 10 person ring, wheel, and complete graph. After Zollman (2010a).


Figure 8: Learning results of computer simulations: ring, wheel, and complete networks of Bayesian agents. Adapted from Zollman (2010a). [An extended description of figure 8 is in the supplement.]

Olsson and Angere’s Bayesian network Laputa (mentioned above) has also been applied to the question of optimal networks for scientific communication. Their results essentially confirm Zollman’s, though sampled over a larger range of networks (Angere & Olsson 2017). Distributed networks with low connectivity are those that most reliably fix on the truth, though they are bound to do so more slowly.

In Zollman’s original version, all agents are envisaged as scientists who follow the same set of updating rules. The model has been extended to include both scientists who communicate all results and industry propagandists who selectively communicate only results favoring their side, modelling the impact on policy makers who receive input from both. Not surprisingly, the activity of the propagandist (and selective publication in general) can affect whether policy makers can find the truth in order to act on it (Holman and Bruner 2017; Weatherall, O’Connor, and Bruner 2018; O’Connor and Weatherall 2019).

The concept of an epistemic landscape has also emerged as centrally important in this strand of research. Analogous to a fitness landscape in biology (Wright 1932), an epistemic landscape offers an abstract representation of ideal data that might in principle be obtained in testing a range of hypotheses (Grim 2009; Weisberg & Muldoon 2009; Hong & Page 2004; Page 2007). Figure 9 uses the example of data that might be obtained by testing alternative medical treatments. In such a graph, points in the chemotherapy-radiation plane represent particular hypotheses about the most effective combination of radiation and chemotherapy. Graph height at each location represents some measure of success: the percentage of patients with 5-year survival on that treatment, for example.


Figure 9: A three-dimensional epistemic landscape. Points on the xz plane represent hypotheses regarding optimal combination of radiation and chemotherapy; graph height on the y axis represents some measure of success. [An extended description of figure 9 is in the supplement.]

An epistemic landscape is intended to be an abstract representation of the real-world phenomenon being explored. The key word, of course, is “abstract”: few would argue that such a model is fully realistic, either in the simplicity of its limited dimensions or in the precision with which one hypothesis has a distinctly higher value than a close neighbor. As in all modeling, the goal is to represent as simply as possible those aspects of a situation relevant to answering a specific question: in this case, the question of optimal scientific organization. Epistemic landscapes—even those this simple—have been assumed to offer a promising start. As outlined below, however, one of the deeper conclusions that has emerged is how sensitive results can be to the specific topography of the epistemic landscape.

Is there a form of scientific communication which optimizes its truth-seeking goals in exploration of a landscape? In a series of agent-based models, agents are communicatively linked explorers situated at specific points on an epistemic landscape (Grim, Singer et al. 2013). In such a design, simulation can be used to explore the effect of network structure, the topography of the epistemic landscape, and the interaction of the two.

The simplest form of the results echoes the pattern seen in different forms in Bala and Goyal (1998) and in Zollman (2010a, 2010b), here played out on epistemic landscapes. Agents start with random hypotheses as points on the x-axis of a two-dimensional landscape. They compare their results (the height of the landscape at that point) with those of the other agents to which they are networked. If a networked neighbor has a higher result, the agent moves toward an approximation of that point (within the interval of a “shaking hand”) with an inertia factor (generally 50%, or a move halfway). The process is repeated by all agents, progressively exploring the landscape in an attempt to move toward more successful results.
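The dynamic just described can be sketched in a few lines. The landscape function, ring network, and seed below are illustrative assumptions in the spirit of these models, not the original code; the demo uses a smooth single-peaked landscape and reports the agents’ mean height before and after exploration:

```python
# A hedged sketch of networked hill-climbing on a two-dimensional
# epistemic landscape, in the spirit of Grim, Singer et al. (2013).
# The landscape, inertia, and "shaking hand" interval are illustrative.
import math
import random

SIZE = 100                     # landscape runs over points 0..99
INERTIA = 0.5                  # move halfway toward a better neighbor
SHAKE = 8                      # "shaking hand": land within +/- 4 points

def height(x):
    # A smooth single-peaked landscape (an assumption for illustration).
    return 50 + 40 * math.sin(math.pi * x / SIZE)

def step(positions, neighbors, rng):
    new = list(positions)
    for i, x in enumerate(positions):
        best = max(neighbors(i), key=lambda j: height(positions[j]))
        if height(positions[best]) > height(x):
            # Aim at an approximation of the better neighbor's point,
            # then move only partway there (the inertia factor).
            target = positions[best] + rng.uniform(-SHAKE / 2, SHAKE / 2)
            new[i] = min(max(x + INERTIA * (target - x), 0), SIZE - 1)
    return new

rng = random.Random(1)
N = 20
ring = lambda i: [(i - 1) % N, (i + 1) % N]
positions = [rng.uniform(0, SIZE - 1) for _ in range(N)]
before = sum(height(x) for x in positions) / N
for _ in range(200):
    positions = step(positions, ring, rng)
after = sum(height(x) for x in positions) / N
print(round(before, 1), round(after, 1))
```

On a smooth landscape like this one, any network climbs toward the peak; the interesting differences between networks, as the text goes on to note, appear on “needle in a haystack” landscapes.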

On “smooth” landscapes of the form of the first two graphs in Figure 10, agents in any of the networks shown in Figure 11 succeed in finding the highest point on the landscape. Results become much more interesting for epistemic landscapes that contain a “needle in a haystack”, as in the third graph in Figure 10.

[Figure: three landscapes, each plotted on x and y axes running from 0 to 100: a smooth sine-like curve with a single broad peak; a more irregular curve with several peaks and dips; and a “needle in a haystack” landscape in which a sharp, narrow spike to 100 stands apart from an otherwise irregular curve.]

Figure 10: Two-dimensional epistemic landscapes.

ring radius 1: a ring of 15 points, each connected to its two adjacent points
small world: a ring of 15 points, most connected to their adjacent points, with two pairs of non-adjacent points also connected
wheel: a ring of 15 points, each connected to its adjacent points and to the center
hub: a ring of 15 points, each connected only to the center
random: 15 points with several links randomly connecting pairs of points
complete: 15 points, each connected to every other point and to the center

Figure 11: Sample networks.

In a ring with radius 1, each agent is connected with just its immediate neighbors on each side. Using an inertia of 50% and a “shaking hand” interval of 8 on a 100-point landscape, 50 agents in that configuration converge on the global maximum of the “needle in the haystack” landscape in 66% of simulation runs. If agents are instead connected to the two closest neighbors on each side, the success rate drops immediately to 50%. A small world network can be envisaged as a ring in which agents have a certain probability of “rewiring”: breaking an existing link and establishing another to some other agent at random (Watts & Strogatz 1998). If each of 50 agents has a 9% probability of rewiring, the success rate of small worlds drops to 55%. Wheels and hubs have 42% and 37% success rates, respectively. Random networks with a 10% probability of connection between any two nodes score at 47%. The worst performing communication network on a “needle in a haystack” landscape is the “internet of science” of a complete network, in which everyone instantly sees everyone else’s results.

Extensions of these results appear in Grim, Singer et al. (2013). There a small sample of landscapes is replaced with a quantified “fiendishness index”, roughly representing the extent to which a landscape embodies a “needle in a haystack”. Higher fiendishness quantifies a lower probability that hill-climbing from a randomly chosen point “finds” the landscape’s global maximum. Landscapes, though still two-dimensional, are “looped” so as to avoid the edge-effects also noted in Hegselmann and Krause (2006). Here again results emphasize the epistemic advantages of ring-like or distributed networks over fully connected networks in the exploration of intuitively difficult epistemic landscapes. Distributed single rings achieve the highest percentage of cases in which the highest point on the landscape is found, followed by all other network configurations. Total (completely connected) networks show the worst results overall. Times to convergence are shown to be roughly though not precisely the inverse of these relationships. See the interactive simulation of Grim and Singer et al.’s model in the Other Internet Resources section below.

What all these models suggest is that it is distributed networks of communication between investigators, rather than full and immediate communication between all, that will—or at least can—give us more accurate scientific outcomes. In the seventeenth century, scientific results were exchanged slowly, from person to person, in the form of individual correspondence. In today’s science, results are instantly available to everyone. What these models suggest is that the communication mechanisms of seventeenth-century science may be more reliable than the highly connected communications of today. Zollman draws the corollary conclusion that loosely connected communities made up of less informed scientists might be more reliable in seeking the truth than communities of more informed scientists that are better connected (Zollman 2010b).

The explanation is not far to seek. In all the models noted, more connected networks produce inferior results because agents move too quickly to salient but sub-optimal positions: to local rather than global maxima. In the landscape models surveyed, connected networks result in all investigators moving toward the same point, currently announced to everyone as highest, skipping over large areas in the process—precisely where the “needle in the haystack” might be hidden. In more distributed networks, local action results in a far more even and effective exploration of widespread areas of the landscape: exploration rather than exploitation (Holland 1975).

How should we structure the funding and communication of our scientific communities? It is clear, both from these results in their current form and from further work along these general lines, that the answer may well be “landscape”-relative: what form scientific communication ought to take may well depend on what kind of question is at issue. It may also depend on what desiderata are at issue. The models surveyed emphasize accuracy of results, abstractly modeled. All those surveyed concede that there is a clear trade-off between accuracy of results and the speed of community consensus (Zollman 2007; Zollman 2010b; Grim, Singer et al. 2013). But for many purposes, and for reasons both ethical and practical, it may often be far better to work with a result that is only roughly accurate but available today than to wait 10 years for a result that is many times more accurate but arrives far too late.

3.2.3 Division of labor, diversity, and exploration

A second tradition of work in computational philosophy of science also uses epistemic landscapes, but attempts to model the effect not of network structure but of the division of labor and diversity within scientific groups. An influential but ultimately flawed precursor in this tradition is the work of Weisberg and Muldoon (2009).

Two views of Weisberg and Muldoon’s landscape appear in Figure 12. In their treatment, points on the base plane of the landscape represent “approaches”—abstract representations of the background theories, methods, instruments, and techniques used to investigate a particular research question. Heights at those points are taken to represent scientific significance (following Kitcher 1993).

[Figure: two views of the landscape: a 3-D surface, mostly flat with two peaks, and a top-down view in which the two peaks appear as light patches on a dark field.]

Figure 12: Two visions of Weisberg and Muldoon’s landscape of scientific significance (height) at different approaches to a research topic.

The agents that traverse this landscape are not networked, as in the earlier studies noted, except to the extent that they are influenced by agents with “approaches” near theirs on the landscape. What is significant about the Weisberg & Muldoon model, however, is that their agents are not homogeneous. Two types of agents play a primary role.

“Followers” take previous investigation of the territory by others into account in order to follow successful trends. If any previously investigated points in their immediate neighborhood have a higher significance than the point they stand on, they move to such a point (randomly breaking any tie).[8] Only if no neighboring investigated point has higher significance and uninvestigated points remain do followers move to one of those.

“Mavericks” avoid previously investigated points much as followers prioritize them. Mavericks choose unexplored points in their neighborhoods, testing significance. If it is higher than at their current spot, they move to that point.

Weisberg and Muldoon measure both the percentage of runs in which groups of agents find the highest peak and the speed at which peaks are found. They report that the epistemic success of a population of followers is increased when mavericks are included, and that the explanation for that effect lies in the fact that mavericks can provide pathways for followers: “[m]avericks help many of the followers to get unstuck, and to explore more fruitful areas of the epistemic landscape” (for details see Weisberg & Muldoon 2009: 247 ff). Against that background they argue for broad claims regarding the value for an epistemic community of combining different research strategies. The optimal division of labor that their model suggests is “a healthy number of followers with a small number of mavericks”.

Critics of Weisberg and Muldoon’s model argue that it is flawed by simple implementation errors in which >= was used in place of >, with the result that their software agents do not in fact operate in accord with their outlined strategies (Alexander, Himmelreich & Thomson 2015). As implemented, their followers tend to get trapped oscillating between two equivalent spaces (often of value 0). According to the critics, when followers are properly implemented, it turns out that mavericks help the success of a community solely in terms of discovery by the mavericks themselves, not by getting followers “unstuck” who shouldn’t have been stuck in the first place (see also Thoma 2015). If the critics are right, the Weisberg-Muldoon model as originally implemented proves inadequate as philosophical support for the claim that division of labor and strategic diversity are important epistemic drivers. There’s an interactive simulation of the Weisberg and Muldoon model, which includes a switch to change the >= to >, in the Other Internet Resources section below.

Critics of the model don’t deny the general conclusion that Weisberg and Muldoon draw: that cognitive diversity or division of cognitive labor can favor social epistemic outcomes.[9] What they deny is that the Weisberg and Muldoon model adequately supports that conclusion. A particularly intriguing model that does support that conclusion, built on a very different model of diversity, is that of Hong and Page (2004). But it also supports a point that Alexander et al. emphasize: that the advantages of cognitive diversity can very much depend on the epistemic landscape being explored.

Lu Hong and Scott Page work with a two-dimensional landscape of 2000 points, wrapped around as a loop. Each point is assigned a random value between 1 and 100. Their epistemic individuals explore that landscape using heuristics composed of three ordered numbers between, say, 1 and 12. An example helps. Consider an individual with heuristic \(\langle 2, 4, 7\rangle\) at point 112 on the landscape. He first uses his heuristic number 2 to see if the point two to the right—at 114—has a higher value than his current position. If so, he moves to that point. If not, he stays put. From that point, whichever it is, he uses his heuristic number 4 to see if the point 4 steps to the right has a higher value, and so forth. An agent circles through his heuristic numbers repeatedly until he reaches a point from which none within reach of his heuristic offers a higher value. The basic dynamic is illustrated in Figure 13.


Figure 13: An example of exploration of a landscape by an individual using heuristics as in Hong and Page (2004). Explored points can be read left to right. [An extended description of figure 13 is in the supplement.]

Hong and Page score individuals on a given landscape in terms of the average height they reach starting from each of the 2000 points. But their real target is the value of diversity in groups. With that in mind, they compare the performance of (a) groups composed of the 9 individuals with the highest-scoring heuristics on a given landscape with (b) groups composed of 9 individuals with random heuristics on that landscape. In each case groups function together in what has been termed a “relay”. For each point on the 2000-point landscape, the first individual of the group finds his highest reachable value. The next individual of the group starts from there, and so forth, circling through the individuals until a point is reached at which none can achieve a higher value. The score for the group as a whole is the average of the values achieved in this way across all 2000 points.
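The individual heuristic climb and the group “relay” can be sketched together. The implementation below is an illustrative reading of the description above, not Hong and Page’s own code:

```python
# A hedged sketch of Hong and Page (2004): heuristic climbers and a
# "relay" group dynamic on a random 2000-point looped landscape.
# Implementation details beyond the stated parameters are assumptions.
import random

N = 2000                                  # points on the looped landscape

def climb(landscape, heuristic, start):
    """Apply the ordered heuristic steps from `start`, cycling through
    them until no step reaches a higher value."""
    pos = start
    improved = True
    while improved:
        improved = False
        for step in heuristic:
            nxt = (pos + step) % N        # the landscape wraps as a loop
            if landscape[nxt] > landscape[pos]:
                pos, improved = nxt, True
    return pos

def relay_score(landscape, group):
    """Group score: from each start, individuals take turns climbing
    (a 'relay') until none can improve; average the heights reached."""
    total = 0.0
    for start in range(N):
        pos = start
        improved = True
        while improved:
            improved = False
            for heuristic in group:
                nxt = climb(landscape, heuristic, pos)
                if landscape[nxt] > landscape[pos]:
                    pos, improved = nxt, True
        total += landscape[pos]
    return total / N

rng = random.Random(0)
landscape = [rng.uniform(1, 100) for _ in range(N)]

# The individual from the text: heuristic <2, 4, 7> starting at point 112.
end = climb(landscape, (2, 4, 7), 112)
print(landscape[end] >= landscape[112])   # True: climbing never loses height

# A small "relay" group with random heuristics from a 1..12 pool.
group = [tuple(rng.sample(range(1, 13), 3)) for _ in range(3)]
print(round(relay_score(landscape, group), 2))
```

Scoring many “best” and many random groups with `relay_score` on the same landscape is then what permits comparisons of the kind Hong and Page report.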

What Hong and Page demonstrate in simulation is that groups with random heuristics routinely outperform groups composed entirely of the “best” individual performers. They christen their finding the “Diversity Trumps Ability” result. In a replication of their study, the average maximum on the 2000-point terrain for the group of the 9 best individuals comes in at 92.53, with a median of 92.67. The average for a group of 9 random individuals comes in at 94.82, with a median of 94.83. Across 1000 runs in that replication, a higher score was achieved by groups of random agents in 97.6% of all cases (Grim et al. 2019). See an interactive simulation of Hong and Page’s group deliberation model in the Other Internet Resources section below. Hong and Page also offer a mathematical theorem as a partial explanation of such a result (Hong & Page 2004). That component of their work has been attacked as trivial or irrelevant (Thompson 2014), though the attack itself has come under criticism as well (Kuehn 2017; Singer 2019).

The Hong-Page model solidly demonstrates a general claim attempted in the disputed Weisberg-Muldoon model: cognitive diversity can indeed be a social epistemic advantage. In application, however, the Hong-Page result has sometimes been appealed to as support for much broader claims: that diversity is always or quite generally of epistemic advantage (Anderson 2006; Landemore 2013; Gunn 2014; Weymark 2015). The result itself is limited in ways that have not always been acknowledged. In particular, it proves sensitive to the precise character of the epistemic landscape employed.

Hong and Page’s landscape is one in which each of 2000 points is given a random value between 1 and 100: a purely random landscape. One consequence of that fact is that the groups of 9 best heuristics on different random Hong-Page landscapes have essentially no correlation: a high-performing individual on one landscape need have no carry-over to another. Grim et al. (2019) expands the Hong-Page model to incorporate other landscapes as well, in ways which challenge the general conclusions regarding diversity that have been drawn from the model but which also suggest the potential for further interesting applications.

An easy way to “smooth” the Hong-Page landscapes is to assign random values not to every point on the 2000-point loop but to every second point, for example, with intermediate points taking the average of those on each side. Where a random landscape has a “smoothness” factor of 0, this variation has a smoothness factor of 1. A still “smoother” landscape of degree 2 would be one in which slopes are drawn between random values assigned to every third point. Each degree of smoothness increases the average value correlation between a point and its neighbors.
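The smoothing construction can be sketched as follows; the linear interpolation and the rough neighbor-similarity measure are illustrative assumptions, not the original implementation:

```python
# A sketch of the "smoothing" construction described above: random
# anchor values every (degree + 1) points, with intermediate points
# linearly interpolated between them.
import random

def smooth_landscape(n, degree, rng):
    """Random anchor values every (degree + 1) points on a looped
    landscape, with the points in between linearly interpolated.
    Degree 0 reproduces the purely random Hong-Page landscape."""
    step = degree + 1
    assert n % step == 0, "n must be divisible by degree + 1"
    anchors = {i: rng.uniform(1, 100) for i in range(0, n, step)}
    out = []
    for i in range(n):
        lo = (i // step) * step
        hi = (lo + step) % n              # wrap around the loop
        frac = (i % step) / step
        out.append((1 - frac) * anchors[lo] + frac * anchors[hi])
    return out

def neighbor_similarity(landscape):
    """1 minus the mean absolute neighbor difference, scaled by the
    1-100 value range: higher means smoother."""
    n = len(landscape)
    mean_diff = sum(abs(landscape[i] - landscape[(i + 1) % n])
                    for i in range(n)) / n
    return 1 - mean_diff / 99

rng = random.Random(0)
rough = neighbor_similarity(smooth_landscape(2000, 0, rng))
smooth = neighbor_similarity(smooth_landscape(2000, 4, rng))
print(rough < smooth)  # True: smoothing raises neighbor similarity
```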

Using Hong and Page’s parameters in other respects, it turns out that the “Diversity Trumps Ability” result holds only for landscapes with a smoothness factor less than 4. Beyond that point, it is “ability”—the performance of groups of the 9 best-performing individuals—that trumps “diversity”—the performance of groups of random heuristics.

The Hong-Page result is therefore very sensitive to the “smoothness” of the epistemic landscape modeled. As hinted in section 3.2.2, this is an indication from within the modeling tradition itself of the danger of restricted and over-simple abstractions regarding epistemic landscapes. Moreover, the model’s sensitivity is not limited to landscape smoothness: social epistemic success depends on the pool of numbers from which heuristics are drawn as well, with “diversity” showing strength on smoother landscapes if the pool of heuristics is expanded. Results also depend on whether social interaction is modeled using Hong and Page’s “relay” or an alternative dynamic in which individuals collectively (rather than sequentially) announce their results, with all moving to the highest point announced by any. Different landscape smoothnesses, different heuristic pool sizes, and different interactive dynamics will favor the epistemic advantages of different compositions of groups, with different proportions of random and best-performing individuals (Grim et al. 2019).

3.3 Ethics and Social-Political Philosophy

What, then, is the conduct that ought to be adopted, the reasonable course of conduct, for this egoistic, naturally unsocial being, living side by side with similar beings?—Henry Sidgwick, Outlines of the History of Ethics (1886: 162)

Hobbes’ Leviathan can be read as asking, with Sidgwick, how cooperation can emerge in a society of egoists (Hobbes 1651). Cooperation is thus a central theme in both ethics and social-political philosophy.

3.3.1 Game theory and the evolution of cooperation

Game theory has been a major tool in many of the philosophical considerations of cooperation, extended with computational methodologies. Here the primary example is the Prisoner’s Dilemma, a strategic interaction between two agents with a payoff matrix in which joint cooperation gets a higher payoff than joint defection, but the highest payoff goes to a player who defects when the other player cooperates (see esp. Kuhn 1997 [2019]). Formally, the Prisoner’s Dilemma requires the value DC for defection against cooperation to be higher than CC for joint cooperation, CC to be higher than DD for joint defection, and DD to be higher than the payoff CD for cooperation against defection. In order to avoid an advantage to alternating trade-offs, CC should also be higher than \((\textrm{CD} + \textrm{DC}) / 2.\) A simple set of values that fits those requirements is shown in the matrix in Figure 14.

                       Player A
                  Cooperate   Defect
Player B  Cooperate   3,3       0,5
          Defect      5,0       1,1

Figure 14: A Prisoner’s Dilemma payoff matrix
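The conditions just stated are easy to check mechanically for the Figure 14 values (the helper function below is ours, for illustration):

```python
# Check that the Figure 14 payoffs satisfy the Prisoner's Dilemma
# conditions stated in the text.
CC, CD, DC, DD = 3, 0, 5, 1   # payoffs to the row player

def is_prisoners_dilemma(cc, cd, dc, dd):
    """DC > CC > DD > CD, plus CC > (CD + DC) / 2 so that alternating
    exploitation does not beat steady mutual cooperation."""
    return dc > cc > dd > cd and cc > (cd + dc) / 2

print(is_prisoners_dilemma(CC, CD, DC, DD))  # True
```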

It is clear in the “one-shot” Prisoner’s Dilemma that defection is strictly dominant: whether the other player cooperates or defects, one gains more points by defecting. But if defection always gives a higher payoff, what sense does it make to cooperate? In a Hobbesian population of egoists, with payoffs as in the Prisoner’s Dilemma, it would seem that we should expect mutual defection as both a matter of course and the rational outcome—Hobbes’ “war of all against all”. How could a population of egoists come to cooperate? How could the ethical desideratum of cooperation arise and persist?

A number of mechanisms have been shown to support the emergence of cooperation: kin selection (Fisher 1930; Haldane 1932), green beards (Hamilton 1964a,b; Dawkins 1976), secret handshakes (Robson 1990; Wiseman & Yilankaya 2001), iterated games, spatialized and structured interactions (Grim 1995; Skyrms 1996, 2004; Grim, Mar, & St. Denis 1998; Alexander 2007), and noisy signals (Nowak & Sigmund 1992). This section offers examples of the last two of these.

In the iterated Prisoner’s Dilemma, players repeat their interactions, either over a fixed number of rounds or in infinite or indefinite repetition. Robert Axelrod’s tournaments in the early 1980s are the classic studies of the iterated Prisoner’s Dilemma, and early examples of the application of computational techniques. Strategies for playing the Prisoner’s Dilemma were solicited from experts in various fields and pitted against all others (and themselves) in round-robin competition over 200 rounds. Famously, the strategy that triumphed was Tit for Tat, a simple strategy which responds to cooperation from the other player on the previous round with cooperation, and to defection on the previous round with defection. Even more surprisingly, Tit for Tat again came out in front in a second tournament, despite the fact that those submitting strategies knew that Tit for Tat was the strategy to beat. When those same strategies were explored with replicator dynamics in place of round-robin competition, Tit for Tat was again the winner (Axelrod and Hamilton 1981). Further work has tempered Tit for Tat’s reputation somewhat, emphasizing the constraints of Axelrod’s tournaments both in terms of structure and of the strategies submitted (Kendall, Yao, & Chang 2007; Kuhn 1997 [2019]).

A simple set of eight “reactive” strategies, in which a player acts solely on the basis of the opponent’s previous move, is shown in Figure 15. Coded with “1” for cooperate and “0” for defect, with the three places representing first move i, response to cooperation on the other side c, and response to defection on the other side d, these give us 8 strategies that include All Defect, All Cooperate, and Tit for Tat as well as several other variations.

i c d    reactive strategy
0 0 0    All Defect
0 0 1
0 1 0    Suspicious Tit for Tat
0 1 1    Suspicious All Cooperate
1 0 0    Deceptive All Defect
1 0 1
1 1 0    Tit for Tat
1 1 1    All Cooperate

Figure 15: 8 reactive strategies in the Prisoner’s Dilemma

If these strategies are played against each other and themselves, in the manner of Axelrod’s tournaments, it is All Defect that is the clear winner. If agents imitate the most successful strategy, a population will thus immediately go to All Defect—a game-theoretic image of Hobbes’ war of all against all, perhaps.
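This round-robin outcome can be checked with a short simulation of the eight reactive strategies over 200-round games (the implementation details below are ours, for illustration):

```python
# Round robin among the eight (i, c, d) reactive strategies of Figure
# 15, each playing every strategy including itself over 200 rounds.
PAYOFF = {(1, 1): 3, (1, 0): 0, (0, 1): 5, (0, 0): 1}   # 1 = cooperate

def play(s1, s2, rounds=200):
    """Score to the first player over an iterated game between
    reactive strategies coded as (i, c, d)."""
    m1, m2 = s1[0], s2[0]                 # initial moves
    total = 0
    for _ in range(rounds):
        total += PAYOFF[(m1, m2)]
        # Each player reacts to the other's previous move.
        m1, m2 = (s1[1] if m2 else s1[2]), (s2[1] if m1 else s2[2])
    return total

strategies = [(i, c, d) for i in (0, 1) for c in (0, 1) for d in (0, 1)]
totals = {s: sum(play(s, opp) for opp in strategies) for s in strategies}
winner = max(totals, key=totals.get)
print(winner)  # (0, 0, 0): All Defect wins the bare round robin
```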

Consider, however, a spatialized Prisoner’s Dilemma in the form of cellular automata, easily run and analyzed on a computer. Cells are assigned one of these eight strategies at random, play an iterated game locally with their eight immediate neighbors in the array, and then adopt the strategy of the neighbor (if any) that achieves a higher total score. In this case, with the same 8 strategies, occupation of the array starts with dominance by All Defect, but clusters of Tit for Tat grow to dominate the space (Figure 16). An interactive simulation in which one can choose which competing reactive strategies play in a spatialized array is available in the Other Internet Resources section below.

[Figure: six snapshots of a 64×64 array. An initially random mix of strategies gives way to a field dominated by green (All Defect) with clusters of gray (Tit for Tat) and some yellow at cluster edges; the gray clusters expand frame by frame until the final snapshot is entirely gray.]

Figure 16: Conquest by Tit for Tat in the Spatialized Prisoner’s Dilemma. All Defect is shown in green, Tit for Tat in gray.

In this case, there are two aspects to the emergence of cooperation in the form of Tit for Tat. One is the fact that play is local: strategies total points over just local interactions, rather than play with all other cells. The other is that imitation is local as well: strategies imitate their most successful neighbor, rather than the strategy in the array that gained the most points. The fact that both conditions play out in the local structure of the lattice allows clusters of Tit for Tat to form and grow. In Axelrod’s tournaments it was particularly important that Tit for Tat does well in play against itself; the same is true here. If either game interaction or strategy updating is made global rather than local, dominance goes to All Defect instead. One way in which cooperation can emerge, then, is through structured interaction (Grim 1995; Skyrms 1996, 2004; Grim, Mar, & St. Denis 1998). J. McKenzie Alexander (2007) offers a particularly thorough investigation of different interaction structures and different games.
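A minimal sketch of such a cellular automaton follows. Grid size, game length, number of generations, and the random seed are illustrative choices, so particular runs may differ from the sequence shown in Figure 16:

```python
# A hedged sketch of the spatialized Prisoner's Dilemma: a wrapping
# lattice of reactive strategies, local iterated play with the eight
# surrounding neighbors, then local imitation of the best neighbor.
import random

PAYOFF = {(1, 1): 3, (1, 0): 0, (0, 1): 5, (0, 0): 1}   # 1 = cooperate
STRATEGIES = [(i, c, d) for i in (0, 1) for c in (0, 1) for d in (0, 1)]
SIZE, ROUNDS, GENERATIONS = 20, 100, 10                  # illustrative

def play(s1, s2):
    """Score to the first player over an iterated local game."""
    m1, m2 = s1[0], s2[0]
    total = 0
    for _ in range(ROUNDS):
        total += PAYOFF[(m1, m2)]
        m1, m2 = (s1[1] if m2 else s1[2]), (s2[1] if m1 else s2[2])
    return total

def neighbors(x, y):
    """The eight surrounding cells, on a lattice that wraps around."""
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

rng = random.Random(2)
grid = {(x, y): rng.choice(STRATEGIES)
        for x in range(SIZE) for y in range(SIZE)}

for _ in range(GENERATIONS):
    # Local play: each cell totals its score against its eight neighbors.
    score = {c: sum(play(grid[c], grid[n]) for n in neighbors(*c))
             for c in grid}
    # Local imitation: adopt the best neighbor's strategy, if better.
    new_grid = {}
    for c in grid:
        best = max(neighbors(*c), key=score.get)
        new_grid[c] = grid[best] if score[best] > score[c] else grid[c]
    grid = new_grid

counts = {s: list(grid.values()).count(s) for s in STRATEGIES}
print(counts[(1, 1, 0)], counts[(0, 0, 0)])   # Tit for Tat vs All Defect
```

Making either the `score` computation or the imitation step range over the whole grid, rather than over `neighbors(*c)`, implements the global variants that the text notes hand dominance back to All Defect.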

Martin Nowak and Karl Sigmund offer a further variation that results in an even more surprising level of cooperation in the Prisoner’s Dilemma (Nowak & Sigmund 1992). The reactive strategies outlined above are communicatively perfect strategies. There is no noise in “hearing” a move as cooperation or defection on the other side, and no “shaking hand” in response. In Tit for Tat a cooperation on the other side is flawlessly perceived as such, for example, and is perfectly responded to with cooperation. If signals are noisy or responses are less than flawless, however, Tit for Tat loses its advantage in play against itself. In that case a chance defection will set up a chain of mutual defections until a chance cooperation reverses the trend. A “noisy” Tit for Tat played against itself in an infinite game does no better than a random strategy.

Nowak and Sigmund replace the “perfect” strategies of Figure 15 with uniformly stochastic ones, reflecting a world of noisy signals and actions. The closest to All Defect will now be a strategy ⟨.01, .01, .01⟩, indicating a strategy that has only a 99% chance of defecting initially and in response to either cooperation or defection. The closest to Tit for Tat will be a strategy ⟨.99, .99, .01⟩, indicating merely a high probability of starting with cooperation, responding to cooperation with cooperation, and to defection with defection. Using the mathematical fiction of an infinite game, Nowak and Sigmund are able to ignore the initial value.
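Because a pair of stochastic reactive strategies induces a four-state Markov chain over the players’ joint moves, the long-run payoff of the infinite game can be computed directly from the chain’s stationary distribution, which is what lets the initial move be ignored. The sketch below (our implementation, not Nowak and Sigmund’s code) confirms that a noisy Tit for Tat against itself earns only the all-state average of the payoffs:

```python
# Long-run payoffs in the noisy iterated Prisoner's Dilemma, computed
# from the stationary distribution of the four-state Markov chain over
# joint moves. A stochastic reactive strategy is a pair (c, d): the
# probability of cooperating after the opponent's cooperation (c) or
# defection (d). This is an illustrative implementation.
def long_run_payoff(a, b, payoffs=(3, 0, 5, 1), iters=2000):
    """Average per-round payoff to player A in the infinitely iterated
    game between stochastic reactive strategies a and b."""
    ca, da = a
    cb, db = b
    states = [(x, y) for x in (True, False) for y in (True, False)]

    def trans(s, s2):
        """Probability of moving from joint state s to joint state s2."""
        a_move, b_move = s
        pa = ca if b_move else da     # A reacts to B's last move
        pb = cb if a_move else db     # B reacts to A's last move
        a2, b2 = s2
        return (pa if a2 else 1 - pa) * (pb if b2 else 1 - pb)

    # Power iteration toward the stationary distribution.
    pi = {s: 0.25 for s in states}
    for _ in range(iters):
        pi = {s2: sum(pi[s] * trans(s, s2) for s in states)
              for s2 in states}
    cc, cd, dc, dd = payoffs
    value = {(True, True): cc, (True, False): cd,
             (False, True): dc, (False, False): dd}
    return sum(pi[s] * value[s] for s in states)

# Noisy Tit for Tat against itself earns exactly the all-state average
# (3 + 0 + 5 + 1) / 4 = 2.25: no better than a random strategy.
noisy_tft = (0.99, 0.01)
print(round(long_run_payoff(noisy_tft, noisy_tft), 4))  # 2.25
```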

Pitting a full range of stochastic strategies of this type against each other in a computerized tournament, using replicator dynamics in the manner of Axelrod and Hamilton (1981), Nowak and Sigmund trace a progressive evolution of strategies. Computer simulation shows imperfect All Defect to be an early winner, followed by imperfect Tit for Tat. But at that point dominance in the population goes to a still more cooperative strategy, which cooperates with cooperation 99% of the time but cooperates even against defection 10% of the time. That strategy is eventually dominated by one that cooperates against defection 20% of the time, and then by one that cooperates against defection 30% of the time. A replication of the Nowak and Sigmund result is shown in Figure 17. Nowak and Sigmund show analytically that the most successful strategy in a world of noisy information will be “Generous Tit for Tat”, with probabilities of \(1 - \varepsilon\) and 1/3 for cooperation against cooperation and defection respectively.


Figure 17: Evolution toward Nowak and Sigmund’s “Generous Tit for Tat” in a world of imperfect information (Nowak & Sigmund 1992). Population proportions for labelled strategies are shown vertically over 12,000 generations for an initial pool of 121 stochastic strategies \(\langle c,d\rangle\) at .1 intervals, full values of 0 and 1 replaced with 0.01 and 0.99. [An extended description of figure 17 is in the supplement.]

How can cooperation emerge in a society of self-serving egoists? In the game-theoretic context of the Prisoner’s Dilemma, these results indicate that iterated interaction, spatialization and structured interaction, and noisy information can all facilitate cooperation, at least in the form of strategies such as Tit for Tat. When all three effects are combined, the result appears to be a level of cooperation even greater than that indicated in Nowak and Sigmund. Within a spatialized Prisoner’s Dilemma using stochastic strategies, it is strategies in the region of probabilities \(1 - \varepsilon\) and 2/3 that emerge as optimal in the sense of having the highest scores in play against themselves without being open to invasion from small clusters of other strategies (Grim 1996).

This outline has focused on some basic background regarding the Prisoner’s Dilemma and emergence of cooperation. More recently a generation of richer game-theoretic models has appeared, using a wider variety of games of conflict and coordination and more closely tied to historical precedents in social and political philosophy. Newer game-theoretic analyses of state of nature scenarios in Hobbes appear in Vanderschraaf (2006) and Chung (2015), extended with simulation to include Locke and Nozick in Bruner (2020).

There is also a new body of work that extends game-theoretic modeling and simulation to questions of social inequity. Bruner (2017) shows that the mere fact that one group is a minority in a population, and thus interacts more frequently with majority than with minority members, can result in its being disadvantaged where exchanges are characterized by bargaining in a Nash demand game (Young 1993). Termed the “cultural Red King”, the effect has been further explored through simulation, with links to experiment, and with extensions to questions of “intersectional disadvantage”, in which overlapping minority categories are in play (O’Connor 2017; Mohseni, O’Connor, & Rubin 2019 [Other Internet Resources]; O’Connor, Bright, & Bruner 2019). The relevance of this to the focus of the previous section is made clear in Rubin and O’Connor (2018) and O’Connor and Bruner (2019), modeling minority disadvantage in scientific communities.
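The group-structured bargaining behind these results can be sketched with a discrete replicator dynamic over Nash demand strategies. Everything specific below is an illustrative assumption rather than the published models: the demands {4, 5, 6} of a pie of 10, the 10% minority share, and a small initial asymmetry (needed because from a perfectly symmetric start the two groups’ trajectories would remain identical):

```python
# Illustrative parameters, not those of Bruner (2017) or O'Connor et al.
PIE, DEMANDS = 10, (4, 5, 6)
MINORITY = 0.1          # minority share of the population

def payoff(d, e):
    """Nash demand game: each side gets its demand if the two demands
    are compatible, and nothing otherwise."""
    return d if d + e <= PIE else 0

def fitnesses(own, other, p_other):
    """Expected payoff of each demand for a group whose members meet the
    other group with probability p_other, their own group otherwise."""
    return [sum(f * payoff(d, e) for f, e in zip(other, DEMANDS)) * p_other +
            sum(f * payoff(d, e) for f, e in zip(own, DEMANDS)) * (1 - p_other)
            for d in DEMANDS]

def replicate(freqs, fits):
    """Discrete replicator update: growth proportional to relative fitness."""
    mean = sum(f * w for f, w in zip(freqs, fits))
    return [f * w / mean for f, w in zip(freqs, fits)]

# A small initial asymmetry between the two groups' strategy frequencies.
mino = [0.34, 0.33, 0.33]
majo = [0.33, 0.33, 0.34]
for _ in range(500):
    fit_min = fitnesses(mino, majo, 1 - MINORITY)  # minority mostly meets majority
    fit_maj = fitnesses(majo, mino, MINORITY)      # majority mostly meets itself
    mino, majo = replicate(mino, fit_min), replicate(majo, fit_maj)
```

Whether the minority ends up making the smaller demand, as in the “cultural Red King” pattern, depends on the parameters; the sketch displays only the mechanism of heterogeneous matching driving the two groups’ dynamics.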

3.3.2 Modeling democracy

In computational simulations, game-theoretic cooperation has been appealed to as a model for aspects of both ethics in the sense of Sidgwick and social-political philosophy on the model of Hobbes. That model is tied to game-theoretic assumptions in general, however, and often to the structure of the Prisoner’s Dilemma in particular (though Skyrms 2003 and Alexander 2007 are notable exceptions). With regard to a wide range of questions in social and political philosophy in particular, the limitations of game theory may make it seem unhelpfully abstract and artificial.

While still abstract, there are other attempts to model questions in social and political philosophy computationally. Here the studies mentioned earlier regarding polarization are relevant. There have also been recent attempts to address questions regarding epistemic democracy: the idea that among its other virtues, democratic decision-making is more likely to track the truth.

There is a contrast, however, between open democratic decision-making, in which a full population takes part, and representative democracy, in which decision-making is passed up through a hierarchy of representation. There is also a contrast between democracy seen as purely a matter of voting and as a deliberative process that in some way involves a population in wider discussion (Habermas 1992 [1996]; Anderson 2006; Landemore 2013).


Figure 18: The Condorcet result: probability of a majority of different odd-numbered sizes being correct on a binary question with different homogeneous probabilities of individual members being correct. [An extended description of figure 18 is in the supplement.]

The classic result for an open democracy and simple voting is the Condorcet jury theorem (Condorcet 1785). As long as each voter has a uniform and independent probability greater than 0.5 of getting an answer right, the probability of a correct answer from a majority vote is significantly higher than that of any individual, and it quickly increases with the size of the population (Figure 18).

It can be shown analytically that the basic thrust of the Condorcet result remains when assumptions regarding uniform and independent probabilities are relaxed (Boland, Proschan, & Tong 1989; Dietrich & Spiekermann 2013). The Condorcet result is significantly weakened, however, when applied in hierarchical representation, in which smaller groups first reach a majority verdict which is then carried to a second level of representatives who use a majority vote on that level (Boland 1989). More complicated questions regarding deliberative dynamics and representation require simulation using computers.
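Both the basic theorem and the Boland-style weakening under hierarchy are straightforward to compute. A minimal sketch, in which the competence value p = 0.6 and the three-groups-of-three structure are illustrative choices:

```python
from math import comb

def majority_correct(n, p):
    """Condorcet jury theorem: probability that a majority of n voters,
    each independently correct with probability p, answers a binary
    question correctly (n is taken to be odd, so ties cannot occur)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Flat vote of nine versus two-tier representation: three groups of
# three each take a majority vote, and a majority of the three group
# verdicts then decides.
p = 0.6
flat = majority_correct(9, p)
group = majority_correct(3, p)
tiered = majority_correct(3, group)
# The tiered verdict still beats an individual (tiered > p) but falls
# short of the flat nine-person vote (tiered < flat).
```

With p = 0.6 the flat vote of nine is correct about 73% of the time, the tiered structure only about 72%, illustrating the weakening under hierarchical representation.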

The Hong-Page structure of group deliberation, outlined in the context of computational philosophy of science above, can also be taken as a model of “deliberative democracy” beyond a simple vote. The success of deliberation in a group can be measured as the average value (height) of the points found. In a representative instantiation of this kind of deliberation, smaller groups of individuals first use their individual heuristics to explore a landscape collectively, handing their collective “best” for each point on the landscape to a representative. The representatives then work from their constituents’ results in a second round of deliberative exploration.

Unlike in the case of pure voting and the Condorcet result, computational simulations show that the use of a representative structure does not dull the effect of deliberation on this model: average scores for three groups of three in a representative structure are if anything slightly higher than average scores from an open deliberation involving 9 agents (Grim et al. 2020). Results like these show how computational models might help expand the political-philosophical arguments for representative democracy.
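The contrast can be sketched in miniature. In the toy version below, the ring size, the form of the heuristics, the relay dynamics, and the choice of one representative heuristic per group are all simplifying assumptions, not the parameters of Grim et al. (2020):

```python
import random

random.seed(1)
N = 200
landscape = [random.random() for _ in range(N)]   # "value" at each point

def climb(pos, heuristic):
    """One agent hill-climbs by cycling through its step sizes, moving
    whenever a step reaches a higher point on the ring, until stuck."""
    improved = True
    while improved:
        improved = False
        for step in heuristic:
            if landscape[(pos + step) % N] > landscape[pos]:
                pos = (pos + step) % N
                improved = True
    return pos

def deliberate(start, group):
    """Relay deliberation: agents take turns improving a shared point."""
    pos, improved = start, True
    while improved:
        improved = False
        for h in group:
            new = climb(pos, h)
            if landscape[new] > landscape[pos]:
                pos, improved = new, True
    return pos

agents = [tuple(random.sample(range(1, 13), 3)) for _ in range(9)]

# Open deliberation: all nine agents work on a single shared point.
open_score = landscape[deliberate(0, agents)]

# Representative structure: three groups of three deliberate separately;
# one "representative" heuristic per group then deliberates from the
# best group result.
group_results = [deliberate(0, agents[i:i + 3]) for i in (0, 3, 6)]
best = max(group_results, key=lambda p: landscape[p])
reps = [agents[0], agents[3], agents[6]]
rep_score = landscape[deliberate(best, reps)]
```

A single run proves nothing, of course; the comparison reported in the text requires averaging scores over many random landscapes and agent pools.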

Social and political philosophy appears to be a particularly promising area for big data and computational philosophy employing the data mining tools of computational social science, but as of this writing that development remains largely a promise for the future.

3.3.3 Social outcomes as complex systems

The guiding idea of the interdisciplinary theme known as “complex systems” is that phenomena on a higher level can “emerge” from complex interactions on a lower level (Waldrop 1992, Kauffman 1995, Mitchell 2011, Krakauer 2019). The emergence of social outcomes from the interaction of individual choices is a natural target, and agent-based modeling is a natural tool.

Opinion polarization and the evolution of cooperation, outlined above, both fit this pattern. A further classic example is the work of Thomas C. Schelling on residential segregation. A glance at demographic maps of American cities makes the fact of residential segregation obvious: ethnic and racial groups appear as clearly distinguished patches (Figure 19). Is this an open-and-shut indication of rampant racism in American life?


Figure 19: A demographic map of Los Angeles. White households are shown in red, African-American in purple, Asian-American in green, and Hispanic in orange. (Fischer 2010 in Other Internet Resources)

Schelling attempted an answer to this question with an agent-based model that originally consisted of pennies and dimes on a checkerboard array (Schelling 1971, 1978), but which has been studied computationally in a number of variations. Two types of agents (Schelling’s pennies and dimes) are distributed at random across a cellular automata lattice, with given preferences regarding their neighbors. In its original form, each agent has a threshold regarding neighbors of “their own kind”. At that threshold level and above, agents remain in place. Should they not have that number of like neighbors, they move to another spot (in some variations, a move at random, in others a move to the closest spot that satisfies their threshold).

What Schelling found was that residential segregation occurs even without a strong racist demand that all of one’s neighbors, or even most, be “of one’s kind”. Even when the preference is merely that a third of one’s neighbors be “of one’s kind”, clear patches of residential segregation appear. The iterated evolution of such an array is shown in Figure 20. See the interactive simulation of this residential segregation model in the Other Internet Resources section below.

Three snapshots of a large grid of red and green circles on a black background: an initially random distribution with only a few small clumps of either color, then moderate-sized clumps, and finally large clumps in which almost every circle is adjacent to several circles of its own color.

Figure 20: Emergence of residential segregation in the Schelling model with preference threshold set at 33%

The conclusion that Schelling is careful to draw from such a model is simply that a low level of preference can be sufficient for residential segregation. It does not follow that more egregious social and economic factors aren’t operative or even dominant in the residential segregation we actually observe.
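A minimal version of the model is easy to write down. In the sketch below, the grid size, agent density, toroidal neighborhoods, and relocation to a random empty cell are all illustrative choices; unhappy agents are those with fewer than a third of like neighbors, and a simple similarity statistic registers the emergent clustering:

```python
import random

random.seed(0)
SIZE, THRESHOLD = 20, 1/3
# 0 marks an empty cell; 1 and 2 are the two agent types.
grid = [[random.choice((0, 1, 2)) for _ in range(SIZE)] for _ in range(SIZE)]

def same_fraction(r, c):
    """Fraction of an agent's occupied Moore neighbors sharing its type
    (on a toroidal grid); 1.0 if it has no occupied neighbors."""
    kind, same, occ = grid[r][c], 0, 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            n = grid[(r + dr) % SIZE][(c + dc) % SIZE]
            if n:
                occ += 1
                same += (n == kind)
    return same / occ if occ else 1.0

def mean_similarity():
    """Average same-type neighbor fraction over all agents."""
    vals = [same_fraction(r, c) for r in range(SIZE)
            for c in range(SIZE) if grid[r][c]]
    return sum(vals) / len(vals)

before = mean_similarity()
for _ in range(50):                      # relocation rounds
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] and same_fraction(r, c) < THRESHOLD]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE)
               if not grid[r][c]]
    random.shuffle(empties)
    for (r, c), (er, ec) in zip(movers, empties):
        grid[er][ec], grid[r][c] = grid[r][c], 0
after = mean_similarity()                # typically well above `before`
```

Despite the mild one-third threshold, the mean same-type neighbor fraction rises well past what a random arrangement yields, which is Schelling’s point in miniature.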

In this case basic modeling assumptions have been challenged on empirical grounds. Elizabeth Bruch and Robert Mare use sociological data on racial preferences, challenging the sharp cut-off employed in the Schelling model (Bruch & Mare 2006). They claim on the basis of simulation that the Schelling effect disappears when more realistically smooth preference functions are used instead. Their simulations and the latter claim turn out to be in error (van de Rijt, Siegel, & Macy 2009), but the example of testing the robustness of simple models with an eye to real data remains a valuable one.

3.4 Computational Philosophy of Language

Computational modeling has been applied in philosophy of language along two main lines. First, there are investigations of analogy and metaphor using models of semantic webs that share a developmental history with some of the models of scientific theory outlined above. Second, there are investigations of the emergence of signaling, which have often used a game-theoretic base akin to some approaches to the emergence of cooperation discussed above.

3.4.1 Semantic webs, analogy and metaphor

WordNet is a computerized lexical database for English built by George Miller in 1985 with a hierarchical structure of semantic categories intended to reflect empirical observations regarding human processing. A category “bird” includes a sub-category “songbirds” with “canary” as a particular, for example, intended to explain the fact that subjects could more quickly process “canaries sing”—which involves traversing just one categorical step—than they could process “canaries fly” (Miller, Beckwith, Fellbaum, Gross, & Miller 1990).

There is a long tradition, across psychology, linguistics, and philosophy, in which analogy and metaphor are seen as an important key to abstract reasoning and creativity (Black 1962; Hesse 1963 [1966]; Lakoff & Johnson 1980; Gentner 1982; Lakoff & Turner 1989). Beginning in the 1980s several notable attempts have been made to apply computational tools in order both to understand and to generate analogies. Douglas Hofstadter and Melanie Mitchell’s Copycat, developed as a model of high-level cognition, has “codelets” compete within a network in order to answer simple questions of analogy: “abc is to abd as ijk is to what?” (Hofstadter 2008). Holyoak and Thagard envisage metaphors as analogies in which the source and target domain are semantically distinct, calling for relational comparison between two semantic nets (Holyoak & Thagard 1989, 1995; see also Falkenhainer, Forbus, & Gentner 1989). In the Holyoak and Thagard model those comparisons are constrained in a number of different ways that call for coherence; their computational modeling of coherence in the case of metaphor was in fact a direct ancestor of Thagard’s coherence modeling of scientific theory change discussed above (Thagard 1988, 1992).

Eric Steinhart and Eva Kittay’s NETMET (see Other Internet Resources) offers an illustration of the relational approach to analogy and metaphor. They use one semantic and inferential subnet related to birth and another related to the theory of ideas in the Theaetetus. Each subnet is categorized in terms of relations of containment, production, discarding, helping, passing, expressing and opposition. On that basis NETMET generates metaphors including “Socrates is a midwife”, “the mind is an intellectual womb”, “an idea is a child of the mind”, “some ideas are stillborn”, and the like (Steinhart 1994; Steinhart & Kittay 1994). NETMET can be applied to large linguistic databases such as WordNet.
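A toy version conveys the flavor of relation-based metaphor generation. The subnets and the alignment rule below are invented for illustration, and NETMET’s actual representation and constraints are considerably richer, but the mechanism of pairing terms that occupy the same slot of a shared relation is the same in spirit:

```python
# Toy subnets as relation triples (invented for illustration).
birth = {("midwife", "helps", "mother"),
         ("mother", "produces", "child"),
         ("mother", "contains", "child")}
ideas = {("Socrates", "helps", "thinker"),
         ("thinker", "produces", "idea"),
         ("mind", "contains", "idea")}

def align(source, target):
    """Pair source and target terms that occupy the same slot of the
    same relation, then read each pair off as a candidate metaphor."""
    pairs = set()
    for (s1, rel1, s2) in source:
        for (t1, rel2, t2) in target:
            if rel1 == rel2:
                pairs.add((t1, s1))
                pairs.add((t2, s2))
    return {f"{t} is a {s}" for (t, s) in pairs}
```

Among the candidates `align(birth, ideas)` generates are “Socrates is a midwife” and “idea is a child”, echoing the examples in the text; a serious system must of course also filter candidates by coherence constraints of the kind Holyoak and Thagard describe.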

3.4.2 Signaling games and the emergence of communication

Suppose we start without pre-existing meaning. Is it possible that under favorable conditions, unsophisticated learning dynamics can spontaneously generate meaningful signaling? The answer is affirmative.—Brian Skyrms, Signals (2010: 19)

David Lewis’ sender-receiver game is a cooperative game in which a sender observes a state of nature and chooses a signal, a receiver observes that signal and chooses an act, with both sender and receiver benefiting from an appropriate coordination between state of nature and act (Lewis 1969). A number of researchers have explored both analytic and computational models of signaling games with an eye to ways in which initially arbitrary signals can come to function in ways that start to look like meaning.

Communication can be seen as a form of cooperation, and here as in the case of the emergence of cooperation the methods of (communicative) strategy change seem less important than the interactive structure in which those strategies play out. Computer simulations show that simple imitation of a neighbor’s successful strategy, various forms of reinforcement learning, and training up of simple neural nets on successful neighbors’ behaviors can all result in the emergence and spread of signaling systems, sometimes with different dialects (Zollman 2005; Grim, St. Denis & Kokalis 2002; Grim, Kokalis, Alai-Tafti, Kilb & St. Denis, 2004).[10] Development on a cellular automata grid produces communication with any of these techniques, even when the rewards are one-sided rather than mutual in a strict Lewis signaling game, and structures of interaction that facilitate communication can also co-evolve with the communication they facilitate (Skyrms 2010). Elliot Wagner extends the study of communication on interaction structures to other networks, a topic furthered in the work of Nicole Fitzgerald and Jacopo Tagliabue using complex neural networks as agents (Wagner 2009; Fitzgerald and Tagliabue 2022).
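The core reinforcement mechanism is strikingly simple. The following minimal sketch uses basic urn (Roth–Erev style) learning in a two-state Lewis game; the number of rounds and the seed are arbitrary choices, and on a typical run the dynamics settle into one of the two perfect signaling systems:

```python
import random

random.seed(2)
# Urn weights: sender[state][signal] and receiver[signal][act].
sender = [[1.0, 1.0], [1.0, 1.0]]
receiver = [[1.0, 1.0], [1.0, 1.0]]

def draw(weights):
    """Choose 0 or 1 with probability proportional to the urn weights."""
    return random.choices((0, 1), weights=weights)[0]

for _ in range(10_000):
    state = random.randrange(2)          # nature picks a state
    signal = draw(sender[state])         # sender signals
    act = draw(receiver[signal])         # receiver acts
    if act == state:                     # successful coordination pays both
        sender[state][signal] += 1
        receiver[signal][act] += 1

# After learning, each state typically gets its own signal, and each
# signal is mapped back to the matching act: a signaling system.
sig = [max((0, 1), key=lambda s: sender[st][s]) for st in (0, 1)]
```

Which of the two possible signaling systems emerges is an accident of early reinforcement, which is just the sense in which the resulting “meanings” are conventional.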

On an interpretation in terms of biological evolution, computationally emergent signaling of this sort can be seen as modeling communication in vervet monkeys (Cheney & Seyfarth 1990) or even chemical “signals” in bacteria (Berleman, Scott, Chumley, & Kirby 2008). If interpreted in terms of learned culture, particularly with an eye to more complex signal combination, these have been offered as models of mechanisms at play in the development of human language (Skyrms 2010). A simple interactive model in which signaling emerges in a situated population of agents harvesting food sources and avoiding predators is available in the Other Internet Resources section below. Signaling games and emergent communication are now topics of exploration with deep neural networks and in machine learning quite widely, often with an eye to technological applications (Bolt and Mortensen 2024).

3.5 From Theorem-Provers to Ethical Reasoning, Metaphysics, and Philosophy of Religion

Many of our examples of computational philosophy have been examples of simulation—often social simulation by way of agent-based modeling. But there is also a strong tradition in which computation is used not in simulations but as a way of mechanizing and extending philosophical argument (typically understood as deductive proof), with applications in philosophy of logic and ultimately in deontic logic, metaphysics, and philosophy of religion.[11]

The organizers of a summer conference at Dartmouth in 1956 coined the term “artificial intelligence” in its title. One of the high points of that conference was a computational program for the construction of logical proofs, developed by Allen Newell and Herbert Simon at Carnegie Mellon and programmed by J. C. Shaw using the vacuum tubes of the JOHNNIAC computer at the Institute for Advanced Study (Bringsjord & Govindarajulu 2018 [2019]). Newell and Simon’s “Logic Theorist” was given 52 theorems from chapter two of Whitehead and Russell’s Principia Mathematica (1910, 1912, 1913), of which it successfully proved 38, including a proof more elegant than one of Whitehead and Russell’s own (MacKenzie 1995, Loveland 1984, Davis 1957 [1983]). Russell himself was impressed:

I am delighted to know that Principia Mathematica can now be done by machinery… I am quite willing to believe that everything in deductive logic can be done by machinery. (letter to Herbert Simon, 2 November 1956; quoted in O’Leary 1991: 52)

Despite possible claims to anticipation, the most compelling of which may be Martin Davis’s 1950 computer implementation of Mojżesz Presburger’s decision procedure for a fragment of arithmetic (Davis 1957), the Logic Theorist is standardly regarded as the first automated theorem-prover. Newell and Simon’s target, however, was not so much a logic prover as a proof of concept for an intelligent or thinking machine. Having rejected geometrical proof as too reliant on diagrams, and chess as too hard, by Simon’s own account they turned to logic because Principia Mathematica happened to be on his shelf.[12]

Simon and Newell’s primary target was not an optimized theorem-prover but a “thinking machine” that in some way matched human intelligence. They therefore relied on heuristics thought of as matching human strategies, an approach later ridiculed by Hao Wang:

There is no need to kill a chicken with a butcher’s knife, yet the net impression is that Newell-Shaw-Simon failed even to kill the chicken…to argue the superiority of “heuristic” over algorithmic methods by choosing a particularly inefficient algorithm seems hardly just. (Wang 1960: 3)

Later theorem-provers were focused on proof itself rather than a model of human reasoning. By 1960 Hao Wang, Paul Gilmore, and Dag Prawitz had developed computerized theorem-provers for the full first-order predicate calculus (Wang 1960, MacKenzie 1995). In the 1990s William McCune developed Otter, a widely distributed and accessible prover for first-order logic (McCune & Wos 1997, Kalman 2001). A more recent incarnation is Prover9, coupled with search for models and counter-examples in Mace4.[13] Examples of Prover9 derivations are offered in Other Internet Resources. A contemporary alternative is Vampire, developed by Andrei Voronkov, Kryštof Hoder, and Alexandre Riazanov (Riazanov & Voronkov 2002).
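At the propositional level, the resolution technique behind refutation-based provers in this tradition can be sketched in a few lines: negate the goal, convert everything to clauses, and search for the empty clause. First-order provers such as Otter and Prover9 add unification and sophisticated search control on top of this core; the sketch below is only the propositional skeleton:

```python
from itertools import combinations

def resolve(c1, c2):
    """All resolvents of two clauses. Clauses are frozensets of literal
    strings; a negative literal is written '-p'."""
    out = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("-") else "-" + lit
        if comp in c2:
            out.append((c1 - {lit}) | (c2 - {comp}))
    return out

def unsatisfiable(clauses):
    """Saturate the clause set under resolution; True iff the empty
    clause is derived (terminates: only finitely many clauses exist)."""
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:           # empty clause: refutation found
                    return True
                new.add(frozenset(r))
        if new <= clauses:          # nothing new: saturation reached
            return False
        clauses |= new

# Modus ponens as refutation: from p and p -> q (clause {-p, q}),
# adding the negated goal {-q} yields the empty clause.
assert unsatisfiable([{"p"}, {"-p", "q"}, {"-q"}])
```

Deriving a conclusion is thus recast as showing that premises plus negated conclusion are jointly unsatisfiable, the same refutational strategy the full first-order provers employ.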

Theorem-provers developed for higher-order logics, working from a variety of approaches, include TPS (Andrews and Brown 2006), Leo-II and -III (Benzmüller, Sultana, Paulson, & Theiß 2015; Steen & Benzmüller 2018), and perhaps most prominently HOL and particularly development-friendly Isabelle/HOL (Gordon & Melham 1993; Paulson 1990). With clever implementation and extension, these also allow automation of aspects of modal, deontic, epistemic, intuitionistic and paraconsistent logics, of interest both in their own terms and in application within computer science, robotics, and artificial intelligence (McRobbie 1991; Abe, Akama, & Nakamatsu 2015).

Within pure logic, Portoraro (2001 [2019]) lists a number of results that have been established using automated theorem-provers. It was conjectured for 50 years that a particular equation in a Robbins algebra could be replaced by a simpler one, for example. Even Tarski had failed in the attempt at proof, but McCune produced an automated proof in 1997 (McCune 1997). Shortest and simplest axiomatizations for implicational fragments of modal logics S4 and S5 had been studied for years as open questions, with eventual results by automated reasoning in 2002 (Ernst, Fitelson, Harris, & Wos 2002).[14]

Theorem provers have been applied within deontic logics in the attempt to mechanize ethical reasoning and decision-making (Meyer & Wierenga 1994; Van Den Hoven & Lokhorst 2002; Balbiani, Broersen, & Brunel 2009; Governatori & Sartor 2010; Benzmüller, Parent, & van der Torre 2018; Benzmüller, Farjami, & Parent, 2018). Alan Gewirth has argued that agents contradict their status as agents if they don’t accept a principle of generic consistency—respecting the agency-necessary rights of others—as a supreme principle of practical rationality (Gewirth 1978; Beyleveld 1992, 2012). Fuenmayor and Benzmüller have shown that even an ethical theory of this complexity can be formally encoded and assessed computationally (Fuenmayor & Benzmüller 2018).

One of the major advances in computational philosophy has been the application of theorem-provers to the analysis of classical philosophical positions and arguments. From axioms of a metaphysical object theory, Zalta and his collaborators use Prover9 and Mace to establish theorems regarding possible worlds, such as the claim that every possible world is maximal, modal theorems in Leibniz, and consequences from Plato’s theory of Forms (Fitelson & Zalta 2007; Alama, Oppenheimer, & Zalta 2015; Kirchner, Benzmüller, & Zalta 2019).

Versions of the ontological argument have formed an important thread in recent work employing theorem provers, both because of their inherent interest and the technical challenges they bring with them. Prover9 and Mace have again been used recently by Jack Horner in order to analyze a version of the ontological argument in Spinoza’s Ethics (found invalid) and to propose an alternative (Horner 2019). Significant work has been done on versions of Anselm’s ontological argument (Oppenheimer & Zalta 2011; Garbacz 2012; Rushby 2018). Christoph Benzmüller and his colleagues have applied higher-order theorem provers, including Isabelle/HOL and their own Leo-II and Leo-III, in order to analyze a version of the ontological argument found in the papers of Kurt Gödel (Benzmüller & Paleo 2016a, 2016b; Benzmüller, Weber, & Paleo 2017; Benzmüller & Fuenmayor 2018). A previously unnoticed inconsistency was found in Gödel’s original, though avoided in Dana Scott’s transcription. Theorem-provers confirmed that Gödel’s argument forces modal collapse—all truths become necessary truths. Analysis with theorem-provers makes it clear that variations proposed by C. Anthony Anderson and Melvin Fitting avoid that consequence, but in importantly different ways (Benzmüller & Paleo 2014; Kirchner, Benzmüller, & Zalta 2019).[15]

Work in metaphysics employing theorem-provers continues. Here of particular note is Ed Zalta’s ambitious and long-term attempt to ground metaphysics quite generally in computationally instantiated object theory (Fitelson & Zalta 2007; Zalta 2020). A link to Zalta’s project can be found in the Other Internet Resources section below.

3.6 Artificial Intelligence and Philosophy of Mind

The Dartmouth conference of 1956 is standardly taken as marking the inception of both the field and the term “artificial intelligence” (AI). There were, however, two distinct trajectories apparent in that conference. Some of the participants took their goal to be the development of intelligent or thinking machines, with perhaps an understanding of human processing as a begrudging means to that end. Others took their goal to be a philosophical and psychological understanding of human processing, with the development of machines a means to that end. Those in the first group were quick to exploit serial symbolic programming: what came to be known as “GOFAI”, or “good old-fashioned artificial intelligence”. Those in the second group rejoiced when connectionist and neural net architectures came to maturity several decades later, promising models directly built on and perhaps reflective of mechanisms in the human brain (Churchland 1995).

Attempts to understand perception, conceptualization, belief change, and intelligence are all part of philosophy of mind. The use of computational models toward that end—the second strand above—thus comes close to computational philosophy of mind. Daniel Dennett has come close to saying that AI is philosophy of mind: “a most abstract inquiry into the possibility of intelligence or knowledge” (Dennett 1979: 60; Bringsjord & Govindarajulu 2018 [2019]).

The bulk of AI research remains strongly oriented toward producing effective and profitable information processing, whether or not the result offers philosophical understanding. So it is perhaps better not to identify AI with philosophy of mind, though AI has often been guided by philosophical conceptions and aspects of AI have proven fruitful for philosophical exploration. Philosophy of AI (including the ethics of AI) and philosophy of mind inspired by and in response to AI, which are not the topic here, have both been far more common than philosophy of mind developed with the techniques of AI.

One example of a program in artificial intelligence that was explicitly conceived in philosophical terms and designed for philosophical ends was the OSCAR project, developed by John Pollock but cut short by his death (Pollock 1989, 1995, 2006). The goal of OSCAR was construction of a computational agent: an “artificial intellect”. At the core of OSCAR was implementation of a theory of rationality. Pollock was explicit regarding the intersection of AI and philosophy of mind in that project:

The implementability of a theory of rationality is a necessary condition for its correctness. This amounts to saying that philosophy needs AI just as much as AI needs philosophy. (Pollock 1995: xii; Bringsjord & Govindarajulu 2018 [2019])

At the core of OSCAR’s rationality is implementation of defeasible non-monotonic logic employing prima facie reasons and potential defeaters. Among its successes, Pollock claims an ability to handle the lottery paradox and preface paradoxes. Informally, the fact that we know that one of the many tickets in a lottery will win means that we must treat “ticket 1 will not win…”, “ticket 2 will not win…” and the like not as items of knowledge but as defeasible beliefs for which we have strong prima facie reasons. Pollock’s formal treatment in terms of collective defeat is nicely outlined in a supplement on OSCAR in Bringsjord & Govindarajulu (2018 [2019]).

4. Evaluating Computational Philosophy

The sections above were intended to be an introduction to computational philosophy largely by example, emphasizing both the variety of computational techniques employed and the spread of philosophical topics to which they are applied. This final section is devoted to the problems and prospects of computational philosophy.

4.1 Critiques

Although computational instantiations of logic are of an importantly different character, simulation—including agent-based simulation—plays a major role in much of computational philosophy. Beyond philosophy, across all disciplines of its application, simulation often raises suspicions.

A standard suspicion of simulation in various fields is that one “can prove anything” by manipulation of model structure and parameters. The worry is that an anticipated or desired effect could always be “baked in”, programmed as an artefact of the model itself. Production of a simulation would thus demonstrate not the plausibility of a hypothesis or a fact about the world but merely the cleverness of the programmer. In a somewhat different context, Rodney Brooks has written that the problem with simulations is that they are “doomed to succeed” (Brooks & Mataric 1993).

But consider a similar critique of logical argument: that one “can prove anything” by careful choice of premises and rules of inference. The proper response in the case of logical argument is to concede the fact that a derivation for any proposition can be produced from carefully chosen premises and rules, but to emphasize that it may be difficult or impossible to produce a derivation from agreed rules and clear and plausible premises.

A similar response is appropriate here. The effectiveness of simulation as argument depends on the strength of its assumptions and the soundness of its mechanisms just as the effectiveness of logical proof depends on the strength of its premises and the validity of its rules of inference. The legitimate force of the critique, then, is not that simulation is inherently untrustworthy but simply that the assumptions of any simulation are always open to further examination.

Anyone who has attempted computer simulation can testify that it is often extremely difficult or impossible to produce an expected effect, particularly a robust effect across a plausible range of parameters and with a plausible basic mechanism. Like experiment, simulation can demonstrate both the surprising fragility of a favored hypothesis and the surprising robustness of an unexpected effect.

Far from being “doomed to succeed”, simulations fail quite regularly in several important ways (Grim, Rosenberger, Rosenfeld, Anderson, & Eason 2013). Two standard forms of simulation failure are failure of verification and failure of validation (Kleijnen 1995; Windrum, Fabiolo, & Moneta 2007; Sargent 2013). Verification of a model demands ensuring that it accurately reflects design intention. If a computational model is intended to instantiate a particular theory of belief change, for example, it fails verification if it does not accurately represent the dynamics of that theory. Validation is perhaps the more difficult demand, particularly for philosophical computation: that the computational model adequately reflect those aspects of the real world it is intended to capture or explain.

If its critics are right, a simple example of verification failure is the original Weisberg and Muldoon model of scientific exploration outlined above (Weisberg & Muldoon 2009). The model was intended to include two kinds of epistemic agents—followers and mavericks—with distinct patterns of exploration. Mavericks avoid previously investigated points in their neighborhood. Followers move to neighboring points that have been investigated but that have a higher significance. In contrast to their description in the text, the critics argue, the software for the model used “>=” in place of “>” at a crucial place, with the result that followers moved to neighboring points with a higher or equal significance, resulting in their often getting stuck in a very local oscillation (Alexander, Himmelreich, & Thomson 2015). If so, Weisberg and Muldoon’s original model fails to match its design intention—it fails verification—though some of their general conclusions regarding epistemic diversity have been vindicated in further studies.
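The difference a single comparison operator can make is easy to exhibit. In the toy follower below (a one-dimensional stand-in invented for illustration, not the Weisberg–Muldoon code itself), a strict “>” halts on a plateau of equal significance, while “>=” produces exactly the kind of local oscillation the critics describe:

```python
import operator

def follower_path(landscape, start, better, steps=6):
    """Greedy follower on a 1-d landscape: at each step, move to the
    first neighbor judged 'better' than the current point."""
    pos, path = start, [start]
    for _ in range(steps):
        for nb in (pos - 1, pos + 1):
            if 0 <= nb < len(landscape) and better(landscape[nb], landscape[pos]):
                pos = nb
                break
        path.append(pos)
    return path

plateau = [0.2, 0.5, 0.5, 0.5, 0.2]
strict = follower_path(plateau, 2, operator.gt)  # stays put on the plateau
loose = follower_path(plateau, 2, operator.ge)   # hops between equal points
```

With `operator.gt` the follower never leaves position 2; with `operator.ge` it bounces between positions 2 and 1 indefinitely, a miniature of the verification failure alleged against the original model.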

Validation is a very different and more difficult demand: that a simulation model adequately captures relevant aspects of what it is intended to model. A common critique of specific models is that they are too simple, leaving out some crucial aspect of the modeled phenomenon. When properly targeted, this can be an entirely appropriate critique. But what it calls for is not the abandonment of modeling but the construction of a better model.

In time…the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations saw that that vast Map was Useless…. (Jorge Luis Borges, “On Exactitude in Science”, 1946 [1998 English translation: 325])

Borges’ story is often quoted in illustration of the fact that no model—and no scientific theory—can include all characteristics of what it is intended to model (Weisberg 2013). Models and theories would be useless if they did: the purpose of both theories and models is to present simpler representations or mechanisms that capture the relevant features or dynamics of a phenomenon. What aspects of a phenomenon are in fact the relevant aspects for understanding that phenomenon calls for evaluative input outside of the model. But where relevant aspects are omitted, irrelevant aspects included, or unrealistic or artificial constraints imposed, what a critique calls for is a better model (Martini & Pinto 2017; Thicke 2019).

There is one aspect of validation that can sometimes be gauged at the level of modeling itself and with modeling tools alone. Where the target is some general phenomenon—opinion polarization or the emergence of communication, for example—a model which produces that phenomenon within only a tiny range of parameters should be suspect. Our estimate of the parameters actually in play in the actual phenomenon may be merely intuitive or extremely rough, and the real phenomenon may be ubiquitous in a wide range of settings. In such a case, it would seem prima facie unlikely that a model which produced a parallel effect within only a tiny window of parameters could be capturing the general mechanism of a general phenomenon. In such cases robustness testing is called for, a test for one aspect of validation that can still be performed on the computer. To what extent do conclusions drawn from the modeling effect hold up under a range of parameter variations?
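Such a robustness test is straightforward to automate: sweep a grid of parameter settings, run the model at each, and measure what fraction of the parameter space produces the target effect. A minimal sketch, with a hypothetical stand-in for a full simulation run; the narrow parameter window here is an invented assumption for illustration.

```python
import itertools

def shows_effect(p, q):
    """Hypothetical stand-in for a full simulation run: True if the
    model produces the target phenomenon at setting (p, q). Here the
    effect appears only in an (invented) narrow window."""
    return 0.35 < p < 0.5 and 0.65 < q < 0.8

# Robustness sweep over a 21 x 21 grid of parameter settings.
grid = [i / 20 for i in range(21)]
settings = list(itertools.product(grid, grid))
hits = sum(shows_effect(p, q) for p, q in settings)
fraction = hits / len(settings)

# A tiny fraction (here under 1%) suggests the model is unlikely to be
# capturing the general mechanism of a ubiquitous phenomenon.
print(f"effect appears in {fraction:.1%} of parameter settings")
```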

The Hong-Page model of the value of diversity in exploration, outlined above, has been widely appealed to quite generally as support for cognitive diversity in groups. It has been cited in NASA internal documents, offered in support of diversity requirements at UCLA, and appears in an amicus curiae brief before the Supreme Court in support of promoting diversity in the armed forces (Fisher v. Univ. of Texas 2016). But the model is not robust enough across its several parameters to support sweepingly general claims that have been made on its basis regarding diversity and ability or expertise (Grim et al. 2019). Is that a problem internal to the model, or an external matter of its interpretation or application? There is much to be said for the latter alternative. The model is and remains an interesting one—interesting often in the ways in which it does show sensitivity to different parameters. Thus a failure of one aspect of validation—robustness—with an eye to one type of general claim can also call for further modeling: modeling intended to explore different effects in different contexts. Rosenstock, Bruner, and O’Connor (2017) offer a robustness test for the Zollman model outlined above. Borg, Frey, Šešelja, and Straßer (2018) offer new modeling grounded precisely in a robustness critique of their predecessors.

It is noteworthy that the simulation failures mentioned have been detected and corrected within the literature of simulation itself. These are effective critiques within disciplines employing simulation, rather than from outside. An illustration of such a case with both verification and validation in play is that of the Bruch and Mare critique of the Schelling segregation model and the response to it in van de Rijt, Siegel, and Macy (Schelling 1971, 1978; Bruch & Mare 2006; van de Rijt, Siegel, & Macy 2009). Many aspects of that model are clearly artificial: a limitation to two groups, spatialization on a cellular automata grid, and “unhappiness” or moving in terms of a sharp threshold cut-off of tolerance for neighbors of the other group. Bruch and Mare offered clear empirical evidence that residential preferences do not fit a sharp threshold. More importantly, they built a variation of the Schelling model in order to show that the Schelling effect disappeared with more realistic preference profiles. What Bruch and Mare challenged, in other words, was validation: not merely that aspects of the target phenomenon of residential segregation were left out (as they would be in any model), but that relevant aspects were left out: differences that made an important difference. Van de Rijt, Siegel, and Macy failed to understand why the smooth preference curves in Bruch and Mare’s data wouldn’t support rather than defeat a Schelling effect. On investigation they found that they would: Bruch and Mare’s validation claim against Schelling was itself founded in a programming error. Van de Rijt, Siegel, and Macy’s verdict was that Bruch and Mare’s attack itself failed model verification.
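The sharp threshold at issue can be illustrated with a minimal one-dimensional Schelling-style sketch (an illustration only: Schelling's own model is two-dimensional, and Bruch and Mare's variants use empirically derived preference curves rather than the threshold assumed here).

```python
import random

def unhappy(grid, i, threshold):
    """Sharp threshold preference: agent i is unhappy if the fraction
    of its occupied immediate neighbors sharing its group falls below
    the threshold."""
    nbrs = [grid[j] for j in (i - 1, i + 1)
            if 0 <= j < len(grid) and grid[j] is not None]
    if not nbrs:
        return False
    like = sum(1 for g in nbrs if g == grid[i])
    return like / len(nbrs) < threshold

def sweep(grid, threshold, rng):
    """One round: every unhappy agent relocates to a random empty cell."""
    for i in range(len(grid)):
        if grid[i] is not None and unhappy(grid, i, threshold):
            empties = [j for j, g in enumerate(grid) if g is None]
            if empties:
                j = rng.choice(empties)
                grid[j], grid[i] = grid[i], None
    return grid

rng = random.Random(0)
grid = [rng.choice(["A", "B", None]) for _ in range(60)]
n_agents = sum(g is not None for g in grid)
for _ in range(50):
    sweep(grid, threshold=0.5, rng=rng)
assert sum(g is not None for g in grid) == n_agents  # agents are conserved
```

Replacing the threshold test in `unhappy` with a smooth preference curve is exactly the kind of variation Bruch and Mare built in order to probe the model's validation.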

In the case of both Weisberg and Muldoon, and Bruch and Mare, original code was made freely available to their critics. In both cases, the original authors recognized the problems revealed, though emphasizing aspects of their work that survived the criticisms. Here again an important point is that critiques and responses of this type have arisen and been addressed within philosophical and scientific simulation itself, working toward better models and practices.

4.2 Prospects and Undeveloped Aspects

Philosophy at its best has always been in contact with the conceptual and scientific methodologies of its time. Computational philosophy can be seen as a contemporary instantiation of that contact, crossing disciplinary boundaries in order to both influence and benefit from developments in computer science and artificial intelligence. Incorporation of new technologies and wider application within philosophy can be expected and should be hoped for.

There is one extremely promising area in need of development within computational philosophy, though that area may also call for changes in conceptions of philosophy itself. Philosophy has classically been conceived as abstract rather than concrete, as seeking understanding at the most general level rather than specific prediction or retrodiction, often normative, and as operating in terms of logical argument and analysis rather than empirical data. The last of these characteristics, and to some extent the first, will have to be qualified if computational philosophy grows to incorporate a major body of contemporary techniques: those related to big data.

Expansion of computational philosophy in the intersection with big data seems an exciting prospect for social and political philosophy, in the analysis of belief change, and in understanding the social and historical dynamics of philosophy of science (Overton 2013; Pence & Ramsey 2018). A particular benefit would be better prospects for validation of a range of simulations and agent-based models, as emphasized above (Mäs 2019; Reijula & Kuorikoski 2019). If computational philosophy moves in that promising direction, however, it may take on a more empirical character in some respects. Emphasis on general and abstract understanding and concern with the normative will remain marks of a philosophical approach, but the membrane between some topic areas in philosophy and aspects of computational science can be expected to become more permeable.

Dissolving these disciplinary boundaries may itself be a good in some respects. The examples presented above make it clear that in incorporating (and contributing to) computational techniques developed in other areas, computational philosophy has long been cross-disciplinary. If our gain is a better understanding of the topics that have long fascinated us, compromise in disciplinary boundaries and a change in our concept of philosophy seem a small price to pay.

Bibliography

  • Abe, Jair Minoro, Seiki Akama, and Kazumi Nakamatsu, 2015,Introduction to Annotated Logics: Foundations for Paracomplete andParaconsistent Reasoning, (Intelligent Systems Reference Library88), Cham: Springer International Publishing.doi:10.1007/978-3-319-17912-4
  • Alama, Jesse, Paul E. Oppenheimer, and Edward N. Zalta, 2015,“Automating Leibniz’s Theory of Concepts”, inAutomated Deduction—CADE-25: Proceedings of the 25thInternational Conference on Automated Deduction, Berlin, Germany,August 1–7, 2015, Amy P. Felty and Aart Middeldorp (eds.),(Lecture Notes in Computer Science 9195), Cham: Springer InternationalPublishing, 73–97. doi:10.1007/978-3-319-21401-6_4
  • Alchourrón, Carlos E., Peter Gärdenfors, and DavidMakinson, 1985, “On the Logic of Theory Change: Partial MeetContraction and Revision Functions”,Journal of SymbolicLogic, 50(2): 510–530. doi:10.2307/2274239
  • Alexander, J. McKenzie, 2007,The Structural Evolution ofMorality, Cambridge: Cambridge University Press.doi:10.1017/CBO9780511550997
  • Alexander, J. McKenzie, Johannes Himmelreich, and ChristopherThompson, 2015, “Epistemic Landscapes, Optimal Search, and theDivision of Cognitive Labor”,Philosophy of Science,82(3): 424–453. doi:10.1086/681766
  • Anderson, Elizabeth, 2006, “The Epistemology ofDemocracy”,Episteme, 3(1–2): 8–22.doi:10.3366/epi.2006.3.1-2.8
  • Andrews, Peter B. and Chad E. Brown, 2006, “TPS: A HybridAutomatic-Interactive System for Developing Proofs”,Journalof Applied Logic, 4(4): 367–395.doi:10.1016/j.jal.2005.10.002
  • Angere, Staffan and Erik J. Olsson, 2017, “Publish Late,Publish Rarely!: Network Density and Group Performance in ScientificCommunication”, inScientific Collaboration and CollectiveKnowledge: New Essays, Thomas Boyer-Kassem, Conor Mayo-Wilson,and Michael Weisberg (eds.), New York: Oxford University Press, pp.34–63.
  • Axelrod, Robert, 1997, “The Dissemination of Culture: AModel with Local Convergence and Global Polarization”,Journal of Conflict Resolution, 41(2): 203–226.doi:10.1177/0022002797041002001
  • Axelrod, Robert and W. D. Hamilton, 1981, “The Evolution ofCooperation”,Science, 211(4489): 1390–1396.doi:10.1126/science.7466396
  • Bala, Venkatesh and Sanjeev Goyal, 1998, “Learning fromNeighbours”,Review of Economic Studies, 65(3):595–621. doi:10.1111/1467-937X.00059
  • Balbiani, Philippe, Jan Broersen, and Julien Brunel, 2009,“Decision Procedures for a Deontic Logic Modeling TemporalInheritance of Obligations”,Electronic Notes in TheoreticalComputer Science, 231: 69–89.doi:10.1016/j.entcs.2009.02.030
  • Baltag, Alexandru and Bryan Renne, 2016 [2019], “DynamicEpistemic Logic”,Stanford Encyclopedia of Philosophy,(Winter 2019), Edward N. Zalta (ed.), URL= <https://plato.stanford.edu/archives/win2016/entries/dynamic-epistemic>
  • Baltag, A., R. Boddy, and S. Smets, 2018, “Group Knowledgein Interrogative Epistemology”, inJaakko Hintikka onKnowledge and Game-Theoretical Semantics, Hans van Ditmarsch andGabriel Sandu (eds.), (Outstanding Contributions to Logic 12), Cham:Springer International Publishing, 131–164.doi:10.1007/978-3-319-62864-6_5
  • Benzmüller, Christoph and Bruno Woltzenlogel Paleo, 2014,“Automating Gödel’s Ontological Proof of God’sExistence with Higher-order Automated Theorem Provers”, inECAI 2014, Torsten Schaub, Gerhard Friedrich, and BarryO’Sullivan (eds.), (Frontiers in Artificial Intelligence andApplications 263), Amsterdam: IOS Press, pp. 93–98.doi:10.3233/978-1-61499-419-0-93
  • –––, 2016a, “The Inconsistency in Gödel’s Ontological Argument: A Success Story for AI in Metaphysics”, in Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI 2016), Gerhard Brewka (ed.), New York: AAAI Press, pp. 936–942.
  • –––, 2016b, “An Object-Logic Explanation for the Inconsistency in Gödel’s Ontological Theory”, in KI 2016: Advances in Artificial Intelligence Proceedings, Gerhard Friedrich, Malte Helmert, and Franz Wotawa (eds.), Berlin: Springer, pp. 244–250.
  • Benzmüller, Christoph, L. Weber, and Bruno WoltzenlogelPaleo, 2017, “Computer-Assisted Analysis of theAnderson–Hájek Ontological Controversy”,LogicaUniversalis, 11(1): 139–151.doi:10.1007/s11787-017-0160-9
  • Benzmüller, Christoph and David Fuenmayor, 2018, “CanComputers Help to Sharpen Our Understanding of OntologicalArguments?” in S. Gosh, R. Uppalari, K. Rao, V. Agarwal, and S.Sharma (eds.),Mathematics and Reality: Proceedings of the11th All Indian Students’ Conference on Science andSpiritual Quest (AISSQ), Bhudabenswar, Kolkata: The BhaktiedantaInstitute, pp. 195–226.
  • Benzmüller, Christoph, Nik Sultana, Lawrence C. Paulson, andFrank Theiß, 2015, “The Higher-Order Prover Leo-II”,Journal of Automated Reasoning, 55(4): 389–404.doi:10.1007/s10817-015-9348-y
  • Benzmüller, Christoph, Xavier Parent, and Leendert van derTorre, 2018, “A Deontic Logic Reasoning Infrastructure”,inSailing Routes in the World of Computation: 14th Conference onComputability in Europe, CiE 2018, Florin Manea, Russell G.Miller, and Dirk Nowotka (eds.), (Lecture Notes in Computer Science10936), Cham: Springer International Publishing, 60–69.doi:10.1007/978-3-319-94418-0_6
  • Benzmüller, Christoph, Ali Farjami, and Xavier Parent, 2018,“A Dyadic Deontic Logic in HOL”, inDeontic Logic andNormative Systems, 14th International Conference (DEON2018), Jan Broersen, Cleo Condoravdi, Nair Shyam and GabriellaPigozzi (eds.), London: College Publications, pp. 33–50.
  • Berleman, James E., Jodie Scott, Taliana Chumley, and John R.Kirby, 2008, “Predataxis Behavior inMyxococcusxanthus”,Proceedings of the National Academy ofSciences, 105(44): 17127–17132.doi:10.1073/pnas.0804387105
  • Betz, Gregor, 2013,Debate Dynamics: How Controversy ImprovesOur Beliefs, Dordrecht: Springer Netherlands.doi:10.1007/978-94-007-4599-5
  • Beyleveld, Deryck, 1992,The Dialectical Necessity ofMorality: An Analysis and Defense of Alan Gewirth’s Argument tothe Principle of Generic Consistency, Chicago: University ofChicago Press.
  • –––, 2012, “The Principle of GenericConsistency as the Supreme Principle of Human Rights”,HumanRights Review, 13(1): 1–18.doi:10.1007/s12142-011-0210-2
  • Black, Max, 1962,Models and Metaphors: Studies in Languageand Philosophy, Ithaca, NY: Cornell University Press.
  • Bobzien, Susanne, 2006 [2016], “Ancient Logic”,The Stanford Encyclopedia of Philosophy, (Winter 2016),Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2016/entries/logic-ancient/>
  • Boland, Philip J., 1989, “Majority Systems and the CondorcetJury Theorem”,Journal of the Royal Statistical Society:Series D (The Statistician), 38(3): 181–189.doi:10.2307/2348873
  • Boland, Philip J., Frank Proschan, and Y. L. Tong, 1989,“Modelling Dependence in Simple and Indirect MajoritySystems”,Journal of Applied Probability, 26(1):81–88. doi:10.2307/3214318
  • Boldt, Brendon and David Mortensen, 2024, “A Review of theApplications of Deep Learning-Based Emergent Communication,”Transactions on Machine Learning Research, 02/2024 [Boldt and Mortensen 2024 available online].
  • Borg, AnneMarie, Daniel Frey, Dunja Šešelja, and Christian Straßer, 2018, “Epistemic Effects of Scientific Interaction: Approaching the Question with an Argumentative Agent-Based Model”, Historical Social Research, 43(1): 285–309. doi:10.12759/HSR.43.2018.1.285-307
  • Borges, Jorge Luis [pseud. Suarez Miranda], 1946 [1998],“Del rigor en la ciencia”,Los Anales de BuenosAires, 1(3). Translated as “On Exactitude inScience”, in hisCollected Fictions, Andrew Hurley(trans.), New York: Penguin Books, 1998.
  • Bramson, Aaron, Patrick Grim, Daniel J. Singer, William J. Berger,Graham Sack, Steven Fisher, Carissa Flocken, and Bennett Holman, 2017,“Understanding Polarization: Meanings, Measures, and ModelEvaluation”,Philosophy of Science, 84(1):115–159. doi:10.1086/688938
  • Bringsjord, Selmer and Naveen Sundar Govindarajulu, 2018 [2019],“Artificial Intelligence”, inThe StanfordEncyclopedia of Philosophy, Winter 2019, Edward N. Zalta (ed.),URL = <https://plato.stanford.edu/archives/win2019/entries/artificial-intelligence/>
  • Brooks, Rodney A. and Maja J. Mataric, 1993, “Real Robots,Real Learning Problems”, inRobot Learning, Jonathan H.Connell and Sridhar Mahadevan (eds.), Dordrecht: Kluwer,193–213. doi:10.1007/978-1-4615-3184-5_8
  • Bruch, Elizabeth E. and Robert D. Mare, 2006, “NeighborhoodChoice and Neighborhood Change”,American Journal ofSociology, 112(3): 667–709. doi:10.1086/507856
  • Bruner, Justin P., 2017, “Minority (Dis)advantage in Population Games”, Synthese, 196(1). doi:10.1007/s11229-017-1487-8
  • –––, 2020, “Locke, Nozick and the State ofNature”,Philosophical Studies, 177: 705–726,doi:10.1007/s11098-018-1201-9
  • Centola, Damon, Juan Carlos González-Avella, VíctorM. Eguíluz, and Maxi San Miguel, 2007, “Homophily,Cultural Drift, and the Co-Evolution of Cultural Groups”,Journal of Conflict Resolution, 51(6): 905–929.doi:10.1177/0022002707307632
  • Cheney, Dorothy L. and Robert M. Seyfarth, 1990,How MonkeysSee the World: Inside the Mind of Another Species, Chicago:University of Chicago Press.
  • Chung, Hun, 2015, “Hobbes’s State of Nature: A ModernBayesian Game-Theoretic Analysis”,Journal of the AmericanPhilosophical Association, 1(3): 485–508.doi:10.1017/apa.2015.12
  • Church, Alonzo, 1936, “A Note on theEntscheidungsproblem”,Journal of Symbolic Logic, 1(1):40–41. doi:10.2307/2269326
  • Churchland, Paul M., 1995,The Engine of Reason, the Seat ofthe Soul: A Philosophical Journey into the Brain, Cambridge MA:MIT Press.
  • Climenhaga, Nevin, 2020, “The Structure of EpistemicProbabilities,”Philosophical Studies, 177:3213–3242. doi:10.1007/s11098-019-01367-0
  • –––, 2023, “Evidence and InductiveInference,” in Maria Lasonen-Aarnko and Clayton Littlejohn,The Routledge Handbook of the Philosophy of Evidence, NewYork: Routledge, 435–449.
  • Condorcet, Nicolas de, 1785 [1995], “An Essay on theApplication of Analysis to the Probability of Decisions Rendered by aPlurality of Votes (fifth part)”, Part translated inClassics of Social Choice, Iain McLean and Arnold B. Urken(trans. and eds.), Ann Arbor MI: University of Michigan Press, pp.91–112, 1995.
  • Davis, Martin, 1957 [1983], “A Computer Program forPresburger’s Algorithm”, presented at the Cornell SummerInstitute for Symbolic Logic. Reprinted in Siekmann and Wrightson1983: 41–48.
  • –––, 1983, “The Prehistory and EarlyHistory of Automated Deduction”, in Siekmann and Wrightson 1983:1–28.
  • Dawkins, Richard, 1976,The Selfish Gene, New York:Oxford University Press.
  • Deffuant, Guillaume, 2006, “Comparing Extremism PropagationPatterns in Continuous Opinion Models”,Journal ofArtificial Societies and Social Simulation, 9(3): 8. URL = <http://jasss.soc.surrey.ac.uk/9/3/8.html>
  • Deffuant, Guillaume, Frédéric Amblard, GérardWeisbuch, and Thierry Faure, 2002, “How Can Extremism Prevail? AStudy Based on the Relative Agreement Interaction Model”,Journal of Artificial Societies and Social Simulation, 5(4):1. URL = <http://jasss.soc.surrey.ac.uk/5/4/1.html>
  • Dennett, Daniel, 1979, “Artificial Intelligence asPhilosophy and as Psychology”, inPhilosophical Perspectivesin Artificial Intelligence, Martin Ringle (ed.), AtlanticHighlands, NJ: Humanities Press, pp. 57–80.
  • Dietrich, Franz and Kai Spiekermann, 2013, “IndependentOpinions? On the Causal Foundations of Belief Formation and JuryTheorems”,Mind, 122(487): 655–685.doi:10.1093/mind/fzt074
  • Dorst, Kevin, 2023, “Rational Polarization,”Philosophical Review, 132(3): 355–458.
  • Douven, Igor and Alexander Riegler, 2010, “Extending theHegselmann-Krause Model I”,Logic Journal of IGPL,18(2): 323–335. doi:10.1093/jigpal/jzp059
  • Dung, Phan Minh, 1995, “On the Acceptability of Argumentsand Its Fundamental Role in Nonmonotonic Reasoning, Logic Programmingandn-Person Games”,Artificial Intelligence,77(2): 321–357. doi:10.1016/0004-3702(94)00041-X
  • Ernst, Zachary, Branden Fitelson, Kenneth Harris, and Larry Wos,2002, “Shortest Axiomatizations of Implicational S4 andS5”,Notre Dame Journal of Formal Logic, 43(3):169–179. doi:10.1305/ndjfl/1074290715
  • Fagin, Ronald, Joseph Y. Halpern, Yoram Moses and Moshe Vardi,1995,Reasoning About Knowledge, Cambridge, MA: MITPress.
  • Falkenhainer, Brian, Kenneth D. Forbus, and Dedre Gentner, 1989,“The Structure-Mapping Engine: Algorithm and Examples”,Artificial Intelligence, 41(1): 1–63.doi:10.1016/0004-3702(89)90077-5
  • Fisher, Ronald, 1930,The Genetical Theory of NaturalSelection, Oxford: Clarendon Press.
  • Fisher v. University of Texas, 2016, Brief for Lt. Gen. Julius W.Becton, Jr., Gen. John P. Abizaid, Adm. Dennis C. Blair, Gen. BryanDoug Brown, Gen. George W. Casey, Lt. Gen Daniel W. Christman, Gen.Wesley K. Clark, Adm. Archie Clemins, Gen. Ann E. Dunwoody, Gen.Ronald R. Fogleman, Adm. Edmund P. Giambastiani, Jr., et al., as AmiciCuriae in Support of respondents. [Fisher v. University of Texas brief 2016 available online]
  • Fitelson, Branden and Edward N. Zalta, 2007, “Steps Toward aComputational Metaphysics”,Journal of PhilosophicalLogic, 36(2): 227–247. doi:10.1007/s10992-006-9038-7
  • Fitzgerald, Nicole and Jacopo Tagliabue, 2022, “On thePlurality of Graphs,”Journal of Logic and Computation,32(6): 1129–1141.
  • Fuenmayor, David and Christoph Benzmüller, 2018,“Formalisation and Evaluation of Alan Gewirth’s Proof forthe Principle of Generic Consistency in Isabelle/HOL”,Archive of Formal Proofs, 30 October 2018. URL = <https://isa-afp.org/entries/GewirthPGCProof.html>
  • Garbacz, Paweł, 2012, “Prover9’s SimplificationExplained Away”,Australasian Journal of Philosophy,90(3): 585–592. doi:10.1080/00048402.2011.636177
  • Gentner, Dedre, 1982, “Are Scientific AnalogiesMetaphors?” inMetaphor: Problems and Perspectives,David S. Miall (ed.), Brighton: Harvester Press, pg.106–132.
  • Gewirth, Alan, 1978,Reason and Morality, Chicago:University of Chicago Press.
  • Gödel, Kurt, 1931, “Über formal unentscheidbareSätze der Principia Mathematica und verwandter Systeme, I”,Monatshefte für Mathematik und Physik, 38(1):173–198.
  • Goldman, Alvin I., 1987, “Foundations of SocialEpistemics”,Synthese, 73(1): 109–144.doi:10.1007/BF00485444
  • –––, 1999,Knowledge in a Social World,Oxford: Oxford University Press. doi:10.1093/0198238207.001.0001
  • Goldman, Alvin and Cailin O’Connor, 2001 [2019],“Social Epistemology”, inThe Stanford Encyclopedia ofPhilosophy, (Fall 2019), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/fall2019/entries/epistemology-social/>
  • Goldman, Alvin and Dennis Whitcomb (eds.), 2011,SocialEpistemology: Essential Readings, New York: Oxford UniversityPress.
  • Gordon, Michael J.C. and T.F. Melham (eds.), 1993,Introduction to HOL: A Theorem Proving Environment for HigherOrder Logic, Cambridge: Cambridge University Press.
  • Governatori, Guido and Giovanni Sartor (eds.), 2010,DeonticLogic in Computer Science: 10th International Conference, DEON 2010,Fiesole, Italy, July 7–9, 2010. Proceedings, (Lecture Notesin Computer Science 6181), Berlin, Heidelberg: Springer BerlinHeidelberg. doi:10.1007/978-3-642-14183-6
  • Gray, Jonathan, 2016, “’Let us Calculate!’:Leibniz, Lull, and the Computational Imagination”,ThePublic Domain Review, 10 November 2016. URL = <https://publicdomainreview.org/2016/11/10/let-us-calculate-leibniz-llull-and-computational-imagination/>
  • Grim, Patrick, 1995, “The Greater Generosity of theSpatialized Prisoner’s Dilemma”,Journal ofTheoretical Biology, 173(4): 353–359.doi:10.1006/jtbi.1995.0068
  • –––, 1996, “Spatialization and GreaterGenerosity in the Stochastic Prisoner’s Dilemma”,Biosystems, 37(1–2): 3–17.doi:10.1016/0303-2647(95)01541-8
  • –––, 2009, “Threshold Phenomena inEpistemic Networks”,Proceedings, AAAI Fall Symposium onComplex Adaptive Systems and the Threshold Effect, FS-09-03, AAAIPress.
  • Grim, Patrick, Gary R. Mar, and Paul St. Denis, 1998,ThePhilosophical Computer: Exploratory Essays in Philosophical ComputerModeling, Cambridge MA: MIT Press.
  • Grim, Patrick, Paul St. Denis and Trina Kokalis, 2002,“Learning to Communicate: The Emergence of Signaling inSpatialized Arrays of Neural Nets”,Adaptive Behavior,10(4): 45–70. doi:10.1177/10597123020101003
  • Grim, Patrick, Trina Kokalis, Ali Alai-Tafti, Nicholas Kilb andPaul St. Denis, 2004, “Making Meaning Happen”,Journalof Experimental and Theoretical Artificial Intelligence, 16:209–243. doi:10.1080/09528130412331294715
  • Grim, Patrick, Daniel J. Singer, Christopher Reade, and StevenFisher, 2015, “Germs, Genes, and Memes: Function and FitnessDynamics on Information Networks”,Philosophy ofScience, 82(2): 219–243. doi:10.1086/680486
  • Grim, Patrick, Daniel J. Singer, Steven Fisher, Aaron Bramson,William J. Berger, Christopher Reade, Carissa Flocken, and Adam Sales,2013, “Scientific Networks on Data Landscapes: QuestionDifficulty, Epistemic Success, and Convergence”,Episteme, 10(4): 441–464. doi:10.1017/epi.2013.36
  • Grim, Patrick, Daniel J. Singer, Aaron Bramson, Bennett Holman,Sean McGeehan, and William J. Berger, 2019, “Diversity, Ability,and Expertise in Epistemic Communities”,Philosophy ofScience, 86(1): 98–123. doi:10.1086/701070
  • Grim, Patrick, Aaron Bramson, Daniel J. Singer, William J. Berger,Jiin Jung, and Scott E. Page, 2020, “Representation in Models ofEpistemic Democracy”,Episteme, 17(4): 498–518.First online: 21 December 2018. doi:10.1017/epi.2018.51
  • Grim, Patrick, Robert Rosenberger, Adam Rosenfeld, Brian Anderson,and Robb E. Eason, 2013, “How Simulations Fail”,Synthese, 190(12): 2367–2390.doi:10.1007/s11229-011-9976-7
  • Grim, Patrick, Frank Seidl, Calum McNamara, Hinton E. Rago,Isabell N. Astor, Caroline Diaso and Peter Ryner, 2022a,“Scientific Theories as Bayesian Nets: Structure and EvidenceSensitivity,”Philosophy of Science, 89:42–69.
  • Grim, Patrick, Frank Seidl, Calum McNamara, Isabell N. Astor andCaroline Diaso, 2022b, “The Punctuated Equilibrium of ScientificChange: A Bayesian Network Model,”Synthese, 200, 297.doi:10.1007/s11229-022-03720-z
  • Gunn, Paul, 2014, “Democracy and Epistocracy”,Critical Review, 26(1–2): 59–79.doi:10.1080/08913811.2014.907041
  • Habermas, Jürgen, 1992 [1996], Faktizität und Geltung. Beiträge zur Diskurstheorie des Rechts und des demokratischen Rechtsstaats, Frankfurt am Main: Suhrkamp Verlag. Translated as Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy, William Rehg (trans.), Cambridge MA: MIT Press, 1996.
  • Haldane, J.B.S., 1932,The Causes of Evolution, London:Longmans, Green & Co.
  • Halpern, Joseph Y. and Yoram Moses, 1984, “Knowledge andCommon Knowledge in a Distributed Environment”, inProceedings of the Third Annual ACM Symposium on Principles ofDistributed Computing, ACM Press, pp. 50–61.
  • Hamilton, W.D., 1964a, “The Genetical Evolution of SocialBehaviour. I”,Journal of Theoretical Biology, 7(1):1–16. doi:10.1016/0022-5193(64)90038-4
  • –––, 1964b, “The Genetical Evolution ofSocial Behaviour. II”,Journal of Theoretical Biology,7(1): 17–52. doi:10.1016/0022-5193(64)90039-6
  • Hegselmann, Rainer and Ulrich Krause, 2002, “OpinionDynamics and Bounded Confidence: Models, Analysis andSimulation”,Journal of Artificial Societies and SocialSimulation, 5(3): 2. URL = <http://jasss.soc.surrey.ac.uk/5/3/2.html>
  • –––, 2005, “Opinion Dynamics Driven byVarious Ways of Averaging”,Computational Economics,25(4): 381–405. doi:10.1007/s10614-005-6296-3
  • –––, 2006, “Truth and Cognitive Divisionof Labour: First Steps towards a Computer Aided SocialEpistemology”,Journal of Artificial Societies and SocialSimulation, 9(3): 10. URL = <http://jasss.soc.surrey.ac.uk/9/3/10.html>
  • Hesse, Hermann, 1943 [1969], Das Glasperlenspiel, Switzerland. Translated as The Glass Bead Game (Magister Ludi), Richard & Clara Winston (trans.), New York: Holt, Rinehart & Winston, 1969.
  • Hesse, Mary B., 1966,Models and Analogies in Science,Notre Dame IN: Notre Dame University Press.
  • Hobbes, Thomas, 1651, Leviathan, London: Crooke; reprinted London: Penguin, 1982.
  • Hofstadter, Douglas, 2008,Fluid Concepts and CreativeAnalogies, New York: Basic Books.
  • Holland, John H., 1975,Adaptation in Natural and ArtificialSystems, Cambridge, MA: MIT Press.
  • Holman, Bennett and Justin Bruner, 2017, “Experimentation by Industrial Selection”, Philosophy of Science, 84(5): 1008–1019.
  • Holyoak, Keith J. and Paul Thagard, 1989, “AnalogicalMapping by Constraint Satisfaction”,Cognitive Science,13(3): 295–355. doi:10.1207/s15516709cog1303_1
  • –––, 1995,Mental Leaps: Analogy in CreativeThought, Cambridge, MA: MIT Press.
  • Hong, Lu and Scott E. Page, 2004, “Groups of Diverse ProblemSolvers Can Outperform Groups of High-Ability Problem Solvers”,Proceedings of the National Academy of Sciences, 101(46):16385–16389. doi:10.1073/pnas.0403723101
  • Horner, Jack K., 2019, “A Computationally AssistedReconstruction of an Ontological Argument in Spinoza’s TheEthics”,Open Philosophy, (special issue oncomputational philosophy) 2(1): 211–229.doi:10.1515/opphil-2019-0012
  • Isenberg, Daniel J., 1986, “Group Polarization: A CriticalReview and Meta-Analysis.”,Journal of Personality andSocial Psychology, 50(6): 1141–1151.doi:10.1037/0022-3514.50.6.1141
  • Kalman, John Arnold, 2001,Automated Reasoning withOtter, Princeton, NJ: Rinton Press.
  • Kauffman, Stuart, 1995,At Home in the Universe: The Searchfor Laws of Self-Organization and Complexity, New York: OxfordUniversity Press.
  • Kendall, Graham, Xin Yao, and Siang Yew Chong, 2007,TheIterated Prisoner’s Dilemma: 20 Years On, Singapore: WorldScientific.
  • Khaldūn, Ibn, 1377 [1958], The Muqaddimah: An Introduction to History, vol. 3, Franz Rosenthal (trans.), Princeton, NJ: Princeton University Press.
  • Kirchner, Daniel, Christoph Benzmüller, and Edward N. Zalta,2019, “Computer Science and Metaphysics: ACross-Fertilization”,Open Philosophy, (special issueon computational philosophy) 2(1): 230–251.doi:10.1515/opphil-2019-0015
  • Kitcher, Philip, 1993,The Advancement of Science: ScienceWithout Legend, Objectivity Without Illusions, Oxford: OxfordUniversity Press. doi:10.1093/0195096533.001.0001
  • Kleijnen, Jack P.C., 1995, “Verification and Validation ofSimulation Models”,European Journal of OperationalResearch, 82(1): 145–162.doi:10.1016/0377-2217(94)00016-6
  • Klein, Dominik, Johannes Marx, and Kai Fischbach, 2018,“Agent-Based Modeling in Social Science, History, andPhilosophy. An Introduction.”,Historical SocialResearch, 43(1): 7–27. doi:10.12759/HSR.43.2018.1.7-27
  • Klemm, Konstantin, Víctor M. Eguíluz, RaúlToral, and Maxi San Miguel, 2003a, “Global Culture: ANoise-Induced Transition in Finite Systems”,PhysicalReviewE, 67(4pt2): 045101. doi:10.1103/PhysRevE.67.045101
  • –––, 2003b, “Nonequilibrium Transitions inComplex Networks: A Model of Social Interaction”,PhysicalReview E, 67 (2): 026120. doi:10.1103/PhysRevE.67.026120
  • –––, 2003c, “Role of Dimensionality inAxelrod’s Model for the Dissemination of Culture”,Physica A, 327 (1): 1–5.doi:10.1016/S0378-4371(03)00428-X
  • –––, 2005, “Globalization, Polarizationand Cultural Drift”,Journal of Economic Dynamics andControl, 29(1–2): 321–334.doi:10.1016/j.jedc.2003.08.005
  • Krakauer, David C. (ed.), 2019,Worlds Hidden in Plain Sight:Thirty Years of Complexity Thinking at the Santa Fe Institute,Santa Fe, NM: Santa Fe Institute Press.
  • Kuehn, Daniel, 2017, “Diversity, Ability, and Democracy: ANote on Thompson’s Challenge to Hong and Page”,Critical Review, 29(1): 72–87.doi:10.1080/08913811.2017.1288455
  • Kuhn, Steven, 1997 [2019], “Prisoner’s Dilemma”,inThe Stanford Encyclopedia of Philosophy, (Winter 2019),Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2019/entries/prisoner-dilemma/>
  • Lakoff, George and Mark Johnson, 1980,Metaphors We LiveBy, Chicago: University of Chicago Press.
  • Lakoff, George and Mark Turner, 1989,More Than Cool Reason: AField Guide to Poetic Metaphor, Chicago: University of ChicagoPress.
  • Landemore, Hélène, 2013,Democratic Reason:Politics, Collective Intelligence, and the Rules of the Many,Princeton NJ: Princeton University Press.
  • Leibniz, Gottfried Wilhelm, 1666 [1923], Dissertatio De Arte Combinatoria, URL = <https://archive.org/details/ita-bnc-mag-00000844-001/page/n11/mode/2up>; Sämtliche Schriften und Briefe, Berlin-Brandenburgische Akademie der Wissenschaften / Akademie der Wissenschaften zu Göttingen (eds.), Berlin: Akademie Verlag.
  • –––, 1685 [1951],The Art of Discovery,inLeibniz: Selections, Philip P. Wiener (ed., trans), NewYork: Scribner, 1951.
  • Lehrer, Keith and Carl Wagner, 1981,Rational Consensus inScience and Society, Dordrecht: Reidel.
  • Lem, Stanislaw, 1964 [2013],Summa Technologiae, JoannaZylinski, Minneapolis, MN: University of Minnesota Press, 2013.
  • Lewis, David, 1969, Convention: A Philosophical Study, Cambridge, MA: Harvard University Press.
  • Llull, Ramon, 1308 [1986], Ars generalis ultima, in Raimondi Lulli Opera Latina XIV / CCCM 75, 4–527, Turnholt, Belgium: Brepols.
  • Longino, Helen E., 1990, Science as Social Knowledge: Values and Objectivity in Scientific Inquiry, Princeton, NJ: Princeton University Press.
  • –––, 2002 [2019], “The Social Dimensions of Scientific Knowledge”, in The Stanford Encyclopedia of Philosophy (Summer 2019), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/sum2019/entries/scientific-knowledge-social/>
  • Lord, Charles G., Lee Ross, and Mark R. Lepper, 1979, “Biased Assimilation and Attitude Polarization: The Effects of Prior Theories on Subsequently Considered Evidence”, Journal of Personality and Social Psychology, 37(11): 2098–2109. doi:10.1037/0022-3514.37.11.2098
  • Loveland, Donald W., 1984, “Automated Theorem Proving: A Quarter Century Review”, Contemporary Mathematics, 29: 1–45.
  • MacKenzie, Donald, 1995, “The Automation of Proof: A Historical and Sociological Exploration”, IEEE Annals of the History of Computing, 17(3): 7–29. doi:10.1109/85.397057
  • Martin, Ernst, 1925 [1992], Die Rechenmaschinen und ihre Entwicklungsgeschichte, Pappenheim, Germany. Translated as The Calculating Machines: Their History and Development, Peggy Aldrich Kidwell and Michael R. Williams (trans.), Cambridge, MA: MIT Press, 1992.
  • Martini, Carlo and Manuela Fernández Pinto, 2017, “Modeling the Social Organization of Science: Chasing Complexity through Simulations”, European Journal for Philosophy of Science, 7(2): 221–238. doi:10.1007/s13194-016-0153-1
  • Mäs, Michael, 2019, “Challenges to Simulation Validation in the Social Sciences. A Critical Rationalist Perspective”, in Computer Simulation Validation, Claus Beisbart and Nicole J. Saam (eds.), Cham: Springer International Publishing, 857–879. doi:10.1007/978-3-319-70766-2_35
  • McCune, William, 1997, “Solution of the Robbins Problem”, Journal of Automated Reasoning, 19(3): 263–276. doi:10.1023/A:1005843212881
  • McCune, William and Larry Wos, 1997, “Otter: The CADE-13 Competition Incarnations”, Journal of Automated Reasoning, 18(2): 211–220. doi:10.1023/A:1005843632307
  • McRobbie, Michael A., 1991, “Automated Reasoning and Nonclassical Logics: Introduction”, Journal of Automated Reasoning, 7(4): 447–451. doi:10.1007/BF01880323
  • Meadows, Michael and Dave Cliff, 2012, “Reexamining the Relative Agreement Model of Opinion Dynamics”, Journal of Artificial Societies and Social Simulation, 15(4): 4. doi:10.18564/jasss.2083
  • Meyer, John-Jules Ch. and Roel Wieringa, 1994, Deontic Logic in Computer Science: Normative System Specification, Hoboken, NJ: Wiley.
  • Miller, George A., Richard Beckwith, Christiane Fellbaum, Derek Gross, and Katherine J. Miller, 1990, “Introduction to WordNet: An On-Line Lexical Database”, International Journal of Lexicography, 3(4): 235–244. doi:10.1093/ijl/3.4.235
  • Mitchell, Melanie, 2011, Complexity: A Guided Tour, New York: Oxford University Press.
  • Nowak, Martin A. and Karl Sigmund, 1992, “Tit for Tat in Heterogeneous Populations”, Nature, 355(6357): 250–253. doi:10.1038/355250a0
  • O’Connor, Cailin, 2017, “The Cultural Red King Effect”, The Journal of Mathematical Sociology, 41(3): 155–171. doi:10.1080/0022250X.2017.1335723
  • –––, 2023, Modelling Scientific Communities, Cambridge: Cambridge University Press.
  • O’Connor, Cailin and Justin Bruner, 2019, “Dynamics and Diversity in Epistemic Communities”, Erkenntnis, 84(1): 101–119. doi:10.1007/s10670-017-9950-y
  • O’Connor, Cailin, Liam Kofi Bright, and Justin P. Bruner, 2019, “The Emergence of Intersectional Disadvantage”, Social Epistemology, 33(1): 23–41. doi:10.1080/02691728.2018.1555870
  • O’Connor, Cailin and James Owen Weatherall, 2018, “Scientific Polarization”, European Journal for Philosophy of Science, 8(3): 855–875. doi:10.1007/s13194-018-0213-9
  • –––, 2019, The Misinformation Age: How False Beliefs Spread, New Haven: Yale University Press.
  • O’Leary, Daniel J., 1991, “Principia Mathematica and the Development of Automated Theorem Proving”, in Perspectives on the History of Mathematical Logic, Thomas Drucker (ed.), Boston, MA: Birkhäuser Boston, 47–53. doi:10.1007/978-0-8176-4769-8_4
  • Olsson, Erik J., 2011, “A Simulation Approach to Veritistic Social Epistemology”, Episteme, 8(2): 127–143. doi:10.3366/epi.2011.0012
  • –––, 2013, “A Bayesian Simulation Model of Group Deliberation and Polarization”, in Bayesian Argumentation, Frank Zenker (ed.), Dordrecht: Springer Netherlands, 113–133. doi:10.1007/978-94-007-5357-0_6
  • Oppenheimer, Paul E. and Edward N. Zalta, 2011, “A Computationally-Discovered Simplification of the Ontological Argument”, Australasian Journal of Philosophy, 89(2): 333–349. doi:10.1080/00048401003674482
  • Overton, James A., 2013, “‘Explain’ in Scientific Discourse”, Synthese, 190(8): 1383–1405. doi:10.1007/s11229-012-0109-8
  • Page, Scott E., 2007, The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies, Princeton, NJ: Princeton University Press.
  • Paulson, Lawrence C., 1990, “Isabelle: The Next 700 Theorem Provers”, in Piergiorgio Odifreddi (ed.), Logic and Computer Science, Cambridge, MA: Academic Press, pp. 361–386.
  • Pearl, Judea, 1988, Probabilistic Reasoning in Intelligent Systems, San Mateo, CA: Morgan Kaufmann.
  • –––, 2000, Causality: Models, Reasoning, and Inference, Cambridge: Cambridge University Press.
  • Pearl, Judea and Dana Mackenzie, 2018, The Book of Why: The New Science of Cause and Effect, New York: Basic Books.
  • Pence, Charles H. and Grant Ramsey, 2018, “How to Do Digital Philosophy of Science”, Philosophy of Science, 85(5): 930–941. doi:10.1086/699697
  • Pollock, John L., 1989, How to Build a Person: A Prolegomenon, Cambridge, MA: MIT Press.
  • –––, 1995, Cognitive Carpentry: A Blueprint for How to Build a Person, Cambridge, MA: MIT Press.
  • –––, 2006, Thinking about Acting: Logical Foundations for Rational Decision Making, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780195304817.001.0001
  • Portoraro, Frederic, 2001 [2019], “Automated Reasoning”, in The Stanford Encyclopedia of Philosophy (Spring 2019), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/spr2019/entries/reasoning-automated/>
  • Reijula, Samuli and Jaakko Kuorikoski, 2019, “Modeling Epistemic Communities”, in Miranda Fricker, Peter J. Graham, David Henderson, and Nikolaj J.L.L. Pedersen (eds), The Routledge Handbook of Social Epistemology, Abingdon-on-Thames: Routledge, chapter 24.
  • Rendsvig, Rasmus and John Symons, 2006 [2019], “Epistemic Logic”, in The Stanford Encyclopedia of Philosophy (Summer 2019), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/sum2019/entries/logic-epistemic/>
  • Rescher, Nicholas, 2012, Leibniz and Cryptography, University Library System, University of Pittsburgh.
  • Riazanov, Alexandre and Andrei Voronkov, 2002, “The Design and Implementation of VAMPIRE”, AI Communications, 15(2–3): 91–110.
  • Riegler, Alexander and Igor Douven, 2009, “Extending the Hegselmann–Krause Model III: From Single Beliefs to Complex Belief States”, Episteme, 6(2): 145–163. doi:10.3366/E1742360009000616
  • –––, 2010, “Extending the Hegselmann–Krause Model II”, in T. Czarnecki, K. Kijania-Placek, O. Poller, and J. Wolenski (eds.), The Analytical Way: Proceedings of the 6th European Congress of Analytic Philosophy, London: College Publications.
  • Robson, Arthur J., 1990, “Efficiency in Evolutionary Games: Darwin, Nash and the Secret Handshake”, Journal of Theoretical Biology, 144(3): 379–396. doi:10.1016/S0022-5193(05)80082-7
  • Rosenstock, Sarita, Justin Bruner, and Cailin O’Connor, 2017, “In Epistemic Networks, Is Less Really More?”, Philosophy of Science, 84(2): 234–252. doi:10.1086/690717
  • Rubin, Hannah and Cailin O’Connor, 2018, “Discrimination and Collaboration in Science”, Philosophy of Science, 85(3): 380–402. doi:10.1086/697744
  • Rushby, John, 2018, “A Mechanically Assisted Examination of Begging the Question in Anselm’s Ontological Argument”, Journal of Applied Logics, 5(7): 1473–1496.
  • Sargent, R. G., 2013, “Verification and Validation of Simulation Models”, Journal of Simulation, 7(1): 12–24. doi:10.1057/jos.2012.20
  • Schelling, Thomas C., 1971, “Dynamic Models of Segregation”, The Journal of Mathematical Sociology, 1(2): 143–186. doi:10.1080/0022250X.1971.9989794
  • –––, 1978, Micromotives and Macrobehavior, New York: Norton.
  • Shults, F. LeRon, 2019, “Computer Modeling in Philosophy of Religion”, Open Philosophy (special issue on computer modeling in philosophy), 2(1): 108–125. doi:10.1515/opphil-2019-0011
  • Sidgwick, Henry, 1886, Outlines of the History of Ethics for English Readers, London: Macmillan; reprinted Indianapolis, IN: Hackett, 1988.
  • Siekmann, Jörg and G. Wrightson (eds.), 1983, Automation of Reasoning: Classical Papers on Computational Logic, 1957–1965, volume 1, Berlin: Springer.
  • Singer, Daniel J., 2019, “Diversity, Not Randomness, Trumps Ability”, Philosophy of Science, 86(1): 178–191. doi:10.1086/701074
  • Singer, Daniel J., Aaron Bramson, Patrick Grim, Bennett Holman, Jiin Jung, Karen Kovaka, Anika Ranginani, and William J. Berger, 2019, “Rational Social and Political Polarization”, Philosophical Studies, 176(9): 2243–2267. doi:10.1007/s11098-018-1124-5
  • Singer, Daniel J., Aaron Bramson, Patrick Grim, Bennett Holman, Karen Kovaka, Jiin Jung, and William J. Berger, 2021, “Don’t Forget Forgetting: The Social Epistemic Importance of How We Forget”, Synthese, 198: 5373–5394. doi:10.1007/s11229-019-02409-0
  • Skyrms, Brian, 1996, Evolution of the Social Contract, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511806308
  • –––, 2004, The Stag Hunt and the Evolution of Social Structure, New York: Cambridge University Press. doi:10.1017/CBO9781139165228
  • –––, 2010, Signals: Evolution, Learning and Information, New York: Oxford University Press. doi:10.1093/acprof:oso/9780199580828.001.0001
  • Solomon, Miriam, 1994a, “Social Empiricism”, Noûs, 28(3): 325–343. doi:10.2307/2216062
  • –––, 1994b, “A More Social Epistemology”, in Frederick E. Schmitt (ed.), Socializing Epistemology: The Social Dimensions of Knowledge, Lanham, MD: Rowman and Littlefield, pp. 217–233.
  • Spirtes, Peter, Clark Glymour, and Richard Scheines, 1993, Causation, Prediction, and Search (Lecture Notes in Statistics 81), New York: Springer New York. doi:10.1007/978-1-4612-2748-9
  • Sprenger, Jan and Stephen Hartmann, 2019, Bayesian Philosophy of Science, Oxford: Oxford University Press. doi:10.1093/oso/9780199672110.001.0001
  • Steen, Alexander and Christoph Benzmüller, 2018, “The Higher-Order Prover Leo-III”, in Automated Reasoning: 9th International Joint Conference, IJCAR 2018, Didier Galmiche, Stephan Schulz, and Roberto Sebastiani (eds.) (Lecture Notes in Computer Science 10900), Cham: Springer International Publishing, 108–116. doi:10.1007/978-3-319-94205-6_8
  • St. Denis, Paul and Patrick Grim, 1997, “Fractal Images of Formal Systems”, Journal of Philosophical Logic, 26(2): 181–222. doi:10.1023/A:1004280900954
  • Steinhart, Eric, 1994, “NETMET: A Program for Generating and Interpreting Metaphors”, Computers and the Humanities, 28(6): 383–392. doi:10.1007/BF01829973
  • Steinhart, Eric and Eva Kittay, 1994, “Generating Metaphors from Networks: A Formal Interpretation of the Semantic Field Theory of Metaphor”, in Aspects of Metaphor, Jaakko Hintikka (ed.), Dordrecht: Springer Netherlands, 41–94. doi:10.1007/978-94-015-8315-2_3
  • Swift, Jonathan, 1726, Gulliver’s Travels, URL = <https://www.gutenberg.org/files/829/829-h/829-h.htm>
  • Thagard, Paul, 1988, Computational Philosophy of Science, Cambridge, MA: MIT Press.
  • –––, 1992, Conceptual Revolutions, Princeton, NJ: Princeton University Press.
  • –––, 2012, The Cognitive Science of Science: Explanation, Discovery, and Conceptual Change, Cambridge, MA: MIT Press.
  • Thicke, Michael, forthcoming, “Evaluating Formal Models of Science”, Journal for General Philosophy of Science, first online: 1 February 2019. doi:10.1007/s10838-018-9440-1
  • Thoma, Johanna, 2015, “The Epistemic Division of Labor Revisited”, Philosophy of Science, 82(3): 454–472. doi:10.1086/681768
  • Thompson, Abigail, 2014, “Does Diversity Trump Ability?: An Example of the Misuse of Mathematics in the Social Sciences”, Notices of the American Mathematical Society, 61(9): 1024–1030.
  • Turing, A. M., 1936–1937, “On Computable Numbers, with an Application to the Entscheidungsproblem”, Proceedings of the London Mathematical Society, s2-42(1): 230–265. doi:10.1112/plms/s2-42.1.230
  • van Benthem, Johan, 2006, “Epistemic Logic and Epistemology: The State of Their Affairs”, Philosophical Studies, 128(1): 49–76. doi:10.1007/s11098-005-4052-0
  • van de Rijt, Arnout, David Siegel, and Michael Macy, 2009, “Neighborhood Chance and Neighborhood Change: A Comment on Bruch and Mare”, American Journal of Sociology, 114(4): 1166–1180. doi:10.1086/588795
  • Van Den Hoven, Jeroen and Gert-Jan Lokhorst, 2002, “Deontic Logic and Computer-Supported Computer Ethics”, Metaphilosophy, 33(3): 376–386. doi:10.1111/1467-9973.00233
  • Van Ditmarsch, Hans, Wiebe van der Hoek, and Barteld Kooi, 2008, Dynamic Epistemic Logic (Synthese Library 337), Dordrecht: Springer Netherlands. doi:10.1007/978-1-4020-5839-4
  • Vanderschraaf, Peter, 2006, “War or Peace?: A Dynamical Analysis of Anarchy”, Economics and Philosophy, 22(2): 243–279. doi:10.1017/S0266267106000897
  • Wagner, Elliott, 2009, “Communication and Structured Correlation”, Erkenntnis, 71(3): 377–393. doi:10.1007/s10670-009-9157-y
  • Waldrop, M. Mitchell, 1992, Complexity: The Emerging Science at the Edge of Order and Chaos, New York: Simon and Schuster.
  • Wang, Hao, 1960, “Toward Mechanical Mathematics”, IBM Journal of Research and Development, 4(1): 2–22. doi:10.1147/rd.41.0002
  • Watts, Duncan J. and Steven H. Strogatz, 1998, “Collective Dynamics of ‘Small-World’ Networks”, Nature, 393(6684): 440–442. doi:10.1038/30918
  • Weatherall, James Owen, Cailin O’Connor, and Justin P. Bruner, 2018, “How to Beat Science and Influence People”, British Journal for the Philosophy of Science, 71(4): 1157–1186.
  • Weisberg, Michael, 2013, Simulation and Similarity: Using Models to Understand the World, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199933662.001.0001
  • Weisberg, Michael and Ryan Muldoon, 2009, “Epistemic Landscapes and the Division of Cognitive Labor”, Philosophy of Science, 76(2): 225–252. doi:10.1086/644786
  • Weymark, John A., 2015, “Cognitive Diversity, Binary Decisions, and Epistemic Democracy”, Episteme, 12(4): 497–511. doi:10.1017/epi.2015.34
  • Wheeler, Billy, 2019, “Computer Simulations in Metaphysics: Possibilities and Limitations”, Manuscrito, 42(3): 108–148.
  • Whitehead, Alfred North and Bertrand Russell, 1910, 1912, 1913, Principia Mathematica, 3 volumes, Cambridge: Cambridge University Press.
  • Windrum, Paul, Giorgio Fagiolo, and Alessio Moneta, 2007, “Empirical Validation of Agent-Based Models: Alternatives and Prospects”, Journal of Artificial Societies and Social Simulation, 10(2): 8. URL = <http://jasss.soc.surrey.ac.uk/10/2/8.html>
  • Wiseman, Thomas and Okan Yilankaya, 2001, “Cooperation, Secret Handshakes, and Imitation in the Prisoners’ Dilemma”, Games and Economic Behavior, 37(1): 216–242. doi:10.1006/game.2000.0836
  • Wright, Sewall, 1932, “The Roles of Mutation, Inbreeding, Crossbreeding, and Selection in Evolution”, Proceedings of the Sixth International Congress on Genetics, vol. 1, D. F. Jones (ed.), pp. 355–366.
  • Young, H. P., 1993, “An Evolutionary Model of Bargaining”, Journal of Economic Theory, 59(1): 145–168. doi:10.1006/jeth.1993.1009
  • Zalta, Edward, 2020, Principia Metaphysica, unpublished manuscript. URL = <https://mally.stanford.edu/principia.pdf>
  • Zollman, Kevin James Spears, 2005, “Talking to Neighbors: The Evolution of Regional Meaning”, Philosophy of Science, 72(1): 69–85. doi:10.1086/428390
  • –––, 2007, “The Communication Structure of Epistemic Communities”, Philosophy of Science, 74(5): 574–587. doi:10.1086/525605
  • –––, 2010a, “The Epistemic Benefit of Transient Diversity”, Erkenntnis, 72(1): 17–35. doi:10.1007/s10670-009-9194-6
  • –––, 2010b, “Social Structure and the Effects of Conformity”, Synthese, 172(3): 317–340. doi:10.1007/s11229-008-9393-8

Other Internet Resources

Computational philosophy encompasses many different tools and techniques. The aim of this section is to highlight a few of the most commonly used tools.

A large amount of computational philosophy uses agent-based simulations. An extremely popular tool for producing and analyzing agent-based simulations is the free tool NetLogo, which was produced and is maintained by Uri Wilensky and The Center for Connected Learning and Computer-Based Modeling at Northwestern University. NetLogo is a simple but powerful platform for creating and running agent-based simulations; it is used in all of the examples below, which run on the NetLogo Web platform. NetLogo includes a number of tutorials to help people completely new to programming. It also includes advanced tools, like BehaviorSpace and BehaviorSearch, which let the researcher run large “experiments” of simulations and easily implement genetic algorithms and other search techniques to explore model parameters. NetLogo is a very popular simulation language among computational philosophers, but there are similar agent-based modeling environments, such as Swarm, as well as tools to help analyze agent-based models, such as OpenMOLE. Computational philosophy simulations may also be written and analyzed in Python, Java, and C, all of which are general-purpose programming languages but are much less friendly to beginners.
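What a BehaviorSpace-style “experiment” automates can be illustrated in plain Python: run a model once for each combination of parameter values, repeat each combination several times, and summarize the repetitions into one results table. The toy opinion model below is a made-up stand-in for whatever model one is sweeping; its parameters and dynamics are purely illustrative.

```python
import itertools
import random
import statistics

def toy_model(n_agents, noise, seed):
    # Hypothetical stand-in for one agent-based simulation run:
    # agents repeatedly average opinions in pairs (with optional noise),
    # and the run returns one summary statistic (the opinion spread).
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n_agents)]
    for _ in range(100):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        mid = (opinions[i] + opinions[j]) / 2
        opinions[i] = mid + rng.gauss(0, noise)
        opinions[j] = mid + rng.gauss(0, noise)
    return statistics.pstdev(opinions)

# BehaviorSpace-style sweep: every combination of parameter values,
# ten repetitions per combination, one summary row each.
for n_agents, noise in itertools.product([20, 50], [0.0, 0.05]):
    runs = [toy_model(n_agents, noise, seed) for seed in range(10)]
    print(n_agents, noise, round(statistics.mean(runs), 3))
```

Tools like BehaviorSpace add conveniences on top of this pattern (spreadsheet output, parallel runs), but the underlying loop is just a parameter grid plus repetitions.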

For analyzing data (from models or elsewhere) and creating graphs and charts, the statistical environment R is popular. Mathematica and MATLAB are also sometimes used to check or prove mathematical claims. All three of these are advanced tools that are not easily accessible to beginners. For beginners, Microsoft Excel can be used to analyze and visualize smaller data sets.

As mentioned above, common tools used for theorem proving include Vampire and Isabelle/HOL.

Just as philosophical methodology is diverse, so too are the computational tools used by philosophers. Because it is common to mention the tools used in the course of research, further tools can be found in the literature of computational philosophy.

Computational Model Examples

Below is a list of the example computational models mentioned above. Each model can be run on NetLogo Web in your browser. Alternatively, any of the models can be downloaded and run in NetLogo Desktop by clicking on “Export: NetLogo” in the top right of the model screen.

  • Interactive simulation of the Hegselmann and Krause bounded confidence model. To start the model, click “setup” and then “go” (near the top left corner). To restart the model, click “setup” again. Near the top right corner, you can change the display to show the history of the histogram of opinions over time or show the trajectories through time of individual agents. For more information about the model, scroll down and click on “Model Info”.
  • Interactive simulation of Axelrod’s Polarization Model. To start the model, click “setup” and then “go” (near the top left corner). To restart the model, click “setup” again. Each “patch” in the display represents one person. Where there are dark black lines between people, the people share no traits. The line gets lighter as they share more traits. This model runs quite slowly in web browsers, so try speeding it up by manually pulling the “model speed” slider to the right. For more information about the model, scroll down and click on “Model Info”.
  • Interactive simulation of Zollman’s Networked-Researchers Model. To start the model, click “setup” and then “go” (near the top left corner). To restart the model, click “setup” again. In this model (a simplified version of the model discussed in Zollman 2007), agents play a bandit problem (like a slot machine with two arms that have different probabilities of paying off). They usually play the arm they think is most profitable, except that they deviate with a small chance to make sure they aren’t missing something better on the other arm. The model allows agents to share information either in a ring or in a complete network. For more information about the model, scroll down and click on “Model Info”.
  • Interactive simulation of Grim and Singer’s networked agents on an epistemic landscape. To start the model, click “setup” and then “go”. To restart the model, unclick “go” if the model is still running and then click “setup” again. Initially, agents are assigned random beliefs (locations on the x-axis of the epistemic landscape). On each round they imitate their highest-performing network-neighbor by moving toward that neighbor’s belief with a certain speed and uncertainty about the neighbor’s view. The model allows simulation of many different kinds of networks and landscapes. For more information about the model, scroll down and click on “Model Info”.
  • Interactive simulation of Weisberg and Muldoon’s model of agents on an epistemic landscape. To start the model, click “setup” and then “go”. To restart the model, unclick “go” if the model is still running and then click “setup” again. Initially, mavericks and followers are dropped on parts of the landscape that aren’t on the “hills”. Both kinds of agents then use their own method for hill climbing. As mentioned above, Alexander et al. (2015) argue that there’s a technical problem with the original model. This simulation includes a toggle between the original model and the critics’ preferred version of it. For more information about the model, scroll down and click on “Model Info”.
  • Interactive simulation of the Hong and Page model of group deliberation. To set up the model, which includes setting up the landscape and the two groups (a random group and a group of the highest-performers), click “setup”. Note: setup may be slow, since it tests all possible heuristics (unless quick-setup-experts is activated). Clicking “go” then calculates the scores of the two groups. This simulation extends Hong and Page’s original model to allow for landscape smoothing (instead of the original random landscape). It also includes a “tournament” group dynamic that differs from the group dynamics of the original model. For more information about the model, scroll down and click on “Model Info”.
  • Interactive simulation of a Repeated Prisoner’s Dilemma Model. To start the model, click “setup” and then “go-once” (to have agents play and imitate once) or “go” (to have agents repeatedly play and imitate their neighbors). To restart the model, click “setup” again. Each “patch” in the display represents one agent. Agents start with a randomly-assigned strategy, play each of their 8 neighbors rounds_to_play times, and then imitate their best-performing neighbors. This model runs slowly in web browsers, but it runs a lot more quickly in NetLogo Desktop (you can download the model code by clicking on “Export: NetLogo” near the top right). For more information about the model, scroll down and click on “Model Info”.
  • Interactive simulation of residential segregation. To start the model, click “setup” and then “go” (near the top left corner). To restart the model, click “setup” again. Change the threshold below which agents move by changing “%-similar-wanted”, and change how full the grid is at the beginning by changing “density”. For more information about the model, scroll down and click on “Model Info”.
  • Interactive simulation of an emergence of signaling model from Grim et al. (2004). In this model, each agent (each patch in the display) starts with a random communication strategy (a way of responding to and producing signals). As the model runs, the agents are potentially helped (by being fed) or hurt (by wolves), depending on how they act, in part in response to the signals they hear. Every 100 rounds, agents copy the signaling strategy of their healthiest neighbor. Doing so results in so-called “perfect communication” strategies eventually dominating, though that can take tens of thousands of rounds. For more information about the model, scroll down and click on “Model Info”.
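For readers who want to see the mechanics rather than click through a simulation, the core update rule of the Hegselmann–Krause bounded confidence model (the first simulation above) fits in a few lines of Python. The parameter values here (50 agents, confidence bound 0.2, 30 update rounds) are illustrative choices, not taken from the NetLogo implementation.

```python
import random

def hk_step(opinions, eps):
    # Hegselmann-Krause update: each agent adopts the average opinion
    # of all agents (itself included) within confidence bound eps.
    new = []
    for x in opinions:
        peers = [y for y in opinions if abs(y - x) <= eps]
        new.append(sum(peers) / len(peers))
    return new

random.seed(0)
opinions = [random.random() for _ in range(50)]
for _ in range(30):
    opinions = hk_step(opinions, eps=0.2)

clusters = sorted({round(x, 3) for x in opinions})
print(clusters)  # typically a small handful of opinion clusters
```

Running this shows the characteristic behavior: opinions rapidly collapse into a few clusters whose separation depends on the confidence bound.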
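The structure of the networked-researchers simulation above (a two-armed bandit played by mostly-exploiting agents who pool their neighbors' results) can be sketched directly in Python. Everything here is a hypothetical stand-in rather than Zollman's actual parameterization: the payoff probabilities, the exploration rate, the round count, and the simple frequency estimates.

```python
import random

def zollman_sim(network, n=10, p_good=0.6, p_bad=0.5, explore=0.05,
                rounds=500, seed=0):
    # Two-armed bandit: arm 1 pays off with probability p_good,
    # arm 0 with p_bad.  Each agent tracks success/trial counts per arm,
    # usually pulls the arm it currently estimates as better, explores
    # with a small probability, and pools the round's results from its
    # network neighbors.
    rng = random.Random(seed)
    succ = [[1, 1] for _ in range(n)]    # pseudo-counts avoid 0/0
    tries = [[2, 2] for _ in range(n)]
    for _ in range(rounds):
        results = []
        for i in range(n):
            est0 = succ[i][0] / tries[i][0]
            est1 = succ[i][1] / tries[i][1]
            if rng.random() < explore:
                arm = rng.randrange(2)
            else:
                arm = 0 if est0 > est1 else 1   # ties go to arm 1
            p = p_good if arm == 1 else p_bad
            results.append((arm, 1 if rng.random() < p else 0))
        for i in range(n):
            for j in network[i] | {i}:
                arm, payoff = results[j]
                succ[i][arm] += payoff
                tries[i][arm] += 1
    # number of agents whose final estimate favors the better arm
    return sum(1 for i in range(n)
               if succ[i][1] / tries[i][1] > succ[i][0] / tries[i][0])

n = 10
ring = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
complete = {i: set(range(n)) - {i} for i in range(n)}
print(zollman_sim(ring), zollman_sim(complete))
```

Comparing the ring to the complete network on many seeds and parameter settings is what lets one ask Zollman's question about whether denser communication helps or hurts the community's chances of settling on the better arm.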
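The residential segregation simulation above implements Schelling-style dynamics: agents relocate to random empty cells whenever fewer than a threshold fraction of their occupied neighbors share their type. A minimal grid version, with illustrative sizes and thresholds (a 20-by-20 toroidal grid, 80% density, a 0.3 similarity threshold), might look like this.

```python
import random

def neighbors(grid, r, c):
    # Types of the 8 toroidal neighbors of cell (r, c); 0 means empty.
    size = len(grid)
    return [grid[(r + dr) % size][(c + dc) % size]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)]

def same_fraction(grid, r, c):
    # Fraction of occupied neighbors sharing this agent's type.
    me = grid[r][c]
    occupied = [v for v in neighbors(grid, r, c) if v != 0]
    return 1.0 if not occupied else sum(v == me for v in occupied) / len(occupied)

def schelling(size=20, density=0.8, threshold=0.3, steps=60, seed=1):
    rng = random.Random(seed)
    grid = [[rng.choice([1, 2]) if rng.random() < density else 0
             for _ in range(size)] for _ in range(size)]
    for _ in range(steps):
        movers = [(r, c) for r in range(size) for c in range(size)
                  if grid[r][c] != 0 and same_fraction(grid, r, c) < threshold]
        if not movers:
            break  # everyone is content
        empties = [(r, c) for r in range(size) for c in range(size)
                   if grid[r][c] == 0]
        rng.shuffle(movers)
        for r, c in movers:
            if not empties:
                break
            i = rng.randrange(len(empties))
            er, ec = empties[i]
            grid[er][ec], grid[r][c] = grid[r][c], 0
            empties[i] = (r, c)  # the vacated cell is now empty
    return grid

grid = schelling()
occupied = [(r, c) for r in range(20) for c in range(20) if grid[r][c]]
avg = sum(same_fraction(grid, r, c) for r, c in occupied) / len(occupied)
print(round(avg, 2))  # mean same-type neighbor fraction after moving
```

Schelling's point survives even this stripped-down version: modest individual preferences for similar neighbors can produce heavily segregated neighborhoods in the aggregate.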


Acknowledgments

The authors are grateful to Anthony Beavers, ChristophBenzmüller, Gregor Betz, Selmer Bringsjord, Branden Fitelson,Ryan Muldoon, Eric Steinhart, Michael Weisberg, and Kevin Zollman forconsultation, contributions, and assistance.

Copyright © 2024 by
Patrick Grim<patrick.grim@stonybrook.edu>
Daniel Singer<singerd@phil.upenn.edu>

Open access to the SEP is made possible by a world-wide funding initiative.


The Stanford Encyclopedia of Philosophy iscopyright © 2025 byThe Metaphysics Research Lab, Department of Philosophy, Stanford University

Library of Congress Catalog Data: ISSN 1095-5054

