Stanford Encyclopedia of Philosophy

The Philosophy of Computer Science

First published Tue Aug 20, 2013; substantive revision Mon Feb 3, 2025

The philosophy of computer science is concerned with the ontological and methodological issues arising from within the academic discipline of computer science, and from the practice of software development and its commercial and industrial deployment. More specifically, the philosophy of computer science considers the ontology and epistemology of computational systems, focusing on problems associated with their specification, programming, implementation, verification and testing. The complex nature of computer programs ensures that many of the conceptual questions raised by the philosophy of computer science have related ones in the philosophy of mathematics, the philosophy of empirical sciences, and the philosophy of technology. We shall provide an analysis of such topics that reflects the layered nature of the ontology of computational systems in Sections 1–5; we then discuss topics involved in their methodology in Sections 6–8.


1. Computational Systems

Computational systems are widespread in everyday life. Their design, development and analysis are the proper object of study of the discipline of computer science. The philosophy of computer science treats them instead as objects of theoretical analysis. Its first aim is to define such systems, i.e., to develop an ontology of computational systems. The literature offers two main approaches on the topic. A first one understands computational systems as defined by distinct ontologies for software and hardware, usually taken to be their elementary components. A different approach sees computational systems as comprising several other elements around the software-hardware dichotomy: under this second view, computational systems are defined on the basis of a hierarchy of levels of abstraction, arranging hardware levels at the bottom of such a hierarchy and extending upwards to elements of the design and downwards to include the user. In the following we present these two approaches.

1.1 Software and Hardware

Usually, computational systems are seen as composed of two ontologically distinct entities: software and hardware. Algorithms, source codes, and programs fall in the first category of abstract entities; microprocessors, hard drives, and computing machines are concrete, physical entities.

Moor (1978) argues that such a duality is one of the three myths of computer science, in that the dichotomy software/hardware has a pragmatic, but not an ontological, significance. Computer programs, as the set of instructions a computer may execute, can be examined both at the symbolic level, as encoded instructions, and at the physical level, as the set of instructions stored in a physical medium. Moor stresses that no program exists as a pure abstract entity, that is, without a physical realization (a flash drive, a hard disk on a server, or even a piece of paper). Early programs were even hardwired directly and, at the beginning of the computer era, programs consisted only in patterns of physical levers. By the software/hardware opposition, one usually identifies software with the symbolic level of programs, and hardware with the corresponding physical level. The distinction, however, can be only pragmatically justified in that it delimits the different tasks of developers. For them, software may be given by algorithms and the source code implementing them, while hardware is given by machine code and the microprocessors able to execute it. By contrast, engineers realizing circuits implementing hardwired programs may be inclined to call software many physical parts of a computing machine. In other words, what counts as software for one professional may count as hardware for another one.

Suber (1988) goes even further, maintaining that hardware is a kind of software. Software is defined as any pattern that is amenable to being read and executed: once one realizes that all physical objects display patterns, one is forced to accept the conclusion that hardware, as a physical object, is also software. Suber defines a pattern as “any definite structure, not in the narrow sense that requires some recurrence, regularity, or symmetry” (1988, 90) and argues that any such structure can indeed be read and executed: for any definite pattern to which no meaning is associated, it is always possible to conceive a syntax and a semantics giving a meaning, thereby making the pattern an executable program.

Colburn (1999, 2000), while keeping software and hardware apart, stresses that the former has a dual nature: it is a “concrete abstraction”, being both abstract and concrete. To define software, one needs to make reference to both a “medium of description”, i.e., the language used to express an algorithm, and a “medium of execution”, namely the circuits composing the hardware. While software is always concrete in that there is no software without a concretization in some physical medium, it is nonetheless abstract, because programmers do not consider the implementing machines in their activities: they would rather develop a program executable by any machine. This aspect is called by Colburn (1999) “enlargement of content” and it defines abstraction in computer science as an “abstraction of content”: content is enlarged rather than deleted, as happens with mathematical abstraction.

Irmak (2012) criticizes the dual nature of software proposed by Colburn (1999, 2000). He understands an abstract entity as one lacking spatio-temporal properties, while being concrete means having those properties. Defining software as a concrete abstraction would therefore imply that software has contradictory properties. Software does have temporal properties: as an object of human creation, it starts to exist at some time once conceived and implemented; and it can cease to exist at a certain subsequent time. Software ceases to exist when all copies are destroyed, their authors die and nobody else remembers the respective algorithms. As an object of human creation, software is an artifact. However, software lacks spatial properties in that it cannot be identified with any concrete realization of it. Destroying all the physical copies of a given software would not imply that this particular software ceases to exist, as stated above, nor, for the very same reason, would deleting all texts implementing the software algorithms in some high-level language. Software is thus an abstract entity endowed with temporal properties. For these reasons, Irmak (2012) defines software as an abstract artifact.

Duncan (2011) points out that distinguishing software from hardware requires a finer ontology than the one involving the simple abstract/concrete dichotomy. Duncan (2017) aims at providing such an ontology by focusing on Turner’s (2011) notion of specification as an expression that gives correctness conditions for a program (see §2). Duncan (2017) stresses that a program acts also as a specification for the implementing machine, meaning that a program specifies all correct behaviors that the machine is required to perform. If the machine does not act consistently with the program, the machine is said to malfunction, in the same way a program which is not correct with respect to its specification is said to be flawed or to contain a bug. Another ontological category necessary to define the software/hardware distinction is that of artifact, which Duncan (2017) defines as a physical, spatio-temporal entity which has been constructed so as to fulfill some functions and such that there is a community recognizing the artifact as serving that purpose. That said, software is defined as a set of instructions encoded in some programming language which act as specifications for an artifact able to read those instructions; hardware is defined as an artifact whose function is to carry out the specified computation.

1.2 The Method of Levels of Abstraction

As shown above, the distinction between software and hardware is not a sharp one. A different ontological approach to computational systems relies on the role of abstraction. Abstraction is a crucial element in computer science, and it takes many different forms. Goguen & Burstall (1985) describe some of this variety, of which the following examples are instances. Code can be repeated during programming by naming text and a parameter, a practice known as procedural abstraction. This operation has its formal basis in the abstraction operation of the lambda calculus (see the entry on the lambda calculus) and it allows a formal mechanism known as polymorphism (Hankin 2004). Another example is typing, typical of functional programming, which provides an expressive system of representation for the syntactic constructors of the language. Or else, in object-oriented design, patterns (Gamma et al. 1994) are abstracted from the common structures that are found in software systems and used as interfaces between the implementation of an object and its specification.
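Procedural abstraction can be illustrated with a small sketch of our own (not from the entry): a repeated piece of code is named once and parameterized, and the resulting abstraction behaves polymorphically.

```python
# Procedural abstraction: the repeated pattern "apply an operation twice"
# is named and parameterized instead of being written out each time.
def twice(f, x):
    return f(f(x))

# Polymorphism: the same abstraction works for any argument type
# on which f is defined, without any change to its text.
print(twice(lambda n: n + 1, 3))        # 5
print(twice(lambda s: s + "!", "hi"))   # hi!!
```

The lambda expressions passed to `twice` are the programming-language descendants of the abstraction operation of the lambda calculus mentioned above.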

All these examples share an underlying methodology in the Levels of Abstraction (henceforth LoA), used also in mathematics (Mitchelmore and White 2004) and philosophy (Floridi 2008). Abstractions in mathematics are piled upon each other in a never-ending search for more and more abstract concepts. Similarly, abstraction in computer science is a dynamic, layered process creating new levels of abstraction from previous ones (Turner 2021). Consider the case of abstract data types, which are defined only on the basis of the operations performed, abstracting from the physical properties of any concrete type. For instance, a list as an abstract type can be defined by operations such as nil (returning an empty list), cons (constructing a list), head (returning the first element), and tail (returning the rest of the list). Further abstractions performed over lists lead to the creation of new abstract data types. The permutation function (perm) between two lists allows the definition of the type bag, while performing the operations Dup (removing duplicates) and Ord (removing order), one may obtain the type finite set (Turner 2021).
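The abstract list type can be sketched in a few lines; this is a minimal illustration of our own, using tuples as one arbitrary concrete representation whose details the four operations hide.

```python
# The abstract type "list", defined purely by its operations
# (nil, cons, head, tail); pairs are one possible hidden representation.
nil = None

def cons(x, xs):
    return (x, xs)

def head(xs):
    return xs[0]

def tail(xs):
    return xs[1]

# Clients use only the operations, never the representation.
xs = cons(1, cons(2, cons(3, nil)))
print(head(xs))        # 1
print(head(tail(xs)))  # 2
```

Replacing the tuple representation by, say, a linked class would leave every client of `nil`, `cons`, `head`, and `tail` unchanged, which is the point of defining the type by its operations alone.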

On this account, abstraction is self-contained: an abstract mathematical object takes its meaning only from the system within which it is defined, and the only constraint is that new objects be related to each other in a consistent system that can be operated on without reference to previous or external meanings. Some argue that, in this respect at least, abstraction in computer science is fundamentally different from abstraction in mathematics: computational abstraction must leave behind an implementation trace, and this means that information is hidden but not destroyed (Colburn & Shute 2007). Any details that are ignored at one LoA must not be ignored by one of the lower LoAs: for example, programmers need not worry about the precise location in memory associated with a particular variable, but the virtual machine is required to handle all memory allocations. This reliance of abstraction on different levels is reflected in the property of computational systems to depend upon the existence of an implementation: for example, even though classes hide details of their methods, they must have implementations. Hence, computational abstractions preserve both an abstract guise and an implementation.
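The point about classes can be made concrete with a toy sketch (ours, not the entry's): the class hides how its state is stored, yet that stored state must exist for the abstraction to run at all.

```python
# Information is hidden, not destroyed: clients of Counter see only
# tick() and value(); the integer attribute is an implementation detail
# that must nonetheless exist at the lower level.
class Counter:
    def __init__(self):
        self._n = 0  # hidden implementation, required for execution

    def tick(self):
        self._n += 1

    def value(self):
        return self._n

c = Counter()
c.tick()
c.tick()
print(c.value())  # 2
```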

A full formulation of LoAs for the ontology of digital computational systems has been devised in Primiero (2016), including:

  • Intention
  • Specification
  • Algorithm
  • High-level programming language instructions
  • Assembly/machine code operations
  • Execution

Intention is the cognitive act that defines a computational problem to be solved: it formulates the request to create a computational process to perform a certain task. Requests of this sort are usually provided by customers, users, and other stakeholders involved in a given software development project. Specification is the formulation of the set of requirements necessary for solving the computational problem at hand: it concerns the possibly formal determination of the operations the software must perform, through the process known as requirements elicitation. Algorithm expresses the procedure providing a solution to the proposed computational problem, one which must meet the requirements of the specification. High-level programming language instructions (in languages such as C, Java, or Python) constitute the linguistic implementation of the proposed algorithm, often called the source code; they can be understood by trained programmers but cannot be directly executed by a machine. The instructions coded in a high-level language are compiled, i.e., translated, by a compiler into assembly code and then assembled into machine code operations, executable by a processor. Finally, the execution LoA is the physical level of the running software, i.e., of the computer architecture executing the instructions.
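The step from high-level instructions to lower-level operations can be observed directly in Python, which compiles source code into bytecode for its virtual machine; this is only a rough analogue of the compilation and assembly steps described above, offered as an illustration.

```python
import dis

# A high-level instruction: one line of source code...
def add_one(x):
    return x + 1

# ...is compiled into several lower-level bytecode operations,
# which the Python virtual machine (not the programmer) executes.
dis.dis(add_one)
# The listing shows operations such as LOAD_FAST and RETURN_VALUE,
# each closer to what the underlying machine actually performs.
```

The exact opcode names vary between Python versions, but the layered picture is the same: one linguistic level is translated into a lower one before anything physical happens.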

According to this view, no LoA taken in isolation is able to define what a computational system is, nor to determine how to distinguish software from hardware. Computational systems are rather defined by the whole abstraction hierarchy; each LoA in itself expresses a semantic level associated with a realization, either linguistic or physical.

2. Intention and Specification

Intention refers to a cognitive state outside the computational system which expresses the formulation of a computational problem to be solved. Specifications describe the functions that the computational system to be developed must fulfil. Whereas intentions, per se, do not pose specific philosophical controversies inside the philosophy of computer science, issues arise in connection with the definition of what a specification is and its relation with intentions.

2.1 Intentions

Intentions articulate the criteria to determine whether a computational system is appropriate (i.e., correct, see §7); the intention is therefore considered the first LoA of the computational system appropriate to that problem. For instance, customers and users may require a smartphone app able to filter out annoying calls from call centers; such a request constitutes the intention LoA in the development of a computational system able to perform such a task. In the software development process of non-naive systems, intentions are usually gathered by such techniques as brainstorming, surveys, prototyping, and even focus groups (Clarke and Moreira 1999), aimed at defining a structured set of the various stakeholders’ intentions. At this LoA, no reference is made to how to solve the computational problem; only a description of the problem that must be solved is provided.

In contemporary literature, intentions have been the object of philosophical inquiry at least since Anscombe (1963). Philosophers have investigated “intentions with which” an action is performed (Davidson 1963), intentions of doing something in the future (Davidson 1978), and intentional actions (Anscombe 1963, Baier 1970, Ferrero 2017). Issues arise concerning which of the three kinds of intention is primary, how they are connected, the relation between intentions and belief, whether intentions are or presuppose specific mental states, and whether intentions act as causes of actions (see the entry on intention). More formal problems concern the possibility for an agent to have inconsistent intentions and yet be considered rational (Bratman 1987, Duijf et al. 2019).

In their role as the first LoA in the ontology of computational systems, intentions can certainly be acknowledged as intentions for the future, in that they express the objective of constructing systems able to perform some desired computational tasks. Since intentions, as stated above, confine themselves to the definition of the computational problem to be solved, without specifying its computational solution, their ontological and epistemological analysis does not differ from those referred to in the philosophical literature. In other words, there is nothing specifically computational in the intentions defining computational systems which deserves a separate treatment in the philosophy of computer science. What matters here is the relation between intention and specification, in that intentions provide correctness criteria for specifications; specifications are asked to express how the computational problem put forward by intentions is to be solved.

2.2 Definitions and Specifications

Consider the example of the call filtering app again; a specification may require to create a black-list of phone numbers associated with call centers; to update the list every n days; to check, upon an incoming call, whether the number is on the black-list; to communicate to the call management system not to allow the incoming call in case of an affirmative answer, and to allow the call in case of a negative answer.
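The core of this natural-language specification can be sketched as code; the names and numbers below are hypothetical, chosen only to mirror the requirements just listed.

```python
# A minimal sketch of the call-filtering specification above.
# The black-list contents and function names are our own illustration.
blacklist = {"+1555000111", "+1555000222"}  # numbers linked to call centers

def allow_call(incoming_number):
    """Allow the call unless the number is on the black-list."""
    return incoming_number not in blacklist

print(allow_call("+1555000111"))  # False: blocked
print(allow_call("+1555123456"))  # True: allowed
```

The periodic update of the list ("every n days") would be a further requirement on the surrounding system, not on this predicate itself.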

The latter is a full-fledged specification, though expressed in a natural language. Specifications are often advanced in a natural language to be closer to the stakeholders’ intentions, and only subsequently are they formalized in a proper formal language. Specifications may be expressed by means of graphical languages such as UML (Fowler 2003), or more formal languages such as TPL (Turner 2009a) and VDM (Jones 1990), using predicate logic, or Z (Woodcock and Davies 1996), focusing on set theory. For instance, Type Predicate Logic (TPL) expresses the requirements of computational systems using predicate logic formulas, wherein the type of the quantified variables is specified. The choice of the variable types allows one to define specifications at the most appropriate abstraction level. Whether specifications are expressed in an informal or formal guise often depends on the development method followed, with formal specifications usually preferred in the context of formal development methods. Moreover, formal specifications facilitate verification of correctness for computational systems (see §6).

Turner (2018) asks what difference there is between models and specifications, both of which are extensively used in computer science. The difference is located in what Turner (2011) calls the intentional stance: models describe an intended system to be developed and, in case of a mismatch between the two, the models are to be refined; specifications prescribe how the system is to be built so as to comply with the intended functions, and in case of mismatch it is the system that needs to be refined. Accordingly, models and specifications can be equivalent in their informational content but differ in their direction of governance (Turner 2020). Matching between model and system reflects a correspondence between intentions — describing what system is to be constructed in terms of the computational problem the system must be able to solve — and specifications — determining how the system is to be constructed, in terms of the set of requirements necessary for solving the computational problem, as exemplified for the call filtering app. In Turner’s (2011) words, “something is a specification when it is given correctness jurisdiction over an artefact”: specifications provide correctness criteria for computational systems. Computational systems are thus correct when they comply with their specifications, that is, when they behave according to them. Conversely, specifications provide criteria of malfunctioning (§7.3): a computational system malfunctions when it does not behave consistently with its specifications. Turner (2011) is careful to notice that such a definition of specifications is an idealization: specifications are themselves revised in some cases, such as when the specified computational systems cannot be realized because of physical law constraints or cost limitations, or when it turns out that the advanced specifications are not correct formalizations of the intentions of clients and users.

More generally, the correctness problem does not only deal with specifications, but with any two LoAs defining computational systems, as the next subsection will examine.

2.3 Specifications and Functions

Fully implemented and constructed computational systems are technical artifacts, i.e., human-made systems designed and implemented with the explicit aim of fulfilling specific functions (Kroes 2012). Technical artifacts so defined include tables, screwdrivers, cars, bridges, or televisions, and they are distinct both from natural objects (e.g. rocks, cats, or dihydrogen monoxide molecules), which are not human-made, and artworks, which do not fulfill functions. As such, the ontology of computational systems falls under that of technical artifacts (Meijers 2000), characterized by a duality, as they are defined by both functional and structural properties (Kroes 2009; see also the entry on philosophy of technology). Functional properties specify the functions the artifact is required to perform; structural properties express the physical properties through which the artifact can perform them. Consider a screwdriver: functional properties may include the function of screwing and unscrewing; structural properties can refer to a piece of metal capable of being inserted in the head of the screw and a plastic handle that allows a clockwise and anticlockwise motion. Functions can be realized in multiple ways by their structural counterparts. For instance, the function of the screwdriver could well be realized by a full metal screwdriver, or by an electric screwdriver defined by very different structural properties.

The layered ontology of computational systems characterized by many different LoAs seems to extend the dual ontology defining technical artifacts (Floridi et al. 2015). Turner (2018) argues that computational systems are still artifacts in the sense of Kroes (2009, 2012), as each LoA is a functional level for lower LoAs and a structural level for upper LoAs:

  • the intention expresses the functions that the system must achieve and is implemented by the specification;
  • the specification plays a functional role, explaining in detail the concrete functions that the software must implement, and it is realized by an algorithm, its structural level;
  • the algorithm expresses the procedures that the high-level language program, its structural level, must implement;
  • instructions in high-level language define the functional properties for the machine language code, which realizes them;
  • machine code, finally, expresses the functional properties implemented by the execution level, which expresses physical structural properties.

It follows, according to Turner (2018), that structural levels need not necessarily be physical levels, and that the notion of abstract artifact holds in computer science. For this reason, Turner (2011) comes to define high-level language programs themselves as technical artifacts, in that they constitute a structural level implementing specifications as their functional level (see §4.2).

A first consequence is that each LoA – expressing what function to accomplish – can be realized by a multiplicity of potential structural levels expressing how those functions are accomplished: an intended functionality can be realized by a specification in multiple ways; a computational problem expressed by a specification can be solved by a multiplicity of different algorithms, which can differ in some important properties but are all equally valid (see §3); an algorithm may be implemented in different programs, each written in a different high-level programming language, all expressing the same program if they implement the same algorithm (Angius and Primiero 2019); source code can be compiled into a multiplicity of machine languages, adopting different ISAs (Instruction Set Architectures); executable code can be installed and run on a multiplicity of machines (provided that these share the same ISA).

A second consequence is that each LoA as a functional level provides correctness criteria for lower levels (Primiero 2020). Correctness is required not just at the implementation level but at any LoA from specification to execution, and the cause of malfunctions may be located at any LoA not correctly implementing its proper functional level (see §7.3 and Fresco and Primiero (2013)). According to Turner (2018), the specification level can be said to be correct or incorrect with respect to intentions, despite the difficulty of verifying their correctness. Correctness of any non-physical layer can be verified mathematically through formal verification, and the execution physical level can be verified empirically, through testing (§6). Verifying correctness of specifications with respect to clients’ intentions would require instead having access to the mental states of the involved agents.

This latter problem relates to the more general one of establishing how artifacts possess functions, and what it means that structural properties are related to the intentions of agents. The problem is well-known also in the philosophy of biology and the cognitive sciences, and two main theories have been put forward as solutions. According to the causal theory of function (Cummins 1975), functions are determined by the physical capacities of artifacts: for example, the physical ability of the heart to contract and expand determines its function of pumping blood in the circulatory system. However, this theory faces serious problems when applied to technical artifacts. First, it prevents defining correctness and malfunctioning (Kroes 2010): suppose the call filtering app installed on our smartphone starts banning calls from contacts in our mobile phonebook; according to the causal theory of function this would be a new function of the app. Second, the theory does not distinguish intended functions from side effects (Turner 2011): in case of a long-lasting call, our smartphone would certainly start heating; however, this is not a function intended by clients or developers. According to the intentional theory of function (McLaughlin 2001, Searle 1995), the function fixed by the designer or the user is the intended one of the artifact, and the structural properties of artifacts are selected so as to be able to fulfill it. This theory is able to explain correctness and malfunction, as well as to distinguish side effects from intended functions. However, it does not say where the function actually resides, whether in the artifact or in the mind of the agent. In the former case, one is back at the question of how artifacts possess functions. In the latter case, a further explanation is needed about how mental states are related to physical properties of artifacts (Kroes 2010). Turner (2018) holds that the intuitions behind both the causal and the intentional theories of function are useful to understand the relation between function and structure in computational systems, and suggests that the two theories be combined into a single one. On the one hand, there is no function without implementation; on the other hand, there is no intention without clients, developers, and users.

A similar position can be maintained also for natural computational systems. Following a causal theory, functions are caused by the structural level and there seem not to be intended or specified functions. However, natural computational systems are characterized by multiple specifiability: a physical or biological structure may implement different functions, i.e. it can be specified in multiple ways (Fresco et al. 2021). A straightforward example is a gate implementing either a Boolean conjunction or a Boolean inclusive disjunction, depending on which voltage ranges are interpreted as, or labeled with, true and false. Accordingly, while the function computed by the system remains indeterminate, the labeling scheme chosen by an agent fixes it (Curtis-Trudel 2022).

3. Algorithms

Even though algorithms have been known and widely used since antiquity, the problem of defining what they are is still open (Vardi 2012). The word “algorithm” originates from the name of the ninth-century Persian mathematician Abū Jaʿfar Muḥammad ibn Mūsā al-Khwārizmī, who provided rules for arithmetic operations using Arabic numerals. Indeed, the rules one follows to compute basic arithmetic operations, such as multiplication or division, are everyday examples of algorithms. Other well-known examples include rules to bisect an angle using compass and straightedge, or Euclid’s algorithm for calculating the greatest common divisor. Intuitively, an algorithm is a set of instructions allowing the fulfillment of a given task. Despite this ancient tradition in mathematics, only modern logical and philosophical reflection put forward the task of providing a definition of what an algorithm is, in connection with the foundational crisis of mathematics of the early twentieth century (see the entry on the philosophy of mathematics). The notion of effective calculability arose from logical research, providing some formal counterpart to the intuitive notion of algorithm and giving birth to the theory of computation. Since then, different definitions of algorithms have been proposed, ranging from formal to non-formal approaches, as sketched in the next sections.
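Euclid's algorithm, mentioned above, can be stated in a few lines; this sketch is one common modern rendering of the ancient procedure.

```python
# Euclid's algorithm for the greatest common divisor: repeatedly
# replace the pair (a, b) by (b, a mod b) until the remainder is zero.
def gcd(a, b):
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```

Each property later discussed in this section (finitely many instructions, unambiguous steps, termination) can be checked against this small example.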

3.1 Classical Approaches

Markov (1954) provides a first precise definition of algorithm as a computational process that is determined, applicable, and effective. A computational process is determined if the instructions involved are precise enough not to allow for any “arbitrary choice” in their execution. The (human or artificial) computer must never be unsure about what step to carry out next. Algorithms are applicable for Markov in that they hold for classes of inputs (natural numbers for basic arithmetic operations) rather than for single inputs (specific natural numbers). Markov (1954: 1) defines effectiveness as “the tendency of the algorithm to obtain a certain result”. In other words, an algorithm is effective in that it will eventually produce the answer to the computational problem.

Kleene (1967) specifies finiteness as a further important property: an algorithm is a procedure which can be described by means of a finite set of instructions and needs a finite number of steps to provide an answer to the computational problem. As a counterexample, consider a while loop defined by a finite number of steps, but which runs forever since the condition in the loop is always satisfied. Instructions should also be amenable to mechanical execution, that is, no insight is required for the machine to follow them. Following Markov’s determinability and strengthening effectiveness, Kleene (1967) additionally specifies that instructions should be able to recognize that the solution to the computational problem has been achieved, and halt the computation.
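The while-loop counterexample can be made explicit; the sketch below (ours) has finitely many instructions yet would take infinitely many steps, so an artificial guard is added purely to keep the illustration runnable.

```python
# Finitely many instructions, infinitely many steps: the loop condition
# is always satisfied, so the procedure never halts on its own.
def endless(max_steps=5):
    steps = 0
    while True:              # condition always satisfied
        steps += 1
        if steps == max_steps:
            break            # artificial guard, only for demonstration
    return steps

print(endless())  # 5: the guard, not the procedure, ended the run
```

Without the guard, the procedure satisfies Kleene's finiteness of description but not finiteness of execution, which is exactly the distinction the counterexample is meant to draw.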

Knuth (1973) recalls and deepens the analyses of Markov (1954) and Kleene (1967) by stating that:

Besides merely being a finite set of rules that gives a sequence of operations for solving a specific type of problem, an algorithm has five important features:

  1. Finiteness. An algorithm must always terminate after a finite number of steps. […]
  2. Definiteness. Each step of an algorithm must be precisely defined; the actions to be carried out must be rigorously and unambiguously specified for each case. […]
  3. Input. An algorithm has zero or more inputs. […]
  4. Output. An algorithm has zero or more outputs. […]
  5. Effectiveness. An algorithm is also generally expected to be effective, in the sense that its operations must all be sufficiently basic that they can in principle be done exactly and in a finite length of time by someone using pencil and paper. (Knuth 1973: 4–6) […]

As in Kleene (1967), finiteness affects both the number of instructions and the number of implemented computational steps. As in Markov’s determinacy, Knuth’s definiteness principle requires that each successive computational step be unambiguously specified. Furthermore, Knuth (1973) more explicitly requires that algorithms have (potentially empty sets of) inputs and outputs. By algorithms with no inputs or outputs, Knuth probably refers to algorithms using internally stored data as inputs, or algorithms not returning data to an external user (Rapaport 2023, ch. 7). As for effectiveness, besides Markov’s tendency “to obtain a certain result”, Knuth requires that the result be obtained in a finite amount of time and that the instructions be atomic, that is, simple enough to be understandable and executable by a human or artificial computer.

3.2 Formal Approaches

Gurevich (2011) maintains, on the one hand, that it is not possible to provide formal definitions of algorithms, as the notion continues to evolve over time: consider how sequential algorithms, used in ancient mathematics, are flanked by parallel, analog, or quantum algorithms in current computer science practice, and how new kinds of algorithms are likely to be envisioned in the near future. On the other hand, a formal analysis can be advanced if concerned only with classical sequential algorithms. In particular, Gurevich (2000) provides an axiomatic definition for this class of algorithms.

Any sequential algorithm can be simulated by a sequential abstract state machine satisfying three axioms:

  1. The sequential-time postulate associates to any algorithm A a set of states S(A), a set of initial states I(A), a subset of S(A), and a map from S(A) to S(A) of one-step transformations of A. States are snapshot descriptions of running algorithms. A run of A is a (potentially infinite) sequence of states, starting from some initial state, such that there is a one-step transformation from one state to its successor in the sequence. Termination is not presupposed by Gurevich’s definition. One-step transformations need not be atomic, but they may be composed of a bounded set of atomic operations.
  2. According to the abstract-state postulate, states in S(A) are first-order structures, as commonly defined in mathematical logic; in other words, states provide a semantics to first-order statements.
  3. Finally, the bounded-exploration postulate states that given two states X and Y of A there is always a set T of terms such that, when X and Y coincide over T, the set of updates of X corresponds to the set of updates of Y. X and Y coincide over T when, for every term t in T, the evaluation of t in X is the same as the evaluation of t in Y. This allows algorithm A to explore only those parts of states which are relative to terms in T.
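The sequential-time postulate can be given a toy illustration (our own, much simpler encoding: states here are plain tuples rather than the first-order structures the abstract-state postulate requires).

```python
State = tuple[int, int]   # a snapshot: (counter, accumulator)

def one_step(s: State) -> State:
    """The one-step transformation: a map from S(A) to S(A)."""
    counter, acc = s
    return (counter - 1, acc + counter) if counter > 0 else s

def run_prefix(initial: State, steps: int) -> State:
    """A finite prefix of a run: iterated one-step transformations
    starting from an initial state in I(A)."""
    s = initial
    for _ in range(steps):
        s = one_step(s)
    return s
```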

Moschovakis (2001) objects that the intuitive notion of algorithm is not captured in full by abstract machines. Given a general recursive function f: ℕ → ℕ defined on natural numbers, there are usually many different algorithms computing it; “essential, implementation-independent properties” are not captured by abstract machines, but rather by a system of recursive equations. Consider the algorithm mergesort for sorting lists; there are many different abstract machines for mergesort, and the question arises which one is to be chosen as the mergesort algorithm. The mergesort algorithm is instead the system of recursive equations specifying the involved function, whereas abstract machines for the mergesort procedure are different implementations of the same algorithm. Two questions are put forward by Moschovakis’ formal analysis: different implementations of the same algorithm should be equivalent implementations, and yet an equivalence relation among algorithm implementations is to be formally defined. Furthermore, it remains to be clarified what the intuitive notion of algorithm formalized by systems of recursive equations amounts to.
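Moschovakis’ point can be illustrated by transcribing the recursive equations for mergesort directly into code (a sketch); on his view, a bottom-up, iterative version of the same equations would count as a different implementation of the same algorithm.

```python
def merge(a: list, b: list) -> list:
    """merge(a, b): the usual recursive merge of two sorted lists."""
    if not a:
        return b
    if not b:
        return a
    if a[0] <= b[0]:
        return [a[0]] + merge(a[1:], b)
    return [b[0]] + merge(a, b[1:])

def mergesort(l: list) -> list:
    """sort(l) = l if |l| <= 1; otherwise merge the sorted halves."""
    if len(l) <= 1:
        return l
    mid = len(l) // 2
    return merge(mergesort(l[:mid]), mergesort(l[mid:]))
```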

Primiero (2020) proposes a reading of the nature of algorithms at three different levels of abstraction. At a very high LoA, algorithms can be defined abstracting from the procedure they describe, allowing for many different sets of states and transitions. At this LoA algorithms can be understood as informal specifications, that is, as informal descriptions of a procedure P. At a lower LoA, algorithms specify the instructions needed to solve the given computational problem; in other words, they specify a procedure. Algorithms can thus be defined as procedures, or descriptions in some given formal language L of how to execute a procedure P. Many important properties of algorithms, including those related to complexity classes and data structures, cannot be determined at the procedural LoA; instead, reference to an abstract machine implementing the procedure is needed. At a bottom LoA, algorithms can be defined as implementable abstract machines, viz. as the specification, in a formal language L, of the executions of a program P for a given abstract machine M. The threefold definition of algorithms allows Primiero (2020) to supply a formal definition of equivalence relations for algorithms in terms of the algebraic notions of simulation and bisimulation (Milner 1973, see also Angius and Primiero 2018). A machine Mi executing a program Pi implements the same algorithm as a machine Mj executing a program Pj if and only if the abstract machines interpreting Mi and Mj are in a bisimulation relation.
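For finite, deterministic labelled transition systems, the bisimulation criterion can be sketched as a naive greatest-fixpoint computation (an illustrative encoding, not Primiero’s own formalism): start from all pairs of states and discard any pair whose labelled moves cannot be matched, until the relation is stable.

```python
def bisimilar(trans_a: dict, trans_b: dict, s0a, s0b) -> bool:
    """trans_a, trans_b: {state: {label: next_state}} (deterministic).
    True when the start states are related by a bisimulation."""
    pairs = {(p, q) for p in trans_a for q in trans_b}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(pairs):
            # A pair survives only if both states offer the same labels
            # and every matched move leads to a surviving pair.
            if set(trans_a[p]) != set(trans_b[q]) or any(
                (trans_a[p][lab], trans_b[q][lab]) not in pairs
                for lab in trans_a[p]
            ):
                pairs.discard((p, q))
                changed = True
    return (s0a, s0b) in pairs
```

On this criterion, the automata extracted from two programs would each be given as such a dictionary, and the programs count as implementing the same algorithm when their start states end up related.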

3.3 Informal Approaches

Vardi (2012) underlines how, despite the many formal and informal definitions available, there is no general consensus on what an algorithm is. The approaches of Gurevich (2000) and Moschovakis (2001), which can even be proved to be logically equivalent, only provide logical constructs for algorithms, leaving unanswered the main question. Hill (2013) suggests that an informal definition of algorithms, taking into account the intuitive understanding one has about algorithms, may be more useful, especially for the public discourse and the communication between practitioners and users.

Rapaport (2012, Appendix) provides an attempt to summarize the three classical definitions of algorithm sketched above, stating that:

An algorithm (for executor E to accomplish goal G) is:
  1. a procedure, that is, a finite set (or sequence) of statements (or rules, or instructions), such that each statement is:
    • composed of a finite number of symbols (or marks) from a finite alphabet
    • and unambiguous for E—that is,
      1. E knows how to do it
      2. E can do it
      3. it can be done in a finite amount of time
      4. and, after doing it, E knows what to do next—
  2. which procedure takes a finite amount of time (that is, it halts),
  3. and that ends with G accomplished.

Rapaport stresses that an algorithm is a procedure, i.e., a finite sequence of statements taking the form of rules or instructions. Finiteness is here expressed by requiring that instructions contain a finite number of symbols from a finite alphabet.

Hill (2016) aims at providing an informal definition of algorithm, starting from Rapaport’s (2012):

An algorithm is a finite, abstract, effective, compound control structure, imperatively given, accomplishing a given purpose, under given provisions. (Hill 2016: 48)

First of all, algorithms are compound structures rather than atomic objects, i.e., they are composed of smaller units, namely computational steps. These structures are finite and effective, as explicitly mentioned by Markov, Kleene, and Knuth. While these authors do not explicitly mention abstractness, Hill (2016) maintains it is implicit in their analysis. Algorithms are abstract simply in that they lack spatio-temporal properties and are independent from their instances. They provide control, that is, “content that brings about some kind of change from one state to another, expressed in values of variables and consequent actions” (p. 45). Algorithms are imperatively given, as they command state transitions to carry out specified operations. Finally, algorithms operate to achieve certain purposes under some usually well-specified provisions, or preconditions. From this viewpoint, the author argues, algorithms are on a par with specifications in their specifying a goal under certain resources. This definition allows one to distinguish algorithms from other compound control structures. For instance, recipes are not algorithms because they are not effective; nor are games, which are not imperatively given.

4. Programs

The ontology of computer programs is strictly related to the subsumed nature of computational systems (see §1). If computational systems are defined on the basis of the software-hardware dichotomy, programs are abstract entities interpreting the former and opposed to the concrete nature of hardware. Examples of such interpretations are provided in §1.1 and include the “concrete abstraction” definition by Colburn (2000), the “abstract artifact” characterization by Irmak (2012), and programs as specifications of machines proposed by Duncan (2011). By contrast, under the interpretation of computational systems by a hierarchy of LoAs, programs are implementations of algorithms. We refer to §5 on implementation for an analysis of the ontology of programs in this sense. This section focuses on definitions of programs with a significant relevance in the literature, namely those views that consider programs as theories or as artifacts, with a focus on the problem of the relation between programs and the world.

4.1 Programs as Theories

The view that programs are theories goes back to approaches in cognitive science. In the context of the so-called Information Processing Psychology (IPP) for the simulative investigation on human cognitive processes, Newell and Simon (1972) advanced the thesis that simulative programs are empirical theories of their simulated systems. Newell and Simon assigned to a computer program the role of theory of the simulated system as well as of the simulative system, namely the machine running the program, to formulate predictions on the simulated system. In particular, the execution traces of the simulative program, given a specific problem to solve, are used to predict the mental operation strategies that will be performed by the human subject when asked to accomplish the same task. In case of a mismatch between execution traces and the verbal reports of the operation strategies of the human subject, the empirical theory provided by the simulative program is revised. The predictive use of such a computer program is comparable, according to Newell and Simon, to the predictive use of the evolution laws of a system that are expressed by differential or difference equations.

Newell and Simon’s idea that programs are theories has been shared by the cognitive scientists Pylyshyn (1984) and Johnson-Laird (1988). Both agree that programs, in contrast to typical theories, are better at facing the complexity of the simulative process to be modelled, forcing one to fill in all the details that are necessary for the program to be executed. Whereas incomplete or incoherent theories may be advanced at some stage of scientific inquiry, this is not the case for programs.

On the other hand, Moor (1978) considers the programs-as-theories thesis another myth of computer science. As programs can only simulate some set of empirical phenomena, at most they play the role of computational models of those phenomena. Moor notices that for programs to be acknowledged as models, semantic functions are nevertheless needed to interpret the empirical system being simulated. However, the view that programs are models should not be mistaken for the definition of programs as theories: theories explain and predict the empirical phenomena simulated by models, while simulation by programs does not offer that.

According to computer scientist Paul Thagard (1984), understanding programs as theories would require a syntactic or a semantic view of theories (see the entry on the structure of scientific theories). But programs do not comply with either of the two views. According to the syntactic view (Carnap 1966, Hempel 1970), theories are sets of sentences expressed in some defined language able to describe target empirical systems; some of those sentences define the axioms of the theory, and some are law-like statements expressing regularities of those systems. Programs are sets of instructions written in some defined programming language which, however, do not describe any system, insofar as they are procedural linguistic entities and not declarative ones. To this, Rapaport (2023) objects that procedural programming languages can often be translated into declarative languages and that there are languages, such as Prolog, that can be interpreted both procedurally and declaratively. According to the semantic view (Suppe 1989, Van Fraassen 1980), theories are introduced by a collection of models, defined as set-theoretic structures satisfying the theory’s sentences. However, in contrast to Moor (1978), Thagard (1984) denies programs the epistemological status of models: programs simulate physical systems without satisfying theories’ laws and axioms. Rather, programs include, for simulation purposes, implementation details for the programming language used, but not of the target system being simulated.

A yet different approach to the problem of whether programs are theories comes from the computer scientist Peter Naur (1985). According to Naur, programming is a theory building process not in the sense that programs are theories, but because the successful program’s development and life-cycle require that programmers and developers have theories of programs available. A theory is here understood, following Ryle (2009), as a corpus of knowledge shared by a scientific community about some set of empirical phenomena, and not necessarily expressed axiomatically or formally. Theories of programs are necessary during the program life-cycle to be able to manage requests of program modifications pursuant to observed miscomputations or unsatisfactory solutions to the computational problem the program was asked to solve. In particular, theories of programs should allow developers to modify the program so that new solutions to the problem at stake can be provided. For this reason, Naur (1985) deems such theories more fundamental, in software development, than documentation and specifications.

For Turner (2010, 2018 ch. 10), programming languages are mathematical objects defined by a formal grammar and a formal semantics. In particular, each syntactic construct, such as an assignment, a conditional or a while loop, is defined by a grammatical rule determining its syntax, and by a semantic rule associating a meaning to it. Depending on whether an operational or a denotational semantics is preferred, meaning is given in terms of, respectively, the operations of an abstract machine or mathematical partial functions from sets of states to sets of states. For instance, the simple assignment statement \(x := E\) is associated, under an operational semantics, with the machine operation \(update(s,x,v)\), which assigns the value \(v\) of expression \(E\) to variable \(x\) in state \(s\). Both in the case of an operational and of a denotational semantics, programs can be understood as mathematical theories expressing the operations of an implementing machine. Consider operational semantics: a syntactic rule of the form \(\langle P,s \rangle \Downarrow s'\) states semantically that program \(P\) executed in state \(s\) results in \(s'\). According to Turner (2010, 2018), a programming language with an operational semantics is akin to an axiomatic theory of operations in which rules provide axioms for the relation \(\Downarrow\).
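This operational reading of assignment can be sketched as a tiny interpreter (an illustrative encoding: states are finite maps from variables to values, and the names follow the \(update(s,x,v)\) operation in the text):

```python
State = dict[str, int]

def update(s: State, x: str, v: int) -> State:
    """update(s, x, v): the state s with variable x rebound to value v."""
    return {**s, x: v}

def evaluate(expr, s: State) -> int:
    """The value of an expression E in state s -- here E is just a
    variable name or an integer literal, for simplicity."""
    return s[expr] if isinstance(expr, str) else expr

def assign(s: State, x: str, expr) -> State:
    """The rule <x := E, s> => s': executing the assignment in state s
    yields the updated state s'."""
    return update(s, x, evaluate(expr, s))
```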

4.2 Programs as Technical Artifacts

Programs can be understood as technical artifacts because programming languages are defined, as any other artifact, on the basis of both functional and structural properties (Turner 2014, 2018 ch. 5). Functional properties of (high-level) programming languages are provided by the semantics associated with each syntactic construct of the language. Turner (2014) points out that programming languages can indeed be understood as axiomatic theories only when their functional level is isolated. Structural properties, on the other hand, are specified in terms of the implementation of the language, but not identified with physical components of computing machines: given a syntactic construct of the language with an associated functional description, its structural property is determined by the physical operations that a machine performs to implement an instruction for the construct at hand. For instance, the assignment construct \(x := E\) is to be linked to the physical computation of the value of expression \(E\) and to the placement of the value of \(E\) in the physical location \(x\).

Another requirement for a programming language to be considered a technical artifact is that it has to be endowed with a semantics providing correctness criteria for the language implementation. The programmer attests to functional and structural properties of a program by taking the semantics to have correctness jurisdiction over the program.

4.3 Programs and their Relation to the World

The problem of whether computer programs are theories is tied with the relation that programs entertain with the outside world. If programs were theories, they would have to represent some empirical system, and a semantic relation would be directly established between the program and the world. By contrast, some have argued that the relation between programs and natural systems is mediated by models of the outside world (Colburn et al. 1993, Smith 1985). In particular, Smith (1985) argues that models are abstract descriptions of empirical systems, and computational systems operating in them have programs that act as models of the models, i.e., they represent abstract models of reality. Such an account of the ontology of programs comes in handy when describing the correctness problem in computer science (see §7): if specifications are considered as models requiring certain behaviors from computational systems, programs can be seen as models satisfying specifications.

Two views of programs can be given depending on whether one admits their relation with the world (Rapaport 2023, ch. 16). According to a first view, programs are “wide”, “external” and “semantic”: they grant direct reference to objects of an empirical system and operations on those objects. According to a second view, programs are “narrow”, “internal”, and “syntactic”: they make only reference to the atomic operations of an implementing machine carrying out computations. Rapaport (2023) argues that programs need not be “external” and “semantic”. First, computation itself need not be “external”: a Turing machine executes the instructions contained in its finite table by using data written on its tape and halting after the data resulting from the computation have been written on the tape. Data are not, strictly speaking, input from and output to an external user. Furthermore, Knuth (1973) required algorithms to have zero or more inputs and outputs (see §3.1). A computer program requiring no inputs may be a program, say, outputting all prime numbers from 1; and a program with no outputs can be a program that computes the value of some given variable x without returning the value stored in x as output. Second, programs need not be teleological, i.e., goal-oriented. This view opposes other known positions in the literature. Suber (1988) argues that, without considering goals and purposes, it would not be possible to assess whether a computer program is correct, that is, if it behaves as intended. And as recalled in §3.3, Hill (2016) specifies in her informal definition that algorithms accomplish “a given purpose, under given provisions” (Hill 2016: 48). To these views, Rapaport (2023, ch. 16) replies that whereas goals, purposes, and programmers’ intentions may be very useful for a human computor to understand a program, they are not necessary for an artificial computer to carry out the computations instructed by the program code.
Indeed, the principle of effectiveness that classical approaches require for algorithms (see §3.1) demands, among other properties, that algorithms be executed without any recourse to intuition. In other words, a machine executing a program for adding natural numbers does not “understand” that it is adding; at the same time, knowing that a given program performs addition may help a human agent to understand the program’s code.
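Rapaport’s two cases can be sketched as follows (illustrative code): a program with zero inputs that outputs the primes, and a program that computes a value without ever returning it.

```python
def primes():
    """Zero inputs: generates the primes 2, 3, 5, ... indefinitely,
    with no data supplied by an external user."""
    n = 2
    while True:
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def compute_without_output(n: int) -> None:
    """Zero outputs: the value of x is computed and stored internally,
    but never returned to an external user."""
    x = n * n    # computed, never output
```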

According to this view, computing involves just symbols, not meanings. Turing machines become symbol manipulators, and not a single but multiple meanings can be associated with their operations. How, then, can one identify when two programs are the same program, if not by their meanings, that is, by considering what function they perform? One answer comes from Piccinini’s analysis of computation and its “internal semantics” (Piccinini 2008, 2015 ch. 3): two programs can be identified as identical by analysing only their syntax and the operations the programs carry out on their symbols. The effects of string manipulation operations can be considered an internal semantics of a program. The latter can be easily determined by isolating subroutines or methods in the program’s code and can afterwards be used to identify a program or to establish whether two programs are the same, namely when they are defined by the same subroutines.

However, it has been argued that there are cases in which it is not possible to determine whether two programs are the same without making reference to an external semantics. Sprevak (2010) proposes to consider two programs for addition which differ in that one operates on Arabic, the other on Roman numerals. The two programs compute the same function, namely addition, but this cannot always be established by inspecting the code with its subroutines; it must be determined by assigning content to the input/output strings, interpreting Arabic and Roman numerals as numbers. In that regard, Angius and Primiero (2018) underline how the problem of identity for computer programs does not differ from the problem of identity for natural kinds (Lowe 1998) and technical artifacts (Carrara et al. 2014). The problem can be tackled by fixing an identity criterion, namely a formal relation, that any two programs should entertain in order to be defined as identical. Angius and Primiero (2018) show how to use the process algebra relation of bisimulation between the two automata implemented by two programs under examination as such an identity criterion. Bisimulation allows one to establish matching structural properties of programs implementing the same function, as well as providing weaker criteria for copies in terms of simulation. This brings the discussion back to the notion of programs as implementations. We now turn to analyze this latter concept.
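Sprevak’s example can be sketched as two syntactically very different addition programs (the Roman-numeral helpers are our own illustrative assumptions); that both compute addition only emerges once their input/output strings are interpreted as numbers:

```python
# Subtractive pairs first, so that greedy matching handles IX, IV, etc.
ROMAN = [(10, 'X'), (9, 'IX'), (5, 'V'), (4, 'IV'), (1, 'I')]

def add_arabic(a: str, b: str) -> str:
    """Addition on Arabic-numeral strings."""
    return str(int(a) + int(b))

def add_roman(a: str, b: str) -> str:
    """Addition on Roman-numeral strings (values up to the tens only,
    for brevity): same function, entirely different subroutines."""
    def value(r: str) -> int:
        total, i = 0, 0
        while i < len(r):
            for v, sym in ROMAN:
                if r.startswith(sym, i):
                    total, i = total + v, i + len(sym)
                    break
        return total

    def numeral(n: int) -> str:
        out = ''
        for v, sym in ROMAN:
            while n >= v:
                out, n = out + sym, n - v
        return out

    return numeral(value(a) + value(b))
```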

5. Implementation

The word ‘implementation’ is often associated with a physical realization of a computing system, i.e., with a machine executing a computer program. In particular, according to the dual ontology of computing systems examined in §1.1, implementation in this sense reduces to the structural hardware, as opposed to the functional software. By contrast, following the method of the levels of abstraction (§1.2), implementation becomes a wider relation holding between any LoA defining a computational system and the levels higher in the hierarchy. Accordingly, an algorithm is an implementation of a (set of) specification(s); a program expressed in a high-level programming language can be defined as an implementation of an algorithm (see §4); assembly and machine code instructions can be seen as an implementation of a set of high-level programming language instructions with respect to a given ISA; finally, executions are physical, observable implementations of those machine code instructions. By the same token, programs formulated in a high-level language are also implementations of specifications, and, as similarly argued by the dual-ontology paradigm, executions are implementations of high-level programming language instructions. According to Turner (2018), even the specification can be understood as an implementation of what has been called intention.

What remains to be examined here is the nature of the implementation relation thus defined. Analyzing this relation is essential to define the notion of correctness (§7). Indeed, a correct program amounts to a correct implementation of an algorithm; and a correct computing system is a correct implementation of a set of specifications. In other words, under this view, the notion of correctness is paired with that of implementation for any LoA: any level can be said to be correct with respect to upper levels if and only if it is a correct implementation thereof.

The following three subsections examine three main definitions of the implementation relation that have been advanced in the philosophy of computer science literature.

5.1 Implementation as Semantic Interpretation

A first philosophical analysis of the notion of implementation in computer science is advanced by Rapaport (1999, 2005). He defines an implementation I as the semantic interpretation of a syntactic or abstract domain A in a medium of implementation M. If implementation is understood as a relation holding between a given LoA and any upper level in the hierarchical ontology of a computational system, it follows that Rapaport’s definition extends accordingly, so that any LoA provides a semantic interpretation in a given medium of implementation for the upper levels. Under this view, specifications provide semantic interpretations of intentions expressed by stakeholders in the specification (formal) language, and algorithms provide semantic interpretations of specifications using one of the many languages algorithms can be formulated in (natural languages, pseudo-code, logic languages, functional languages, etc.). The medium of implementation can be either abstract or concrete. A computer program is the implementation of an algorithm in that the former provides a semantic interpretation of the syntactic constructs of the latter in a high-level programming language as its medium of implementation. The program’s instructions interpret the algorithm’s tasks in a programming language. Also the execution LoA provides a semantic interpretation of the assembly/machine code operations into the medium given by the structural properties of the physical machine. According to the analysis in (Rapaport 1999, 2005), implementation is an asymmetric relation: if I is an implementation of A, A cannot be an implementation of I. However, the author argues that any LoA can be both a syntactic and a semantic level, that is, it can play the role of both the implementation I and of a syntactic domain A. Whereas an algorithm is assigned a semantic interpretation by a program expressed in a high-level language, the same algorithm provides a semantic interpretation for the specification. It follows that the abstraction-implementation relation pairs the functional-structural relation for computational systems.

Primiero (2020) considers this latter aspect as one main limit of Rapaport’s (1999, 2005) account of implementation: implementation reduces to a unique relation between a syntactic level and its semantic interpretation, and it does not account for the layered ontology of computational systems seen in §1.2. In order to extend the present definition of implementation to all LoAs, each level has to be reinterpreted each time either as a syntactic or as a semantic level. This, in turn, has a repercussion on the second difficulty characterizing, according to Primiero (2020), implementation as a semantic interpretation: on the one hand, this approach does not take into account incorrect implementations; on the other hand, for a given incorrect implementation, the unique relation so defined can relate incorrectness only to one syntactic level, excluding all other levels as potential error locations.

Turner (2018) aims to show that semantic interpretation not only does not account for incorrect implementations, but does not even account for correct ones. A first example is provided by the implementation of one language into another: the implementing language here is not providing a semantic interpretation of the implemented language, unless the former is associated with a semantics providing meaning and correctness criteria for the latter. Such a semantics will remain external to the implementation relation: whereas correctness is associated with semantic interpretation, implementation does not always come with a semantic interpretation. A second example is given by considering an abstract stack implemented by an array; again, the array does not provide correctness criteria for the stack. Quite to the contrary, it is the stack that specifies correctness criteria for any of its implementations, arrays included.

5.2 Implementation as the Relation Specification-Artifact

The fact that correctness criteria for the implementation relation are provided by the abstract level induces Turner (2012, 2014, 2018) to define implementation as the relation specification-artefact. As examined in §2, specifications have correctness jurisdiction over artifacts, that is, they prescribe the allowed behaviors of artifacts. Also recall that artifacts can be both abstract and concrete entities, and that any LoA can play the role of specification for lower levels. This amounts to saying that the specification-artefact relation is able to define any implementation relation across the layered ontology of computational systems.

Depending on how the specification-artifact relation is defined, Turner (2012) distinguishes as many as three different notions of implementation. Consider the case of a physical machine implementing a given abstract machine. According to an intentional notion of implementation, an abstract machine works as a specification for a physical machine, provided it advances all the functional requirements the latter must fulfill, i.e., it specifies (in principle) all the allowed behaviors of the implementing physical machine. According to an extensional notion of implementation, a physical machine is a correct implementation of an abstract machine if and only if isomorphisms can be established mapping states of the latter to states of the former, and transitions in the abstract machine correspond to actual executions (computational traces) of the artifact. Finally, an empirical notion of implementation requires the physical machine to display computations that match those prescribed by the abstract machine; that is to say, correct implementation has to be evaluated empirically through testing.
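The extensional notion can be given a minimal sketch (our own illustrative encoding): a mapping from abstract states to concrete states witnesses correct implementation when it carries every abstract transition to an observed concrete transition of the artifact.

```python
def extensionally_correct(abstract_trans: set, concrete_trans: set,
                          mapping: dict) -> bool:
    """abstract_trans and concrete_trans are sets of (state, next_state)
    pairs; mapping sends each abstract state to a concrete state.
    True when every abstract transition has a concrete counterpart."""
    return all((mapping[p], mapping[q]) in concrete_trans
               for (p, q) in abstract_trans)
```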

Primiero (2020) underlines how, while this approach addresses the issue of correctness and miscomputation, as it allows one to distinguish a correct from an incorrect implementation, it still identifies a unique implementation relation between a specification level and an artifact level. Again, if this account is allowed to involve the layered ontology of computational systems by reinterpreting any LoA each time either as a specification or as an artifact, Turner’s account prevents one from referring to more than one level at the same time as the cause of miscomputation: a miscomputation always occurs here as an incorrect implementation of a specification by an artifact. By defining implementation as a relation holding across all the LoAs, one would be able to identify multiple incorrect implementations which do not directly refer to the abstract specification. A miscomputation may indeed be caused by an incorrect implementation at lower levels which is then inherited all the way down to the execution level.

5.3 Implementation for LoAs

Primiero (2020) proposes a definition of implementation not as a relation between two fixed levels, but one that is allowed to range over any LoA. Under this view, an implementation I is a relation of instantiation holding between a LoA and any other one higher in the abstraction hierarchy. Accordingly, a physical computing machine is an implementation of assembly/machine code operations; by transitivity, it can also be considered as an instantiation of a set of instructions expressed in a high-level programming language. A program expressed in a high-level language is an implementation of an algorithm; but it can also be taken to be the instantiation of a set of specifications.

Such a definition of implementation allows Primiero (2020) to provide a general definition of correctness: a physical computing system is correct if and only if it is characterized by correct implementations at any LoA. Hence correctness and implementation are coupled and defined at any LoA. Functional correctness is the property of a computational system that displays the functionalities required by the specifications of that system. Procedural correctness characterizes computational systems displaying the functionalities intended by the implemented algorithms. And executional correctness is defined as the property of a system that is able to correctly execute the program on its architecture. Each of these forms of correctness can also be classified quantitatively, depending on the amount of functionalities being satisfied. A functionally efficient computational system displays a minimal subset of the functionalities required by the specifications; a functionally optimal system is able to display a maximal subset of those functionalities. Similarly, the author defines procedurally as well as executionally efficient and optimal computational systems.

5.4 Physical Computation

According to this definition, implementation shifts from level to level: a set of algorithms defining a computational system are implemented as procedures in some formal language, as instructions in a high-level language, or as operations in a low-level programming language. An interesting question is whether any system, beyond computational artifacts, implementing procedures of this sort qualifies as a computational system. In other words, asking about the nature of physical implementation amounts to asking what a computational system is. If any system implementing an algorithm would qualify as computational, the class of such systems could be extended to biological systems, such as the brain or the cell; to physical systems, including the universe or some portion of it; and eventually to any system whatsoever, a thesis known as pancomputationalism (for an exhaustive overview on the topic see Rapaport 2018).

Traditionally, a computational system is intended as a mechanical artifact that takes input data, elaborates them algorithmically according to a set of instructions, and returns manipulated data as outputs. For instance, von Neumann (1945, p. 1) states that “An automatic computing system is a (usually highly composite) device, which can carry out instructions to perform calculations of a considerable order of complexity”. Such an informal and well-accepted definition leaves some questions open, including whether computational systems have to be machines, whether they have to process data algorithmically and, consequently, whether computations have to be Turing complete.

Rapaport (2018) provides a more explicit characterization of a computational system, defined as any “physical plausible implementation of anything logically equivalent to a universal Turing machine”. Strictly speaking, personal computers are not physical Turing machines, but register machines are known to be Turing equivalent. To qualify as computational, systems must be plausible implementations thereof, in that Turing machines, contrary to physical machines, have access to infinite memory space and are, as abstract machines, error free. According to Rapaport’s (2018) definition, any physical implementation of this sort is thus a computational system, including natural systems. This raises the question of which class of natural systems is able to implement Turing-equivalent computations. Searle famously argued that anything can be an implementation of a Turing machine, or of a logically equivalent model (Searle 1990). His argument hinges on the fact that being a Turing machine is a syntactic property, in that it is all about manipulating tokens of 0’s and 1’s. According to Searle, syntactic properties are not intrinsic to physical systems, but are assigned to them by an observer. In other words, a physical state of a system is not intrinsically a computational state: there must be an observer, or user, who assigns to that state a computational role. It follows that any system whose behavior can be described as syntactic manipulation of 0’s and 1’s is a computational system.

Hayes (1997) objects to Searle (1990) that if everything were a computational system, the property “being a computational system” would become vacuous, as all entities would possess it. Instead, there are entities which are computational systems, and entities which are not. Computational systems are those in which the patterns received as inputs and saved into memory are able to change themselves. In other words, Hayes makes reference to the fact that stored inputs can be both data and instructions and that instructions, when executed, are able to modify the value of some input data. “If it were paper, it would be ‘magic paper’ on which writing might spontaneously change, or new writing appear” (Hayes 1997, p. 393). Only systems able to act as “magic paper” can be acknowledged as computational.
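
Hayes’ “magic paper” intuition can be made concrete with a toy stored-program interpreter. This is a hypothetical sketch, not any real machine: the instruction encoding and the `run` function are invented purely for illustration. The point it shows is that, when instructions live in the same memory as data, executing one instruction can rewrite another.

```python
# Toy stored-program machine (hypothetical encoding, for illustration only).
# Memory holds instructions and data in the same cells; a SET instruction can
# therefore overwrite a cell that contains another instruction -- Hayes'
# "magic paper" on which the writing spontaneously changes.

def run(memory):
    """Execute instruction tuples in memory until HALT; return final memory."""
    pc = 0
    while memory[pc][0] != "HALT":
        op = memory[pc]
        if op[0] == "SET":            # ("SET", addr, value)
            memory[op[1]] = op[2]
        pc += 1
    return memory

memory = [
    ("SET", 1, ("SET", 4, 99)),  # rewrites the instruction stored in cell 1
    ("SET", 4, 0),               # original instruction, replaced before it runs
    ("HALT",),
    "data",                      # cell 3: ordinary data
    -1,                          # cell 4: ends up holding 99, not 0
]
final = run(memory)              # final[4] == 99: the program changed itself
```

On Hayes’ criterion, a system of this kind qualifies as computational precisely because the stored patterns can modify themselves; ordinary paper, by contrast, cannot.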

A yet different approach comes from Piccinini (2007, 2008) in the context of his mechanistic analysis of physical computations (Piccinini 2015; see also the entry on computation in physical systems). A physical computing system is a system whose behaviors can be explained mechanistically by describing the computing mechanism that brings about those behaviors. Mechanisms can be defined by “entities and activities organized such that they are productive of regular changes from start or set-up to finish or termination condition” (Machamer et al. 2000; see the entry on mechanisms in science). Computations, as physical processes, can be understood as those mechanisms that “generate output strings from input strings in accordance with general rules that apply to all input strings and depend on the input (and sometimes internal states)” (Piccinini 2007, p. 108). It is easy to identify set-up and termination conditions for computational processes. Any system which can be explained by describing an underlying computing mechanism is to be considered a computational system. The focus on explanation helps Piccinini avoid the Searlean conclusion that any system is a computational system: even if one may interpret, in principle, any given set of entities and activities as a computing mechanism, only the need to explain a certain observed phenomenon in terms of a computing mechanism defines the system under examination as computational.

A more recent attempt to avoid the Searlean conclusion (Curtis-Trudel 2021) proposes a notion of implementation as a resemblance relation, applicable to both artificial and natural computational systems. According to this notion, a physical system implements a computation in case it resembles a computational architecture. For instance, the formal definition of a Turing machine provides a blueprint for any physical Turing machine. A physical system is a physical Turing machine just in case it resembles the computational architecture of a Turing machine. For the general case of a digital computer, defining its architecture amounts to specifying its instruction set architecture (ISA). Resemblance is understood in terms of Weisberg’s (2012) notion of similarity: for a given set of features, evaluating resemblance between a computational architecture and a physical system requires establishing the number of features the two systems share and the number of features each system lacks compared to the other.

6. Verification

A crucial step in the software development process is verification. This consists in the process of evaluating whether a given computational system is correct with respect to the specification of its design. In the early days of the computer industry, validity and correctness checking methods included several design and construction techniques; see for example (Arif et al. 2018). Nowadays, correctness evaluation methods can be roughly sorted into two main groups: formal verification and testing. Formal verification (Monin and Hinchey 2003) involves a proof of correctness with mathematical tools; software testing (Ammann and Offutt 2008) rather consists in running the implemented program to observe whether performed executions comply or not with the advanced specifications. In many practical cases, a combination of both methods is used (see for instance Callahan et al. 1996).

6.1 Models and Theories

Formal verification methods require a representation of the software under verification. In theorem proving (see van Leeuwen 1990), programs are represented in terms of axiomatic systems and a set of rules of inference representing the pre- and post-conditions of program transitions. A proof of correctness is then obtained by deriving formulas expressing specifications from the axioms. In model checking (Baier and Katoen 2008), a program is represented in terms of a state transition system, its property specifications are formalised by temporal logic formulas (Kröger and Merz 2008), and a proof of correctness is achieved by a depth-first search algorithm that checks whether those formulas hold of the state transition system.
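
The model-checking idea just described can be sketched in miniature. The following is an illustrative toy, not a real model checker: the state names, the `check_invariant` helper, and the lock example are all invented, and only one very simple class of temporal properties is handled, namely invariants (“the property holds in every reachable state”, the formula AG p of branching-time logic), checked by depth-first search over the transition system.

```python
# Toy illustration of model checking: a program is represented as a state
# transition system (a dict from states to successor states), and a safety
# property "AG p" (p holds in every reachable state) is checked by
# depth-first search. Real model checkers handle full CTL/LTL and are far
# more elaborate.

def check_invariant(initial, transitions, prop):
    """Return (True, None) if prop holds in every reachable state,
    otherwise (False, counterexample_state)."""
    stack, visited = [initial], set()
    while stack:
        s = stack.pop()
        if s in visited:
            continue
        visited.add(s)
        if not prop(s):
            return False, s          # counterexample found
        stack.extend(transitions.get(s, ()))
    return True, None

# A three-state system modelling a lock: states are (location, holds_lock).
ts = {
    ("idle", False): [("acquiring", False)],
    ("acquiring", False): [("critical", True)],
    ("critical", True): [("idle", False)],
}
ok, cex = check_invariant(("idle", False), ts,
                          lambda s: not (s[0] == "critical" and not s[1]))
# ok is True: no reachable state is in the critical section without the lock.
```

When the property fails, the search returns a reachable counterexample state, which is precisely the diagnostic information that makes model checking useful in practice.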

Axiomatic systems and state transition systems used for correctness evaluation can be understood as theories of the represented artifacts, in that they are used to predict and explain their future behaviors. Methodologically, state transition systems in model checking can be compared with scientific models in empirical sciences (Angius and Tamburrini 2011). For instance, Kripke Structures (see Clarke et al. 1999, ch. 2) are in compliance with Suppes’ (1960) definition of scientific models as set-theoretic structures establishing proper mapping relations with models of data collected by means of experiments on the target empirical system (see also the entry on models in science). Kripke Structures and other state transition systems utilized in formal verification methods are often called system specifications. They are distinguished from common specifications, also called property specifications. The latter specify some required behavioral properties the program to be encoded must instantiate, while the former specify (in principle) all potential executions of an already encoded program, thus allowing for algorithmic checks on its traces (Clarke et al. 1999). In order to achieve this goal, system specifications are considered as abductive structures, hypothesizing the set of potential executions of a target computational system on the basis of the program’s code and the allowed state transitions (Angius 2013b). Indeed, once it has been checked whether some temporal logic formula holds of the modeled Kripke Structure, the represented program is empirically tested against the behavioral property corresponding to the checked formula, in order to evaluate whether the model-hypothesis is an adequate representation of the target computational system. In fact, before performing verification, when the full behavior of a program is still unknown, system specifications can be considered as hypotheses over programs (Turner 2020). The descriptive and abductive character of state transition systems in model checking is an additional and essential feature putting state transition systems on a par with scientific models.

6.2 Testing and Experiments

Testing is the more ‘empirical’ process of launching a program and observing its executions in order to evaluate whether they comply with the supplied property specifications. This technique is extensively used in the software development process. Philosophers and philosophically-minded computer scientists have considered software testing in the light of traditional methodological approaches in scientific discovery (Snelting 1998; Gagliardi 2007; Northover et al. 2008; Angius 2014) and questioned whether software tests can be acknowledged as scientific experiments evaluating the correctness of programs (Schiaffonati and Verdicchio 2014; Schiaffonati 2015; Tedre 2015).

Dijkstra’s well-known dictum “Program testing can be used to show the presence of bugs, but never to show their absence” (Dijkstra 1970, p. 7) introduces Popper’s (1959) principle of falsifiability into computer science (Snelting 1998). Testing a program against an advanced property specification for a given interval of time may exhibit some failures, but if no failure occurs while observing the running program one cannot conclude that the program is correct. An incorrect execution might be observed at the very next system run. The reason is that testers can only launch the program with a finite subset of the potential program’s input set and only for a finite interval of time; accordingly, not all potential executions of the program to be tested can be empirically observed. For this reason, the aim of software testing is to detect programs’ faults and not to guarantee their absence (Ammann and Offutt 2008, p. 11). A program is falsifiable in that tests can reveal faults (Northover et al. 2008). Hence, given a computational system and a property specification, a test is akin to a scientific experiment which, by observing the system’s behaviors, tries to falsify the hypothesis that the program is correct with respect to the specification of interest.
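
Dijkstra’s dictum can be made concrete with a deliberately buggy function (a hypothetical example: both the fault and the test inputs are invented for illustration). A finite test suite passes, yet the program is incorrect; only an input the suite happens to miss falsifies the correctness hypothesis.

```python
# A deliberately buggy absolute-value function (hypothetical example):
# it is wrong on exactly one input, which a finite test suite may never try.
def buggy_abs(x):
    if x == -7:               # hidden fault on a single input
        return x
    return x if x >= 0 else -x

# A finite test suite: every case passes, but this shows only the absence of
# *detected* bugs, not the absence of bugs.
suite = [0, 1, -1, 5, -5]
suite_passes = all(buggy_abs(x) == abs(x) for x in suite)   # True

# The very next input tried may falsify the hypothesis that the program is
# correct with respect to its specification.
falsified = buggy_abs(-7) != abs(-7)                        # True
```

The passing suite and the single falsifying input together illustrate why testing can at best corroborate, and at worst refute, the correctness hypothesis.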

However, other methodological and epistemological traits characterizing scientific experiments are not shared by software tests. A first methodological distinction can be recognized in that a falsifying test leads to the revision of the computational system, not of the hypothesis, as is the case when testing scientific hypotheses. This is due to the difference in the intentional stance of specifications and empirical hypotheses in science (Turner 2011). Specifications are requirements whose violation demands program revisions until the program becomes a correct instantiation of the specifications.

For this, among other reasons, the traditional notion of scientific experiment needs to be ‘stretched’ in order to be applied to software testing activities (Schiaffonati 2015). Theory-driven experiments, characterizing most of the experimental sciences, find no counterpart in actual computer science practice. If one excludes the cases wherein testing is combined with formal methods, most experiments performed by software engineers are rather explorative, i.e. aimed at ‘exploring’ “the realm of possibilities pertaining to the functioning of an artefact and its interaction with the environment in the absence of a proper theory or theoretical background” (Schiaffonati 2015: 662). Software testers often do not have theoretical control over the experiments they perform; rather, exploration of the behaviors of the computational system interacting with users and environments allows testers to formulate theoretical generalizations on the observed behaviors. Explorative experiments in computer science are also characterized by the fact that programs are often tested in a real-like environment wherein testers play the role of users. However, it is an essential feature of theory-driven experiments that experimenters do not take part in the experiment to be carried out.

As a result, while some software testing activities are closer to the experimental activities one finds in empirical sciences, some others rather define a new typology of experiment that turns out to belong to the software development process. Five typologies of experiments can be distinguished in the process of specifying, implementing, and evaluating computational systems (Tedre 2015):

  • feasibility experiments are performed to evaluate whether a system performs the functions specified by users and stakeholders;
  • trial experiments are carried out to evaluate isolated capabilities of the system given some set of initial conditions;
  • field experiments are performed in real environments and not in simulated ones;
  • comparison experiments test similar systems, instantiating the same function in different ways, to evaluate which instantiation better performs the desired function both in real-like and real environments;
  • finally, controlled experiments are used to appraise advanced hypotheses on the behaviors of the computational system under test and are the only ones on a par with scientific theory-driven experiments, in that they are carried out on the basis of some theoretical hypotheses under evaluation.

6.3 Explanation

A software test is considered successful when miscomputations are detected (assuming that no computational artifact is 100% correct). The successive step is to find out what caused the execution to be incorrect, that is, to trace back the fault (more familiarly named ‘bug’), before proceeding to the debugging phase and then testing the system again. In other words, an explanation of the observed miscomputation is to be advanced.

Efforts have been made to consider explanations in computer science (Piccinini 2007; Piccinini and Craver 2011; Piccinini 2015; Angius and Tamburrini 2016) in relation to the different models of explanation elaborated in the philosophy of science. In particular, computational explanations can be understood as a specific kind of mechanistic explanation (Glennan 1996; Machamer et al. 2000; Bechtel and Abrahamsen 2005), insofar as computing processes can be analyzed as mechanisms (Piccinini 2007, 2015; see also the entry on computation in physical systems).

Consider a processor executing an instruction. The involved process can be understood as a mechanism whose components are states and combinatory elements in the processor instantiating the functions prescribed by the relevant hardware specifications (specifications for registers, for the Arithmetic Logic Unit, etc.), organized in such a way that they are capable of carrying out the observed execution. Providing the description of such a mechanism counts as advancing a mechanist explanation of the observed computation, such as the explanation of an operational malfunction.

For every type of miscomputation (see §7.3), a corresponding mechanist explanation can be defined at the adequate LoA and with respect to the set of specifications characterizing that LoA. Indeed, abstract descriptions of mechanisms still supply one with a mechanist explanation in the form of a mechanism schema, defined as “a truncated abstract description of a mechanism that can be filled with descriptions of known component parts and activities” (Machamer et al. 2000, p. 15). For instance, consider the very common case in which a machine miscomputes by executing a program containing syntax errors, called slips. The computing machine is unable to correctly implement the functional requirements provided by the program specifications. However, for explanatory purposes, it would be redundant to provide an explanation of the occurred slip at the hardware level of abstraction, by advancing the detailed description of the hardware components and their functional organization. In such cases, a satisfactory explanation may consist in showing that the program’s code is not a correct instantiation of the provided program specifications (Angius and Tamburrini 2016). In order to explain mechanistically an occurred miscomputation, it may be sufficient to provide the description of the incorrect program, abstracting from the rest of the computing mechanism (Piccinini and Craver 2011). Abstraction is a virtue not only in software development and specification, but also in the explanation of computational systems’ behaviors.

7. Correctness

Each of the different approaches to software verification examined in the previous section assumes a different understanding of correctness for software. Standardly, correctness has been understood as a relation holding between an abstraction and its implementation, such that it holds if the latter fulfills the properties formulated by the former. Once computational systems are described as having a layered ontology, correctness needs to be reformulated as the relation that any structural level entertains with respect to its functional level (Primiero 2020). Hence, correctness can still be considered as a mathematical relationship when formulated between the abstract and functional levels, while it can be considered as an empirical relationship when formulated between the functional and implementation levels. One of the earlier debates in the philosophy of computer science (De Millo et al. 1979; Fetzer 1988) was indeed around this distinction.

7.1 Mathematical Correctness

Formal verification methods grant an a-priori analysis of the behaviors of programs, without requiring the observation of any of their implementations or considering their execution. In particular, theorem proving allows one to deduce any potential behavior of the program under consideration and its behavioral properties from a suitable axiomatic representation. In the case of model checking, one knows in advance the behavioural properties displayed by the execution of a program by performing an algorithmic search of the formulas valid in a given set-theoretic model. These considerations famously led Hoare (1969) to conclude that program development is an “exact science”, which should be characterized by mathematical proofs of correctness, epistemologically on a par with standard proofs in mathematical practice.

De Millo et al. (1979) question Hoare’s thesis: correct mathematical proofs are usually elegant and graspable, implying that any (expert) reader can “see” that the conclusion follows from the premises (for the notion of elegance in software see also Hill 2018). What are often called Cartesian proofs (Hacking 2014) have no counterpart in correctness proofs, which are typically long and cumbersome, difficult to grasp, and do not explain why the conclusion necessarily follows from the premises. Yet, many proofs in mathematics are long and complex, but they are in principle surveyable, thanks to the use of lemmas, abstractions, and the analytic construction of new concepts leading step by step to the statement to be proved. Correctness proofs, on the contrary, do not involve the creation of new concepts, nor the modularity one typically finds in mathematical proofs (Turner 2018). And yet, proofs that are not surveyable cannot be considered mathematical proofs (Wittgenstein 1956).

A second theoretical difficulty concerning proofs of correctness for computer programs concerns their complexity and that of the programs to be verified. Already Hoare (1981) admitted that while verification of correctness is always possible in principle, in practice it is hardly achievable. Except for trivial cases, contemporary software is modularly encoded, is required to satisfy a large set of specifications, and is developed so as to interact with other programs, systems, and users. Embedded and reactive software are cases in point. In order to verify such complex software, correctness proofs are carried out automatically. Hence, on the one hand, the correctness problem shifts from the program under examination to the program performing the verification, e.g. a theorem prover; on the other hand, proofs carried out by a physical process can go wrong, due to mechanical mistakes of the machine. Against this infinite regress argument, Arkoudas and Bringsjord (2007) argue that one can make use of a proof checker which, being a relatively small program, is usually easier to verify.

Most recently, formal methods for checking correctness based on a combination of logical and statistical analysis have given new stimulus to this research area: the ability of Separation Logics (Reynolds 2002) to offer a representation of the logical behavior of the physical memory of computational systems, and the possibility of considering probabilistic distributions of inputs as a statistical source of errors, have allowed formal correctness checks of large interactive systems like the Facebook platform (see also Pym et al. 2019).

7.2 Physical Correctness

Fetzer (1988) objected that deductive reasoning is only able to guarantee the correctness of a program with respect to its specifications, but not the correctness of a computational system, which also accounts for the program’s physical implementation. Even if the program were correct with respect to any of the related upper LoAs (algorithms, specifications, requirements), its implementation could still violate one or more of the intended specifications due to a physical malfunctioning. The former kind of correctness can in principle be proved mathematically, but the correctness of the execution LoA requires an empirical assessment. As examined in §6.2, software testing can show only in principle the correctness of a computational system. In practice, the number of allowed executions of non-trivial systems is potentially infinite and cannot be exhaustively checked in a finite (or reasonable) amount of time (Dijkstra 1974). The most successful testing methods rather see both formal verification and testing used together to reach a satisfactory correctness level.

Another objection to the theoretical possibility of mathematical correctness is that, since proofs are carried out by a theorem prover, i.e. a physical machine, the knowledge one attains about computational systems is not a-priori but empirical (see Turner 2018, ch. 25). However, Burge (1988) argues that computer-based proofs of correctness can still be regarded as a-priori, in that even though their possibility depends on sensory experience, their justification does not (as it does for a-posteriori knowledge). For instance, the knowledge that red is a color is a-priori even though it requires having sensory experience of red; this is because ‘red is a colour’ is true independently of any sensory experience. For further discussion on the nature of the use of computers in mathematical proofs, see (Hales 2008; Harrison 2008; Tymoczko 1979, 1980).

The problem of correctness eventually reduces to asking what it means for a physical machine to satisfy an abstract requirement. According to the simple mapping account, a computational system S is a correct implementation of specification SP only if:

  1. a morphism can be established from the states ascribed to S to the states defined by SP, and
  2. for any state transition \(s_1 \rightarrow s_2\) in S there is a state transition \(s'_1 \rightarrow s'_2\) in SP between state \(s'_1\) mapping to \(s_1\) and state \(s'_2\) mapping to \(s_2\).

The simple mapping account only demands an extensional agreement between the descriptions of S and SP. The weakness of this account is that it is quite easy to identify an extensional agreement between any pair of physical system and specification, leaving room for a pancomputationalist perspective (Anderson and Piccinini 2024).
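
The two conditions of the simple mapping account can be rendered as a small check. This is an illustrative encoding only: the state names, the direction of the mapping (from states of S to states of SP), and the `simple_mapping_holds` helper are all invented to make the extensional idea concrete.

```python
# Sketch of the simple mapping account (illustrative encoding): under a
# mapping m from the states of S to the states of SP, S counts as a correct
# implementation of SP just in case every transition s1 -> s2 of S is
# mirrored by a transition m(s1) -> m(s2) of SP.

def simple_mapping_holds(sys_transitions, spec_transitions, m):
    return all((m[s1], m[s2]) in spec_transitions
               for (s1, s2) in sys_transitions)

# Physical states p0..p2 taken to implement an abstract two-state toggle.
S = {("p0", "p1"), ("p1", "p2"), ("p2", "p1")}
SP = {("off", "on"), ("on", "off")}
m = {"p0": "off", "p1": "on", "p2": "off"}
# Holds: p0->p1 maps to off->on, p1->p2 to on->off, p2->p1 to off->on.
```

The check is purely extensional, which is exactly the weakness noted above: for almost any physical system one can gerrymander some mapping m that makes the check succeed.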

The danger of pancomputationalism has led some authors to attempt an account of correct implementation that somehow restricts the class of possible interpretations. In particular,

  1. The causal account (D. J. Chalmers 1996; Copeland 1996) suggests that the material conditional (if the system is in the physical state \(s_1\) …) is replaced by a counterfactual one.
  2. The semantic account argues that a computational system must be associated with a semantic description, specifying what the system is to achieve (Sprevak 2012). For example, a physical device could be interpreted as an AND gate or an OR gate, but without a definition of the device there is no way of fixing what the artifact is.
  3. The syntactic account demands that only physical states that can be defined as syntactic be mapped onto computational states. What remains to be examined is what defines a syntactic state (see Piccinini 2015 or the entry on computation in physical systems for an overview of the syntactic account).
  4. The normative account (Turner 2012) maintains not only that abstract and physical computational processes must be in agreement, but also that the abstract specification has normative force over the system. According to such an account, computations are physical processes whose function is fixed by an abstract specification. This relationship is stronger than both the semantic account, asking for a simple descriptive relationship, and the syntactic account, focusing on a syntactic object and its semantic interpretation.
  5. The robust account (Anderson and Piccinini 2024) argues that a system is computational just in case it complies with a robust computational description, namely when the mappings between physical states and computational states are stable and reliable. Mappings are stable when they are resilient in the face of small perturbations of the physical system, and reliable when physical states are mapped to computational states bearing the same computational information.

7.3 Miscomputations

From what has been said so far, it follows that the correctness of implemented programs does not automatically establish the well-functioning of a computational system. Turing (1950) already distinguished between errors of functioning and errors of conclusion. The former are caused by a faulty implementation unable to execute the instructions of some high-level language program; errors of conclusion characterize correct abstract machines that nonetheless fail to carry out the tasks they were supposed to accomplish. This may happen in those cases in which a program correctly instantiates some specifications which do not properly express the users’ requirements on such a program. In both cases, machines implementing correct programs can still be said to miscompute.

Turing’s distinction between errors of functioning and errors of conclusion has been expanded into a complete taxonomy of miscomputations (Fresco and Primiero 2013). The classification is established on the basis of the different LoAs defining computational systems. Errors can be:

  • conceptual: they violate validity conditions requiring consistency for specifications expressed in propositional conjunctive normal form;
  • material: they violate the correctness requirements of programs with respect to the set of their specifications;
  • performable: they arise when physical constraints are breached by some faulty implementing hardware.

Performable errors clearly emerge only at the execution level, and they correspond to Turing’s (1950) errors of functioning, also called operational malfunctions. Conceptual and material errors may arise at any level of abstraction, from the intention level down to the physical implementation level. Conceptual errors engender mistakes, while material errors induce failures. For instance, a mistake at the intention level consists of an inconsistent set of requirements, while at the physical implementation level it may correspond to an invalid hardware design (such as in the choice of the logic gates for the truth-functional connectives). Failures occurring at the specification level may be due to a design that is deemed to be incomplete with respect to the set of desired functional requirements, while a failure at the algorithm level occurs in those frequent cases in which the algorithm is found not to fulfill the specifications. Beyond mistakes, failures, and operational malfunctions, slips are a source of miscomputations at the level of high-level programming language instructions: they may be conceptual or material errors due to, respectively, a syntactic or a semantic flaw in the program. Conceptual slips appear in all those cases in which the syntactical rules of high-level languages are violated; material slips involve the violation of semantic rules of programming languages, such as when a variable is used but not initialized.
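
The distinction between conceptual and material slips can be illustrated in Python terms. This is an illustrative transposition only: in Python, a syntactic violation is rejected before the program ever runs, while the “variable used but not initialized” flaw surfaces at run time as an `UnboundLocalError` rather than as a compile-time semantic error.

```python
# Conceptual vs material slips, transposed into Python for illustration.
conceptual_slip = material_slip = False

# Conceptual slip: a violation of the language's syntactic rules; the
# malformed source is rejected before execution.
try:
    compile("def f(:", "<example>", "exec")
except SyntaxError:
    conceptual_slip = True

# Material slip: syntactically well-formed but semantically flawed code;
# the variable `total` is used before any value is assigned to it.
def sum_to(n):
    for i in range(n):
        total = total + i   # total is never initialized
    return total

try:
    sum_to(3)
except UnboundLocalError:
    material_slip = True
```

In both cases the program text fails to be a correct instantiation of the programmer’s intentions, but at different LoAs: the first never reaches the execution level at all, while the second miscomputes only when run.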

A further distinction has to be made between dysfunctions and misfunctions for software-based computational systems (Floridi, Fresco and Primiero 2015). Software can only misfunction but cannot ever dysfunction. A software token can dysfunction in case its physical implementation fails to satisfy intentions or specifications. Dysfunctions only apply to single tokens, since a token dysfunctions in that it does not behave as the other tokens of the same type do with respect to the implemented functions. For this reason, dysfunctions do not apply to the intention level and the specification level. On the contrary, both software types and tokens can misfunction, since misfunctions do not depend on comparisons with tokens of the same type being able to perform some implemented function or not. Misfunction of tokens usually depends on the dysfunction of some other component, while misfunction of types is often due to poor design. A software token cannot dysfunction, because all tokens of a given type implement functions specified uniformly at the intention and specification levels. Those functions are implemented at the algorithm implementation level before being performed at the execution level; in case of correct implementation, all tokens will behave correctly at the execution level (provided that no operational malfunction occurs). For the very same reason, software tokens cannot misfunction, since they are implementations of the same intentions and specifications. Only software types can misfunction in case of poor design; misfunctioning software types are able to correctly perform their functions but may also produce some undesired side-effect. For the application of the notion of malfunctioning to the problem of malware classification, see (Primiero et al. 2019).

7.4 Pragmatic Correctness

Buda and Primiero (2024) highlight how the ontology of LoAs does not account for pragmatic and contextual aspects, omitting from the definition of computational artifacts the role of users. Accordingly, the notions of mathematical and physical correctness, as well as the associated definition of miscomputation based on the normative relation between a specification and its implementation, omit the regulative role of designers, developers, end users, policy makers, or sellers of computational systems. Privacy or copyright laws, for instance, exert normative power on what can or cannot be developed, independently of what can be logically and physically implemented from a given set of specifications; specifications themselves are often the object of revision during the software life-cycle, due to new requirements or to miscomputations isolated during usage; the choice of a given programming language may be dictated by hardware requirements, and it may itself influence the way a specification is formulated; new specifications may even emerge from previous incorrect design or uses, as in the definition of the “resend” function of the GMAIL service.

Including these necessary pragmatic aspects into the ontology of computational artifacts requires an appropriate revision of the relation of implementation and of the correctness property. To this aim, Buda and Primiero (2024) introduce a User Levels Model that associates to each LoA of a computational system one or more User Levels, defining stakeholders who pragmatically contribute to determining the use of the system and its correctness criteria. Implementation becomes a correspondence between requested use and modes of use of a computational system at each user level, i.e., by each set of stakeholders. Correctness is achieved when each mode and request of use at each user level is consistent with the specification, either conforming with it or being at least compatible with it when newly added, in turn becoming correct use.
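The gist of this consistency requirement can be sketched as follows (a purely illustrative toy model; Buda and Primiero 2024 give the actual definitions, and the stakeholder names and modes of use below are hypothetical):

```python
# Illustrative sketch of the idea behind a User Levels Model: each
# user level groups stakeholders with their modes of use, and
# pragmatic correctness requires every mode of use at every user
# level to conform with the specification.

specification = {"send", "resend"}  # admissible modes of use

user_levels = {                     # hypothetical stakeholders
    "end_user":  {"send", "resend"},
    "developer": {"send"},
}

def pragmatically_correct(spec, levels):
    """Every mode of use, at every user level, must conform with
    (here, simply: be contained in) the specification."""
    return all(mode in spec for modes in levels.values() for mode in modes)

assert pragmatically_correct(specification, user_levels)

# A mode of use outside the specification breaks pragmatic
# correctness until the specification is revised to accommodate it.
user_levels["end_user"].add("bulk_forward")
assert not pragmatically_correct(specification, user_levels)
```

The last two lines mirror the point about emerging specifications: a new use is at first inconsistent, and becomes correct use only once the specification is extended to be compatible with it.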

8. The Epistemological Status of Computer Science

Between the 1960s and the 1970s, computer science emerged as an academic discipline independent from its older siblings, mathematics and physics, and with it arose the problem of defining its epistemological status as influenced by mathematical, empirical, and engineering methods (Tedre and Sutinen 2008, Tedre 2011, Tedre 2015, Primiero 2020). A debate is still ongoing today concerning whether computer science has to be considered mostly as a mathematical discipline, a branch of engineering, or as a scientific discipline.

8.1 Computer Science as a Mathematical Discipline

Any epistemological characterization of computer science is based on ontological, methodological, and epistemological commitments, namely on assumptions about the nature of computational systems, the methods guiding the software development process, and the kind of reasoning thereby involved, whether deductive, inductive, or a combination of both (Eden 2007).

The analysis of computation as a mathematical notion famously originated in logic, with Hilbert’s question concerning the decidability of predicate calculus, known as the Entscheidungsproblem (Hilbert and Ackermann 1950): could there be a mechanical procedure for deciding of an arbitrary sentence of logic whether it is provable? To address this question, a rigorous model of the informal concept of an effective or mechanical method in logic and mathematics was required. This is first and foremost a mathematical endeavor: one has to develop a mathematical analogue of the informal notion. Supporters of the view that computer science is mathematical in nature assume that a computer program can be seen as a physical realization of such a mathematical entity and that one can reason about programs deductively through the formal methods of theoretical computer science. Dijkstra (1974) and Hoare (1986) were very explicit in considering programs’ instructions as mathematical sentences, and in providing a formal semantics for programming languages in terms of an axiomatic system (Hoare 1969). Provided that program specifications and instructions are expressed in the same formal language, formal semantics provides the means to prove correctness. Accordingly, knowledge about the behaviors of computational systems is acquired by the deductive reasoning involved in mathematical proofs of correctness. The reason behind such rationalist optimism (Eden 2007) about what can be known about computational systems is that they are artifacts, that is, human-made systems and, as such, one can predict their behaviors with certainty (Knuth 1974).
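As a minimal illustration of the axiomatic approach, Hoare's assignment axiom licenses triples of the form below; the instance on the right is a one-line deductive correctness proof of the kind this tradition takes as paradigmatic:

```latex
% Hoare's assignment axiom: the precondition is the postcondition
% with the assigned expression E substituted for the variable x.
\{P[E/x]\}\; x := E \;\{P\}
\qquad\text{for instance}\qquad
\{y + 1 > 0\}\; x := y + 1 \;\{x > 0\}
```

Here correctness of the assignment with respect to the postcondition \(x > 0\) is established purely by substitution, with no execution or testing involved.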

Although a central concern of theoretical computer science, the topics of computability and complexity are covered in existing entries on the Church-Turing thesis, computational complexity theory, and recursive functions.

8.2 Computer Science as an Engineering Discipline

In the late 1970s, the increasing number of applications of computational systems in everyday contexts, and the consequent booming of market demands, caused a shift of interest for computer scientists in academia and in industry: from focusing on methods of proving programs’ correctness, they turned to methods for managing complexity and evaluating the reliability of those systems (Wegner 1976). Indeed, expressing formally the specifications, structure, and input of highly complex programs embedded in larger systems and interacting with users is practically impossible, and hence providing mathematical proofs of their correctness becomes mostly unfeasible. Computer science research developed in the direction of testing techniques able to provide a statistical evaluation of correctness, often called reliability (Littlewood and Strigini 2000), in terms of estimation of error distributions in a program’s code.
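One simple way such a statistical evaluation can work is sketched below. This is a hedged illustration in the spirit of reliability estimation, not a method from the cited paper: if n independent, operationally representative tests all pass, one can bound the per-run failure probability at a chosen confidence level.

```python
# Hedged sketch of a statistical correctness estimate.  If the true
# per-run failure probability is p, the chance of n independent tests
# all passing is (1 - p)**n; setting this equal to alpha yields a
# (1 - alpha) upper confidence bound on p after n failure-free tests.

def failure_bound(n_passed: int, alpha: float = 0.05) -> float:
    """Upper (1 - alpha) confidence bound on the per-run failure
    probability after n_passed failure-free tests."""
    return 1.0 - alpha ** (1.0 / n_passed)

# 1000 failure-free tests: 95% confidence that the failure
# probability per run is below roughly 0.3%.
bound = failure_bound(1000)
assert 0.0029 < bound < 0.0031
```

Note what such an estimate cannot deliver: it bounds the probability of failure rather than excluding faults, which is exactly the contrast with deductive proofs of correctness drawn above.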

In line with this engineering account of computer science is the thesis that the reliability of computational systems should be evaluated in the same way that civil engineering evaluates bridges and aerospace engineering evaluates airplanes (De Millo et al. 1979). In particular, whereas empirical sciences examine what exists, computer science focuses on what can exist, i.e., on how to produce artifacts, and it should therefore be acknowledged as an “engineering of mathematics” (Hartmanis 1981). Similarly, whereas scientific inquiries are involved in discovering laws concerning the phenomena under observation, one cannot identify proper laws in computer science practice, insofar as the latter is rather involved in the production of phenomena concerning computational artifacts (Brooks 1996).

8.3 Computer Science as a Scientific Discipline

As examined in §6, because software testing and reliability measuring techniques are known to be incapable of assuring the absence of code faults (Dijkstra 1970), in many cases, and especially for the evaluation of so-called safety-critical systems (such as controllers of airplanes, rockets, nuclear plants, etc.), a combination of formal methods and empirical testing is used to evaluate correctness and dependability. Computer science can accordingly be understood as a scientific discipline, in that it makes use of both deductive and inductive probabilistic reasoning to examine computational systems (Denning et al. 1981; Denning 2005, 2007; Tichy 1998; Colburn 2000).

The thesis that computer science is, from a methodological viewpoint, on a par with empirical sciences traces back to Newell, Perlis, and Simon’s 1967 letter to Science (Newell et al. 1967) and dominated the 1980s (Wegner 1976). In their 1975 Turing Award lecture, Newell and Simon argued:

Computer science is an empirical discipline. We would have called it an experimental science, but like astronomy, economics, and geology, some of its unique forms of observation and experience do not fit a narrow stereotype of the experimental method. Nonetheless, they are experiments. Each new machine that is built is an experiment. Actually constructing the machine poses a question to nature; and we listen for the answer by observing the machine in operation and analyzing it by all analytical and measurement means available. (Newell and Simon 1976, p. 114)

Since Newell and Simon’s Turing Award lecture, it has been clear that computer science can be understood as an empirical science, but of a special sort, and this is related to the nature of experiments in computing. Indeed, much current debate on the epistemological status of computer science concerns the problem of defining what kind of science it is (Tedre 2011, Tedre 2015) and, in particular, the nature of experiments in computer science (Schiaffonati and Verdicchio 2014), the nature, if any, of laws and theorems in computing (Hartmanis 1993; Rombach and Seelisch 2008), and the methodological relation between computer science and software engineering (Gruner 2011).

Bibliography

  • Abramsky, Samson & Guy McCusker, 1995, “Games and Full Abstraction for the Lazy \(\lambda\)-Calculus”, in D. Kozen (ed.), Tenth Annual IEEE Symposium on Logic in Computer Science, IEEE Computer Society Press, pp. 234–43. doi:10.1109/LICS.1995.523259
  • Abramsky, Samson, Pasquale Malacaria, & Radha Jagadeesan, 1994, “Full Abstraction for PCF”, in M. Hagiya & J.C. Mitchell (eds.), Theoretical Aspects of Computer Software: International Symposium TACS ’94, Sendai, Japan, April 19–22, 1994, Springer-Verlag, pp. 1–15.
  • Abrial, Jean-Raymond, 1996, The B-Book: Assigning Programs to Meanings, Cambridge: Cambridge University Press.
  • Alama, Jesse, 2015, “The Lambda Calculus”, The Stanford Encyclopedia of Philosophy (Spring 2015 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/spr2015/entries/lambda-calculus/>.
  • Allen, Robert J., 1997, A Formal Approach to Software Architecture, Ph.D. Thesis, Computer Science, Carnegie Mellon University. Issued as CMU Technical Report CMU-CS-97-144. [Allen 1997 available online]
  • Ammann, Paul & Jeff Offutt, 2008, Introduction to Software Testing, Cambridge: Cambridge University Press.
  • Anderson, Neal G. & Gualtiero Piccinini, 2024, The Physical Signature of Computation: A Robust Mapping Account, Oxford: Oxford University Press.
  • Angius, Nicola, 2013a, “Abstraction and Idealization in theFormal Verification of Software”,Minds and Machines,23(2): 211–226. doi:10.1007/s11023-012-9289-8
  • –––, 2013b, “Model-Based AbductiveReasoning in Automated Software Testing”,Logic Journal ofIGPL, 21(6): 931–942. doi:10.1093/jigpal/jzt006
  • –––, 2014, “The Problem of Justificationof Empirical Hypotheses in Software Testing”,Philosophy& Technology, 27(3): 423–439.doi:10.1007/s13347-014-0159-6
  • Angius, Nicola & Giuseppe Primiero, 2018, “The logic of identity and copy for computational artefacts”, Journal of Logic and Computation, 28(6): 1293–1322.
  • Angius, Nicola & Guglielmo Tamburrini, 2011, “ScientificTheories of Computational Systems in Model Checking”,Mindsand Machines, 21(2): 323–336.doi:10.1007/s11023-011-9231-5
  • –––, 2017, “Explaining engineeredcomputing systems’ behaviour: the role of abstraction andidealization”,Philosophy & Technology, 30(2):239–258.
  • Anscombe, G. E. M., 1963,Intention, second edition,Oxford: Blackwell.
  • Arkoudas, Konstantine & Selmer Bringsjord, 2007,“Computers, Justification, and Mathematical Knowledge”,Minds and Machines, 17(2): 185–202.doi:10.1007/s11023-007-9063-5
  • Arif, R., Mori, E., & Primiero, G., 2018, “Validity and Correctness before the OS: the case of LEO I and LEO II”, in Liesbeth de Mol, Giuseppe Primiero (eds.), Reflections on Programming Systems — Historical and Philosophical Aspects, Philosophical Studies Series, Cham: Springer, pp. 15–47.
  • Ashenhurst, Robert L. (ed.), 1989, “Letters in the ACMForum”,Communications of the ACM, 32(3): 287.doi:10.1145/62065.315925
  • Baier, A., 1970, “Act and Intent”,Journal ofPhilosophy, 67: 648–658.
  • Baier, Christel & Joost-Pieter Katoen, 2008,Principles ofModel Checking, Cambridge, MA: The MIT Press.
  • Bass, Len, Paul C. Clements, & Rick Kazman, 2003 [1997],Software Architecture in Practice, second edition, Reading,MA: Addison-Wesley; first edition 1997; third edition, 2012.
  • Bechtel, William & Adele Abrahamsen, 2005, “Explanation:A Mechanist Alternative”,Studies in History and Philosophyof Science Part C: Studies in History and Philosophy of Biological andBiomedical Sciences, 36(2): 421–441.doi:10.1016/j.shpsc.2005.03.010
  • Boghossian, Paul A., 1989, “The Rule-followingConsiderations”,Mind, 98(392): 507–549.doi:10.1093/mind/XCVIII.392.507
  • Bourbaki, Nicolas, 1968,Theory of Sets, Ettore MajoranaInternational Science Series, Paris: Hermann.
  • Bratman, M. E., 1987,Intention, Plans, and PracticalReason, Cambridge, MA: Harvard University Press.
  • Bridges, Douglas & Palmgren Erik, 2013, “ConstructiveMathematics”,The Stanford Encyclopedia of Philosophy(Winter 2013 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2013/entries/mathematics-constructive/>.
  • Brooks, Frederick P. Jr., 1995,The Mythical Man Month: Essayson Software Engineering, Anniversary Edition, Reading, MA:Addison-Wesley.
  • –––, 1996, “The Computer Scientist asToolsmith II”,Communications of the ACM, 39(3):61–68. doi:10.1145/227234.227243
  • Buda, Alessandro G., & Primiero, Giuseppe, 2024, “APragmatic Theory of Computational Artefacts”,Minds andMachines, 34(Suppl 1), 139–170.
  • Burge, Tyler, 1998, “Computer Proof, Apriori Knowledge, andOther Minds”,Noûs, 32(S12): 1–37.doi:10.1111/0029-4624.32.s12.1
  • Bynum, Terrell Ward, 2008, “Milestones in the History ofInformation and Computer Ethics”, in Himma and Tavani 2008:25–48. doi:10.1002/9780470281819.ch2
  • Callahan, John, Francis Schneider, & Steve Easterbrook, 1996,“Automated Software Testing Using Model-Checking”, inProceeding Spin Workshop, J.C. Gregoire, G.J. Holzmann and D.Peled (eds.), New Brunswick, NJ: Rutgers University, pp.118–127.
  • Cardelli, Luca & Peter Wegner, 1985, “On Understanding Types, Data Abstraction, and Polymorphism”, ACM Computing Surveys, 17(4): 471–522. [Cardelli and Wegner 1985 available online]
  • Carnap, R., 1966,Philosophical foundations of physics(Vol. 966), New York: Basic Books.
  • Carrara, M., Gaio, S., and Soavi, M., 2014, “Artifact kinds,identity criteria, and logical adequacy”, in M. Franssen, P.Kroes, T. Reydon and P. E. Vermaas (eds.),Artefact Kinds:Ontology and The Human-made World, New York: Springer, pp.85–101.
  • Chalmers, A. F., 1999,What is this thing calledScience?, Maidenhead: Open University Press
  • Chalmers, David J., 1996, “Does a Rock Implement EveryFinite-State Automaton?”Synthese, 108(3):309–33. [D.J. Chalmers 1996 available online] doi:10.1007/BF00413692
  • Clarke, Edmund M. Jr., Orna Grumberg, & Doron A. Peled, 1999,Model Checking, Cambridge, MA: The MIT Press.
  • Colburn, Timothy R., 1999, “Software, Abstraction, andOntology”,The Monist, 82(1): 3–19.doi:10.5840/monist19998215
  • –––, 2000,Philosophy and ComputerScience, Armonk, NY: M.E. Sharp.
  • Colburn, T. R., Fetzer, J. H. , and Rankin T. L., 1993,Program Verification: Fundamental Issues in Computer Science,Dordrecht: Kluwer Academic Publishers.
  • Colburn, Timothy & Gary Shute, 2007, “Abstraction inComputer Science”,Minds and Machines, 17(2):169–184. doi:10.1007/s11023-007-9061-7
  • Copeland, B. Jack, 1993,Artificial Intelligence: APhilosophical Introduction, San Francisco: John Wiley &Sons.
  • –––, 1996, “What is Computation?”Synthese, 108(3): 335–359. doi:10.1007/BF00413693
  • –––, 2015, “The Church-TuringThesis”,The Stanford Encyclopedia of Philosophy(Summer 2015 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/sum2015/entries/church-turing/>.
  • Copeland, B. Jack & Oron Shagrir, 2007, “PhysicalComputation: How General are Gandy’s Principles forMechanisms?”Minds and Machines, 17(2): 217–231.doi:10.1007/s11023-007-9058-2
  • –––, 2011, “Do Accelerating TuringMachines Compute the Uncomputable?”Minds and Machines,21(2): 221–239. doi:10.1007/s11023-011-9238-y
  • Cummins, Robert, 1975, “Functional Analysis”,TheJournal of Philosophy, 72(20): 741–765.doi:10.2307/2024640
  • Curtis-Trudel, André, 2020, “Implementation asresemblance”,Philosophy of Science, 88(5):1021–1032.
  • –––, 2022, “The determinacy ofcomputation”,Synthese, 200(1), 43.
  • Davidson, D., 1963, “Actions, Reasons, and Causes,”reprinted inEssays on Actions and Events, Oxford: OxfordUniversity Press, 1980, pp. 3–20.
  • –––, 1978, “Intending”, reprinted inEssays on Actions and Events, Oxford: Oxford UniversityPress, 1980, pp. 83–102.
  • De Millo, Richard A., Richard J. Lipton, & Alan J. Perlis,1979, “Social Processes and Proofs of Theorems andPrograms”,Communications of the ACM, 22(5):271–281. doi:10.1145/359104.359106
  • Denning, Peter J., 2005, “Is Computer ScienceScience?”,Communications of the ACM, 48(4):27–31. doi:10.1145/1053291.1053309
  • –––, 2007, “Computing is a NaturalScience”,Communications of the ACM, 50(7):13–18. doi:10.1145/1272516.1272529
  • Denning, Peter J., Edward A. Feigenbaum, Paul Gilmore, Anthony C.Hearn, Robert W. Ritchie, & Joseph F. Traub, 1981, “ADiscipline in Crisis”,Communications of the ACM,24(6): 370–374. doi:10.1145/358669.358682
  • Devlin, Keith, 1994,Mathematics: The Science of Patterns: TheSearch for Order in Life, Mind, and the Universe, New York: HenryHolt.
  • Dijkstra, Edsger W., 1970,Notes on StructuredProgramming, T.H.-Report 70-WSK-03, Mathematics TechnologicalUniversity Eindhoven, The Netherlands. [Dijkstra 1970 available online]
  • –––, 1974, “Programming as a Discipline of Mathematical Nature”, American Mathematical Monthly, 81(6): 608–612. [Dijkstra 1974 available online]
  • Distributed Software Engineering, 1997,The DarwinLanguage, Department of Computing, Imperial College of Science,Technology and Medicine, London. [Darwin language 1997 available online]
  • Duhem, P., 1954,The Aim and Structure of PhysicalTheory, Princeton: Princeton University Press.
  • Duijf, H., Broersen, J., and Meyer, J. J. C., 2019,“Conflicting intentions: rectifying the consistencyrequirements”,Philosophical Studies, 176(4):1097–1118.
  • Dummett, Michael A.E., 2006,Thought and Reality, Oxford:Oxford University Press.
  • Duncan, William, 2011, “Using Ontological Dependence toDistinguish between Hardware and Software”,Proceedings ofthe Society for the Study of Artificial Intelligence and Simulation ofBehavior Conference: Computing and Philosophy, University ofYork, York, UK. [Duncan 2011 available online (zip file)]
  • –––, 2017, “Ontological Distinctionsbetween Hardware and Software”,Applied Ontology,12(1): 5–32.
  • Eden, Amnon H., 2007, “Three Paradigms of ComputerScience”,Minds and Machines, 17(2): 135–167.doi:10.1007/s11023-007-9060-8
  • Egan, Frances, 1992, “Individualism, Computation, andPerceptual Content”,Mind, 101(403): 443–59.doi:10.1093/mind/101.403.443
  • Edgar, Stacey L., 2003 [1997],Morality and Machines:Perspectives on Computer Ethics, Sudbury, MA: Jones &Bartlett Learning.
  • Ferrero, L., 2017, “Intending, Acting, and Doing,”Philosophical Explorations, 20 (Supplement 2):13–39.
  • Fernández, Maribel, 2004,Programming Languages andOperational Semantics: An Introduction, London: King’sCollege Publications.
  • Fetzer, James H., 1988, “Program Verification: The VeryIdea”,Communications of the ACM, 31(9):1048–1063. doi:10.1145/48529.48530
  • –––, 1990,Artificial Intelligence: ItsScope and Limits, Dordrecht: Springer Netherlands.
  • Feynman, Richard P., 1984–1986,Feynman Lectures onComputation, Cambridge, MA: Westview Press, 2000.
  • Flanagan, Mary, Daniel C. Howe, & Helen Nissenbaum, 2008,“Embodying Values in Technology: Theory and Practice”, inInformation Technology and Moral Philosophy, Jeroen van denHoven and John Weckert (eds.), Cambridge: Cambridge University Press,pp. 322–353.
  • Floridi, Luciano, 2008, “The Method of Levels ofAbstraction”,Minds and Machines, 18(3): 303–329.doi:10.1007/s11023-008-9113-7
  • Floridi, Luciano, Nir Fresco, & Giuseppe Primiero, 2015,“On Malfunctioning Software”,Synthese, 192(4):1199–1220. doi:10.1007/s11229-014-0610-3
  • Floyd, Robert W., 1979, “The Paradigms ofProgramming”,Communications of the ACM, 22(8):455–460. doi:10.1145/1283920.1283934
  • Fowler, Martin, 2003,UML Distilled: A Brief Guide to theStandard Object Modeling Language, 3rd edition,Reading, MA: Addison-Wesley.
  • Franssen, Maarten, Gert-Jan Lokhorst, & Ibio van de Poel,2013, “Philosophy of Technology”,The StanfordEncyclopedia of Philosophy (Winter 2013 Edition), Edward N. Zalta(ed.), URL = <https://plato.stanford.edu/archives/win2013/entries/technology/>.
  • Frege, Gottlob, 1914, “Letter to Jourdain”, reprintedin Frege 1980: 78–80.
  • –––, 1980, Gottlob Frege:Philosophical andMathematical Correspondence, G. Gabriel, H. Hermes, F. Kambartel,C. Thiel, and A. Veraart (eds.), Oxford: Blackwell Publishers.
  • Fresco, Nir & Giuseppe Primiero, 2013,“Miscomputation”,Philosophy & Technology,26(3): 253–272. doi:10.1007/s13347-013-0112-0
  • Fresco, Nir, Copeland, B. Jack , & Wolf, Marty. J., 2021,“The indeterminacy of computation”,Synthese,199(5), 12753–12775.
  • Friedman, Batya & Helen Nissenbaum, 1996, “Bias inComputer Systems”,ACM Transactions on Information Systems(TOIS), 14(3): 330–347. doi:10.1145/230538.230561
  • Frigg, Roman & Stephan Hartmann, 2012, “Models inScience”,The Stanford Encyclopedia of Philosophy (Fall2012 Edition), Edward N. Zalta (ed.), URL =<https://plato.stanford.edu/archives/fall2012/entries/models-science/>.
  • Gagliardi, Francesco, 2007, “Epistemological Justificationof Test Driven Development in Agile Processes”,AgileProcesses in Software Engineering and Extreme Programming: Proceedingsof the 8th International Conference, XP 2007, Como, Italy, June18–22, 2007, Berlin: Springer Berlin Heidelberg, pp.253–256. doi:10.1007/978-3-540-73101-6_48
  • Gamma, Erich, Richard Helm, Ralph Johnson, & John Vlissides,1994,Design Patterns: Elements of Reusable Object-OrientedSoftware, Reading, MA: Addison-Wesley.
  • Glennan, Stuart S., 1996, “Mechanisms and the Nature ofCausation”,Erkenntnis, 44(1): 49–71.doi:10.1007/BF00172853
  • Glüer, Kathrin & Åsa Wikforss, 2015, “TheNormativity of Meaning and Content”,The StanfordEncyclopedia of Philosophy (Summer 2015 Edition), Edward N. Zalta(ed.), URL = <https://plato.stanford.edu/archives/sum2015/entries/meaning-normativity/>.
  • Goguen, Joseph A. & Rod M. Burstall, 1985,“Institutions: Abstract Model Theory for ComputerScience”,Report CSLI-85-30, Center for the Study ofLanguage and Information at Stanford University.
  • –––, 1992, “Institutions: Abstract ModelTheory for Specification and Programming”,Journal of theACM (JACM), 39(1): 95–146. doi:10.1145/147508.147524
  • Gordon, Michael J.C., 1979,The Denotational Description ofProgramming Languages, New York: Springer-Verlag.
  • Gotterbarn, Donald, 1991, “Computer Ethics: ResponsibilityRegained”,National Forum: The Phi Beta Kappa Journal,71(3): 26–31.
  • –––, 2001, “Informatics and ProfessionalResponsibility”,Science and Engineering Ethics, 7(2):221–230. doi:10.1007/s11948-001-0043-5
  • Gotterbarn, Donald, Keith Miller, & Simon Rogerson, 1997,“Software Engineering Code of Ethics”,InformationSociety, 40(11): 110–118. doi:10.1145/265684.265699
  • Gotterbarn, Donald & Keith W. Miller, 2009, “The Publicis the Priority: Making Decisions Using the Software Engineering Codeof Ethics”,IEEE Computer, 42(6): 66–73.doi:10.1109/MC.2009.204
  • Gruner, Stefan, 2011, “Problems for a Philosophy of SoftwareEngineering”,Minds and Machines, 21(2): 275–299.doi:10.1007/s11023-011-9234-2
  • Gunter, Carl A., 1992,Semantics of Programming Languages:Structures and Techniques, Cambridge, MA: MIT Press.
  • Gupta, Anil, 2014, “Definitions”,The StanfordEncyclopedia of Philosophy (Fall 2014 Edition), Edward N. Zalta(ed.), URL = <https://plato.stanford.edu/archives/fall2014/entries/definitions/>.
  • Gurevich, Y., 2000, “Sequential Abstract-State MachinesCapture Sequential Algorithms”,ACM Transactions onComputational Logic (TOCL), 1(1): 77–111.
  • –––, 2012, “What is an algorithm?”,inInternational conference on current trends in theory andpractice of computer science, Heidelberg, Berlin: Springer, pp.31–42.
  • Hacking, I., 2014,Why is there a Philosophy of Mathematics atall?, Cambridge: Cambridge University Press.
  • Hagar, Amit, 2007, “Quantum Algorithms: PhilosophicalLessons”,Minds and Machines, 17(2): 233–247.doi:10.1007/s11023-007-9057-3
  • Hale, Bob, 1987,Abstract Objects, Oxford: BasilBlackwell.
  • Hales, Thomas C., 2008, “Formal Proof”,Notices ofthe American Mathematical Society, 55(11): 1370–1380.
  • Hankin, Chris, 2004,An Introduction to Lambda Calculi forComputer Scientists, London: King’s CollegePublications.
  • Harrison, John, 2008, “Formal Proof—Theory andPractice”,Notices of the American MathematicalSociety, 55(11): 1395–1406.
  • Hartmanis, Juris, 1981, “Nature of Computer Science and ItsParadigms”, pp. 353–354 (in Section 1) of “QuoVadimus: Computer Science in a Decade”, J.F. Traub (ed.),Communications of the ACM, 24(6): 351–369.doi:10.1145/358669.358677
  • –––, 1993, “Some Observations About theNature of Computer Science”, inInternational Conference onFoundations of Software Technology and Theoretical ComputerScience, Springer Berlin Heidelberg, pp. 1–12.doi:10.1007/3-540-57529-4_39
  • Hayes, P. J., 1997, “What is a Computer?”,TheMonist, 80(3): 389–404.
  • Hempel, C. G., 1970, “On the ‘standardconception’ of scientific theories”,Minnesota Studiesin the Philosophy of Science, 4: 142–163.
  • Henson, Martin C., 1987,Elements of FunctionalProgramming, Oxford: Blackwell.
  • Hilbert, David, 1931, “The Grounding of Elementary NumberTheory”, reprinted in P. Mancosu (ed.), 1998,From Brouwerto Hilbert: the Debate on the Foundations of Mathematics in the1920s, New York: Oxford University Press, pp. 266–273.
  • Hilbert, David & Wilhelm Ackermann, 1928,GrundzügeDer Theoretischen Logik, translated asPrinciples ofMathematical Logic, Lewis M. Hammond, George G. Leckie, and F.Steinhardt (trans.), New York: Chelsea, 1950.
  • Hill, R.K., 2016, “What an algorithm is”,Philosophy & Technology, 29(1): 35–59.
  • –––, 2018, “Elegance in Software”, in Liesbeth de Mol, Giuseppe Primiero (eds.), Reflections on Programming Systems — Historical and Philosophical Aspects (Philosophical Studies Series), Cham: Springer, pp. 273–286.
  • Hoare, C.A.R., 1969, “An Axiomatic Basis for ComputerProgramming”,Communications of the ACM, 12(10):576–580. doi:10.1145/363235.363259
  • –––, 1973, “Notes on DataStructuring”, in O.J. Dahl, E.W. Dijkstra, and C.A.R. Hoare(eds.),Structured Programming, London: Academic Press, pp.83–174.
  • –––, 1981, “The Emperor’s OldClothes”,Communications of the ACM, 24(2):75–83. doi:10.1145/1283920.1283936
  • –––, 1985,Communicating SequentialProcesses, Englewood Cliffs, NJ: Prentice Hall. [Hoare 1985 available online (pdf)]
  • –––, 1986,The Mathematics of Programming:An Inaugural Lecture Delivered Before the University of Oxford on Oct.17, 1985, Oxford: Oxford University Press.
  • Hodges, Andrews, 2011, “Alan Turing”,The StanfordEncyclopedia of Philosophy (Summer 2011 Edition), Edward N. Zalta(ed.), URL = <https://plato.stanford.edu/archives/sum2011/entries/turing/>.
  • Hodges, Wilfrid, 2013, “Model Theory”,TheStanford Encyclopedia of Philosophy (Fall 2013 Edition), EdwardN. Zalta (ed.), URL = <https://plato.stanford.edu/archives/fall2013/entries/model-theory/>.
  • Hopcroft, John E. & Jeffrey D. Ullman, 1969,FormalLanguages and their Relation to Automata, Reading, MA:Addison-Wesley.
  • Hughes, Justin, 1988, “The Philosophy of IntellectualProperty”,Georgetown Law Journal, 77: 287.
  • Irmak, Nurbay, 2012, “Software is an AbstractArtifact”,Grazer Philosophische Studien, 86(1):55–72.
  • Johnson, Christopher W., 2006, “What are Emergent Propertiesand How Do They Affect the Engineering of Complex Systems”,Reliability Engineering and System Safety, 91(12):1475–1481. [Johnson 2006 available online]
  • Johnson-Laird, P. N., 1988,The Computer and the Mind: AnIntroduction to Cognitive Science, Cambridge, MA: HarvardUniversity Press.
  • Jones, Cliff B., 1990 [1986],Systematic Software DevelopmentUsing VDM, second edition, Englewood Cliffs, NJ: Prentice Hall. [Jones 1990 available online]
  • Kimppa, Kai, 2005, “Intellectual Property Rights inSoftware—Justifiable from a Liberalist Position? Free SoftwareFoundation’s Position in Comparison to John Locke’sConcept of Property”, in R.A. Spinello & H.T. Tavani (eds.),Intellectual Property Rights in a Networked World: Theory andPractice, Hershey, PA: Idea, pp. 67–82.
  • Kinsella, N. Stephan, 2001, “Against IntellectualProperty”,Journal of Libertarian Studies, 15(2):1–53.
  • Kleene, S. C., 1967,Mathematical Logic, New York:Wiley.
  • Knuth, D. E., 1973,The Art of Computer Programming,second edition, Reading, MA: Addison-Wesley.
  • –––, 1974a, “Computer Programming as anArt”,Communications of the ACM, 17(12): 667–673.doi:10.1145/1283920.1283929
  • –––, 1974b, “Computer Science and ItsRelation to Mathematics”,The American MathematicalMonthly, 81(4): 323–343.
  • –––, 1977, “Algorithms”,Scientific American, 236(4): 63–80.
  • Kripke, Saul, 1982,Wittgenstein on Rules and PrivateLanguage, Cambridge, MA: Harvard University Press.
  • Kroes, Peter, 2010, “Engineering and the Dual Nature ofTechnical Artefacts”,Cambridge Journal of Economics,34(1): 51–62. doi:10.1093/cje/bep019
  • –––, 2012,Technical Artefacts: Creations ofMind and Matter: A Philosophy of Engineering Design, Dordrecht:Springer.
  • Kroes, Peter & Anthonie Meijers, 2006, “The Dual Natureof Technical Artefacts”,Studies in History and Philosophyof Science, 37(1): 1–4.doi:10.1016/j.shpsa.2005.12.001
  • Kröger, Fred & Stephan Merz, 2008,Temporal Logicsand State Systems, Berlin: Springer.
  • Ladd, John, 1988, “Computers and Moral Responsibility: aFramework for An Ethical Analysis”, in Carol C. Gould, (ed.),The Information Web: Ethical & Social Implications of ComputerNetworking, Boulder, CO: Westview Press, pp. 207–228.
  • Landin, P.J., 1964, “The Mechanical Evaluation ofExpressions”,The Computer Journal, 6(4):308–320. doi:10.1093/comjnl/6.4.308
  • Littlewood, Bev & Lorenzo Strigini, 2000, “SoftwareReliability and Dependability: a Roadmap”,ICSE ’00Proceedings of the Conference on the Future of SoftwareEngineering, pp. 175–188. doi:10.1145/336512.336551
  • Locke, John, 1690,The Second Treatise of Government. [Locke 1690 available online]
  • Loewenheim, Ulrich, 1989, “Legal Protection for ComputerPrograms in West Germany”,Berkeley Technology LawJournal, 4(2): 187–215. [Loewenheim 1989 available online] doi:10.15779/Z38Q67F
  • Long, Roderick T., 1995, “The Libertarian Case AgainstIntellectual Property Rights”,Formulations, Autumn,Free Nation Foundation.
  • Loui, Michael C. & Keith W. Miller, 2008, “Ethics andProfessional Responsibility in Computing”,WileyEncyclopedia of Computer Science and Engineering, Benjamin Wah(ed.), John Wiley & Sons. [Loui and Miller 2008 available online]
  • Lowe, E. J., 1998,The Possibility of Metaphysics: Substance,Identity, and Time, Oxford: Clarendon Press.
  • Luckham, David C., 1998, “Rapide: A Language and Toolset forCausal Event Modeling of Distributed System Architectures”, inY. Masunaga, T. Katayama, and M. Tsukamoto (eds.),WorldwideComputing and its Applications, WWCA’98, Berlin: Springer,pp. 88–96. doi:10.1007/3-540-64216-1_42
  • Machamer, Peter K., Lindley Darden, & Carl F. Craver, 2000,“Thinking About Mechanisms”,Philosophy ofScience, 67(1): 1–25. doi:10.1086/392759
  • Magee, Jeff, Naranker Dulay, Susan Eisenbach, & Jeff Kramer,1995, “Specifying Distributed Software Architectures”,Proceedings of 5th European Software Engineering Conference (ESEC95), Berlin: Springer-Verlag, pp. 137–153.
  • Markov, A., 1954, “Theory of algorithms”, Tr. Mat.Inst. Steklov 42, pp. 1–14. trans. by Edwin Hewitt inAmerican Mathematical Society Translations, Series 2, Vol. 15(1960).
  • Martin-Löf, Per, 1982, “Constructive Mathematics andComputer Programming”, inLogic, Methodology and Philosophyof Science (Volume VI: 1979), Amsterdam: North-Holland, pp.153–175.
  • McGettrick, Andrew, 1980,The Definition of ProgrammingLanguages, Cambridge: Cambridge University Press.
  • McLaughlin, Peter, 2001,What Functions Explain: FunctionalExplanation and Self-Reproducing Systems, Cambridge: CambridgeUniversity Press.
  • Meijers, A.W.M., 2001, “The Relational Ontology of TechnicalArtifacts”, in P.A. Kroes and A.W.M. Meijers (eds.),TheEmpirical Turn in the Philosophy of Technology, Amsterdam:Elsevier, pp. 81–96.
  • Mitchelmore, Michael & Paul White, 2004, “Abstraction inMathematics and Mathematics Learning”, in M.J. Høines andA.B. Fuglestad (eds.),Proceedings of the 28th Conference of theInternational Group for the Psychology of Mathematics Education(Volume 3), Bergen: Programm Committee, pp. 329–336. [Mitchelmore and White 2004 available online]
  • Miller, Alexander & Crispin Wright (eds), 2002,RuleFollowing and Meaning, Montreal/Ithaca: McGill-Queen’sUniversity Press.
  • Milne, Robert & Christopher Strachey, 1976, A Theory of Programming Language Semantics, London: Chapman and Hall.
  • Milner, R., 1971, “An Algebraic Definition of Simulation between Programs”, Technical Report CS-205, Department of Computer Science, Stanford University, pp. 481–489.
  • Mitchell, John C., 2003, Concepts in Programming Languages, Cambridge: Cambridge University Press.
  • Monin, Jean-François, 2003, Understanding Formal Methods, Michael G. Hinchey (ed.), London: Springer (this is Monin’s translation of his own Introduction aux Méthodes Formelles, Hermes, 1996, first edition; 2000, second edition). doi:10.1007/978-1-4471-0043-0
  • Mooers, Calvin N., 1975, “Computer Software and Copyright”, ACM Computing Surveys, 7(1): 45–72. doi:10.1145/356643.356647
  • Moor, James H., 1978, “Three Myths of Computer Science”, The British Journal for the Philosophy of Science, 29(3): 213–222.
  • Morgan, C., 1994, Programming from Specifications, Englewood Cliffs: Prentice Hall. [Morgan 1994 available online]
  • Moschovakis, Y. N., 2001, “What Is an Algorithm?”, in Mathematics Unlimited—2001 and Beyond, Berlin, Heidelberg: Springer, pp. 919–936.
  • Naur, P., 1985, “Programming as Theory Building”, Microprocessing and Microprogramming, 15(5): 253–261.
  • Newell, A. & Simon, H. A., 1961, “Computer Simulation of Human Thinking”, Science, 134(3495): 2011–2017.
  • –––, 1972, Human Problem Solving, Englewood Cliffs, NJ: Prentice-Hall.
  • –––, 1976, “Computer Science as Empirical Inquiry: Symbols and Search”, Communications of the ACM, 19(3): 113–126. doi:10.1145/1283920.1283930
  • Newell, Allen, Alan J. Perlis, & Herbert A. Simon, 1967, “Computer Science”, Science, 157(3795): 1373–1374. doi:10.1126/science.157.3795.1373-b
  • Nissenbaum, Helen, 1998, “Values in the Design of Computer Systems”, Computers and Society, 28(1): 38–39.
  • Northover, Mandy, Derrick G. Kourie, Andrew Boake, Stefan Gruner, & Alan Northover, 2008, “Towards a Philosophy of Software Development: 40 Years After the Birth of Software Engineering”, Journal for General Philosophy of Science, 39(1): 85–113. doi:10.1007/s10838-008-9068-7
  • Pears, David Francis, 2006, Paradox and Platitude in Wittgenstein’s Philosophy, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199247707.001.0001
  • Piccinini, Gualtiero, 2007, “Computing Mechanisms”, Philosophy of Science, 74(4): 501–526. doi:10.1086/522851
  • –––, 2008, “Computation without Representation”, Philosophical Studies, 137(2): 206–241. [Piccinini 2008 available online] doi:10.1007/s11098-005-5385-4
  • –––, 2008, “Computers”, Pacific Philosophical Quarterly, 89: 32–73.
  • –––, 2015, Physical Computation: A Mechanistic Account, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199658855.001.0001
  • Piccinini, Gualtiero & Carl Craver, 2011, “Integrating Psychology and Neuroscience: Functional Analyses as Mechanism Sketches”, Synthese, 183(3): 283–311. doi:10.1007/s11229-011-9898-4
  • Popper, Karl R., 1959, The Logic of Scientific Discovery, London: Hutchinson.
  • Primiero, G., 2016, “Information in the Philosophy of Computer Science”, in L. Floridi (ed.), The Routledge Handbook of Philosophy of Information, London: Routledge, pp. 90–106.
  • –––, 2020, On the Foundations of Computing, New York: Oxford University Press.
  • Primiero, G., D.F. Solheim, & J.M. Spring, 2019, “On Malfunction, Mechanisms and Malware Classification”, Philosophy & Technology, 32: 339–362. doi:10.1007/s13347-018-0334-2
  • Pylyshyn, Z. W., 1984, Computation and Cognition: Towards a Foundation for Cognitive Science, Cambridge, MA: MIT Press.
  • Pym, D., J.M. Spring, & P. O’Hearn, 2019, “Why Separation Logic Works”, Philosophy & Technology, 32: 483–516.
  • Rapaport, William J., 1995, “Understanding Understanding: Syntactic Semantics and Computational Cognition”, in Tomberlin (ed.), Philosophical Perspectives, Vol. 9: AI, Connectionism, and Philosophical Psychology, Atascadero, CA: Ridgeview, pp. 49–88. [Rapaport 1995 available online] doi:10.2307/2214212
  • –––, 1999, “Implementation Is Semantic Interpretation”, The Monist, 82(1): 109–130. [Rapaport 1999 available online]
  • –––, 2005, “Implementation as Semantic Interpretation: Further Thoughts”, Journal of Experimental & Theoretical Artificial Intelligence, 17(4): 385–417. [Rapaport 2005 available online]
  • –––, 2012, “Semiotic Systems, Computers, and the Mind: How Cognition Could Be Computing”, International Journal of Signs and Semiotic Systems, 2(1): 32–71.
  • –––, 2018, “What Is a Computer? A Survey”, Minds and Machines, 28(3): 385–426.
  • –––, 2023, Philosophy of Computer Science: An Introduction to the Issues and the Literature, Hoboken, NJ: John Wiley & Sons.
  • Reynolds, J.C., 2002, “Separation Logic: A Logic for Shared Mutable Data Structures”, in Proceedings of the 17th Annual IEEE Symposium on Logic in Computer Science, IEEE, pp. 55–74.
  • Rombach, Dieter & Frank Seelisch, 2008, “Formalisms in Software Engineering: Myths Versus Empirical Facts”, in Balancing Agility and Formalism in Software Engineering, Berlin, Heidelberg: Springer, pp. 13–25. doi:10.1007/978-3-540-85279-7_2
  • Rosenberg, A., 2012, The Philosophy of Science, London: Routledge.
  • Ryle, G., 1949 [2009], The Concept of Mind, Abingdon: Routledge.
  • Schiaffonati, Viola, 2015, “Stretching the Traditional Notion of Experiment in Computing: Explorative Experiments”, Science and Engineering Ethics, 22(3): 1–19. doi:10.1007/s11948-015-9655-z
  • Schiaffonati, Viola & Mario Verdicchio, 2014, “Computing and Experiments”, Philosophy & Technology, 27(3): 359–376. doi:10.1007/s13347-013-0126-7
  • Searle, J. R., 1990, “Is the Brain a Digital Computer?”, Proceedings and Addresses of the American Philosophical Association, 64(3): 21–37.
  • Searle, John R., 1995, The Construction of Social Reality, New York: Free Press.
  • Setiya, K., 2018, “Intention”, The Stanford Encyclopedia of Philosophy (Fall 2018 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/fall2018/entries/intention/>.
  • Shanker, S.G., 1987, “Wittgenstein versus Turing on the Nature of Church’s Thesis”, Notre Dame Journal of Formal Logic, 28(4): 615–649. [Shanker 1987 available online] doi:10.1305/ndjfl/1093637650
  • Shavell, Steven & Tanguy van Ypersele, 2001, “Rewards Versus Intellectual Property Rights”, Journal of Law and Economics, 44: 525–547.
  • Skemp, Richard R., 1987, The Psychology of Learning Mathematics, Hillsdale, NJ: Lawrence Erlbaum Associates.
  • Smith, Brian Cantwell, 1985, “The Limits of Correctness in Computers”, ACM SIGCAS Computers and Society, 14–15(1–4): 18–26. doi:10.1145/379486.379512
  • Snelting, Gregor, 1998, “Paul Feyerabend and Software Technology”, Software Tools for Technology Transfer, 2(1): 1–5. doi:10.1007/s100090050013
  • Sommerville, Ian, 2016 [1982], Software Engineering, Reading, MA: Addison-Wesley; first edition, 1982.
  • Sprevak, M., 2010, “Computation, Individuation, and the Received View on Representation”, Studies in History and Philosophy of Science, 41(3): 260–270.
  • –––, 2012, “Three Challenges to Chalmers on Computational Implementation”, Journal of Cognitive Science, 13(2): 107–143.
  • Stoy, Joseph E., 1977, Denotational Semantics: The Scott-Strachey Approach to Programming Language Semantics, Cambridge, MA: MIT Press.
  • Strachey, Christopher, 2000, “Fundamental Concepts in Programming Languages”, Higher-Order and Symbolic Computation, 13(1–2): 11–49. doi:10.1023/A:1010000313106
  • Suber, Peter, 1988, “What Is Software?”, Journal of Speculative Philosophy, 2(2): 89–119. [Suber 1988 available online]
  • Suppe, Frederick, 1989, The Semantic Conception of Theories and Scientific Realism, Chicago: University of Illinois Press.
  • Suppes, Patrick, 1960, “A Comparison of the Meaning and Uses of Models in Mathematics and the Empirical Sciences”, Synthese, 12(2): 287–301. doi:10.1007/BF00485107
  • –––, 1969, “Models of Data”, in Studies in the Methodology and Foundations of Science, Dordrecht: Springer Netherlands, pp. 24–35.
  • Technical Correspondence, 1989, Communications of the ACM, 32(3): 374–381; letters from James C. Pleasant, Lawrence Paulson/Avra Cohen/Michael Gordon, William Bevier/Michael Smith/William Young, Thomas Clune, Stephen Savitzky, and James Fetzer. doi:10.1145/62065.315927
  • Tedre, Matti, 2011, “Computing as a Science: A Survey of Competing Viewpoints”, Minds and Machines, 21(3): 361–387. doi:10.1007/s11023-011-9240-4
  • –––, 2015, The Science of Computing: Shaping a Discipline, Boca Raton: CRC Press, Taylor and Francis Group.
  • Tedre, Matti & Ekki Sutinen, 2008, “Three Traditions of Computing: What Educators Should Know”, Computer Science Education, 18(3): 153–170. doi:10.1080/08993400802332332
  • Thagard, P., 1984, “Computer Programs as Psychological Theories”, in Mind, Language and Society, Vienna: Conceptus-Studien, pp. 77–84.
  • Thomasson, Amie, 2007, “Artifacts and Human Concepts”, in Eric Margolis and Stephen Laurence (eds.), Creations of the Mind: Essays on Artifacts and Their Representations, Oxford: Oxford University Press, pp. 52–73.
  • Thompson, Simon, 2011, Haskell: The Craft of Functional Programming, third edition, Reading, MA: Addison-Wesley; first edition, 1996.
  • Tichy, Walter F., 1998, “Should Computer Scientists Experiment More?”, IEEE Computer, 31(5): 32–40. doi:10.1109/2.675631
  • Turing, A.M., 1936, “On Computable Numbers, with an Application to the Entscheidungsproblem”, Proceedings of the London Mathematical Society (Series 2), 42: 230–265. doi:10.1112/plms/s2-42.1.230
  • –––, 1950, “Computing Machinery and Intelligence”, Mind, 59(236): 433–460. doi:10.1093/mind/LIX.236.433
  • Turner, Raymond, 2007, “Understanding Programming Languages”, Minds and Machines, 17(2): 203–216. doi:10.1007/s11023-007-9062-6
  • –––, 2009a, Computable Models, Berlin: Springer. doi:10.1007/978-1-84882-052-4
  • –––, 2009b, “The Meaning of Programming Languages”, APA Newsletters, 9(1): 2–7. (This APA Newsletter is available online; see the Other Internet Resources.)
  • –––, 2010, “Programming Languages as Mathematical Theories”, in J. Vallverdú (ed.), Thinking Machines and the Philosophy of Computer Science: Concepts and Principles, Hershey, PA: IGI Global, pp. 66–82.
  • –––, 2011, “Specification”, Minds and Machines, 21(2): 135–152. doi:10.1007/s11023-011-9239-x
  • –––, 2012, “Machines”, in H. Zenil (ed.), A Computable Universe: Understanding and Exploring Nature as Computation, London: World Scientific Publishing Company/Imperial College Press, pp. 63–76.
  • –––, 2014, “Programming Languages as Technical Artefacts”, Philosophy and Technology, 27(3): 377–397; first published online 2013. doi:10.1007/s13347-012-0098-z
  • –––, 2018, Computational Artifacts: Towards a Philosophy of Computer Science, Berlin, Heidelberg: Springer.
  • –––, 2020, “Computational Intention”, Studies in Logic, Grammar and Rhetoric, 63(1): 19–30.
  • –––, 2021, “Computational Abstraction”, Entropy, 23(2): 213.
  • Tymoczko, Thomas, 1979, “The Four Color Problem and Its Philosophical Significance”, The Journal of Philosophy, 76(2): 57–83. doi:10.2307/2025976
  • –––, 1980, “Computers, Proofs and Mathematicians: A Philosophical Investigation of the Four-Color Proof”, Mathematics Magazine, 53(3): 131–138.
  • Van Fraassen, Bas C., 1980, The Scientific Image, Oxford: Oxford University Press. doi:10.1093/0198244274.001.0001
  • –––, 1989, Laws and Symmetry, Oxford: Oxford University Press. doi:10.1093/0198248601.001.0001
  • Van Leeuwen, Jan (ed.), 1990, Handbook of Theoretical Computer Science, Volume B: Formal Models and Semantics, Amsterdam: Elsevier and Cambridge, MA: MIT Press.
  • Vardi, M., 2012, “What Is an Algorithm?”, Communications of the ACM, 55(3): 5. doi:10.1145/2093548.2093549
  • Vermaas, Pieter E. & Wybo Houkes, 2003, “Ascribing Functions to Technical Artifacts: A Challenge to Etiological Accounts of Function”, British Journal for the Philosophy of Science, 54: 261–289. [Vermaas and Houkes 2003 available online]
  • Vliet, Hans van, 2008, Software Engineering: Principles and Practice, 3rd edition, Hoboken, NJ: Wiley; first edition, 1993.
  • von Neumann, J., 1945, “First Draft Report on the EDVAC”, IEEE Annals of the History of Computing, 15(4): 27–75.
  • Wang, Hao, 1974, From Mathematics to Philosophy, London: Routledge, Kegan & Paul.
  • Wegner, Peter, 1976, “Research Paradigms in Computer Science”, in Proceedings of the 2nd International Conference on Software Engineering, Los Alamitos, CA: IEEE Computer Society Press, pp. 322–330.
  • White, Graham, 2003, “The Philosophy of Computer Languages”, in Luciano Floridi (ed.), The Blackwell Guide to the Philosophy of Computing and Information, Malden: Wiley-Blackwell, pp. 318–326. doi:10.1111/b.9780631229193.2003.00020.x
  • Wiener, Norbert, 1948, Cybernetics: Control and Communication in the Animal and the Machine, New York: Wiley & Sons.
  • –––, 1964, God and Golem, Inc.: A Comment on Certain Points Where Cybernetics Impinges on Religion, Cambridge, MA: MIT Press.
  • Wittgenstein, Ludwig, 1953 [2001], Philosophical Investigations, translated by G.E.M. Anscombe, 3rd edition, Oxford: Blackwell Publishing.
  • –––, 1956 [1978], Remarks on the Foundations of Mathematics, G.H. von Wright, R. Rhees, and G.E.M. Anscombe (eds.), translated by G.E.M. Anscombe, revised edition, Oxford: Basil Blackwell.
  • –––, 1939 [1975], Wittgenstein’s Lectures on the Foundations of Mathematics, Cambridge 1939, C. Diamond (ed.), Cambridge: Cambridge University Press.
  • Woodcock, Jim & Jim Davies, 1996, Using Z: Specification, Refinement, and Proof, Englewood Cliffs, NJ: Prentice Hall.
  • Wright, Crispin, 1983, Frege’s Conception of Numbers as Objects, Aberdeen: Aberdeen University Press.

Other Internet Resources

Copyright © 2025 by
Nicola Angius <nicola.angius@unime.it>
Giuseppe Primiero <giuseppe.primiero@unimi.it>
Raymond Turner <turnr@essex.ac.uk>

Open access to the SEP is made possible by a world-wide funding initiative.


The Stanford Encyclopedia of Philosophy is copyright © 2025 by The Metaphysics Research Lab, Department of Philosophy, Stanford University

Library of Congress Catalog Data: ISSN 1095-5054

