For nearly every field of study, there is a branch of philosophy, called the philosophy of that field. …Since the main purpose of a given field of study is to contribute to knowledge, the philosophy of X is, at least in part, a branch of epistemology. Its purpose is to provide an account of the goals, methodology, and subject matter of X. (Shapiro 1983: 525)
The philosophy of computer science is concerned with those philosophical issues that arise from within the academic discipline of computer science. It is intended to be the philosophical endeavor that stands to computer science as philosophy of mathematics does to mathematics and philosophy of technology does to technology. Indeed, the abstract nature of computer science, coupled with its technological ambitions, ensures that many of the conceptual questions that arise in the philosophies of mathematics and technology have computational analogues. In addition, the subject will draw in variants of some of the central questions in the philosophies of mind, language and science. We shall concentrate on a tightly related group of topics which form the spine of the subject. These include specification, implementation, semantics, programs, programming, correctness, abstraction and computation.
Computer science underpins our Facebook pages, controls air traffic around the world, and ensures that we will not be too surprised when it snows. It has been applied in algebra, car manufacturing, laser surgery, banking, gastronomy, astronomy and astrology. Indeed, it is hard to find an area of life that has not been fundamentally changed and enhanced by its application. But what is it that is applied? What are the things that give substance to such applications? The trite answer is the entities that computer scientists construct, the artifacts of computer science, computational artifacts, if you will. Much of the philosophy of the subject is concerned with their nature, specification, design and construction.
Folklore has it that computational artifacts fall into two camps: hardware and software. Presumably, software includes compilers and natural language understanding systems whereas laptops and tablets are hardware. But how is this distinction drawn: how do we delineate what we take to be software and what we take to be hardware?
A standard way identifies the distinction with the abstract/physical one (see the entry on abstract objects) where hardware is taken to be physical and software to be abstract. Unfortunately, this does not seem quite right. As Moor (1978) points out, programs, which are normally seen as software, and therefore under this characterization abstract, may also be physical devices. In particular, programs were once identified with sequences of physical lever pulls and pushes. There are different reactions to this observation. Some have suggested there is no distinction. In particular, Suber (1988) argues that hardware is a special case of software, and Moor (1978) that the distinction is ontologically insignificant. On the other hand, Duncan (2009—see Other Internet Resources) insists that there is an important difference but it is one that can only be made within an ontological framework that supports finer distinctions than the simple abstract/physical one (e.g., B. Smith 2012). Irmak (2012) also thinks that software and hardware are different: software is an abstract artifact, but apparently not a standard one since it has temporal properties.
Whether or not the software/hardware distinction can be made substantial, most writers agree that, while a program can be taken as an abstract thing, it may also be cashed out as a sequence of physical operations. Consequently, they (e.g., Colburn 2000; Moor 1978) insist that programs have a dual nature: they have both an abstract guise and a physical one. Indeed, once this is conceded, it would seem to apply to the majority of computational artifacts. On the one hand they seem to have an abstract guise which enables us to reflect and reason about them independently of any physical manifestation. This certainly applies to abstract data types (Cardelli and Wegner 1985). For example, the list abstract data type consists of the carrier type together with operations that support the formation and manipulation of lists. Even if not made explicit, these are determined by several axioms that fix their properties, e.g., if one adds an element to the head of a list to form a new list, and then removes the head, the old list is returned. Similarly, an abstract stack is determined by axioms that govern push and pop operations. Using such properties one may reason about lists and stacks in a mathematical way, independently of any concrete implementation. And one needs to. One cannot design nor program without such reasoning; one cannot construct correct programs without reasoning about what the programs are intended to do. If this is right, computational artifacts have an abstract guise that is separable from their physical realization or implementation. Indeed, this requirement to entertain abstract devices to support reasoning about physical ones is not unique to computer science. The necessity to abstract is clearly made by the physicist Duhem.
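The kind of axiomatic reasoning described here can be made concrete. The following sketch in Python (an illustrative rendering, not drawn from the literature cited) represents abstract stacks and checks the governing axioms on sample values:

```python
# A purely abstract treatment of stacks: nothing below depends on any
# physical realization; only the axioms relating push, pop and top are used.

def empty():
    """The empty stack."""
    return ()

def push(x, s):
    """The stack obtained by putting x on top of s."""
    return (x, s)

def pop(s):
    """The stack s with its top element removed."""
    if s == ():
        raise ValueError("pop on empty stack")
    return s[1]

def top(s):
    """The top element of s."""
    if s == ():
        raise ValueError("top of empty stack")
    return s[0]

# The axioms: pop(push(x, s)) == s and top(push(x, s)) == x.
s = push(2, push(1, empty()))
assert pop(push(3, s)) == s
assert top(push(3, s)) == 3
```

One may reason with these axioms alone, exactly as the text suggests, without ever asking how a stack is laid out in a physical store.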
When a physicist does an experiment, two very distinct representations of the instrument on which he is working fill his mind: one is the image of the concrete instrument that he manipulates in reality; the other is a schematic model of the same instrument, constructed with the aid of symbols supplied by theories; and it is on this ideal and symbolic instrument that he does his reasoning, and it is to it that he applies the laws and formulas of physics. A manometer, for example, is on the one hand, a series of glass tubes, solidly connected to one another, filled with a very heavy metallic liquid called mercury and on the other by the perfect fluid in mechanics, and having at each point a certain density and temperature defined by a certain equation of compressibility and expansion. (Duhem 1954: 155–156)
Wittgenstein talks about a similar notion of abstraction when he argues that in kinematics one abstracts away from actual physical properties.
In kinematics we talk of a connecting rod—not meaning a rod made of brass or steel or what-not. We use the word ‘connecting rod’ in ordinary life, but in kinematics we use it in a quite different way, although we say roughly the same things about it as we say about the real rod; that it goes forward and back, rotates, etc. But then the real rod contracts and expands, we say. What are we to say of this rod? Does it contract and expand?—And so we say it can't. But the truth is that there is no question of it contracting or expanding. It is a picture of a connecting rod, a symbol used in this symbolism for a connecting rod. And in this symbolism there is nothing which corresponds to a contraction or expansion of the connecting rod. (Wittgenstein 1975 [1939]: 198)
Much the same could be said about computers: reasoning about them requires the employment of an abstract machine that captures the computer's function. Moreover, there is no question of abstract machines overheating or exploding.
Seemingly, computational artifacts have an abstract guise that supports our reasoning about them. On the other hand, somewhere down the line, they must have a physical implementation that enables them to be used as things in the physical world. This is obviously true of machines, but it is equally so for programs: programmers write programs to control physical devices. A program or abstract machine that has no physical realization is of little use as a practical device for performing humanly intractable computations. For instance, a program that monitors heart rate must be underpinned by a physical device that actually performs the task. The computer scientist Dijkstra puts it as follows.
A programmer designs algorithms, intended for mechanical execution, intended to control existing or conceivable computer equipment. (Dijkstra 1974: 1)
On the duality view, computer science is not an abstract mathematical discipline that is independent of the physical world. To be used, these things must have physical substance. And once this observation is made there is a clear link with a central notion in the philosophy of technology (Kroes 2010; Franssen et al. 2010).
Technical artifacts include all the common objects of everyday life such as toilets, paper clips, tablets and dog collars. They are intentionally produced things. This is an essential part of being a technical artifact. For example, a physical object that accidentally carries out arithmetic is not by itself a calculator. This teleological aspect distinguishes them from other physical objects, and has led philosophers to argue that technical artifacts have a dual nature fixed by two sets of properties (e.g., Kroes 2010; Meijers 2001; Thomasson 2007; Vermaas and Houkes 2003).
Functional properties say what the artifact does. For example, a kettle is for boiling water and a car is for transportation. On the other hand, structural properties pertain to its physical makeup. They include its weight, color, size, shape, its chemical constitution, etc. In particular, we might insist that our car is red and has white seats.
The notion of a technical artifact will help to conceptualize and organize some of the central questions and issues in the philosophy of computer science. We begin with a concept that underpins much of the activity of the subject. Indeed, it is the initial expression of functional properties.
In computer science, the function of an artifact is initially laid out in a (functional) specification (Summerville 2012; Vliet 2008). Indeed, on the way to a final device, a whole series of specification/artifact pairs of varying degrees of abstractness come into existence. The activities of specification, implementation and correctness raise a collection of overlapping conceptual questions and problems (B. C. Smith 1985; Turner 2011; Franssen et al. 2010).
Specifications are expressed in a variety of ways, including ordinary vernacular. But the trend in computer science has been towards more formal and precise forms of expression. Indeed, specialized languages have been developed that range from those designed primarily for program specification (e.g., VDM, Jones 1990; Z, Woodcock and Davies 1996; B, Abrial 1996), wide spectrum languages such as UML (Fowler 2003), through to specialized ones that are aimed at architectural description (e.g., Rapide, Luckham 1998; Darwin, 1997; Wright, Allen 1997). They differ with respect to the underlying ontology and their means of articulating requirements.
Z is based upon predicate logic and set theory. It is largely employed for the specification of suites of individual program modules or simple devices. For example, consider the following definition of a machine. The machine is to hold numerical values in locations. There are three finite sets: Store, Location and Numerals that represent the store itself, the locations of the store, and the values in the locations. In addition, there are two operations (the declaration part of the schema): Lookup, which is a (partial) function from the Cartesian product of Store and Location to Numerals, and Update, which is a total function from the Cartesian product of Store, Location and Numerals to Store. These operations are governed by formal conditions that dictate the relationship between them (the predicate part of the Z schema). More specifically, they insist that if a location is updated with a new value, Lookup returns that value. Of course, any such set theoretic definition does not automatically refer to a specific physical device. That is its virtue. In order to provide its function it must abstract away from physical properties. More explicitly, it employs the abstract notion of set to introduce the update and the lookup operations as abstract set theoretic functions. As such it defines an abstract machine.
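The abstract machine defined by such a schema can be modelled executably. The sketch below in Python is illustrative only: the dictionary representation of a store, and the rendering of the operations as lookup and update, are assumptions made for the example, not part of the Z notation itself.

```python
# An executable model of the abstract store machine: a store maps
# locations to numerals. A dictionary models the store, which makes
# lookup a partial function (undefined locations raise an error).

def lookup(store, loc):
    """Partial function: Store x Location -> Numerals."""
    return store[loc]          # raises KeyError if loc holds no value

def update(store, loc, val):
    """Total function: Store x Location x Numerals -> Store."""
    new = dict(store)          # stores are treated as values: no mutation
    new[loc] = val
    return new

# The predicate part of the schema: updating a location and then
# looking it up returns the new value.
s = update({}, "l1", 7)
assert lookup(s, "l1") == 7
assert lookup(update(s, "l1", 9), "l1") == 9
```

As in the Z definition, nothing here refers to any specific physical device; the model fixes only the relationship between the two operations.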
UML (Fowler 2003) has a very rich ontology and a wide variety of expression mechanisms. For example, its class language allows the specification of software patterns (Gamma et al. 1994). In general, an architectural description language (ADL) is used to precisely specify the architecture of a software system (Bass 2003). Typically these languages employ an ontology that includes notions such as components, connectors, interfaces and configurations. In particular, architectural descriptions written in Rapide, Darwin or Wright are precise expressions in formalisms which are defined using an underlying mathematical semantics. For example, the Rapide model of computation is based on partially ordered sets of events, while Darwin's semantics is given in the Pi-calculus (Magee et al. 1995). Wright has an underlying semantics based upon the formalism known as communicating sequential processes (Hoare 1985). A specification in Rapide defines a sequence of events, and one in Wright describes an abstract process. Such expressions enable the properties of any program or system to be explored in a way that is independent of any particular implementation.
But what is the logical function of the expressions of these languages? On the face of it, they are just expressions in a formal language. However, when the underlying ontology is made explicit, each of these languages reveals itself to be a formal ontology which may be naturally cast as a type theory (Turner 2009a). And under this interpretation these expressions are stipulative definitions (Gupta 2012). As such, each defines a new abstract object within the formal ontology of its system.
However, taken by itself a definition need not be a specification of anything; it may just form part of a mathematical exploration. So when does a definition act as a specification? Presumably, just in case the definition is taken to point beyond itself to the construction of an artifact. It is the intentional act of giving governance of the definition over the properties of a device or system that turns a mere definition into a specification. The definition then determines whether or not the device or system has been correctly built. It provides the criteria of correctness and malfunction. From this perspective, the role of specification is a normative one. If one asks does the device work, it is the definition functioning as a specification that tells us whether it does. Indeed, without it the question would be moot. At all levels of abstraction, the logical role of specification is always the same: it provides a criterion for correctness and malfunction. This is the perspective argued for by Turner (2011). Indeed, this normative role is taken to be part of any general theory of function (Kroes 2012).
It should go without saying that this is an idealization. A specification is not fixed throughout the design and construction process. It may have to be changed because a client changes her mind about the requirements. Furthermore, it may turn out for a variety of reasons that the artifact is impossible to build. The underlying physical laws may prohibit matters. There may also be cost limitations that prevent construction. Indeed, the underlying definition may be logically absurd. In these cases, the current specification will have to be given up. But the central normative role of specification remains intact.
In this regard, Turner (2011: 57) observes that the logic of specification is not that of scientific theorizing.
…specifications are not intended to be a scientific theory of anything. If my computer does not meet its specification, I do not expect the manufacturer to supply me with a revised specification, which is what I would expect if it were functioning as a theory of the device. I expect to be given a computer that actually meets the original specification. With specifications it is the artefact that is correct or not; with theories it is the theory.
Observe that, unlike functional descriptions, specifications are taken to be prescribed in advance of the artifact construction; they guide the implementer. This might be taken to suggest a more substantive role for specification, i.e., to provide a methodology for the construction of the artifact. However, the method by which we arrive at the artifact is a separate issue from its specification. The latter dictates no such methodology. There is no logical difference between a functional specification and a functional description; logically they both provide a criterion of correctness.
Software is produced in a series of layers of decreasing levels of abstraction, where in the early layers both specification and artifact are abstract (Brooks 1995; Summerville 2012; Irmak 2012). For example, a specification written in logical notation might be taken to be a specification of a linguistic program. In turn, the linguistic program, with its associated semantics, might be taken as the specification of a physical device. In other words, we admit abstract entities as artifacts. This is a characteristic feature of software development (Vliet 2008). It distinguishes it from technology in general. The introduction of abstract intermediate artifacts is essential (Brooks 1995; Summerville 2012). Without them, logically complex computational artifacts would be impossible to construct.
So what happens to the duality thesis? It still holds good, but now the structural description does not necessarily provide physical properties but another abstract device. For example, an abstract stack can act as the specification of a more concrete one that is now given a structural description in a programming language as an array. But the array is itself not a physical thing; it is an abstract one. Its structural description does not use physical properties but abstract ones, i.e., axioms. Of course, eventually, the array will get implemented in a physical store. However, from the perspective of the implementer who is attempting to implement stacks in a programming language with arrays as a data type, in her consciousness, the artifact is the abstract array of the programming language. Consequently, the duality thesis must be generalized to allow for abstract artifacts.
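For illustration, the stack/array example can be written out in code. In the sketch below, a Python list stands in for the array; the class name and representation are assumptions of the example, not part of the text's argument.

```python
# A stack given a structural description in the medium of an array
# (modelled here by a Python list). The abstract stack axioms supply
# the correctness criteria against which this artifact is checked.

class ArrayStack:
    def __init__(self):
        self._items = []          # the "array" structural description

    def push(self, x):
        self._items.append(x)

    def pop(self):
        """Remove and return the top element."""
        if not self._items:
            raise ValueError("pop on empty stack")
        return self._items.pop()

# Checking the implementation against the abstract axiom that popping
# a just-pushed element returns that element and restores the stack:
s = ArrayStack()
s.push(1)
s.push(2)
before = list(s._items)
s.push(99)
assert s.pop() == 99
assert s._items == before
```

The array here is itself an abstract artifact of the programming language; only when the list is realized in a physical store does the chain of implementations reach physical bedrock.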
Exactly how the physical and intentional conceptualisations of our world are related remains a vexing problem to which the long history of the mind–body problem in philosophy testifies. This situation also affects our understanding of technical artefacts: a conceptual framework that combines the physical and intentional (functional) aspects of technical artefacts is still lacking. (Kroes and Meijers 2006: 2)
The literature on technical artefacts (e.g., Kroes 2010; Meijers 2001; Thomasson 2007; Vermaas and Houkes 2003) contains two main theories about how the two conceptualizations are related: causal role theories and intentional ones.
Causal role theories insist that actual physical capacities determine function. Cummins's theory of functional analysis (Cummins 1975) is an influential example of such a theory. The underlying intuition is that, without the physical thing, and its actual properties, there can be no artifact. The main criticism of these theories concerns the location of any correctness criteria. If all we have is the physical device, we have no independent measure of correctness (Kroes 2010): the function is fixed by what the device actually does.
Causal role theories…have the tendency to let functions coincide with actual physical capacities: structure and function become almost identical. The main drawback of this approach is that it cannot account for the malfunctioning of technical artefacts: an artefact that lacks the actual capacity for performing its intended function by definition does not have that function. The intentions associated with the artefact have become irrelevant for attributing a function. (Kroes 2010: 3)
This criticism has the same flavor as that made by Kripke in his discussion of rule following.
Actual machines can malfunction: through melting wires or slipping gears they may give the wrong answer. How is it determined when a malfunction occurs? By reference to the program of the machine, as intended by its designer, not simply by reference to the machine itself. Depending on the intent of the designer, any particular phenomenon may or may not count as a machine malfunction. A programmer with suitable intentions might even have intended to make use of the fact that wires melt or gears slip, so that a machine that is malfunctioning for me is behaving perfectly for him. Whether a machine ever malfunctions and, if so, when, is not a property of the machine itself as a physical object, but is well defined only in terms of its program, stipulated by its designer. Given the program, once again, the physical object is superfluous for the purpose of determining what function is meant. (Kripke 1982: 34)
The abstract machine provides the notion of correctness for the physical one. Without some independent expression of the function there are no notions of correctness and malfunction. In specification terms, the (physical) artifact cannot fix its own criterion of correctness. This argues against such causal theories of function and specification.
Intentional theories insist that it is agents who ascribe functions to artefacts. Objects and their components possess functions only insofar as they contribute to the realization of a goal. Good examples of this approach are McLaughlin (2001) and Searle (1995).
[t]he function of an artifact is derivative from the purpose of some agent in making or appropriating the object; it is conferred on the object by the desires and beliefs of an agent. No agent, no purpose, no function. (McLaughlin 2001: 60)
But how exactly does the function get fixed by the desires of an agent? One interpretation has it that the function is determined by the mental states of the agents, i.e., the designers and users of technical artifacts. In their crude form such theories have difficulty accounting for how they impose any constraints upon the actual thing that is the artifact.
If functions are seen primarily as patterns of mental states, on the other hand, and exist, so to speak, in the heads of the designers and users of artifacts only, then it becomes somewhat mysterious how a function relates to the physical substrate in a particular artifact. (Kroes 2010: 2)
For example, how can the mental states of an agent fix the function of a device that is intended to perform addition? This question is posed in a rather different context by Kripke.
Given … that everything in my mental history is compatible both with the conclusion that I meant plus and with the conclusion that I meant quus, it is clear that the skeptical challenge is not really an epistemological one. It purports to show that nothing in my mental history of past behavior—not even what an omniscient God would know—could establish whether I meant plus or quus. But then it appears to follow that there was no fact about me that constituted my having meant plus rather than quus. (Kripke 1982: 21)
Of course, one might also insist that the artifact is actually in accord with the specification, but this does not help if the expression of the function is only located in the mental states of an agent. This version of the intentional theory is really a special case of a causal theory where the agent's head is the physical device in which the function is located.
However, there is an alternative interpretation of the intentional approach. In his commentary on Wittgenstein's notion of acting intentionally (Wittgenstein 1953), David Pears suggests that anyone who acts intentionally must know two things. Firstly, she must know what activity she is engaged in. Secondly, she must know when she has succeeded (Pears 2006). According to this perspective, establishing correctness is an externally observable rule-based activity. The relation between the definition and the artifact is manifest in using the definition as a canon of correctness for the device. I must be able to justify my reasons for thinking that it works: if I am asked if it works I must be able to justify that it does with reference to the abstract definition. The content of the function is laid out in the abstract definition, but the intention to take it as a specification is manifest in using it as one (§2.2).
Broadly speaking, an implementation is a realization of a specification. Examples include the implementation of a UML specification in Java, the implementation of an abstract algorithm as a program in C, the implementation of an abstract data type in Miranda, or the implementation of a whole programming language. Moreover, implementation is often an indirect process that involves many stages before physical bedrock, and each level involves a specification/artifact pairing, and each involves a notion of implementation. But what is an implementation? Is there just one notion or many?
The most detailed philosophical study of implementation is given by Rapaport (1999, 2005). He argues that implementation involves two domains: a syntactic one (the abstraction) and a semantic one (the implementation). Indeed, he suggests that a full explication of the notion requires a third hidden term, a medium of implementation: I is an Implementation of A in medium M. Here I is the semantic component, A is the abstraction and M the medium of implementation. He allows for the target medium to be abstract or physical. This is in line with the claim that artifacts may be abstract or concrete.
Superficially, this seems right. In all the examples cited there is a medium of implementation in which the actual thing that is the implementation is carved out. Perhaps the clearest example is the implementation of a programming language. Here the syntactic domain is the actual language and the semantic one its interpretation on an abstract machine: the medium of interpretation. He suggests that we implement an algorithm when we express it in a computer programming language, and we implement an abstract data type when we express it as a concrete one. Examples that he does not mention might include the UML definition of design patterns implemented in Java (Gamma et al. 1994).
He further argues that there is no intrinsic difference between which of the domains is semantic and which is syntactic. This is determined by the asymmetry of the implementation mapping. For example, a physical computer process that implements a program plays the role of the semantics to the linguistic program, while the same linguistic program can play the role of semantic domain to an algorithm. This asymmetry is parallel to that of the specification/artifact connection. On the face of it there is little to cause any dissension. It is a straightforward description of the actual use of the term implementation. However, there is an additional conceptual claim that is less clear.
Apparently, the semantic domain, as its name suggests, is always taken to be a semantic representation of the syntactic one; it closes a semantic gap between the abstraction and the implementation in that the implementation fills in details. This is a referential view of semantics in that the syntactic domain refers to another domain that provides its meaning. Indeed, there is a strong tradition in computer science that takes referential or denotational semantics as fundamental (Stoy 1977; Milne and Strachey 1976; Gordon 1979). We shall examine this claim later when we consider the semantics of programming languages in more detail (§4). For the moment we are only concerned with the central role of any kind of semantics.
One view of semantics insists that it must be normative. Although the exact form of the normative constraint (Glüer and Wikforss 2009; Miller and Wright 2002) is debated, there is a good deal of agreement on a minimal requirement: a semantic account must fix what it is to use an expression correctly.
The fact that the expression means something implies that there is a whole set of normative truths about my behavior with that expression; namely, that my use of it is correct in application to certain objects and not in application to others…. The normativity of meaning turns out to be, in other words, simply a new name for the familiar fact that, regardless of whether one thinks of meaning in truth-theoretic or assertion-theoretic terms, meaningful expressions possess conditions of correct use. Kripke's insight was to realize that this observation may be converted into a condition of adequacy on theories of the determination of meaning: any proposed candidate for the property in virtue of which an expression has meaning, must be such as to ground the ‘normativity’ of meaning—it ought to be possible to read off from any alleged meaning constituting property of a word, what is the correct use of that word. (Boghossian 1989: 513)
On the assumption that this minimal requirement has to be satisfied by any adequate semantic theory, is implementation always, or even ever, semantic interpretation? Are these two notions at odds with each other?
One standard instance of implementation concerns the interpretation of one language in another. Here the abstraction and the semantic domain are both languages. Unfortunately, this does not provide a criterion of correctness unless we have already fixed the semantics of the target language. While translating between languages is taken to be implementation, indeed a paradigm case, it is not, on the present criterion, semantic interpretation. It only satisfies the correctness criterion when the target language has an independently given notion of correctness. This may be achieved in an informal or in a mathematical way. But it must not end in another uninterpreted language. So this paradigm case of implementation does not appear to satisfy the normative constraints required for semantic interpretation.
Next consider the case where the abstraction is a language and the semantic medium is set theory. This would be the case with denotational semantics (Stoy 1977). This does provide a notion of correctness. Our shared and agreed understanding of set theory provides this. Unfortunately, it would not normally be taken as an implementation. Certainly, it would not, if an implementation is something that is eventually physically realizable. Presumably, this is a necessary condition for being an implementation.
Now consider the case where the syntactic component is an abstract stack and the semantic one is an array. Here we must ask what it means to say that the implementation is correct. Does the medium of the array fix the correct use of stacks? It would seem not: the array does not provide the criteria for deciding whether we have the correct axioms for stacks or whether we have used them correctly in a particular application. Rather, the stack is providing the correctness criteria for the implementation that is the array. Instead, the axioms provide the fundamental meanings of the constructs. While the array is an implementation of the stack, it does not provide it with a notion of correctness: the cart and the horse have been interchanged.
Finally, suppose the semantic domain is a physical machine and the syntactic one is an abstract one. The suggestion is that the physical machine provides a semantic interpretation of the abstract one. But again, a semantic interpretation must provide us with a notion of correctness and malfunction, and there are compelling arguments against this that are closely related to the causal theories of function (§2.4). This issue will be more carefully examined in section (§4) where we consider programming language semantics.
Given that a semantic account of a language must supply correctness criteria, and that the term semantics is to have some bite, these are serious obstacles for the view that implementation is semantic interpretation. There are several phenomena all rolled into one. If these objections are along the right lines, then the relationship between the source and target is not semantic interpretation. Of course, one may counter all this by arguing against the correctness requirement for semantic theory.
An alternative analysis of implementation is implicit in Turner (2014, 2012). Consider the case where the data type of finite sets is implemented in the data type of lists. Each of these structures is governed by a few simple axioms. The implementation represents finite sets as lists, the union operation on sets as list concatenation, and equality between sets as extensional equality on lists, etc. This is a mathematical relationship where the axioms for sets act as a specification of the artifact, which in this case is implemented in the medium of lists. It would appear that the logical connection between the two is that of specification and artifact. The mapping does not have to be direct, i.e., there does not have to be a simple operation to operation correspondence, but the list properties of the implemented operations must satisfy the given set axioms. In standard mathematical terms, the list medium must provide a mathematical model (in the sense of model theory, W. Hodges 2013) of the set axioms. The case where one language is implemented in another is similar, and fleshed out by the semantic definitions of the two languages.
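This implementation of finite sets in the medium of lists can be sketched directly in code; the function names below are illustrative choices, not part of the analysis itself.

```python
# Finite sets implemented in the medium of lists: union becomes list
# concatenation, and set equality becomes extensional equality on
# lists (same members, with order and repetition ignored).

def empty_set():
    return []

def insert(x, s):
    return [x] + s

def member(x, s):
    return x in s

def union(s, t):
    return s + t            # union represented by concatenation

def set_eq(s, t):
    # extensional equality: the two lists have the same members
    return all(member(x, t) for x in s) and all(member(x, s) for x in t)

# The set axioms act as the specification: union is commutative and
# idempotent up to extensional equality, even though the underlying
# lists differ as lists.
a = insert(1, insert(2, empty_set()))
b = insert(2, insert(3, empty_set()))
assert set_eq(union(a, b), union(b, a))
assert set_eq(union(a, a), a)
```

The lists are the artifact; the set axioms, not the lists, supply the criterion of correctness, which is exactly the specification/artifact relationship described above.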
Finally, consider the case where the medium of implementation is aphysical device, e.g., an abstract stack is implemented as a physicalone. Once again the abstract stack must provide the correctnesscriteria for the physical device. This is what happens in practice. Wecheck that the physical operations satisfy the abstract demands givenby the axioms for stacks. There are issues here that have to dowith the adequacy of this notion of correctness. We shall discuss thesewhen we more carefully consider the computer science notion ofcorrectness (§6.4).
If this analysis is along the right lines, implementation is bestdescribed as a relation between specification and artifact.Implementation is not semantic interpretation; indeed, it requires anindependent semantic account in order to formulate a notion ofimplementation correctness. So what is taken to be semanticinterpretation in computer science?
How is a semantic account of a language to be given? What are themain conceptual issues that surround the semantic enterprise? There aremany different semantic candidates in the literature (Gordon 1979;Gunter 1992; Fernandez 2004; Milne and Strachey 1976). One of the most importantdistinctions centers upon the difference between operational anddenotational semantics (Turner 2007; White 2003).
Operational semantics began life with Landin (1964). In its logicalguise (Fernandez 2004) it provides a mechanism of evaluation where, inits simplest form, the evaluation relation is represented asfollows.
P ⇓c
This expresses the idea that the programP converges to thecanonical form given byc. This is usually calledbig step semantics. It is normally given in terms of rulesthat provide the evaluation of a complex program in terms of theevaluation of its parts. For example, a simple rule forsequencing(°) would take the form
These canonical or normal forms are other terms in the programminglanguage which cannot be further reduced by the given rules. But theyare terms of the language. For this reason, this operational approachis often said to be unsatisfactory. According to this criticism, atsome point in the interpretation process, the semantics for a formallanguage must be mathematical.
We can apparently get quite a long way expounding the properties ofa language with purely syntactic rules andtransformations… One such language is the LambdaCalculus and, as we shall see, it can be presented solely as a formalsystem with syntactic conversion rules … But wemust remember that when working like this all we are doing ismanipulating symbols-we have no idea at all of what we are talkingabout. To solve any real problem, we must give some semanticinterpretation. We must say, for example, “these symbols represent theintegers”. (Stoy 1977: 9)
In contrast, operational semantics is taken to besyntactic. In particular, even if one of them is in canonicalform, the relationP⇓c relates syntacticobjects. This does not get at what we are talking about. Unless theconstants of the language have themselves an independently givenmathematical meaning, at no point in this process do we reach semanticbedrock: we are just reducing one syntactic object to another, and thisdoes not yield a normative semantics. This leads to the demand for amore mathematicalapproach.
Apparently, programming languages refer to (or are notations for)abstract mathematical objects not syntactic ones (Strachey 2000;McGettrick 1980; Stoy 1977). In particular, denotational semanticsprovides, for each syntactic objectP, amathematicalone. Moreover, it generally does this in a compositional way: complexprograms have their denotations fixed in terms of the denotations oftheir syntactic parts. These mathematical objects might be settheoretic, category theoretic or type theoretic. But whichever methodis chosen, programs are taken to refer to abstract mathematical things.However, this position relies on a clear distinction between syntacticand mathematical objects.
One way of addressing this is via the observation that mathematicaltheories such as set theory and category theory are axiomatic theories.And it is this that makes them mathematical. This is implicit in themodern axiomatic treatment of mathematics encouraged by (Bourbaki1968) and championed by Hilbert. The following is from Hilbert'sletter to Frege of December 29, 1899 (Frege 1980: 40).
In my opinion, a concept can be fixed logically only by itsrelations to other concepts. These relations, formulated in certainstatements, I call axioms, thus arriving at the view that axioms(perhaps together with propositions assigning names to concepts) arethe definitions of the concepts. I did not think up this view because Ihad nothing better to do, but I found myself forced into it by therequirements of strictness in logical inference and in the logicalconstruction of a theory. I have become convinced that the more subtleparts of mathematics … can be treated with certainty only inthis way; otherwise one is going around in a circle….. [I]t issurely obvious that every theory is only a scaffolding or schema ofconcepts together with their necessary relations to one another, andthat the basic elements can be thought of in any way one likes. If inspeaking of my points I think of some system of things, e.g., thesystem: love, law, chimney-sweep … and then assume all my axiomsas relations between these things, then my propositions, e.g.,Pythagoras' theorem, are also valid for these things. In other words:any theory can always be applied to infinitely many systems of basicelements.
It is worth pointing out that the axiomatic account, as long as itis precise and supports mathematical reasoning, does not need to beformal. If one accepts this as a necessary condition for mathematicalstatus, does it rule out operational accounts?Prima facie itwould seem so. Apparently, programs are reduced to canonical constantswith no axiomatic definitions. But Turner (2009b, 2010) arguesthis is to look in the wrong place for the axiomatization: the latterresides not in the interpreting constants but in the rules ofevaluation i.e., in the theory of reduction given by the axiomaticrelation ⇓.
If this is right, given that both define matters axiomatically, aslong as they agree, it should not matter which we take to define thelanguage as a formal mathematical theory. Unfortunately, they don'talways agree: the notion of equality provided by the operationalaccount, although preserved by the denotational one, is often more finegrained. This has led to very special forms of denotational semanticsbased upon games (Abramsky and McCusker 1995; Abramsky et al. 1994). However,it is clear that practitioners take the operational account asfundamental, and this is witnessed by the fact that they seek to devisedenotational accounts that are in agreement with the operationalones.
Not only is there no metaphysical difference between the settheoretic account and the operational one, but the latter is taken tobe the definitive one.Thisview of programming languages is the perspective of theoreticalcomputer science: programming languages, via their operationaldefinitions, are mathematical theories of computation.
However, programming languages are very combinatorial innature. They are working tools not elegant mathematical theories;it is very hard to explore them mathematically. Does this preventthem from being mathematical theories? There has been very littlediscussion of this issue in the literature Turner (2010) andStrachey (2000) are exceptions. On the face of it, Strachey sees themas mathematical objects pure and simple. Turner is a little morecautious and argues that actual programming languages, while often toocomplex to be explored as mathematical theories, contain a core theoryof computation which may be conservatively extended to the fulllanguage.
However, Turner (2014) further argues that programminglanguages, even at their core, are not just mathematical objects. Heargues that they are best conceptualized as technical artifacts. Whiletheir axiomatic definition provides their function, they also requirean implementation. In the language of technical artifacts, a structuraldescription of the language must say how this is to be achieved: itmust spell out how the constructs of the language are to beimplemented. To illustrate the simplest case, consider the assignmentinstruction.
x :=E
A physical implementation might take the following form.
This is a description of how assignment is to be physicallyrealized. It is a physical description of the process of evaluation. Ofcourse, a complete description will spell out more, but presumably notwhat the actual machine is made of; one assumes that this would be partof the structural description of the underlying computer, the medium ofimplementation. The task of the structural description is only todescribe the process of implementation on a family of similarlystructured physical machines. Building on this, we stipulate how thecomplex constructs of the language are to be implemented. For example,to execute commands in sequence we could add a physical stack thatarranges them for processing in sequence. Of course, matters areseldom this straightforward. Constructs such as iteration and recursionrequire more sophisticated treatment. Indeed, interpretation andcompilation may involve many layers and processes. However, in the endthere must be some interpretation into the medium of a physicalmachine. Turner (2014) concludes that thing that is aprogramming language is a complex package of syntax/semantics(function) together with the implementation as structure.
Somehave suggested thata physical implementation actually defines the semantics of thelanguage. Indeed, this is a common perspective in the philosophy ofcomputer science literature. We have already seen that Rapaport (1999)sees implementation as a semantic interpretation. Fetzer (1988)observes that programs have a different semantic significance totheorems. In particular, he asserts:
…programs are supposed to possess a semantic significancethat theorems seem to lack. For the sequences of lines that compose aprogram are intended to stand for operations and procedures that can beperformed by a machine, whereas the sequences of lines that constitutea proof do not. (Fetzer 1988: 1059)
This seems to say that the physical properties of the implementationcontribute to the meaning of programs written in the language. Colburn(2000) is more explicit when he writes that the simple assignmentstatementA := 13×74 issemantically ambiguous between something like the abstract account wehave given, and the physical one given as:
physical memory locationA receives the value of physicallycomputing 13 times 74. (Colburn 2000: 134)
The phrase physically computing seems to imply that what thephysical machine actually does is semantically significant i.e., whatit actually does determines or contributes to the meaning ofassignment. Is this to be taken to imply that to fix what assignment means we have tocarry out a physical computation? However, if an actual physicalmachine is taken to contribute in any way to the meaning of theconstructs of the language, then their meaning is dependent upon thecontingencies of the physical device. In particular, the meaning of thesimple assignment statement may well vary with the physical state ofthe device and with contingencies that have nothing to with thesemantics of the language, e.g., power cuts. Under thisinterpretation, multiplication does not mean multiplication but ratherwhat the physical machine actually does when it simulatesmultiplication. This criticism parallels that for causal theories offunction (§2.4).
The nature of programs has been the subject of a good amount ofphilosophical and legal reflection. What kinds of things are they? Arethey mathematical/symbolic objects or concrete physical things? Indeed, the legal literature even contains a suggestion that programsconstitute a new kind of (legal) entity.
The exact nature of computer programs is difficult to determine. Onthe one hand, they are related to technological matters. On the otherhand, they can hardly be compared to the usual type of inventions. Theyinvolve neither processes of a physical nature, nor physical products,but rather methods of organization and administration. They are thusreminiscent of literary works even though they are addressed tomachines. Neither industrial property law nor copyright law in theirtraditional roles seems to be the appropriate instrument for theprotection of programs, because both protections were designed for andused to protect very different types of creations. The unique nature ofthe computer program has led to broad support for the creation of suigeneris legislation. (Loewenheim 1989: 1)
This highlights the curious legal status of programs. Indeed, itraises tricky ontological questions about the nature of programs andsoftware: they appear to be abstract, even mathematical objects with acomplex structure, and yet they are aimed at physical devices. In thissection we examine some of the philosophical issues that have arisenregarding the nature of programs and software. Some of them have legalorigins and potentially legal consequences.
What is the content of the claim that programs are mathematicalobjects? In the legal literature the debate seems to centre on thenotion that programs are symbolic objects that can be formallymanipulated (Groklaw 2011, 2012—see Other InternetResources). Indeed, there is a branch of theoretical computer sciencecalled formal language theory that treats grammars as objects ofmathematical study (Hopcroft and Ullman 1969). While this does givesome substance to the claim, this is not the most important sense inwhich programs are mathematical. This pertains to their semantics,where programming languages are taken to be axiomatic theories(§4.2). This perspective locatesprograms as elements in a theory of computation (Turner 2007, 2010).
But no matter in what sense programs are taken to be mathematicalobjects, there are legal ramifications. If programs and software aretaken to be abstract mathematical things, then they cannot easily begoverned by copyright. Normally, copyright law excludes pureabstract ideas. Mooers (1975) attempts to meet thisobjection by drawing a distinction between an idea that is notcopyrightable, and a copyrightable expression of an idea.
Where does the “expression” leave off, and the“idea” take over? The best insight into this matter comesfrom discussions of copyright as applied to novels and dramaticproductions. In these, “expression” is considered toinclude the choice of incident, the personalities and development ofcharacter, the choice of names, the elaboration of the plot, thechoice of locale, and the many other minor details and gimmicks usedto build the story. In other words, “expression” isconsidered to include not only the marks, words, sentences, and so onin the work, but also all these other details or structures as theyaggregate into larger and larger units to make up the expression ofthe entire story … In other words, after the bare abstract“idea” has been chosen (e.g., boy meets girl, boy losesgirl, boy wins girl), the “expression” to which copyrightapplies covers the remaining elements of original choice and artistrywhich are supplied by the author in order for him to develop, express,and convey his version of the bare idea. (Mooers 1975: 50)
However, in the case of programs, is it clear how to mark theboundary between an idea and an expression of it? Is theidea of aprogram contained in its semantics or its specification? How is itrelated to the algorithm/program distinction?
While agreeing that programs have an abstract guise, much of thephilosophical literature (e.g., Colburn 2000; Moor 1978) has itthat they also possess a concrete physical manifestation thatfacilitates their use as thecause of computations in physicalmachines. For example, Moor observes:
It is important to remember that computer programs can be understoodon the physical level as well as the symbolic level. The programming ofearly digital computers was commonly done by plugging in wires andthrowing switches. Some analogue computers are still programmed in thisway. The resulting programs are clearly as physical and as much a partof the computer system as any other part. Today digital machinesusually store a program internally to speed up the execution of theprogram. A program in such a form is certainly physical and part of thecomputer system. (Moor 1978: 215)
The following is of more recent origin, and more explicitlyarticulates the duality thesis in its claim that software has bothabstract and physical guises.
Many philosophers and computer scientists share the intuition thatsoftware has a dual nature (Moor 1978, Colburn 2000). It appears thatsoftware is both an algorithm, a set of instructions, and a concreteobject or a physical causal process. (Irmak 2012: 3)
Some shadow of thus duality emerges in the legal literature: to besubject to patent, programs must have some technical effect.
In this context, the Court described a “technical nature” as an“instruction to a systematic acting by utilizing controllable naturalforces to achieve a causally predictable result”. In other words, thekey element which characterizes the technical nature of an invention isthe control of natural forces to achieve a predicted result.(Loewenheim 1989: 2)
Apparently, to qualify for a patent, a computer program cannot besimply an abstract thing; it must also utilize controllable forces ofnature to achieve predictable results. It must cause a computer to dosomething like display items on screen, store a particular pattern ina memory, or activate a peripheral device: it must provide a technicaleffect. This is in line with the notion that programs are technicalartifacts. However, the legal situation is very far from being allmoonlight and roses. The debate is ongoing (Rapaport 2010; Groklaw2011, 2012—see Other Internet Resources).
Anyone persuaded by the abstract/physical duality for programs isunder an obligation to say something about the relationship betweenthese two forms of existence. This is the major philosophical concernand parallels the question for technical artifacts in general.
One immediate suggestion is that programs, as textual objects,cause mechanical processes. The idea seems to be that somehowthe textual object physically causes the mechanical process. Colburn(2000, 1999) denies that the symbolic text itself has any causaleffect; it is its physical manifestation, the thing on the disk, whichhas such an effect. For him, software is aconcreteabstraction that has a medium of description (the text, theabstraction) and a medium of execution (e.g., a concreteimplementation in semiconductors). The duality is unpacked in a waythat is parallel to that found in the philosophy of mind (see the entryondualism), where thephysical device is taken as a semantic interpretation of the abstractone. This is close to the perspective of Rapaport (1999). However, wehave already alluded to problems with this approach(§3.3).
A slightly different account can be found in Fetzer (1988).He suggests that abstract programs are something like scientifictheories: a program is to be seen as a theory of its physicalimplementation—programs as causal models. In particular, thesimple assignment statement and its semantics is a theory about aphysical store and how it behaves. If this is right, and a programturns out not to be an accurate description of the physical device thatis its implementation, the program must be changed: if the theory whichis enshrined in the program does not fit the physical device, being atheory, it should be changed. But this does not seem to be what happensin practice. While the program may have to be changed, this is notinstigated by any lack of accord with its physical realization, but byan independent abstract semantics for assignment. If this is correct,the abstract semantics appears not to be a theory of its concreteimplementation.
Thealternative picture has itthat the abstract program (determined by its semantics) provides thefunction of the artifact, and the physical artifact, or rather itsdescription, provides its structure. It is the function of the program,expressed in its semantics, that fixes the physical implementation andprovides the criteria of correctness and malfunction. Programs ascomputational artifacts have both an abstract aspect that somehow fixeswhat they do, and a physical aspect that enables them to cause physicalthings to happen.
What is the difference between programming and specification? Onesuggestion is that a specification tells us what it is to do withoutactually saying how to do it. For instance, the following is aspecification written in VDM (Jones 1990).
SQRTP (x:real, y:real)
Pre:x ≧ 0
Post:y∗y =x andy ≧ 0
This is a specification of a square root function with theprecondition that the input is positive. It is a functional descriptionin that it says what it must do without saying how it is to beachieved. One way to unpack thiswhat/howdifference is in terms of the descriptive/imperative distinction.Programs are imperative and say how to achieve the goal whereasspecifications are declarative and only describe the input/outputbehavior of the intended program. Certainly, in the imperativeprogramming paradigm, this seems to capture a substantive difference.But it is not appropriate for all. For example, logic and functionalprogramming languages (Thompson 2011) are not obviously governed byit. The problem is that programming languages have evolved to a pointwhere this way of describing the distinction is not marked by the styleor paradigm of the programming language. Indeed, in practice, a programwritten in Haskell (Thompson 2011) could act as a specification for aprogram written in C (Huss 1997, Other Internet Resources).
A more fundamental difference concerns the direction of governancei.e., which is the normative partner in the relationship, and which isthe submissive one. In the case of the specification of the square rootfunction, the artifact is the linguistic program. When the program istaken as the specification, the artifact is the next level of code, andsoondown to a concreteimplementation. This is in accord with Rapaport (2005) and his notionof the asymmetry of implementation.
One of the earliest philosophical disputes in computer sciencecenters upon the nature of program correctness. The overall dispute wasset in motion by two papers (De Millo et al. 1979; Fetzer 1988) and wascarried on in the discussion forum of the ACM (e.g., Ashenhurst1989; Technical Correspondence 1989). The pivotal issue derives from theduality of programs, and what exactly is being claimed to be correctrelative to what. Presumably, if a program is taken to be amathematical thing, then it has only mathematical properties. But seenas a technical artifact it has physical ones.
On the face of it, Tony Hoare seems to be committed to what we shallcall themathematical perspective i.e., that correctness is amathematical affair i.e., establishing that a program is correctrelative to a specification involves only a mathematical proof.
Computer programming is an exact science in that all the propertiesof a program and all the consequences of executing it in any givenenvironment can, in principle, be found out from the text of theprogram itself by means of purely deductive reasoning. (Hoare 1969: 576)
Consider our specification of a square root function. What does itmean for a programP to satisfy it? Presumably, relative toits abstract semantics, every program (P), carves out arelationshipRP between its input and output, itsextension. The correctness condition insists that this relationsatisfies the above specification, i.e.,
This demands that the abstract program, determined by the semanticinterpretation of its language, satisfies the specification. Thestatement (C) is a mathematical assertion between two abstract objectsand so, in principle, the correctness maybe established mathematically.A mathematical relationship of this kind is surely what Tony Hoare hasin mind, and in terms of the abstract guise of the program, there islittle to disagree with. However, there are several concerns here. Onehas to do with the complexity of modern software (thecomplexity challenge), and the other the nature of physicalcorrectness (theempirical challenge).
Programmers are always surrounded by complexity; we cannot avoid it.Our applications are complex because we are ambitious to use ourcomputers in ever more sophisticated ways. Programming is complexbecause of the large number of conflicting objectives for each of ourprogramming projects. If our basic tool, the language in which wedesign and code our programs, is also complicated, the language itselfbecomes part of the problem rather than part of its solution. (Hoare 1981: 10)
Within the appropriate mathematical framework proving thecorrectness of any linguistic program, relative to its specification,is theoretically possible. However, real software is complex. In suchcases proving correctness might be practically infeasible. One mightattempt to gain some ground by advocating that classical correctnessproofs should be carried out by a theorem prover, or at least oneshould be employed somewhere in the process. However, the latter mustitself be proven correct. While this may reduce the correctness problemto that of a single program, it still means that we are left with thecorrectness problem for a large program. Moreover, in itself thisdoes not completely solve the problem. For both theoretical andpractical reasons, in practice, human involvement is not completelyeliminated. In most cases proofs are constructed by hand with theaid of interactive proof systems. Even so, a rigorous proof ofcorrectness is rarely forthcoming. One might only require thatindividual correctness proofs be checked by a computer rather than ahuman. But of course the proof-checker is itself in need of checking.Arkoudas and Bringsjord (2007) argue that since there is only onecorrectness proof that needs to be checked, namely that of the proofchecker itself, then the possibility of mistakes is significantlyreduced.
This is very much a practical issue. However, there is a deeperconceptual one. Are proofs of program correctness genuine mathematicalproofs, i.e., are such proofs on a par with standard mathematical ones?The authors of (De Millo et al. 1979) claim that correctness proofs areunlike proofs in mathematics. The latter are conceptually interesting,compelling and attract the attention of other mathematicians who wantto study and build upon them. This argument parallels the graspabilityarguments made in the philosophy of mathematics. Proofs that arelong, cumbersome and uninteresting cannot be the bearers of the kind ofcertainty that is attributed to standard mathematical proofs. Thenature of the knowledge obtained from correctness proofs is said to bedifferent to the knowledge that may be gleaned from standard proofs inmathematics. In order to be taken in, proofs must be graspable. Indeed,Wittgenstein would have it that proofs that are not graspable cannotact as norms, and so are not mathematical proofs (Wittgenstein 1956).
Mathematical proofs such as the proof of Gödel's incompletenesstheorem are also long and complicated. But they can be grasped. Whatrenders such complicated proofs transparent, interesting and graspableinvolves the use of modularity techniques (e.g., lemmas), and the use ofabstraction in the act of mathematical creation. The introduction ofnew concepts enables a proof to be constructed gradually, therebymaking the proofssurveyable. Mathematics progresses byinventing new mathematical concepts that facilitate the construction ofproofs that would be far more complex and even impossible without them.A very elementary example concerns the exponent notation. This makes itpossible to extend computation beyond the complexity of multiplication,and to argue about the results. At the other extreme, the invention ofcategory theory facilitated the statement and proof of very generalresults about algebraic structures that automatically apply to a wholerange of such. Mathematics is not just about proof; it also involvesthe abstraction and creation of new concepts and notation. Incontrast, formal correctness proofs do not seem to involve the creationof new concepts and notations. While computer science does involveabstraction, it is not quite in the same way.
One way of addressing the complexity problem is to change the natureof the game. The classical notion of correctness links the formalspecification of programs to its formal semantic representation. It isat one end of the mathematical spectrum. However, chains ofspecification/artifact pairings, positioned at varying degrees ofabstraction, are governed by different notions of correctness. Forexample, in the object oriented approach, the connection between a UMLspecification and a Java program is little more than type checking. Thecorrectness criteria involve structural similarities and identities(Gamma et al. 1994). Here we do not demand that one infinite mathematicalrelation is extensionally governed by another. At higher levels ofabstraction, we may have only connections of structure. These are stillmathematical relationships. However, such methods, while they involveless work, and may even be automatically verified, establish muchless.
The notion of program verification appears to trade upon anequivocation. Algorithms, as logical structures, are appropriatesubjects for deductive verification. Programs, as causal models ofthose structures, are not. The success of program verification as agenerally applicable and completely reliable method for guaranteeingprogram performance is not even a theoretical possibility. (Fetzer 1988: 1)
In fact this issue is alluded to by Hoare in the very text thatFetzer employs to characterize Hoare's mathematical stance oncorrectness.
When the correctness of a program, its compiler, and the hardware ofthe computer have all been established with mathematical certainty, itwill be possible to place great reliance on the results of the program,and predict their properties with a confidence limited only by thereliability of the electronics. (Hoare 1969: 579)
All seemed to be agreed that computational systems are at bottomphysical systems, and some unpredictable behavior may arise because ofthe causal connections. Indeed, even when theorem provers and theproof checkers are used, the results still only yield empiricalknowledge. A proof checker is a program running on a physical machine.It is a program that has been implemented and its results depend upon aphysical computation. Consequently, at some level, we shall need toshow that some physical machine operations meet their specification.Testing and verification seems only to yield empirical evidence.Indeed, the complexity of program proving has led programmers to takephysical testing to be evidence that the abstract program meets itsspecification. Here the assumption is that the underlyingimplementation is correct. Butprima facie, it is only empiricalevidence.
In apparent contrast, Burge (1988) argues that knowledge of suchcomputer proofs can be taken asa priori knowledge. According to Burge,a priori knowledge does not depend for its justification on any sensoryexperience. However, he allows thata priori knowledge may depend forits possibility on sensory experience e.g., knowledge that red is acolor may bea priori even though having this knowledge requires havingsensory experience of red in order to have the concepts required toeven formulate the idea. If correct, this closes the gap betweenapriori anda posteriori claims about computer assisted correctnessproofs, but only by redrawing the boundary betweena priori and aposteriori knowledge so that some empirical assertions can fall intothe former category. For more discussion on the nature of the use ofcomputers in mathematical proofs see Hales 2008; Harrison 2008;Tymoczko 1979, 1980.
Unfortunately, practice often does not even get this far. Generally, software engineers do not construct classical correctness proofs by hand or even automatically. Testing of software against its specification on suites of test cases is the best that is normally achieved. Of course, this never yields correctness in the mathematical sense: test cases can never be exhaustive (Dijkstra 1974). Furthermore, there is a hidden assumption that the underlying implementation is correct; at best, these empirical methods tell us something about the whole system. Indeed, the state space of a system may be so large and complex that even direct testing is infeasible. In practice, the construction of mathematical models that approximate the behavior of complex systems is the best we can do.
The whole correctness debate carried out in the forum of the ACM (e.g., Ashenhurst 1989; Technical Correspondence 1989) is put into some perspective when programs are considered as technical artifacts. But this leaves one final topic: when we have reached physical structure, what notion of correctness operates?
What is it for a physical device to meet its specification? What is it for it to be a correct physical implementation? The starting point for much contemporary analysis is often referred to as the simple mapping account.
According to the simple mapping account, a physical system S performs as a correct implementation of an abstract specification C just in case (i) there is a mapping from the states ascribed to S by a physical description to the states defined by the abstract specification C, such that (ii) the state transitions between the physical states mirror the state transitions between the abstract states. Clause (ii) requires that for any abstract state transition of the form s1 → s2, if the system is in the physical state that maps onto s1, it then goes into the physical state that maps onto s2.
To illustrate what the simple mapping account amounts to, consider the example of our abstract machine (§2.1), where we employ an instance of the machine that has only two locations l and r, and two possible values 0 and 1. Consequently, there are only four possible states: (0,0), (0,1), (1,0) and (1,1). The computation table for the update operation may easily be computed by hand, and takes the form of a table with input/output pairings. For example, Update(r,1) sends the state (0,0) to the state (0,1). The simple mapping account only demands that the physical system can be mapped onto the abstract one in such a way that the abstract state transitions are duplicated in the physical version.
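The two-location example and the mapping condition can be made concrete in a short sketch. The code below is purely illustrative (the names `update`, `table`, `implements` and the physical states are hypothetical, not drawn from any of the cited accounts): it builds the computation table for Update(r,1) and checks clause (ii) for a candidate physical system given as a transition function and a state mapping.

```python
# A minimal sketch of the simple mapping account, using the two-location
# instance of the abstract machine (locations l and r, values 0 and 1).
# All names here are illustrative.

def update(state, location, value):
    """The abstract Update operation on a state (l, r)."""
    l, r = state
    return (value, r) if location == "l" else (l, value)

# Computation table for Update(r, 1): e.g., (0, 0) is sent to (0, 1).
table = {s: update(s, "r", 1)
         for s in [(0, 0), (0, 1), (1, 0), (1, 1)]}

def implements(physical_step, mapping):
    """Clause (ii): for every physical state p, the abstract image of
    p's successor is the abstract successor of p's image."""
    return all(mapping[physical_step[p]] == table[mapping[p]]
               for p in mapping)

# Any four physical states with suitably arranged transitions will do --
# here, four labelled stones "rearranged" according to the table.
mapping = {"a": (0, 0), "b": (0, 1), "c": (1, 0), "d": (1, 1)}
physical_step = {"a": "b", "b": "b", "c": "d", "d": "d"}
assert implements(physical_step, mapping)
```

Note that nothing in the check constrains what the physical states are; this is exactly the weakness the next paragraph exploits.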
Unfortunately, such a device is easy to come by: almost anything with enough things to play the role of the physical states will satisfy this quite weak demand on what it is to be an implementation. For example, any collection of colored stones arranged as the update table will be taken to implement the table. The simple mapping account only demands extensional agreement; it is a de facto demand. This leads to a form of pancomputationalism where almost any physical system implements any computation.
The danger of pancomputationalism has driven some authors (D. J. Chalmers 1996; Egan 1992; Sprevak 2012) to attempt to provide an account of implementation that somehow restricts the class of possible interpretations. In particular, certain authors (D. J. Chalmers 1996; Copeland 1996) seek to impose causal constraints on such interpretations. One suggestion is that we replace the material conditional (if the system is in the physical state S₁ …) by a counterfactual one. In contrast, the semantic account insists that a computation must be associated with a semantic aspect which specifies what the computation is to achieve (Sprevak 2012). For example, a physical device could be interpreted as an and-gate or an or-gate; which it is would seem to depend upon what we take to be the definition of the device. Without such a definition there is no way of fixing what the artifact is. The syntactic account demands that only physical states that qualify as syntactic may be mapped onto computational descriptions, thereby qualifying as computational states. If a state lacks syntactic structure, it is not computational. Of course, what remains to be seen is what counts as a syntactic state. A good overview can be found in the entry on computation in physical systems.
Turner (2012) argues that abstract structure and physical structure are linked, not just by being in agreement, but also by the intention to take the former as having normative governance over the latter. On this account, computations are technical artifacts whose function is fixed by an abstract specification. This relationship is neither that of theory to physical object nor that of syntactic thing to semantic interpretation.
But there is an ambiguity here that is reflected in the debate between those who argue for semantic interpretation (Sprevak 2012) and those who argue against it (Piccinini 2008). Consider programs. What is the function of a program? Is it fixed by its semantic interpretation or by its specification? The ambiguity concerns the function of a program as part of a programming language versus its role as part of a larger system. As a program in a language, its function is fixed by the semantics of the language as a whole. However, to use a program as part of a larger system, one needs only know what it does. The function of the program, as part of a larger system, is given by its specification. When a computation is picked out by a specification, exactly how the program achieves its specification is irrelevant to the system designer. The specification acts as an interface, and the level of abstraction employed by the system designer is central.
Although programming is one of the core activities of computer science, there has been little sustained philosophical debate about its nature. What there has been revolves around the following question: what kind of activity is programming? Is it mathematics, engineering, science or art (Knuth 1974)? This debate is about the methodology of computer science rather than the ontological status of programs, but inevitably the latter will leak into the discussion.
One very influential computer scientist claimed that programming is, or should be, methodologically a mathematical activity. This claim is aimed at the actual process of program construction: Hoare suggests that programs can be derived from their specifications in a mathematically precise way, using program transformations that are based upon algebraic laws.
I hold the opinion that the construction of computer programs is a mathematical activity like the solution of differential equations, that programs can be derived from their specifications through mathematical insight, calculation, and proof, using algebraic laws as simple and elegant as those of elementary arithmetic. (Hoare 1969: 576)
This can also be seen as an alternative approach to the correctness problem: programs are derived from their specifications by laws that preserve correctness. Morgan (1994) develops the transformational approach for imperative-style languages. The functional style has also embraced transformational programming (Henson 1987). Martin-Löf (1982) advocated the use of constructive mathematics as a means of deriving programs from (constructive) existence proofs. However, in the contemporary scene, there is little evidence that real software is developed in this way. Systematic program development from specifications is still an active research topic, but the complexity issue is still with us.
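A toy example conveys the flavor of a correctness-preserving transformation. The sketch below (illustrative only; it is not drawn from Hoare, Morgan, or Henson) uses a familiar algebraic law, map fusion, under which a two-pass program and its one-pass derivative agree on every input:

```python
# Transformational programming in miniature: the algebraic law
#     map(f, map(g, xs)) = map(f . g, xs)
# lets a two-pass program be rewritten as an equivalent one-pass program.

def compose(f, g):
    """Function composition: (f . g)(x) = f(g(x))."""
    return lambda x: f(g(x))

def two_pass(xs, f, g):
    """The initial program: traverse the list twice."""
    return [f(y) for y in [g(x) for x in xs]]

def one_pass(xs, f, g):
    """The program derived by applying the fusion law: traverse once."""
    fg = compose(f, g)
    return [fg(x) for x in xs]

# The law guarantees extensional agreement on every input.
xs = [1, 2, 3, 4]
f, g = (lambda n: n + 1), (lambda n: n * n)
assert two_pass(xs, f, g) == one_pass(xs, f, g)
```

The derived program is not merely tested against the original: the law itself certifies the equivalence, which is the sense in which the transformation preserves correctness.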
The position of Dijkstra (1974) is somewhat softer. He only seems to be suggesting that programming involves rigorous reasoning. Here at least, he is not advocating that programs can be mathematically developed from specifications.
With respect to mathematics I believe, however, that most of us can heartily agree upon the following characteristics of most mathematical work: 1. compared with other fields of intellectual activity, mathematical assertions tend to be unusually precise. 2. Mathematical assertions tend to be general in the sense that they are applicable to a large (often infinite) class of instances. 3. Mathematics embodies a discipline of reasoning allowing such assertions to be made with an unusually high confidence level.
The mathematical method derives its power from the combination of all these characteristics; conversely, when an intellectual activity displays these three characteristics to a strong degree, I feel justified in calling it “an activity of mathematical nature”, independent of the question whether its subject matter is familiar to most mathematicians. (Dijkstra 1974: 2)
This claims that the development of programs and software involves precise reasoning, presumably about the underlying type structure and control mechanisms. There is little doubt that something like this is true. While programmers seldom prove rigorous theorems about software, this obviously does not mean that they do not reason when designing and testing programs. Programming is an intellectual activity that involves precise reasoning about the effects of programming constructs on some underlying machine. This is a fairly minimalistic claim that is hard to dispute.
But even if we accept this, we are not told what kind of mathematical activity programming is. We are only told it involves rigorous reasoning about some kind of formal objects. However, on the assumption that programming languages are axiomatic theories of computation, there is a simple characterization. When glossed with the perspective of §4.2, programs are definitions within the mathematical theory of the programming language. This does not argue for any kind of methodological stance, only that programs are definitions and programming is the activity of creating them; programming is a definitional enterprise. If one accepts this, there is a logical uniformity about specification and programming: both specifications and programs are definitions inside their containing mathematical theories.
However, the invention of software engineering (Sommerville 2012; Vliet 2008) moved the goal posts. While formal methods form a part of many methodologies, modern software development uses a rich mixture of methods and techniques. Programming in the raw involves the construction of technical artifacts, not mathematical objects. In this more complete picture, programming is an engineering activity.
There are some general design principles that do have a conceptual ring about them. The movement of structured programming (Dijkstra 1970) argues that programs should be constructed in a transparent way that is in some sense compositional. On this basis users were urged to avoid the use of goto statements and unstructured jumps. This had some impact upon programming language design. The Turing award lecture of Floyd (1979) contains some further reflections on structured programming and its impact upon software development. In a more modern form, encapsulation (Mitchell 2003) tries to ensure that programs are self-contained units with transparent and self-contained functional specifications, avoiding leaky modules that make reference to lower levels of abstraction.
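A small sketch may help fix the idea of encapsulation. The class below (an invented example, not taken from Mitchell 2003) exposes a self-contained functional interface and hides its representation, so that clients never refer to the lower level of abstraction:

```python
# Encapsulation in miniature: clients of Counter use only the interface
# (increment, value); the representation is a hidden detail.

class Counter:
    """A self-contained unit: only increment and value are its interface."""

    def __init__(self):
        self._count = 0  # representation detail, hidden from clients

    def increment(self):
        self._count += 1

    def value(self):
        return self._count

# A client uses the interface alone; the representation could change
# (say, to a list of tally marks) without affecting this code.
c = Counter()
c.increment()
c.increment()
assert c.value() == 2
```

A leaky module, by contrast, would let clients read or write `_count` directly, tying them to one particular implementation.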
These are principles of design. But these notions have yet to be subject to much philosophical analysis. Indeed, the whole of the philosophy of engineering design, with all its conceptual questions and apparatus, for example as outlined in Kroes (2012), has yet to be applied to computational artifacts.
Some claim that software development is a scientific activity. Indeed, there is a hint of such a perspective in many of the Turing award lectures (ACM 2013—see Other Internet Resources), where one can find many variations on the more general claim that computer science is a scientific activity. One folklore claim has it that programs act as scientific theories or models. However, such theories are always up for revision; they provide defeasible knowledge (Rosenberg 2000; A. F. Chalmers 1999). More importantly, when there is a mismatch between the theory and the natural world, the theories may have to be given up in the light of experimentation. Program verification certainly fits the testing and verifying methodology, but it is not clear that its purpose is the verification of a model or theory. When a program is tested, it is tested against a specification, not for its agreement with the physical world. It would seem that programs, in their primary role, are not intended to be scientific theories of anything; they function as artifacts.
Moreover, while programs may themselves function as specifications, specifications differ from scientific theories: the latter are concerned with natural objects and phenomena, whereas specifications are concerned with intentionally produced things. And, as we have already observed (§4.2), in regard to their underlying logic, there appears to be a fundamental difference between specifications and theories.
However, there is a very specific argument that needs consideration. Angius (2013) argues that modern software development is partly a scientific activity. This argument is based on the observation that software testing and verification employ model-based techniques; the software system is modeled because of the complexity of the real thing. Here there is a direct analogy with scientific models. If the model gets things wrong, i.e., does not capture the system, then it will be modified or replaced. This process resembles standard model construction in science (Frigg and Hartmann 2012). However, it would seem that the purpose here is different. The function of the model is to stand proxy for the system in the process of verification. We are not trying to model the system as a natural thing but as a technical artifact. The methodology may be similar, but we are dealing with a designed artifact. The normative role of the specification is still in force, except that now the specification has normative force over the system indirectly via the model. The fact that computer scientists build mathematical models of complex computational systems does not by itself mean that they are engaged in a scientific activity in the same way that physicists are.
Computer science… differs from physics in that it is not actually a science. It does not study natural objects. Neither is it, as you might think, mathematics; although it does use mathematical reasoning pretty extensively. Rather, computer science is like engineering; it is all about getting something to do something, rather than just dealing with abstractions, as in the pre-Smith geology. (Feynman 2000 [1984–1986]: 8)
Abstraction facilitates computer science. Without it we would not have progressed from the programming of numerical algorithms to the software sophistication of air traffic control systems, interactive proof development frameworks, and computer games. It is manifested in the rich type structure of contemporary programming and specification languages, and underpins the design of these languages with their built-in mechanisms of abstraction. It has driven the invention of notions such as polymorphism, data abstraction, classes, schema, design patterns, and inheritance. But what is the nature of abstraction in computer science? Is there just one form of it? Is it the same notion that we find in mathematics?
This is how Skemp (1987) describes abstraction. His conception begins with similarity recognition and results in the embodiment of this similarity in a new concept.
Abstracting is an activity by which we become aware of similarities … among our experiences. Classifying means collecting together our experiences on the basis of these similarities. An abstraction is some kind of lasting change, the result of abstracting, which enables us to recognize new experiences as having the similarities of an already formed class…. To distinguish between abstracting as an activity and abstraction as its end-product, we shall call the latter a concept. (Skemp 1987: 5)
In the opening paragraph of his lectures on data structuring (Hoare1973: 83), the computer scientist Tony Hoare describes abstraction inmuch the same way.
Abstraction arises from recognition of similarities between certain objects, situations, or processes in the real world, and the decision to concentrate upon those similarities and to ignore for the time being the differences.
Such abstraction involves ignoring concrete features, the structural aspects of the device or process. From this voluntary ignorance a new concept emerges. This has it that abstraction is a mental process. Unfortunately, so conceived, it employs a much-criticized perspective in the philosophy of mind (see the entry on abstract objects).
A related approach is more logical in nature. Crispin Wright (1983) and Bob Hale (1987) have developed an account of abstraction that emanates from Frege. Consider the following equations: the direction of line a = the direction of line b if and only if a is parallel to b; the number of Fs = the number of Gs if and only if there is a one-to-one correspondence between the Fs and the Gs.
Expressions such as the direction of a line or the number of form the basis of abstraction; they are abstraction principles. They appear to hold in virtue of the meaning of the expressions on the right-hand sides: on the face of it, mastery of the concept of a direction presupposes mastery of the concept of parallelism, but not vice versa. This is discussed in more detail in the entry on abstract objects. However, the adequacy of this account to deal with the abstraction mechanisms of computer science has not been investigated.
Computer science abstraction takes many different forms. We shall not attempt to describe these in any systematic way here. However, Goguen and Burstall (1985) describe some of this variety, of which the following examples are instances.
One kind involves the idea of repeated code: a program text, possibly with a parameter, is given a name (procedural abstraction). In Skemp's terms, the procedure brings a new concept into existence, where the similarity of structure is the common code. Formally, this is the abstraction of the Lambda calculus (see the entry on the lambda calculus). The parameter might even be a type, and this leads to the various mechanisms of polymorphism, which may be formalized in mathematical theories such as the second-order Lambda calculus (Hankin 2004).
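The point can be illustrated in a few lines. In the sketch below (an invented example), a repeated pattern of application is named and parameterized; because the parameter is a function and the argument may be of any type, the same named abstraction already exhibits a simple form of polymorphism:

```python
# Procedural abstraction in miniature: a repeated pattern of code is
# given a name and a parameter. The parameter is itself a function,
# and the argument may be of any type -- the germ of polymorphism.

def twice(f, x):
    """Apply f two times, whatever the type of x."""
    return f(f(x))

# The one abstraction serves numbers and strings alike.
assert twice(lambda n: n + 3, 0) == 6
assert twice(lambda s: s + "!", "go") == "go!!"
```

In Skemp's terms, `twice` embodies the similarity shared by all its instances: the common code of double application.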
Recursion is an early example of operation or mechanism abstraction: it abstracts away from the mechanisms of the underlying machine. In turn, this facilitates the solution of complex problems without the solver having to be aware of the operations of the machine. For example, recursion is implemented using devices such as stacks, but in principle the user of recursion does not need to know this.
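The contrast can be made vivid with a standard example (illustrative, not drawn from the literature cited here): the user writes the recursive definition of the factorial function, while an explicit stack, of the kind an implementation might employ, computes the same function without recursion; the user need never see the second version.

```python
# Recursion as mechanism abstraction: the recursive definition is what
# the user writes; the stack-based version is one way a machine might
# realize it, hidden from the user.

def fact(n):
    """What the user writes: the recursive definition."""
    return 1 if n == 0 else n * fact(n - 1)

def fact_stack(n):
    """One possible realization: an explicit stack, no recursion."""
    stack, result = [], 1
    while n > 0:
        stack.append(n)
        n -= 1
    while stack:
        result *= stack.pop()
    return result

# The two agree on every input, though only one mentions the mechanism.
assert all(fact(k) == fact_stack(k) for k in range(8))
```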
The type structure of a programming or specification language determines the ontology of the language: the kinds of entity that we have at our disposal for representation and problem solving. To a large extent, types determine the level of abstraction of the language. A rich set of type constructors provides an expressive system of representation. Abstract and recursive types are common examples.
In object-oriented design, patterns (Gamma et al. 1994) are abstracted from the common structures that are found in software systems. Here, abstraction is the means of interfacing: it dissociates the implementation of an object from its specification. For example, abstract classes act as interfaces by providing nothing but the type structure of their methods.
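A minimal sketch (an invented example) shows an abstract class acting as an interface: `Stack` fixes only the type structure of its methods, while `ListStack` supplies one hidden implementation, which clients of the interface never see.

```python
# An abstract class as interface: Stack provides nothing but the shape
# of its methods; ListStack is one implementation among many possible.

from abc import ABC, abstractmethod

class Stack(ABC):
    @abstractmethod
    def push(self, item): ...

    @abstractmethod
    def pop(self): ...

class ListStack(Stack):
    """A hidden implementation; a linked version would serve clients equally."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

# A client is written against the interface, not the implementation.
s = ListStack()
s.push(1)
s.push(2)
assert s.pop() == 2
```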
In addition, in mathematics (Mitchelmore and White 2004), computer science, and philosophy (Floridi 2008) there are levels of abstraction. Abstractions in mathematics are piled upon each other in a never-ending search for more and more abstract concepts. Likewise, computer science deals with the design and construction of artifacts through a complex process involving sequences of artifacts of decreasing levels of abstractness, until one arrives at the actual physical device.
In mathematics, once the abstraction is established, the physical device is left behind. On this account, the abstraction is self-contained: an abstract mathematical object takes its meaning only from the system within which it is defined. The only constraint is that the new objects be related to each other in a consistent system which can be operated on without reference to their previous meaning. Self-containment is paramount; there are no leaks. This is clearly expressed in Hilbert's view of axiomatization (§5.1).
Some argue that, in this respect at least, abstraction in computer science is fundamentally different from abstraction in mathematics (Colburn & Shute 2007). They claim that computational abstraction must leave behind an implementation trace: information is hidden but not destroyed. Any details that are ignored at one level of abstraction (e.g., programmers need not worry about the precise location in memory associated with a particular variable) must not be ignored by one of the lower levels (e.g., the virtual machine handles all memory allocations). At all levels, computational artifacts crucially depend upon the existence of an implementation. For example, even though classes hide the implementation details of their methods, except for abstract ones, they must have implementations. This is in keeping with the view that computational artifacts have both function and structure: computational abstractions have both an abstract guise and an implementation.
However, matters are not quite so clean-cut. While it is true that abstraction in mathematics generates objects whose meaning is defined by their relationships, the same is so in computer science: abstract notions could not have a normative function unless they had such independent meanings. Moreover, certain forms of constructive mathematics resemble computer science in that there has to be an implementation trace: one must always be able to recover implementation information from proofs by reading between the lines. Of course, this is not the case for classical mathematics.
Moreover, many would argue that mathematical abstractions do not completely leave behind their physical roots.
One aspect of the usefulness of mathematics is the facility with which calculations can be made: you do not need to exchange coins to calculate your shopping bill, and you can simulate a rocket journey without ever firing one. Increasingly powerful mathematical theories (not to mention the computer) have led to steady gains in efficiency and reliability. But a calculational facility would be useless if the results did not predict reality. Predictions are successful to the extent that mathematical models appropriate aspects of reality, and whether they are appropriate can be validated by experience. (Mitchelmore and White 2004: 330)
How is it that the axiomatic method has been so successful in this way? The answer is, in large part, because the axioms do indeed capture meaningful and correct patterns. … There is nothing to prevent anyone from writing down some arbitrary list of postulates and proceeding to prove theorems from them. But the chance of those theorems having any practical application [is] slim indeed. … Many fundamental mathematical objects (especially the more elementary ones, such as numbers and their operations) clearly model reality. Later developments (such as combinatorics and differential equations) are built on these fundamental ideas and so also reflect reality, even if indirectly. Hence all mathematics has some link back to reality. (Devlin 1994: 54–55)
I want to say: it is essential to mathematics that its signs are also employed in mufti. It is the use outside mathematics, and so the meaning of the signs, that makes the sign-game into mathematics. Just as it is not a logical inference either, for me to make a change from one formation to another (say from one arrangement of chairs to another) if these arrangements have not a linguistic function apart from this transformation. (Wittgenstein 1956: 268)
It would appear that the difference between abstraction in computer science and abstraction in mathematics is not so sharp. However, there appears to be an important conceptual difference. If Turner (2011) is right, in computer science the abstract partner is the dominant one in the relationship: it determines correctness. In the case of (applied) mathematics things are reversed: the mathematics is there to model the world, and it must model it accurately. In computer science, the relationship between the abstraction and its source is the specification/artifact relationship; in mathematics it is the model/theory-to-reality relationship. When things go wrong the blame is laid at a different place: with the artifact in computer science and with the model in mathematics.
The original motivation for a mathematical analysis of computation came from mathematical logic. Its origins are to be found in Hilbert's question concerning the decidability of the predicate calculus: could there be an algorithm, a procedure, for deciding of an arbitrary sentence of the logic whether it was provable (the Entscheidungsproblem)? In order to address this question, a rigorous model of the informal concept of an effective or mechanical method in logic and mathematics was required. Providing this is first and foremost a mathematical endeavor: one has to develop a mathematical analogue of the informal notion. What is now taken to be the definitive account was given by Turing (1936).
However, Turing was not the first to attempt such a characterization. Historically, the first notions of computability were suggested in the early nineteen-thirties: the general recursive functions and Church's Lambda calculus.
However, Gödel was not convinced; he rejected both versions. He did not accept that they express an intensionally adequate notion of a mechanical method: they were extensional characterizations with little motivational justification. In contrast, Turing's proposed class of devices, now known as Turing machines, appeared to capture the informal notion. Furthermore, Turing proved that his machines and the Lambda calculus are extensionally equivalent. This led to the Turing version of the thesis, one formulation of which is the following.
- Turing's thesis:
- Logical computing machines (Turing's name for his machines) can do anything that could be described as “rule of thumb” or “purely mechanical”.
Even today, every program written in an existing implemented programming language is Turing computable and, conversely, all general-purpose programming languages are Turing complete, i.e., they contain all the control constructs necessary to simulate a universal Turing machine. But what is so special about the Turing proposal?
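The character of Turing's basic instructions is easy to display in code. The sketch below (an invented, minimal simulator; the table format and blank symbol are choices of this illustration) runs a machine whose every step consults a finite table and performs one atomic operation: write a symbol, move the head, change state. The example machine increments a unary numeral, a block of 1s.

```python
# A minimal Turing machine simulator. Each step is an atomic operation
# requiring no judgment about "meaning": look up (state, symbol), then
# write, move, and change state.

def run(table, tape, state="start", pos=0, limit=1000):
    tape = dict(enumerate(tape))
    for _ in range(limit):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")  # "_" is the blank symbol
        write, move, state = table[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Table for unary increment: scan right over the 1s, write a 1 on the
# first blank, halt.
increment = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
assert run(increment, "111") == "1111"
```

Nothing in the simulator interprets the symbols; the machine proceeds "without requiring that it comprehend the result of any part of the operations it performs", in Knuth's phrase quoted below.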
Turing placed great emphasis on the nature of the basic instructions of a Turing machine. The passages that follow suggest that the instructions can be performed without thought: atomic operations can be implemented in machines that do not and cannot know the meaning of their instructions. This is emphasized by many commentators.
Instructions given the computer must be complete and explicit, and they must enable it to proceed step by step without requiring that it comprehend the result of any part of the operations it performs. Such a program of instructions is an algorithm. It can demand any finite number of mechanical manipulations of numbers, but it cannot ask for judgments about their meaning … An algorithm is a set of rules or directions for getting a specific output from a specific input. The distinguishing feature of an algorithm is that all vagueness must be eliminated; the rules must describe operations that are so simple and well defined they can be executed by a machine. (Knuth 1977: 63)
The instructions must be complete and explicit, and the human computer is not required to comprehend any part of the basic operations: they must proceed without any appeal to the semantics of the operations of the machine.
The intuitive notion of an algorithm is rather vague. For example, what is a rule? We would like the rules to be mechanically interpretable, i.e., such that a machine can understand the rule (instruction) and carry it out. In other words, we need to specify a language for describing algorithms which is general enough to describe all mechanical procedures and yet simple enough to be interpreted by a machine…. What Turing did was to analyse the human calculating act and arrive at a number of simple operations which are obviously mechanical in nature and yet can be shown to be capable of being combined to perform arbitrarily complex mechanical operations. (Wang 1974: 91)
But is there such a thing as an instruction or operation in a programming language that has no meaning? Is it possible that we can follow an instruction without making judgments about its meaning? Are these instructions really meaningless?
But given that this is a genuine rule it is anything but meaningless. Granted, it is so simple that one can envisage an agent doing it mechanically after a short while, and that is the problem that will be looked at last. For the moment we need only see that, however simple this rule might be, it does indeed tell one what to do. (Shanker 1987: 637)
The point here seems to be that rule following involves judgments about meaning. While eventually one may be able to perform the operations without thought, initially the performer must have taken in their meaning: they could not have been followed without thought. This is at the heart of the Wittgenstein-Turing debate on the nature of computation. In Turing's account the normative aspect of meaning (Glüer and Wikforss 2009) is replaced by what appears to be a causal interpretation of the basic operations. This is made explicit in the following analysis given by Shanker.
The only way to remove its normative content is to treat it as a description of the causal events that occur in the mind/program of the computer which trigger off certain reactions. Thus Turing introduced the premise that ‘the behaviour of the computer at any moment is determined by the symbols which he is observing, and his "state of mind" at that moment’ ([36]: 136). On this picture the computer's ‘state of mind’ is the causal intermediary between ‘observed symbols’ and subsequent action. (Shanker 1987: 637)
If this is correct, the simple instructions get their content from the impact they have on the physical device. On this view they must act as theories of the underlying physical device. But then it would appear that our instructions are not rules with normative force but descriptions of the physical device. This seems to be in line with Colburn (2000). However, we have crossed the normative/descriptive divide: our basic operations are no longer rules in any normative sense.
Turing Machines are Humans who Calculate (Wittgenstein 1975 [1939]: 1096)
Is calculation/computation something that humans do or something that machines do—or both? Is it a rule-governed mathematical activity or an empirical one? Is it the rules of addition that provide the correctness criteria for any physical activity that we might wish to call addition, or is it an empirical activity? If the latter, the evaluation of arithmetic expressions is subject to physical interference. Physical multiplication may not mean multiplication but rather what the physical machine actually does when it multiplies. Consequently, 13×74 might be 16.
This is a position close to the view that the semantics of a programming language is fixed by its physical implementation (§4.3). It treats 13×74 as a description of the causal events that occur in the mind/program of the computer which trigger off certain physical reactions. Thus Turing introduced the premise that
the behaviour of the computer at any moment is determined by the symbols which he is observing and his “state of mind” at that moment. (Turing 1936: 136)
On this picture the computer's state of mind is the causal intermediary between observed symbols and subsequent action. Wittgenstein is against this view.
Does a calculating machine calculate? Imagine that a calculating machine had come into existence by accident; now someone accidentally presses its knobs (or an animal walks over it) and it calculates the product 25×20… Imagine that calculating machines occurred in nature, but that people could not pierce their cases. And now suppose that these people use these appliances, say as we use calculation, though of that they know nothing. Thus, e.g., they make predictions with the aid of calculating machines, but for them manipulating these queer objects is experimenting. These people lack concepts which we have; but what takes their place? Think of the mechanism whose movement we saw as a geometrical (kinematic) proof: clearly it would not normally be said of someone turning the wheel that he was proving something. Isn't it the same with someone who makes and changes arrangements of signs as [an experiment]; even when what he produces could be seen as a proof? (Wittgenstein 1956: V, §2)
There are no causal connections in a calculation, only the connections of the pattern. And it makes no difference to this that we work over the proof in order to accept it. That we are therefore tempted to say that it arose as the result of a psychological experiment. For the psychical course of events is not psychologically investigated when we calculate… …You aren't calculating if, when you get now this, now that result, and cannot find a mistake, you accept this and say: this simply shows that certain circumstances which are still unknown have an influence on the result. This might be expressed: if calculation reveals a causal connection to you, then you are not calculating… What I am saying comes to this, that mathematics is normative. (Wittgenstein 1956 [1978]: VII, §18)
On this view calculation and computing are mathematical activities. Wittgenstein also claims that what we do when we run programs on physical machines is an experimental activity. But is this right? Are we experimenting when we run programs on physical machines, or are we using them to perform physical calculations?
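The contrast at issue can be made vivid with a small sketch. Below, `multiply` answers to the mathematical rules, while `physical_multiply` is a toy model of a device whose "multiplication" is whatever the hardware actually does, interference included. The names and the fault model are ours, purely illustrative:

```python
import random

def multiply(a, b):
    """Mathematical multiplication: the rules fix what counts as correct."""
    return a * b

def physical_multiply(a, b, fault_rate=0.01):
    """Toy model of a physical multiplier.  If its meaning is fixed by
    what the device actually does, then a stray fault (modelled here as
    a random bit flip) is not an error but part of 'multiplication'."""
    result = a * b
    if random.random() < fault_rate:
        result ^= 1 << random.randrange(16)  # e.g. cosmic-ray interference
    return result

print(multiply(13, 74))  # 962, whatever any particular machine does
```

On the normative view, a deviant output of `physical_multiply` is a malfunction judged against `multiply`; on the empirical view, there is nothing for it to deviate from.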
There is also an apparent conflict between the insistence that atomic instructions are meaningless and the principle of compositionality for semantic theories (see the entry on compositionality). In its modern form, the supporting argument for compositionality can be found in Frege.
… the possibility of our understanding sentences which we have never heard before rests evidently on this, that we can construct the sense of a sentence out of parts that correspond to words. (Frege 1914)
Competent speakers can understand a complex expression that they have never encountered before. And, so the argument goes, the only plausible way they can do this is by employing their knowledge of the structure of the expression together with knowledge of the individual meanings of its simple constituents. This is an inference to the best explanation. In the case of programming languages, without some account of the meanings of the atomic constructs of the language, and of how the meanings of the constructors are fixed, it is hard to see how one could understand and produce the programs of a programming language. Unfortunately, under the assumption that all basic instructions are meaningless, it is hard to maintain any compositional theory of meaning: if all basic instructions are meaningless, the compositionality thesis would lead us to conclude that all programs in the language are meaningless.
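Compositionality is easiest to see in a denotational sketch. In the toy fragment below (the language and helper names are invented for illustration), each atomic command denotes a state transformer, and the meaning of a sequenced program is built solely from the meanings of its parts:

```python
def assign(var, expr):
    """[[var := expr]] : store -> store.
    The meaning of the atomic command is a function on stores."""
    return lambda store: {**store, var: expr(store)}

def seq(m1, m2):
    """[[c1 ; c2]] = [[c2]] composed with [[c1]].
    The constructor's meaning is fixed by how it combines part-meanings."""
    return lambda store: m2(m1(store))

# [[x := 1 ; y := x + 2]]
program = seq(assign("x", lambda s: 1),
              assign("y", lambda s: s["x"] + 2))
print(program({}))  # -> {'x': 1, 'y': 3}
```

If `assign` denoted nothing, `seq` would have nothing to compose, which is exactly the difficulty the paragraph above raises.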
Many take it for granted that the Church-Turing thesis characterizes and prescribes actual physical computation. Some claim (see the entry on the Church-Turing thesis by Copeland (2008)) that the thesis as proposed by Church and Turing concerns only the interpretation of mathematical computations. Andrew Hodges (2011) disagrees: he argues that Church and Turing did not distinguish between the two interpretations. This is the historical dispute. But the important one concerns the capabilities of actual machines: does the physical thesis (M) hold good? In the following, taken from Copeland (2008), the phrase “calculated by a machine” refers to a machine that conforms to the laws of physics.
- Thesis M:
- Whatever can be calculated by a machine (working on finite data in accordance with a finite program of instructions) is Turing-machine-computable.
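To make "Turing-machine-computable" concrete, here is a minimal single-tape machine simulator. It is an illustrative sketch only; the encoding choices (sparse tape, blank written as `_`, halting when no rule applies) are ours:

```python
def run_turing_machine(rules, tape, state="q0", max_steps=1000):
    """Tiny one-tape Turing machine.  `rules` maps (state, symbol) to
    (new_state, write_symbol, move), with move in {-1, +1}.  The machine
    halts when no rule applies to the current (state, symbol) pair."""
    cells = dict(enumerate(tape))     # sparse tape; unwritten cells are blank
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, "_")
        if (state, symbol) not in rules:
            break                     # halt: no applicable instruction
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells))

# A machine that flips each bit and halts at the first blank:
flip = {
    ("q0", "0"): ("q0", "1", +1),
    ("q0", "1"): ("q0", "0", +1),
}
print(run_turing_machine(flip, "0110"))  # -> 1001
```

Thesis M asserts that anything a law-abiding physical machine can calculate on finite data could in principle be carried out by some table of rules of this kind.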
Gandy proposed four principles that are intended to characterize computation by a physical machine, and showed that such machines exactly agree with Turing's characterization (Gandy's Theorem). Copeland and Shagrir (2007, 2011) suggest that Gandy's characterization of a discrete deterministic mechanical device is too narrow, and that consequently there are examples of possible physical machines whose capabilities go beyond the class of Turing-computable functions. Many of these require infinite speedup, whereby an infinite number of computations can be physically carried out in a finite time. Quantum computing has been cited as a possible example of such machines, but this has been disputed (Hagar 2007). In a footnote on page 71, Dummett (2006) expresses the view that the very notion of performing an infinite number of physical operations in a finite time is simply confused.
One question that seems quite puzzling is this: imagine we have constructed a physical machine that is claimed to decide the halting problem. How would one verify its correctness? A plausible approach is to construct a semi-decision procedure for the halting problem. We could then check that the halting machine agrees with it on the programs that terminate. But on those that don't, how would we determine whether it produced the right results? For further discussion see Copeland and Shagrir (2007).
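The asymmetry can be sketched as follows: a semi-decision procedure confirms halting simply by running the program, but no finite budget of steps can turn "still running" into "never halts". The generator-based encoding of programs here is just one illustrative choice:

```python
def halts(program, budget=10_000):
    """Semi-decision sketch: run `program` (a generator that yields once
    per step and returns on termination) for up to `budget` steps.
    Returns True if it halted, None if the budget ran out.  No budget,
    however large, converts None into a definite 'never halts'."""
    for _ in range(budget):
        try:
            next(program)
        except StopIteration:
            return True
    return None  # still running: halting remains unconfirmed

def countdown(n):        # a program that halts after n steps
    while n > 0:
        n -= 1
        yield

def spin():              # a program that never halts
    while True:
        yield

print(halts(countdown(100)))  # True: halting can be positively confirmed
print(halts(spin()))          # None: non-halting is never confirmed
```

This is exactly why the terminating cases of a purported halting machine can be checked against such a procedure, while its verdicts on the non-terminating cases cannot.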
Many of the topics we have not covered have existing or forthcoming entries. Computational complexity theory is covered in computability and complexity, and will be the subject of a forthcoming article devoted to it by Walter Dean. Information in its various guises is also covered in two entries: semantic conceptions of information and information. Computer ethics is covered in computer and information ethics. The logical aspects of artificial intelligence are covered in logic and artificial intelligence. There is also a general entry on emergent properties.
artificial intelligence: logic and | computability and complexity | computational complexity theory | computer and information ethics | function: recursive | information | information: semantic conceptions of | mathematics, philosophy of | technology, philosophy of
This is a completely rewritten and updated version of the entry on the philosophy of computer science, written by one of the authors of the previous version. See the reference to Turner and Eden 2011 in the Other Internet Resources for the link to the previous, coauthored version.
The Stanford Encyclopedia of Philosophy is copyright © 2015 by The Metaphysics Research Lab, Center for the Study of Language and Information (CSLI), Stanford University.
Library of Congress Catalog Data: ISSN 1095-5054