Stanford Encyclopedia of Philosophy

Information Technology and Moral Values

First published Tue Jun 12, 2012; substantive revision Fri Nov 9, 2018

Every action we take leaves a trail of information that could, in principle, be recorded and stored for future use. For instance, one might use the older information technologies of pen and paper and keep a detailed diary listing all the things one did and thought during the day. It might be a daunting task to record all this information this way, but there is a growing list of technologies and software applications that can help us collect all manner of data, which in principle, and in practice, can be aggregated together for use in building a data profile about you, a digital diary with millions of entries. Examples might include: a detailed listing of all of your economic transactions; a GPS-generated plot of where you traveled; a list of all the web addresses you visited and the details of each search you initiated online; a listing of all your vital signs such as blood pressure and heart rate; all of your dietary intake for the day; and any other kind of data that can be measured. As you go through this thought experiment you begin to see the complex trail of data that you generate each and every day and how that same data might be efficiently collected and stored through the use of information technologies. It is here we can begin to see how information technology can impact moral values. As this data gathering becomes more automated and ever-present, we must ask who is in control of collecting this data and what is done with it once it has been collected and stored. Which bits of information should be made public, which held private, and which should be allowed to become the property of third parties like corporations? Questions of the production, access, and control of information will be at the heart of moral challenges surrounding the use of information technology.

One might argue that the situation just described is no different from the moral issues revolving around the production, access, and control of any basic necessity of life. If one party has the privilege of the exclusive production, access, and/or control of some natural resource, then that by necessity prohibits others from using this resource without the consent of the exclusive owner. This is not necessarily so with digital information. Digital information is nonexclusory, meaning we can all, at least theoretically, possess the same digital information without excluding its use by others. This is because copying digital information from one source to another does not require eliminating the previous copy. Unlike a physical object, a digital object can, in theory, be possessed by everyone at once, since it can be copied indefinitely with no loss of fidelity. Since making these copies is often so cheap that it is almost without cost, there is no technical obstacle to the spread of all information as long as there are people willing to copy it and distribute it. Only appeals to morality or economic justice might prevent the distribution of certain forms of information. For example, digital entertainment media, such as songs or video, have been a recurring battleground as users and producers of digital media fight to either curtail or extend the free distribution of this material. Therefore, understanding the role of moral values in information technology is indispensable to the design and use of these technologies (Johnson 1985; Moor 1985; Nissenbaum 1998; Spinello 2001). It should be noted that this entry will not directly address the phenomenological approach to the ethics of information technology, since there is a detailed entry on this subject available (see the entry on phenomenological approaches to ethics and information technology).


1. Introduction

Information technology is ubiquitous in the lives of people across the globe. These technologies take many forms such as personal computers, smart phones, internet technologies, as well as AI and robotics. In fact, the list is growing constantly and new forms of these technologies are working their way into every aspect of daily life. They all have some form of computation at their core and human users interface with them mostly through applications and other software operating systems. In some cases, such as massive multiplayer online games (see section 3.1.1 below), these technologies are even opening up new ways for humans to interact with each other. Information technologies are used to record, communicate, synthesize or organize information through the use of computer technologies. Information itself can be understood as any useful data, instructions, or meaningful message content. The word literally means to “give form to” or to shape one’s thoughts. A basic type of information technology might be the proverbial string tied around one’s finger that is used to remind, or inform, someone that they have some specific task to accomplish that day. Here the string stands in for a more complex proposition such as “buy groceries before you come home.” The string itself is not the information; it merely symbolizes the information, and therefore this symbol must be correctly interpreted for it to be useful. This raises the question: what is information itself?

Unfortunately there is not a completely satisfying and philosophically rigorous definition available, though there are at least two very good starting points. For those troubled by the ontological questions regarding information, we might want to simply focus on the symbols and define information as any meaningfully ordered set of symbols. Mathematicians and engineers prefer to focus on this aspect of information, which is called “syntax,” and leave the meaningfulness of information, or its “semantics,” for others to figure out. Claude E. Shannon, working at Bell Labs in the 1940s, produced a landmark mathematical theory of communication (1948). In this work he utilized his experiences in cryptography and telephone technologies to work out a mathematical formulation describing how syntactical information can be turned into a signal that is transmitted in such a way as to mitigate noise or other extraneous signals, which can then be decoded by the desired receiver of the message (Shannon 1948; Shannon and Weaver 1949). The concepts described by Shannon (along with additional important innovations made by others too numerous to list) explain the way that information technology works, but we still have deeper questions to resolve if we want to thoroughly trace the impact of information technologies on moral values. Some philosophers have noted that information technologies highlight the distinction between syntax and semantics, and have been vocal critics of the inability of these technologies to bridge the gap between the two concepts. That is, while information technologies might be adept at manipulating syntax, they would be incapable of ever understanding the semantics, or meanings, of the information upon which they work.
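Shannon’s measure of information is purely syntactic in exactly this sense: the entropy of a source depends only on the statistics of its symbols, never on what those symbols mean. A minimal sketch (the formula H = −Σ p·log₂ p is Shannon’s; the code itself is merely an illustration):

```python
from collections import Counter
from math import log2

def shannon_entropy(message: str) -> float:
    """Average information per symbol, in bits: H = -sum(p * log2(p))."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# Two strings with the same symbol statistics carry the same syntactic
# information, whatever either one "means" to a human reader.
print(shannon_entropy("aabb"))  # 1.0 bit per symbol
print(shannon_entropy("abab"))  # also 1.0: meaning plays no role
```

The function would assign the same value to a meaningful sentence and to a scrambled version of it with the same letter frequencies, which is precisely the syntax/semantics gap the philosophers above point to.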

One famous example can be found in the “Chinese Room Argument” (Searle 1980), in which the philosopher John Searle argued that even if one were to build a machine that could take stories written in Chinese as input and then output coherent answers to questions about those stories, it would not prove that the machine itself actually understood what it was doing. The argument asks us to imagine replacing the workings of the machine with a person who is not a native Chinese speaker and who painstakingly follows a set of rules to transform the Chinese logograms given as input into other symbols given as output. The claim is that this person would understand neither the input nor what the system is saying as its output; it is all meaningless symbol manipulation to them. The conclusion is that this admittedly strange system could skillfully use the syntax of the language and story while the person inside would have no ability to understand the semantics, or meaning, of the stories (Searle 1980). Replace the person with electronics and it follows that the electronics also have no understanding of the symbols they are processing. This argument, while provocative, is not universally accepted and has led to decades’ worth of argument and rebuttal (see the entry on the Chinese room argument).

Information technology has also had a lasting impact on the philosophical study of logic and information. In this field, logic is used as a way to understand information, and information science is used as a way to build the foundations of logic itself (see the entry on logic and information).

The issues just discussed are fascinating, but they are separate arguments that do not necessarily have to be resolved before we can enter a discussion on information technology and moral values. Even purely syntactical machines can still impact many important ethical concerns even if they are completely oblivious to the semantic meaning of the information that they compute.

The second starting point is to explore the more metaphysical role that information might play in philosophy. If we were to begin with the claim that information either constitutes or is closely correlated with what constitutes our existence and the existence of everything around us, then this claim means that information plays an important ontological role in the manner in which the universe operates. Adopting this standpoint places information as a core concern for philosophy and gives rise to the fields of philosophy of information and information ethics. In this entry, we will not limit our exploration to just the theory of information but instead look more closely at the actual moral and ethical impacts that information technologies are already having on our societies. Philosophy of information will not be addressed in detail here, but the interested reader can begin with Floridi (2010b, 2011b) for an introduction. Some of the most important aspects of information ethics will be outlined in more detail below.

2. The Moral Challenges of Information Technology

The move from one set of dominant information technologies to another is always morally contentious. Socrates lived during the long transition from a largely oral tradition to a newer information technology consisting of writing down words and information and collecting those writings into scrolls and books. Famously, Socrates was somewhat antagonistic to writing and scholars claim that he never wrote anything down himself. Ironically, we only know about Socrates’ arguments against writing because his student Plato ignored his teacher and wrote them down in a dialogue called “Phaedrus” (Plato). Towards the end of this dialogue Socrates discusses with his friend Phaedrus the “…conditions which make it (writing) proper or improper” (section 274b–279c). Socrates tells a fable of an Egyptian god named Theuth who gives the gift of writing to a king named Thamus. Thamus is not pleased with the gift and replies,

If men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks. (Phaedrus, section 275a)

Socrates, who was adept at quoting lines from poems and epics and placing them into his conversations, fears that those who rely on writing will never be able to truly understand and live by these words. For Socrates there is something immoral or false about writing. Books can provide information but they cannot, by themselves, give you the wisdom you need to use or deeply understand that information. Conversely, in an oral tradition you do not simply consult a library, you are the library, a living manifestation of the information you know by heart. For Socrates, reading a book is nowhere near as insightful as talking with its author. Written words,

…seem to talk to you as though they were intelligent, but if you ask them anything about what they say, from a desire to be instructed, they go on telling you the same thing forever. (Phaedrus, section 275d)

His criticism of writing may at first glance seem humorous, but the temptation to use recall and call it memory is becoming more and more prevalent in modern information technologies. Why learn anything when information is just an Internet search away? In order to avoid Socrates’ worry, information technologies should do more than just provide access to information; they should also help foster wisdom and understanding.

2.1 The Fundamental Character of Information Technologies

Early in the information technology revolution Richard Mason suggested that the coming changes in information technologies, such as their roles in education and economic impacts, would necessitate rethinking the social contract (Mason 1986). He worried that they would challenge privacy, accuracy, property and accessibility (PAPA), and that to protect our society we “…must formulate a new social contract, one that insures everyone the right to fulfill his or her own human potential” (Mason 1986, p. 11). What he could not have known then was how often we would have to update the social contract as these technologies rapidly change. Information technologies change quickly and move in and out of fashion at a bewildering pace. This makes it difficult to try to list them all and catalog the moral impacts of each. The very fact that this change is so rapid and momentous has caused some to argue that we need to deeply question the ethics of the process of developing emerging technologies (Moor 2008). It has also been argued that the ever-morphing nature of information technology is changing our ability to even fully understand moral values as they change. Lorenzo Magnani claims that acquiring knowledge of how that change confounds our ability to reason morally “…has become a duty in our technological world” (Magnani 2007, 93). The legal theorist Larry Lessig warns that the pace of change in information technology is so rapid that it leaves the slow and deliberative process of law and political policy behind, and in effect these technologies become lawless, or extralegal. This is due to the fact that by the time a law is written to curtail, for instance, some form of copyright infringement facilitated by a particular file sharing technology, that technology has become out of date and users are on to something else that facilitates even more copyright infringement (Lessig 1999). But even given this rapid pace of change, it remains the case that information technologies or applications can all be categorized into at least three different types, each of which we will look at below.

All information technologies record (store), transmit (communicate), organize and/or synthesize information. For example, a book is a record of information, a telephone is used to communicate information, and the Dewey decimal system organizes information. Many information technologies can accomplish more than one of the above functions and, most notably, the computer can accomplish all of them since it can be described as a universal machine (see the entry on computability and complexity), so it can be programmed to emulate any form of information technology. In section 2 we will look at some specific example technologies and applications from each of the three types of information technology listed above and track the moral challenges that arise out of the use and design of these particular technologies. In addition to the above we will need to address the growing use of information environments such as massive multiplayer games, which are environments completely composed of information where people can develop alternate lives filled with various forms of social activities (see section 3.3). Finally we will look at not only how information technology impacts our moral intuitions but also how it might be changing the very nature of moral reasoning. In section 4, we will look at information as a technology of morality and how we might program applications and robots to interact with us in a more morally acceptable manner.

2.1.1 Moral Values in Information Recording

The control of information is power, and in an information economy such as we find ourselves in today, it may be the ultimate form of political power. We live in a world rich in data and the technology to produce, record, and store vast amounts of this data has developed rapidly. The primary moral concern here is that when we collect, store, and/or access information, it is vital that this be done in a just manner that can reasonably be seen as fair and in the best interests of all parties involved. As was mentioned above, each of us produces a vast amount of information every day that could be recorded and stored as useful data to be accessed later when needed. But moral conundrums arise when that collection, storage and use of our information is done by third parties without our knowledge or done with only our tacit consent. The social institutions that have traditionally exercised this power are religious organizations, universities, libraries, healthcare officials, government agencies, banks and corporations. These entities have access to stored information that gives them a certain amount of power over their customers and constituencies. Today each citizen has access to more and more of that stored information without the necessity of utilizing the traditional mediators of that information, and therefore a greater individual share of social power (see Lessig 1999).

One of the great values of modern information technology is that it makes the recording of information easy and almost automatic. Today, a growing number of people enter biometric data such as blood pressure, calorie intake, exercise patterns, etc. into applications designed to help them achieve a healthier lifestyle. This type of data collection could become almost fully automated in the near future through the use of smart watches, technologies such as the Fitbit, or a user’s smartphone, for instance via applications that use GPS tracking to record the length and duration of a walk or run. How long until a smartphone collects a running data stream of your blood pressure throughout the day, perhaps tagged with geolocation markers of particularly high or low readings? In one sense this could be immensely powerful data that could lead to much healthier lifestyle choices. But it could also be a serious breach of privacy if the information got into the wrong hands, which could be easily accomplished, since third parties have access to information collected on smartphones and online applications. In the next section (2.1.2) we will look at some theories on how best to ethically communicate this recorded information to preserve privacy. But here we must address a more subtle privacy breach: the collection and recording of data about users without their knowledge or consent. When searching on the Internet, browser software records all manner of data about our visits to various websites, which can, for example, make webpages load faster the next time you visit them. Even the websites themselves use various means to record information when your computer has accessed them, and they may leave bits of information on your computer which the site can use the next time you visit. Some websites are able to detect which other sites you have visited or which pages on the website you spend the most time on.
If someone were following you around a library noting down this kind of information, you might find it uncomfortable or hostile, but online this kind of behavior takes place behind the scenes and is barely noticed by the casual user.
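The “bits of information” a site leaves on your computer are typically HTTP cookies. A minimal sketch of how a tracker embedded on many sites could use one to link a single browser’s visits together (the site names and identifier scheme here are invented for illustration; real trackers are far more elaborate):

```python
import uuid

class TrackingServer:
    """Simulates a third-party tracker whose content is embedded on many sites."""
    def __init__(self):
        self.visit_log = {}  # visitor id -> list of sites seen

    def serve(self, cookies: dict, site: str) -> dict:
        # If the browser sends no tracking cookie, mint a fresh identifier;
        # otherwise reuse it, linking this visit to all previous ones.
        visitor = cookies.get("tracker_id") or str(uuid.uuid4())
        self.visit_log.setdefault(visitor, []).append(site)
        return {"tracker_id": visitor}  # the Set-Cookie returned to the browser

tracker = TrackingServer()
browser_cookies = {}
for site in ["news.example", "shop.example", "health.example"]:
    browser_cookies = tracker.serve(browser_cookies, site)

# One identifier now ties together browsing across unrelated sites.
profile = next(iter(tracker.visit_log.values()))
print(profile)  # ['news.example', 'shop.example', 'health.example']
```

The user never takes any visible action in this loop; the cross-site profile accumulates silently, which is exactly the library-stalker scenario transposed online.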

According to some professionals, information technology has all but eliminated the private sphere, and it has been this way for decades. Scott McNealy of Sun Microsystems famously announced in 1999: “You have zero privacy anyway. Get over it” (Sprenger, 1999). Helen Nissenbaum observes that,

[w]here previously, physical barriers and inconvenience might have discouraged all but the most tenacious from ferreting out information, technology makes this available at the click of a button or for a few dollars (Nissenbaum 1997)

and since the time when she wrote this the gathering of personal data has become more automated and cheaper. Clearly, earlier theories of privacy that assumed the inviolability of physical walls no longer apply, but as Nissenbaum argues, personal autonomy and intimacy require us to protect privacy nonetheless (Nissenbaum 1997).

A final concern in this section is that information technologies are now storing user data in “the cloud,” meaning that the data is stored on a device remotely located from the user and not owned or operated by that user, but the data is then available from anywhere the user happens to be, on any device they happen to be using. This ease of access also makes the relationship one has to one’s own data more tenuous because of the uncertainty about the physical location of that data. Since personal data is crucially important to protect, the third parties that offer “cloud” services need to understand the responsibility of the trust the user is placing in them. If you upload all the photographs of your life to a service like Flickr and it were to somehow lose or delete them, this would be a tragic mistake that might be impossible to repair.

2.1.2 Moral Values in Communicating and Accessing Information

Information technology has forced us to rethink earlier notions of privacy that were based on print technologies, such as letters, notes, books, pamphlets, newspapers, etc. The moral values that coalesced around these earlier technologies have been sorely stretched by the ease with which information can be shared and altered using digital information technologies, and this has required the rapid development of new moral theories that recognize both the benefits and risks of communicating all manner of information using modern information technologies. The primary moral values that seem to be under pressure from these changes are privacy, confidentiality, ownership, trust, and the veracity of the information being communicated in these new ways.

Who has the final say whether or not some information about a user is communicated or not? Who is allowed to sell your medical records, your financial records, your email, your browser history, etc.? If you do not have control over this process, then how can you enforce your own moral right to privacy? For instance, Alan Westin argued in the very early decades of the advance of digital information technologies that control of access to one’s personal information was the key to maintaining privacy (Westin, 1967). It follows that if we care about privacy, then we should give all the control of access to personal information to the individual. Most corporate entities resist this notion for the simple reason that information about users has become a primary commodity in the digital world, boosting the vast fortunes of corporations like Google or Facebook. Indeed, there is a great deal of utility each of us gains from the services provided by internet search companies like Google and social networks such as Facebook. It might be argued that it is actually a fair exchange we receive, since they provide search results and other applications for free and they offset the cost of creating those valuable services by collecting data from individual user behavior that can be monetized in various lucrative ways. A major component of the profit model for these companies is based on directed advertising, where the information collected on the user is used to help identify advertising that will be most effective on a particular user based on his or her search history and other online behaviors. Simply by using the free applications offered, each user tacitly agrees to give up some amount of privacy that varies with the applications they are using. Even if we were to agree that there is some utility to the services users receive in this exchange, there are still many potential moral problems with this arrangement.
If we follow the argument raised by Westin earlier that privacy is equivalent to information control (ibid.), then we do seem to be ceding our privacy away little by little, given that we have almost no control of, or even much understanding of, the vast amounts of digital information that are collected about us.

There is a counterargument to this. Herman Tavani and James Moor (2004) argue that in some cases giving the user more control of their information may actually result in greater loss of privacy. Their primary argument is that no one can actually control all of the information about oneself that is produced every day by our activities. If we focus only on the fraction of it that we can control, we lose sight of the vast mountains of data we cannot (Tavani and Moor, 2004). Tavani and Moor argue that privacy must be recognized by the third parties that do control your information, and only if those parties have a commitment to protecting user privacy will we actually acquire any privacy worth having. Towards this end, they suggest that we think in terms of restricted access to information rather than strict personal control of information (ibid.).

Information security is another important moral value that impacts the communication and access of user information. If we grant the control of our information to third parties in exchange for the services they provide, then these entities must also be responsible for restricting the access to that information by others who might use it to harm us (see Epstein 2007; Magnani 2007; Tavani 2007). With enough information, a person’s entire identity can be stolen and used to facilitate fraud and larceny. This type of crime has grown rapidly since the advent of digital information technologies. The victims of these crimes can have their lives ruined as they try to rebuild such things as their credit rating and bank accounts. This has led to the design of computer systems that are more difficult to access and the growth of a new industry dedicated to securing computer systems. Even with these efforts the economic and social impact of cybercrime is growing at a staggering rate. In February of 2018 the cyber-security company McAfee released a report estimating that the worldwide cost of cybercrime had risen from $445 billion in 2014 to $608 billion, or 0.8 percent of global GDP, in 2018, and that is not counting the hidden costs of increased friction and productivity loss in time spent trying to fight cybercrime (McAfee 2018).

The difficulty in obtaining complete digital security rests on the fact that the moral value of security can be in conflict with the moral values of sharing and openness, and it is these latter values that guided many of the early builders of information technology. Steven Levy (1984) describes in his book, “Hackers: Heroes of the Computer Revolution,” a kind of “Hacker ethic” that includes the idea that computers should be freely accessible and decentralized in order to facilitate “world improvement” and further social justice (Levy 1984; see also Markoff 2005). So it seems that information technology has a strong dissonance created in the competing values of security and openness that is worked right into the design of these technologies, and this is all based on the competing moral values held by the various people who designed the technologies themselves.

This conflict in values has been debated by philosophers. While many of the hackers interviewed by Levy argue that hacking is not as dangerous as it seems and that it is mostly about gaining access to hidden knowledge of how information technology systems work, Eugene Spafford counters that no computer break-in is entirely harmless and that the harm precludes the possibility of ethical hacking except in the most extreme cases (Spafford 2007). Kenneth Himma largely agrees that the activity of computer hacking is unethical, but that politically motivated hacking or “Hacktivism” may have some moral justification, though he is hesitant to give his complete endorsement of the practice due to the largely anonymous nature of the speech entailed by hacktivist protests (Himma 2007b). Mark Manion and Abby Goodrum agree that hacktivism could be a special case of ethical hacking but warn that it should proceed in accordance with the moral norms set by the acts of civil disobedience that marked the twentieth century or risk being classified as online terrorism (Manion and Goodrum 2007).

A very similar value split plays out in other areas as well, particularly in the field of intellectual property rights (see the entry on intellectual property) and pornography and censorship (see the entry on pornography and censorship). What information technology adds to these long-standing moral debates is the nearly effortless access to information that others might want to control, such as intellectual property, dangerous information and pornography (Floridi 1999), as well as providing technological anonymity for both the user and those providing access to the information in question (Nissenbaum 1999; Sullins 2010). For example, even though cases of bullying and stalking occur regularly, the anonymous and remote actions of cyber-bullying and cyberstalking make these behaviors much easier and the perpetrator less likely to be caught. Given that information technologies can make these unethical behaviors more likely, it can be argued that the design of cyberspace itself tacitly promotes unethical behavior (Adams 2002; Grodzinsky and Tavani 2002). Since the very design capabilities of information technology influence the lives of their users, the moral commitments of the designers of these technologies may dictate the course society will take and our commitments to certain moral values will then be determined by technologists (Brey 2010; Bynum 2000; Ess 2009; Johnson 1985; Magnani 2007; Moor 1985; Spinello 2001; Sullins 2010).

Assuming we are justified in granting access to some store of information that we may be in control of, there is a duty to ensure that that information is truthful, accurate, and useful. A simple experiment will show that information technologies might have some deep problems in this regard. Load a number of different search engines and then type the same search terms into each of them; each will present different results, and some of these searches will vary widely from one another. This shows that each of these services uses a different proprietary algorithm for presenting the user with results from their search. It follows then that not all searches are equal, and the truthfulness, accuracy, and usefulness of the results will depend greatly on which search provider you are using and how much user information is shared with this provider. All searches are filtered by various algorithms in order to ensure that the information the search provider believes is most important to the user is listed first. Since these algorithms are not made public and are closely held trade secrets, users are placing a great deal of trust in this filtering process. The hope is that these filtering decisions are morally justifiable, but it is difficult to know. A simple example is found in “clickjacking”: if we are told a link will take us to one location on the web yet when we click it we are taken to some other place, the user may feel that this is a breach of trust. Malicious software can clickjack a browser by taking the user to some site other than what is expected; it will usually be rife with other links that pay the clickjacker for bringing traffic to them (Hansen and Grossman, 2008). Again the anonymity and ease of use that information technology provides can facilitate deceitful practices such as clickjacking. Pettit (2009) suggests that this should cause us to reevaluate the role that moral values such as trust and reliance play in a world of information technology.
Anonymity and the ability to hide the authors of news reports online have contributed to the rise of “fake news,” or propaganda of various sorts posing as legitimate news. This is a significant problem and will be discussed in section 2.2.3 below.
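The point about proprietary filtering can be made concrete: two engines indexing the very same documents can return them in different orders simply because their hidden algorithms weight different signals. A toy illustration (the documents, signals, and weights are invented for the example and stand in for real, secret ranking functions):

```python
# The same document set, scored by two different hidden ranking policies.
docs = {
    "page_a": {"matches": 5, "sponsored": 0.0},
    "page_b": {"matches": 3, "sponsored": 0.9},
    "page_c": {"matches": 4, "sponsored": 0.2},
}

def rank_by_relevance(docs):
    # Policy 1: order purely by how well the page matches the query.
    return sorted(docs, key=lambda d: docs[d]["matches"], reverse=True)

def rank_by_revenue(docs):
    # Policy 2: a hypothetical blend in which sponsorship outweighs matches.
    return sorted(docs, key=lambda d: docs[d]["matches"] + 10 * docs[d]["sponsored"],
                  reverse=True)

print(rank_by_relevance(docs))  # ['page_a', 'page_c', 'page_b']
print(rank_by_revenue(docs))    # ['page_b', 'page_c', 'page_a']
```

Both orderings are produced from identical data; which one a user sees, and whether the weighting is morally justifiable, depends entirely on a policy the user cannot inspect.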

Lastly in this section we must address the impact that access to information has on social justice. Information technology was largely developed in the Western industrial societies during the twentieth century. But even today the benefits of this technology have not spread evenly around the world and to all socioeconomic demographics. Certain societies and social classes have little to no access to the information easily available to those in wealthier and more developed nations, and some of those who have some access have that access heavily censored by their own governments. This situation has come to be called the “digital divide,” and despite efforts to address this gap it may be growing wider. It is worth noting that as the cost of smartphones decreases, these technologies are giving some access to the global internet to communities that have been shut out before (Poushter 2016). While much of this gap is driven by economics (see Warschauer 2003), Charles Ess notes that there is also a problem with the forces of a new kind of cyber-enabled colonialism and ethnocentrism that can limit the desire of those outside the industrial West to participate in this new “Global Metropolis” (Ess 2009). John Weckert also notes that cultural differences in giving and taking offence play a role in the design of more egalitarian information technologies (Weckert 2007). Others argue that basic moral concerns like privacy are weighed differently in Asian cultures (Hongladarom 2008; Lü 2005).

2.1.3 Moral Values in Organizing and Synthesizing Information

In addition to storing and communicating information, many information technologies automate the organizing of information as well as synthesizing or mechanically authoring or acting on new information. Norbert Wiener first developed a theory of automated information synthesis which he called Cybernetics (Wiener 1961 [1948]). Wiener realized that a machine could be designed to gather information about the world, derive logical conclusions about that information which would imply certain actions, and then implement those actions, all without any direct input from a human agent. Wiener quickly saw that if his vision of cybernetics was realized, there would be tremendous moral concerns raised by such machines, and he outlined some of them in his book The Human Use of Human Beings (Wiener 1950). Wiener argued that, while this sort of technology could have drastic moral impacts, it was still possible to be proactive and guide the technology in ways that would increase the moral reasoning capabilities of both humans and machines (Bynum 2008).

Machines make decisions that have moral impacts. Wendell Wallach and Colin Allen tell an anecdote in their book Moral Machines (2008). One of the authors left on a vacation and when he arrived overseas his credit card stopped working. Perplexed, he called the bank and learned that an automatic anti-theft program had decided that there was a high probability that the charges he was trying to make were from someone stealing his card, and that, in order to protect him, the machine had denied his credit card transactions. Here we have a situation where a piece of information technology was making decisions about the probability of nefarious activity that resulted in a small amount of harm to the very person it was trying to help. Increasingly, machines make important life-changing financial decisions about people without much oversight from human agents. Whether or not you will be given a credit card or a mortgage loan, and the price you will have to pay for insurance, are very often determined by a machine. For instance, if you apply for a credit card, the machine will look for certain data points, like your salary, your credit record, the economic condition of the area you reside in, etc., and then calculate the probability that you will default on your credit card. That probability will either pass a threshold of acceptance or not, and so determine whether or not you are given the card. The machine can typically learn to make better judgments given the results of earlier decisions it has made. This kind of machine learning and prediction is based on complex logic and mathematics (see, for example, Russell and Norvig 2010). This complexity may result in slightly humorous examples of mistaken predictions as told in the anecdote above, or the consequences might be far more serious.
For example, the program may interpret the data regarding the identity of one’s friends and acquaintances, his or her recent purchases, and other readily available social data, which might result in the mistaken classification of that person as a potential terrorist, thus altering that person’s life in a powerfully negative way (Sullins 2010). It all depends on the design of the learning and prediction algorithm, something that is typically kept secret, so that it is hard to assess the accuracy of the prediction.
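The threshold decision described above can be sketched in a few lines. Everything here is hypothetical: the features, weights, and threshold are invented for illustration, and real scoring models are proprietary and far more complex, which is precisely the opacity problem the text raises.

```python
import math

# Minimal sketch of a threshold-style credit decision. All weights and
# features are invented; this stands in for whatever statistical model
# a lender might fit to historical default data.

def default_probability(applicant):
    # A crude linear score squashed into (0, 1) by a logistic function.
    score = (
        -0.00002 * applicant["salary"]             # higher salary -> lower risk
        + 0.20 * applicant["missed_payments"]      # payment history
        + 0.05 * applicant["regional_default_rate"]
    )
    return 1.0 / (1.0 + math.exp(-score))

def approve_card(applicant, threshold=0.5):
    # The life-affecting decision reduces to a single comparison.
    return default_probability(applicant) < threshold

applicant = {"salary": 48000, "missed_payments": 2, "regional_default_rate": 6.0}
print(approve_card(applicant))  # True for this invented applicant
```

The applicant never sees `score` or the weights, only the yes-or-no outcome.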

2.2 The Moral Paradox of Information Technologies

Several of the issues raised above result from the moral paradox of information technologies. Many users want information to be quickly accessible and easy to use, and desire that it should come at as low a cost as possible, preferably free. But users also want important and sensitive information to be secure, stable, and reliable. Maximizing the values of speed and low cost minimizes our ability to provide secure and high-quality information, and the reverse is true as well. Thus the designers of information technologies are constantly faced with making uncomfortable compromises. The early web pioneer Stewart Brand sums this up well in his famous quote:

In fall 1984, at the first Hackers’ Conference, I said in one discussion session: “On the one hand information wants to be expensive, because it’s so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other” (Clarke 2000—see Other Internet Resources)[1]

Since these competing moral values are essentially impossible to reconcile, they are likely to continue to be at the heart of moral debates in the use and design of information technologies for the foreseeable future.

3. Specific Moral Challenges at the Cultural Level

In the section above, the focus was on the moral impacts of information technologies on the individual user. In this section, the focus will be on how these technologies shape the moral landscape at the societal level. At the turn of the twenty-first century the term “web 2.0” began to surface; it referred to the new way that the World Wide Web was being used as a medium for information sharing and collaboration, as well as a change in the mindset of web designers to include more interoperability and user-centered experiences on their websites. This term has also become associated with “social media” and “social networking.” While the original design of the World Wide Web in 1989 by its creator Tim Berners-Lee was always one that included notions of meeting others and collaborating with them online, users were finally ready to fully exploit those capabilities by 2004, when the first Web 2.0 conference was held by O’Reilly Media (O’Reilly 2007 [2005]). This change has meant that a growing number of people have begun to spend significant portions of their lives online with other users, experiencing an unprecedented new lifestyle. Social networking is an important part of many people’s lives worldwide. Vast numbers of people congregate on sites like Facebook and interact with friends old and new, real and virtual. The Internet offers the immersive experience of interacting with others in virtual worlds where environments are constructed entirely out of information. Just now emerging onto the scene are technologies that will allow us to merge the real and the virtual. This new form of “augmented reality” is facilitated by the fact that many people now carry GPS-enabled smartphones and other portable computers with them, upon which they can run applications that let them interact with their surroundings and their computers at the same time, perhaps looking at an item through the camera in their device and having the “app” call up information about that entity and display it in a bubble above the item.
Each of these technologies comes with its own suite of new moral challenges, some of which will be discussed below.

3.1 Social Media and Networking

Social networking is a term given to sites and applications which facilitate online social interactions that typically focus on sharing information with other users referred to as “friends.” The most famous of these sites today is Facebook, but there are many others, such as Instagram, Twitter, and Snapchat, to name just a few. There are a number of moral values that these sites call into question. Shannon Vallor (2011, 2016) has reflected on how sites like Facebook change or even challenge our notion of friendship. Her analysis is based on the Aristotelian theory of friendship (see the entry on Aristotle’s ethics). Aristotle argued that humans realize a good and true life through virtuous friendships. Vallor notes four key dimensions of Aristotle’s ‘virtuous friendship,’ namely reciprocity, empathy, self-knowledge, and the shared life, and argues that the first three are found in online social media in ways that can sometimes strengthen friendship (Vallor 2011, 2016). Yet she argues that social media is not yet up to the task of facilitating what Aristotle calls ‘the shared life,’ meaning that social media can give us shared activities but not the close intimate friendship that shared daily lives can give. (A more complete discussion of Aristotelian friendship can be found in the entry on Aristotle’s ethics.) Thus these media cannot fully support the Aristotelian notion of complete and virtuous friendship by themselves (Vallor 2011). Vallor also has a similar analysis of other Aristotelian virtues such as patience, honesty, and empathy and their problematic application in social media (Vallor 2010). Vallor has gone on to argue that both the users and designers of information technologies need to develop a new virtue that she terms “technomoral wisdom,” which can help us foster better online communities and friendships (Vallor, 2016).

Johnny Hartz Søraker (2012) argues for a nuanced understanding of online friendship rather than a rush to normative judgement on the virtues of virtual friends.

Privacy issues abound in the use of social media. James Parrish, following Mason (1986), recommends four policies that a user of social media should follow to ensure proper ethical concern for others’ privacy:

  1. When sharing information on SNS (social network sites), it is not only necessary to consider the privacy of one’s personal information, but also the privacy of the information of others who may be tied to the information being shared.
  2. When sharing information on SNS, it is the responsibility of the one desiring to share information to verify the accuracy of the information before sharing it.
  3. A user of SNS should not post information about themselves that they feel they may want to retract at some future date. Furthermore, users of SNS should not post information that is the product of the mind of another individual unless they are given consent by that individual. In both cases, once the information is shared, it may be impossible to retract.
  4. It is the responsibility of the SNS user to determine the authenticity of a person or program before allowing the person or program access to the shared information. (Parrish 2010)

These systems are not normally designed to explicitly infringe on individual privacy, but since these services are typically free there is a strong economic drive for the service providers to harvest at least some information about their users’ activities on the site in order to sell that information to advertisers for directed marketing. This marketing can be done with the provider selling only access to anonymized user data, so that the advertiser knows that a given user may be likely to buy a pair of jeans but is not given the exact identity of that person. In this way a social network provider can try to maintain the moral value of privacy for its users while still profiting from linking them with advertisers.
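The anonymized-matching arrangement just described can be sketched as follows. All names, fields, and data here are hypothetical; real ad platforms are vastly more elaborate, but the boundary is the same: interest segments cross to the advertiser, identities do not.

```python
# Sketch of anonymized ad matching: the provider exposes matching
# users as opaque tokens, keeping the token-to-identity mapping
# private. All data here is invented for illustration.

users = [
    {"user_id": "u1001", "name": "A. Smith", "interests": {"jeans", "hiking"}},
    {"user_id": "u1002", "name": "B. Jones", "interests": {"jeans", "music"}},
    {"user_id": "u1003", "name": "C. Brown", "interests": {"cooking"}},
]

def audience_for(segment, user_db):
    # The advertiser receives only an opaque token and the segment
    # label; name and user_id never cross the boundary.
    return [{"token": f"anon-{i}", "segment": segment}
            for i, user in enumerate(user_db)
            if segment in user["interests"]]

audience = audience_for("jeans", users)
print(len(audience))  # two users match, but no identity is revealed
```

Whether this token scheme genuinely protects privacy (tokens can sometimes be re-identified) is part of the moral question the section raises.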

3.1.1 Online Games and Worlds

The first moral impact one encounters when contemplating online games is the tendency for these games to portray violence, sexism, and sexual violence. There are many news stories that claim a cause and effect relationship between violence in computer games and real violence. The claim that violence in video games has a causal connection to actual violence has been strongly critiqued by the social scientist Christopher J. Ferguson (Ferguson 2007). Mark Coeckelbergh argues that since this relationship is tenuous at best, the real issue at hand is the effect these games have on one’s moral character (Coeckelbergh 2007). But Coeckelbergh goes on to claim that computer games could be designed to facilitate virtues like empathy and cosmopolitan moral development; thus he is not arguing against all games, just those where the violence inhibits moral growth (Coeckelbergh 2007). A good example of this might be the virtual reality experience that was designed by Planned Parenthood in 2017, “…which focuses on the experience of accessing abortion in America, positively influences the way viewers feel about the harassment that many patients, providers, and health center staff experience from opponents of safe, legal abortion” (Planned Parenthood, 2017).

Marcus Schulzke (2010) defends the depiction of violence in video games. Schulzke’s main claim is that actions in a virtual world are very different from actions in the real world. Although a player may “kill” another player in a virtual world, the offended player is instantly back in the game and the two will almost certainly remain friends in the real world. Thus virtual violence is very different from real violence, a distinction that gamers are comfortable with (Schulzke 2010). While virtual violence may seem palatable to some, Morgan Luck (2009) seeks a moral theory that might be able to allow the acceptance of virtual murder but that will not extend to other immoral acts such as pedophilia. Christopher Bartel (2011) is less worried about the distinction Luck attempts to draw; Bartel argues that virtual pedophilia is real child pornography, which is already morally reprehensible and illegal across the globe.

While violence is easy to see in online games, there is a much more substantial moral value at play and that is the politics of virtual worlds. Peter Ludlow and Mark Wallace describe the initial moves to online political culture in their book, The Second Life Herald: The Virtual Tabloid that Witnessed the Dawn of the Metaverse (2007). Ludlow and Wallace chronicle how the players in massive online worlds have begun to form groups and guilds that often confound the designers of the game and are at times in conflict with those that make the game. Their contention is that designers rarely realize that they are creating a space where people intend to live large portions of their lives and engage in real economic and social activity, and thus the designers have moral duties somewhat equivalent to those who may write a political constitution (Ludlow and Wallace 2007). According to Purcell (2008), there is little commitment to democracy or egalitarianism by those who create and own online games, and this needs to be discussed if more and more of us are going to spend time living in these virtual societies.

3.1.2 The Lure of the Virtual in Game Worlds

A persistent concern about the use of computers and especially computer games is that this could result in anti-social behavior and isolation. Yet studies might not support these hypotheses (Gibba et al. 1983). With the advent of massively multiplayer games as well as video games designed for families, the social isolation hypothesis is even harder to believe. These games do, however, raise gender equality issues. James Ivory used online reviews of games to complete a study that shows that male characters outnumber female characters in games, and those female images that are in games tend to be overly sexualized (Ivory 2006). Soukup (2007) suggests that gameplay in these virtual worlds is most often oriented to masculine styles of play, thus potentially alienating women players. And those women that do participate in game play at the highest level play roles in gaming culture that are very different from those of the largely heterosexual white male gamers, often leveraging their sexuality to gain acceptance (Taylor et al. 2009). Additionally, Joan M. McMahon and Ronnie Cohen have studied how gender plays a role in the making of ethical decisions in the virtual online world, with women more likely to judge a questionable act as unethical than men (2009). Marcus Johansson suggests that we may be able to mitigate virtual immorality by punishing virtual crimes with virtual penalties in order to foster more ethical virtual communities (Johansson 2009).

The media has raised moral concerns about the way that childhood has been altered by the use of information technology (see for example Jones 2011). Many applications are now designed specifically for babies and toddlers, with educational applications or just entertainment to help keep the children occupied while their parents are busy. This encourages children to interact with computers from as early an age as possible. Since children may be susceptible to media manipulation such as advertising, we have to ask if this practice is morally acceptable or not. Depending on the particular application being used, it may encourage solitary play that may lead to isolation, but others are more engaging, with both the parents and the children playing (Siraj-Blatchford 2010). It should also be noted that pediatricians have advised that there are no known benefits to early media use amongst young children but there are potential risks (Christakis 2009). Studies have shown that from 1998 to 2008, sedentary lifestyles amongst children in England have resulted in the first measured decline in strength since World War Two (Cohen et al. 2011). It is not clear if this decline is directly attributable to information technology use, but it may be a contributing factor. In 2018 the American Academy of Pediatrics released some simple guidelines for parents who may be trying to set realistic limits on this activity (Tips from the American Academy of Pediatrics).

3.1.3 The Technological Transparency Paradox

One may wonder why social media services tend to be free to use but nonetheless often make fabulous profits for the private companies that offer these services. It is no deep secret that the way these companies make profit is through the selling of information that the users are uploading to the system as they interact with it. The more users, and the more information that they provide, the greater the value of aggregating that information becomes. Mark Zuckerberg stated his philosophical commitment to the social value of this in his letter to shareholders from February 1, 2012:

At Facebook, we build tools to help people connect with the people they want and share what they want, and by doing this we are extending people’s capacity to build and maintain relationships. People sharing more – even if just with their close friends or families – creates a more open culture and leads to a better understanding of the lives and perspectives of others. We believe that this creates a greater number of stronger relationships between people, and that it helps people get exposed to a greater number of diverse perspectives. By helping people form these connections, we hope to rewire the way people spread and consume information. We think the world’s information infrastructure should resemble the social graph – a network built from the bottom up or peer-to-peer, rather than the monolithic, top-down structure that has existed to date. We also believe that giving people control over what they share is a fundamental principle of this rewiring. (Facebook, Inc., 2012)

The social value of pursuing this is debatable, but the economic value has been undeniable. At the time of this writing, Mark Zuckerberg is consistently listed in the top ten richest billionaires by Forbes Magazine, typically in the top five of that rarefied group, an achievement built on providing a free service to the world. What companies like Facebook do charge for are services, such as directed advertising, which allow third-party companies to access information that users have provided to the social media applications. The result is that ads bought on an application such as Facebook are more likely to be seen as useful to viewers, who are much more likely to click on these ads and buy the advertised products. The more detailed and personal the information shared, the more valuable it will be to the companies that it is shared with. This radical transparency of sharing deeply personal information with companies like Facebook is encouraged. Those who do use social networking technologies do receive value, as evidenced by the rapid growth of this technology. Statista reports that in 2019 there will be 2.77 billion users of social media worldwide and that this will grow to 3.02 billion by 2021 (Statista, 2018). The question here is: what do we give up in order to receive this “free” service? In 2011, back when there were fewer than a billion social media users, the technology critic Andrew Keen warned that “sharing is a trap,” and that there was a kind of cult of radical transparency developing that clouded our ability to think critically about the kind of power we were giving these companies (Keen, 2011). Even before companies like Facebook were making huge profits, there were those warning of the dangers of the cult of transparency, with warnings such as:

…it is not surprising that public distrust has grown in the very years in which openness and transparency have been so avidly pursued. Transparency destroys secrecy: but it may not limit the deception and deliberate misinformation that undermine relations of trust. If we want to restore trust we need to reduce deception and lies, rather than secrecy. (O’Neill, 2002)

In the case of Facebook we can see that some of the warnings of the critics were prescient. In April of 2018, Mark Zuckerberg was called before Congress, where he apologized for the actions of his corporation in a scandal that involved divulging a treasure trove of information about his users to an independent researcher, who then sold it to Cambridge Analytica, a company involved in political data analysis. This data was then used to target political ads to the users of Facebook, many of which were fake ads created by Russian intelligence to disrupt the US election in 2016 (Au-Yeung, 2018).

The philosopher Shannon Vallor critiques the cult of transparency as a version of what she calls the “Technological Transparency Paradox” (Vallor, 2016). She notes that those in favor of developing technologies to promote radically transparent societies do so under the premise that this openness will increase accountability and democratic ideals. But the paradox is that this cult of transparency often achieves just the opposite, with large unaccountable organizations that are not democratically chosen holding information that can be used to weaken democratic societies. This is due to the asymmetrical relationship between the user and the companies with whom she shares all the data of her life. The user is, indeed, radically open and transparent to the company, but the algorithms used to mine the data and the third parties that this data is shared with are opaque and not subject to accountability. We, the users of these technologies, are forced to be transparent, but the companies profiting from our information are not required to be equally transparent.

3.3 Malware, Spyware and Informational Warfare

Malware and computer virus threats continue to grow at an astonishing rate. Security industry professionals report that while certain types of malware attacks such as spam are falling out of fashion, newer types of attacks such as ransomware and other methods focused on mobile computing devices, cryptocurrency, and the hacking of cloud computing infrastructure are on the rise, outstripping any small relief seen in the slowing down of older forms of attack (Cisco Systems 2018; Kaspersky Lab 2017; McAfee 2018; Symantec 2018). What is clear is that this type of activity will be with us for the foreseeable future. In addition to the largely criminal activity of malware production, we must also consider the related but more morally ambiguous activities of hacking, hacktivism, commercial spyware, and informational warfare. Each of these topics has its own suite of subtle moral ambiguities. We will now explore some of them here.

While there may be wide agreement that the conscious spreading of malware is of questionable morality, there is an interesting question as to the morality of malware protection and anti-virus software. With the rise in malicious software there has been a corresponding growth in the security industry, which is now a multibillion-dollar market. Even with all the money spent on security software there seems to be no slowdown in virus production; in fact, quite the opposite has occurred. This raises an interesting business ethics concern: what value are customers receiving for their money from the security industry? The massive proliferation of malware has been shown to be largely beyond the ability of anti-virus software to completely mitigate. There is an important lag between the time a new piece of malware is detected by the security community and the eventual release of the security patch and malware removal tools.

The anti-virus modus operandi of receiving a sample, analyzing the sample, adding detection for the sample, performing quality assurance, creating an update, and finally sending the update to their users leaves a huge window of opportunity for the adversary … even assuming that anti-virus users update regularly. (Aycock and Sullins 2010)

This lag is constantly exploited by malware producers, and in this model there is an ever-present security hole that is impossible to fill. Thus it is important that security professionals do not overstate their ability to protect systems; by the time a new malicious program is discovered and patched, it has already done significant damage and there is currently no way to stop this (Aycock and Sullins 2010).
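The update cycle quoted above can be laid out as a schematic timeline. The stage durations below are invented for illustration; the point is simply that each stage adds hours to the window during which users remain unprotected.

```python
# Schematic timeline of the anti-virus signature-update cycle.
# Durations are illustrative only; real figures vary widely.

stages = [
    ("sample received", 6),
    ("analysis", 12),
    ("detection added", 6),
    ("quality assurance", 12),
    ("update published", 6),
    ("users apply the update", 24),  # optimistically assumes prompt updating
]

elapsed = 0
for name, hours in stages:
    elapsed += hours
    print(f"after '{name}': {elapsed} hours of exposure so far")

print(f"total window of opportunity for the adversary: {elapsed} hours")
```

However the individual numbers are tuned, the total can never reach zero, which is the structural hole Aycock and Sullins describe.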

In the past most malware creation was motivated by hobbyists and amateurs, but this has changed and now much of this activity is criminal in nature (Cisco Systems 2018; Kaspersky Lab 2017; McAfee 2018; Symantec 2018). Aycock and Sullins (2010) argue that relying on a strong defense is not enough; the situation requires a counteroffensive reply as well, and they propose an ethically motivated malware research and creation program. This is not an entirely new idea; it was originally suggested by the computer scientist George Ledin in his editorial for the Communications of the ACM, “Not Teaching Viruses and Worms is Harmful” (2005). This idea does run counter to the majority opinion regarding the ethics of learning and deploying malware. Many computer scientists and researchers in information ethics agree that all malware is unethical (Edgar 2003; Himma 2007a; Neumann 2004; Spafford 1992; Spinello 2001). According to Aycock and Sullins, these worries can be mitigated by open research into understanding how malware is created in order to better fight this threat (2010).

When malware and spyware is created by state actors, we enter the world of informational warfare and a new set of moral concerns. Every developed country in the world experiences daily cyber-attacks, with the major target being the United States, which experienced a purported 1.8 billion attacks a month in 2010 (Lovely 2010) and 80 billion malicious scans worldwide in 2017 (McAfee 2018). The majority of these attacks seem to be just probing for weaknesses, but they can devastate a country’s internet, such as the cyber-attacks on Estonia in 2007 and those on Georgia in 2008. While the Estonian and Georgian attacks were largely designed to obfuscate communication within the target countries, more recently informational warfare has been used to facilitate remote sabotage. The famous Stuxnet virus used to attack Iranian nuclear centrifuges is perhaps the first example of weaponized software capable of remotely damaging physical facilities (Cisco Systems 2018). The coming decades will likely see many more cyber weapons deployed by state actors along well-known political fault lines such as those between Israel-America-Western Europe vs Iran, and America-Western Europe vs China (Kaspersky Lab 2018). The moral challenge here is to determine when these attacks are considered a severe enough challenge to the sovereignty of a nation to justify military reactions, and to react in a justified and ethical manner to them (Arquilla 2010; Denning 2008; Kaspersky Lab 2018).

The primary moral challenge of informational warfare is determining how to use weaponized information technologies in a way that honors our commitments to just and legal warfare. Since warfare is already a morally questionable endeavor, it would be preferable if information technologies could be leveraged to lessen violent combat. For instance, one might argue that the Stuxnet virus, used undetected from 2005 to 2010, did damage to Iranian nuclear weapons programs that in generations before might have been accomplished only by an air raid or other kinetic military action that would have incurred significant civilian casualties, and that so far there have been no reported human casualties resulting from Stuxnet. Thus malware might lessen the amount of civilian casualties in conflict. The malware known as “Flame” is an interesting case of malware that evidence suggests was designed to aid in espionage. One might argue that more accurate information given to decision makers during wartime should help them make better decisions on the battlefield. On the other hand, these new informational warfare capabilities might allow states to engage in continual low-level conflict, eschewing efforts for peacemaking which might require political compromise.

3.4 Future Concerns

As was mentioned in the introduction above, information technologies are in a constant state of change and innovation. The internet technologies that have brought about so much social change were scarcely imaginable just decades before they appeared. Even though we may not be able to foresee all possible future information technologies, it is important to try to imagine the changes we are likely to see in emerging technologies. James Moor argues that moral philosophers need to pay particular attention to emerging technologies and help influence the design of these technologies early on to encourage beneficial moral outcomes (Moor 2005). The following sections contain some potential technological concerns.

3.4.1 Acceleration of Change

Information technology has exhibited an interesting growth pattern since the founding of the industry. Intel engineer Gordon E. Moore noticed that the number of components that could be installed on an integrated circuit doubled every year for a minimal economic cost, and he thought it might continue that way for another decade or so from the time he noticed it in 1965 (Moore 1965). History has shown his predictions were rather conservative. This doubling of speed and capabilities, along with a halving of the cost to produce it, has roughly continued every eighteen months since 1965 and is likely to continue. This phenomenon is not limited to computer chips and can also be found in many different forms of information technologies. The potential power of this accelerating change has captured the imagination of the noted inventor and futurist Ray Kurzweil. He has famously predicted that if this doubling of capabilities continues, and more and more technologies become information technologies, then there will come a point in time where the change from one generation of information technology to the next will become so massive that it will change everything about what it means to be human. Kurzweil has named this potential event “the Singularity,” at which time he predicts that our technology will allow us to become a new posthuman species (2006). If this is correct, there could be no more profound change to our moral values. There has been some support for this thesis from the technology community with institutes such as the Acceleration Studies Foundation, Future of Humanity Institute, and H+.[2] Reaction to this hypothesis from philosophers has been mixed but largely critical. For example, Mary Midgley (1992) argues that the belief that science and technology will bring us immortality and bodily transcendence is based on pseudoscientific beliefs and a deep fear of death.
In a similar vein, Sullins (2000) argues that there is often a quasi-religious aspect to the acceptance of transhumanism that is committed to certain outcomes, such as uploading of human consciousness into computers as a way to achieve immortality, and that the acceptance of the transhumanist hypothesis influences the values embedded in computer technologies, which can be dismissive or hostile to the human body.

There are other cogent critiques of this argument, but none as simple as the realization that:

…there is, after all, a limit to how small things can get before they simply melt. Moore’s Law no longer holds. Just because something grows exponentially for some time, does not mean that it will continue to do so forever… (Floridi, 2016).
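The arithmetic behind such extrapolations is easy to make concrete. A minimal sketch, assuming the popular reading of Moore’s observation as a doubling of capability every eighteen months (the function name and starting figures are illustrative choices, not drawn from Moore’s paper):

```python
# Illustrative sketch of exponential doubling, as in the popular
# reading of Moore's law (capability doubles every 18 months).
# The starting value and time horizons are arbitrary illustrations.

def extrapolate(initial, years, doubling_period=1.5):
    """Capability after `years`, doubling every `doubling_period` years."""
    return initial * 2 ** (years / doubling_period)

# Two doublings in three years quadruples the starting capability:
print(extrapolate(1, 3))    # 4.0

# Over fifty years the same rule multiplies it by roughly ten billion,
# which is why Floridi's point about physical limits matters: no finite
# physical process can sustain such growth indefinitely.
print(extrapolate(1, 50))
```

The second call is the whole dispute in miniature: the curve is credible over a few periods and absurd over many, unless one assumes, as Kurzweil does, that new substrates keep appearing.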

While many ethical systems place a primary moral value on preserving and protecting nature and the naturally given world, transhumanists do not see any intrinsic value in defining what is natural and what is not, and they consider arguments to preserve some perceived natural state of the human body to be an unjustifiable obstacle to progress. Not all philosophers are critical of transhumanism. For example, Nick Bostrom (2008) of the Future of Humanity Institute at Oxford University argues that, putting aside the feasibility argument, we must conclude that there are forms of posthumanism that would lead to long and worthwhile lives, and that it would be overall a very good thing for humans to become posthuman if it is at all possible.

3.4.2 Artificial Intelligence and Artificial Life

Artificial Intelligence (AI) refers to the many longstanding research projects directed at building information technologies that exhibit some or all aspects of human-level intelligence and problem solving. Artificial Life (ALife) is a project that is not as old as AI and is focused on developing information technologies and/or synthetic biological technologies that exhibit life functions typically found only in biological entities. A more complete description of logic and AI can be found in the entry on logic and artificial intelligence. ALife essentially sees biology as a kind of naturally occurring information technology that may be reverse engineered and synthesized in other kinds of technologies. Both AI and ALife are vast research projects that defy simple explanation. Instead, the focus here is on the moral values that these technologies impact and the way some of these technologies are programmed to affect emotion and moral concern.

3.4.2.1 Artificial Intelligence

Alan Turing is credited with defining the research project that would come to be known as Artificial Intelligence in his seminal 1950 paper “Computing Machinery and Intelligence.” He described the “imitation game,” where a computer attempts to fool a human interlocutor into believing that it is not a computer but another human (Turing 1948, 1950). In 1950, he made the now famous claim that

I believe that in about fifty years’ time… one will be able to speak of machines thinking without expecting to be contradicted.

A description of the test and its implications for philosophy beyond moral values can be found in the entry on the Turing test. Turing’s prediction may have been overly ambitious, and in fact some have argued that we are nowhere near the completion of Turing’s dream. For example, Luciano Floridi (2011a) argues that while AI has been very successful as a means of augmenting our own intelligence, as a branch of cognitive science interested in intelligence production it has been a dismal disappointment. The opposite opinion has also been argued, and some claim that the Turing Test has already been passed, or at least that programmers are on the verge of passing it. For instance, it was reported by the BBC in 2014 that the Turing Test had been passed by a program that could convince the judges that it was a 13-year-old Ukrainian boy, but even so, many experts remain skeptical (BBC 2014).

For argument’s sake, assume Turing is correct even if he is off in his estimation of when AI will succeed in creating a machine that can converse with you. Yale professor David Gelernter worries that there would be certain uncomfortable moral issues raised. “You would have no grounds for treating it as a being toward which you have moral duties rather than as a tool to be used as you like” (Gelernter 2007). Gelernter suggests that consciousness is a requirement for moral agency and that we may treat anything without it in any way that we want without moral regard. Sullins (2006) counters this argument by noting that consciousness is not required for moral agency. For instance, nonhuman animals and the other living and nonliving things in our environment must be accorded certain moral rights, and indeed, any Turing-capable AI would also have moral duties as well as rights, regardless of its status as a conscious being (Sullins 2006).

AI is certainly capable of creating machines that can converse effectively in simple ways with human beings, as evidenced by Apple Siri, Amazon Alexa, OK Google, etc., along with the many systems that businesses use to automate customer service, but these are still a long way from having the natural kinds of unscripted conversations humans have with one another. But that may not matter when it comes to assessing the moral impact of these technologies. In addition, there are still many other applications that use AI technology. Nearly all of the information technologies we discussed above, such as search, computer games, data mining, malware filtering, robotics, etc., utilize AI programming techniques. Thus AI will grow to be a primary location for the moral impacts of information technologies. Many governments and professional associations are now developing ethical guidelines and standards to help shape this important technology; one good example is the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (IEEE 2018).

3.4.2.2 Artificial Life

Artificial Life (ALife) is an outgrowth of AI and refers to the use of information technology to simulate or synthesize life functions. The problem of defining life has been an interest in philosophy since its founding. See the entry on life for a look at the concept of life and its philosophical ramifications. If scientists and technologists were to succeed in discovering the necessary and sufficient conditions for life and then successfully synthesize it in a machine or through synthetic biology, then we would be treading on territory that has significant moral impact. Mark Bedau has been tracing the philosophical implications of ALife for some time now and argues that there are two distinct forms of ALife, and each would thus have different moral effects if and when we succeed in realizing these separate research agendas (Bedau 2004; Bedau and Parke 2009). One form of ALife is completely computational and is in fact the earliest form of ALife studied. This form of ALife is inspired by the work of the mathematician John von Neumann on self-replicating cellular automata, which von Neumann believed would lead to a computational understanding of biology and the life sciences (1966). The computer scientist Christopher Langton simplified von Neumann’s model greatly and produced a simple cellular automaton called “Loops” in the early eighties, and he helped get the field off the ground by organizing the first few conferences on Artificial Life (1989). Artificial Life programs are quite different from AI programs. Where AI is intent on creating or enhancing intelligence, ALife is content with very simple-minded programs that display life functions rather than intelligence. The primary moral concern here is that these programs are designed to self-reproduce and in that way resemble computer viruses; indeed, successful ALife programs could become vectors for malware. The second form of ALife is much more morally charged.
This form of ALife is based on manipulating actual biological and biochemical processes in such a way as to produce novel life forms not seen in nature.
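Returning to the computational form: its flavor can be conveyed with a minimal cellular automaton. The sketch below uses Conway’s Game of Life rules, chosen for brevity rather than Langton’s more elaborate self-replicating Loops, to show the core idea that simple local update rules can produce lifelike global behavior such as self-sustaining, oscillating patterns:

```python
from collections import Counter

# A minimal cellular automaton in the spirit of computational ALife.
# These are Conway's Game of Life rules, not Langton's Loops; the
# helper is an illustration, not drawn from the literature cited.

def life_step(live_cells):
    """Advance one generation; live_cells is a set of (x, y) pairs."""
    # Count how many live neighbors every candidate cell has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A three-cell "blinker" oscillates forever with period 2:
blinker = {(0, 0), (1, 0), (2, 0)}
assert life_step(life_step(blinker)) == blinker
```

Nothing in the rule mentions “life,” yet persistence, reproduction-like pattern copying, and death all emerge from it, which is exactly the sense in which ALife treats biology as naturally occurring information technology.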

Scientists at the J. Craig Venter Institute were able to synthesize an artificial bacterium called JCVI-syn1.0 in May of 2010. While the media paid attention to this breakthrough, they tended to focus on the potential ethical and social impacts of the creation of artificial bacteria. Craig Venter himself launched a public relations campaign trying to steer the conversation about issues relating to creating life. This first episode in the synthesis of life gives us a taste of the excitement and controversy that will be generated when more viable and robust artificial protocells are synthesized. The ethical concerns raised by Wet ALife, as this kind of research is called, are more properly the jurisdiction of bioethics (see entry on theory and bioethics). But it does have some concern for us here in that Wet ALife is part of the process of turning theories from the life sciences into information technologies. This will tend to blur the boundaries between bioethics and information ethics. Just as software ALife might lead to dangerous malware, so too might Wet ALife lead to dangerous bacteria or other disease agents. Critics suggest that there are strong moral arguments against pursuing this technology and that we should apply the precautionary principle here, which states that if there is any chance of a technology causing catastrophic harm, and there is no scientific consensus suggesting that the harm will not occur, then those who wish to develop that technology or pursue that research must prove it to be harmless first (see Epstein 1980). Mark Bedau and Mark Triant argue against too strong an adherence to the precautionary principle by suggesting that instead we should opt for moral courage in pursuing such an important step in human understanding of life (2009). They appeal to the Aristotelian notion of courage: not a headlong and foolhardy rush into the unknown, but a resolute and careful step forward into the possibilities offered by this research.

3.4.3 Robotics and Moral Values

Information technologies have not been content to remain confined to virtual worlds and software implementations. These technologies are also interacting directly with us through robotics applications. Robotics is an emerging technology, but it has already produced a number of applications that have important moral implications. Technologies such as military robotics, medical robotics, personal robotics, and the world of sex robots are just some of the existing uses of robotics that impact on and express our moral commitments (see Anderson and Anderson 2011; Capurro and Nagenborg 2009; Lin et al. 2012, 2017).

There have already been a number of valuable contributions to the growing fields of machine morality and robot ethics (roboethics). For example, in Wallach and Allen’s book Moral Machines: Teaching Robots Right from Wrong (2010), the authors present ideas for the design and programming of machines that can functionally reason on moral questions, as well as examples from the field of robotics where engineers are trying to create machines that can behave in a morally defensible way. The introduction of semi- and fully autonomous machines (meaning machines that make decisions with little or no human intervention) into public life will not be simple. Towards this end, Wallach (2011) has also contributed to the discussion on the role of philosophy in helping to design public policy on the use and regulation of robotics.

Military robotics has proven to be one of the most ethically charged robotics applications (Lin et al. 2008, 2013; Lin 2010; Strawser 2013). Today these machines are largely remotely operated (telerobots) or semi-autonomous, but over time these machines are likely to become more and more autonomous due to the necessities of modern warfare (Singer 2009). In the first decades of war in the 21st century, robotic weaponry has been involved in numerous killings of both soldiers and noncombatants (Plaw 2013), and this fact alone is of deep moral concern. Gerhard Dabringer has conducted numerous interviews with ethicists and technologists regarding the implications of automated warfare (Dabringer 2010). Many ethicists are cautious in their acceptance of automated warfare with the provision that the technology is used to enhance ethical conduct in war, for instance by reducing civilian and military casualties or helping warfighters follow International Humanitarian Law and other legal and ethical codes of conduct in war (see Lin et al. 2008, 2013; Sullins 2009b), but others have been highly skeptical of the prospects of an ethical autonomous war due to issues like the risk to civilians and the ease with which wars might be declared, given that robots will be taking most of the risk (Asaro 2008; Sharkey 2011).

4. Information Technologies of Morality

A key development in the realm of information technologies is that they are not only the object of moral deliberations but are also beginning to be used as a tool in moral deliberation itself. Since artificial intelligence technologies and applications are a kind of automated problem solver, and moral deliberations are a kind of problem, it was only a matter of time before automated moral reasoning technologies would emerge. This is still only an emerging technology, but it has a number of very interesting moral implications, which will be outlined below. The coming decades are likely to see a number of advances in this area, and ethicists need to pay close attention to these developments as they happen. Susan and Michael Anderson have collected a number of articles regarding this topic in their book, Machine Ethics (2011), and Rocci Luppicini has a section of his anthology devoted to this topic in the Handbook of Research on Technoethics (2009).

4.1 Information Technology as a Model for Moral Discovery

Patrick Grim has been a longtime proponent of the idea that philosophy should utilize information technologies to automate and illustrate philosophical thought experiments (Grim et al. 1998; Grim 2004). Peter Danielson (1998) has also written extensively on this subject, beginning with his book Modeling Rationality, Morality, and Evolution; much of the early research in the computational theory of morality centered on using computer models to elucidate the emergence of cooperation between simple software AI or ALife agents (Sullins 2005).

Luciano Floridi and J. W. Sanders argue that information as it is used in the theory of computation can serve as a powerful idea that can help resolve some of the famous moral conundrums in philosophy, such as the nature of evil (1999, 2001). They propose that, along with moral evil and natural evil, both concepts familiar to philosophy (see entry on the problem of evil), we add a third concept they call artificial evil (2001). Floridi and Sanders contend that if we do this then we can see that the actions of artificial agents

…to be morally good or evil can be determined even in the absence of biologically sentient participants and thus allows artificial agents not only to perpetrate evil (and for that matter good) but conversely to ‘receive’ or ‘suffer from’ it. (Floridi and Sanders 2001)

Evil can then be equated with something like information dissolution, where the irretrievable loss of information is bad and the preservation of information is good (Floridi and Sanders 2001). This idea can move us closer to a way of measuring the moral impacts of any given action in an information environment.

4.2 Information Technology as a Moral System

Early in the twentieth century the American philosopher John Dewey (see entry on John Dewey) proposed a theory of inquiry based on the instrumental uses of technology. Dewey had an expansive definition of technology which included not only common tools and machines but information systems such as logic, laws, and even language as well (Hickman 1990). Dewey argued that we are in a ‘transactional’ relationship with all of these technologies, within which we discover and construct our world (Hickman 1990). This is a helpful standpoint to take, as it allows us to advance the idea that an information technology of morality and ethics is not impossible. It also allows us to take seriously the idea that the relations and transactions between human agents and those that exist between humans and their artifacts have important ontological similarities. While Dewey could only dimly perceive the coming revolutions in information technologies, his theory is useful to us still because he proposed that ethics was not only a theory but a practice, and that solving problems in ethics is like solving problems in algebra (Hickman 1990). If he is right, then an interesting possibility arises, namely the possibility that ethics and morality are computable problems and therefore it should be possible to create an information technology that can embody moral systems of thought.

In 1974 the philosopher Mario Bunge proposed that we take the notion of a ‘technoethics’ seriously, arguing that moral philosophers should emulate the way engineers approach a problem. Engineers do not argue in terms of reasoning by categorical imperatives; instead they use:

… the forms If A produces B, and you value B, choose to do A, and If A produces B and C produces D, and you prefer B to D, choose A rather than C. In short, the rules he comes up with are based on fact and value. I submit that this is the way moral rules ought to be fashioned, namely as rules of conduct deriving from scientific statements and value judgments. In short ethics could be conceived as a branch of technology. (Bunge 1977, 103)
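Bunge’s rule forms are, in effect, conditionals over facts (what an action produces) and values (what the agent prefers), which makes them straightforward to mimic in code. A hypothetical sketch, with invented action names and numeric preference scores that Bunge himself does not supply:

```python
# Hypothetical, much-simplified rendering of Bunge's rule forms.
# The action/outcome names and numeric "value" scores are inventions
# for illustration; Bunge gives no such formalization.

def choose(actions, produces, value):
    """Pick the action whose outcome the agent values most.

    actions:  list of action names
    produces: dict mapping each action to the outcome it produces (fact)
    value:    dict mapping each outcome to how much it is valued (value)
    """
    return max(actions, key=lambda a: value[produces[a]])

# "If A produces B and C produces D, and you prefer B to D,
#  choose A rather than C."
produces = {"A": "B", "C": "D"}
value = {"B": 10, "D": 3}  # the agent prefers B to D
assert choose(["A", "C"], produces, value) == "A"
```

The interest of the sketch is not the trivial maximization but the separation it enforces: the `produces` table is a scientific claim, the `value` table a value judgment, and the rule of conduct is derived mechanically from the two, just as Bunge prescribes.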

Taking this view seriously implies that the very act of building information technologies is also the act of creating specific moral systems within which human and artificial agents will, at least occasionally, interact through moral transactions. Information technologists may therefore be in the business of creating moral systems whether they know it or not and whether or not they want that responsibility.

4.3 Informational Organisms as Moral Agents

The most comprehensive literature arguing in favor of the prospect of using information technology to create artificial moral agents is that of Luciano Floridi (1999, 2002, 2003, 2010b, 2011b), and Floridi with Jeff W. Sanders (1999, 2001, 2004). Floridi (1999) recognizes that issues raised by the ethical impacts of information technologies strain our traditional moral theories. To relieve this friction he argues that what is needed is a broader philosophy of information (2002). After making this move, Floridi (2003) claims that information is a legitimate environment of its own and that it has its own intrinsic value that is in some ways similar to the natural environment and in other ways radically foreign, but either way the result is that information is on its own a thing that is worthy of ethical concern. Floridi (2003) uses these ideas to create a theoretical model of moral action using the logic of object-oriented programming.

His model has seven components: (1) the moral agent a; (2) the moral patient p (or more appropriately, reagent); (3) the interactions of these agents; (4) the agent’s frame of information; (5) the factual information available to the agent concerning the situation that agent is attempting to navigate; (6) the environment the interaction is occurring in; and (7) the situation in which the interaction occurs (Floridi 2003, 3). Note that there is no assumption about the ontology of the agents concerned in the moral relationship modeled, and these agents can be any mixture of artificial or natural in origin (Sullins 2009a).
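Since Floridi presents the model in the idiom of object-oriented programming, the seven components can be sketched as fields of a class. The class name, field names, and example values below are illustrative inventions, not Floridi’s own notation:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the seven components of Floridi's (2003)
# model of moral action; all names and values here are invented
# for illustration.

@dataclass
class MoralAction:
    agent: str                                        # (1) the moral agent a
    patient: str                                      # (2) the moral patient p ("reagent")
    interactions: list = field(default_factory=list)  # (3) interactions of the agents
    agent_frame: dict = field(default_factory=dict)   # (4) the agent's frame of information
    facts: dict = field(default_factory=dict)         # (5) factual information about the situation
    environment: str = ""                             # (6) environment of the interaction
    situation: str = ""                               # (7) situation in which it occurs

# No ontology is assumed: agent and patient may each be natural or
# artificial, e.g. a software agent acting on a human patient.
act = MoralAction(agent="software agent", patient="human user",
                  environment="online forum")
```

The point of the object-oriented framing is that any pair of entities satisfying the interface can stand in the moral relation, which is exactly the ontological neutrality noted above.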

There is additional literature which critiques arguments such as Floridi’s with the hope of expanding the idea of automated moral reasoning so that one can speak of many different types of automated moral technologies, from simple applications all the way to full moral agents with rights and responsibilities similar to humans (Adam 2008; Anderson and Anderson 2011; Johnson and Powers 2008; Schmidt 2007; Wallach and Allen 2010).

While scholars recognize that we are still some time from creating information technology that would be unequivocally recognized as an artificial moral agent, there are strong theoretical arguments suggesting that automated moral reasoning is an eventual possibility, and it is therefore an appropriate area of study for those interested in the moral impacts of information technologies.

Bibliography

  • Adam, A., 2002, “Cyberstalking and Internet pornography: Gender and the gaze,” Ethics and Information Technology, 4(2): 133–142.
  • –––, 2008, “Ethics for things,” Ethics and Information Technology, 10(2–3): 149–154.
  • American Academy of Pediatrics, 2018, “Tips from the American Academy of Pediatrics to Help Families Manage the Ever Changing Digital Landscape,” May 1, available online.
  • Anderson, M. and S. L. Anderson (eds.), 2011, Machine Ethics, Cambridge: Cambridge University Press.
  • Arkin, R., 2009, Governing Lethal Behavior in Autonomous Robots, New York: Chapman and Hall/CRC.
  • Arquilla, J., 2010, “Conflict, Security and Computer Ethics,” in Floridi 2010a.
  • Asaro, P., 2008, “How Just Could a Robot War Be?” in Philip Brey, Adam Briggle and Katinka Waelbers (eds.), Current Issues in Computing And Philosophy, Amsterdam, The Netherlands: IOS Press, pp. 50–64.
  • –––, 2009, “Modeling the Moral User: Designing Ethical Interfaces for Tele-Operation,” IEEE Technology & Society, 28(1): 20–24.
  • Au-Yeung, A., 2018, “Why Investors Remain Bullish On Facebook in Day Two Of Zuckerberg’s Congressional Hearings,” Forbes, April 11, available online.
  • Aycock, J. and J. Sullins, 2010, “Ethical Proactive Threat Research,” Workshop on Ethics in Computer Security Research (LNCS 6054), New York: Springer, pp. 231–239.
  • Bartell, C., 2011, “Resolving the gamer’s dilemma,” Ethics and Information Technology, 14(1): 11–16.
  • Baase, S., 2008, A Gift of Fire: Social, Legal, and Ethical Issues for Computing and the Internet, Englewood Cliffs, NJ: Prentice Hall.
  • BBC, 2014, “Computer AI passes Turing test in ‘world first’,” BBC Technology [available online]
  • Bedau, M., 2004, “Artificial Life,” in Floridi 2004.
  • Bedau, M. and E. Parke (eds.), 2009, The Ethics of Protocells: Moral and Social Implications of Creating Life in the Laboratory, Cambridge: MIT Press.
  • Bedau, M. and M. Triant, 2009, “Social and Ethical Implications of Creating Artificial Cells,” in Bedau and Parke 2009.
  • Bostrom, N., 2008, “Why I Want to be a Posthuman When I Grow Up,” in Medical Enhancement and Posthumanity, G. Gordijn and R. Chadwick (eds.), Berlin: Springer, pp. 107–137.
  • Brey, P., 2008, “Virtual Reality and Computer Simulation,” in Himma and Tavani 2008.
  • –––, 2010, “Values in Technology and Disclosive Computer Ethics,” in Floridi 2010a.
  • Bunge, M., 1977, “Towards a Technoethics,” The Monist, 60(1): 96–107.
  • Bynum, T., 2000, “Ethics and the Information Revolution,” Ethics in the Age of Information Technology, pp. 32–55, Linköping, Sweden: Center for Applied Ethics at Linköping University.
  • –––, 2008, “Norbert Wiener and the Rise of Information Ethics,” in van den Hoven and Weckert 2008.
  • Capurro, R. and M. Nagenborg, 2009, Ethics and Robotics, [CITY]: IOS Press.
  • Christakis, D. A., 2009, “The effects of infant media usage: what do we know and what should we learn?” Acta Pædiatrica, 98(1): 8–16.
  • Cisco Systems, Inc., 2018, Cisco 2018 Annual Security Report: Small and Mighty: How Small and Midmarket Businesses Can Fortify Their Defenses Against Today’s Threats, San Jose, CA: Cisco Systems Inc. [available online]
  • Coeckelbergh, M., 2007, “Violent Computer Games, Empathy, and Cosmopolitanism,” Ethics and Information Technology, 9(3): 219–231.
  • Cohen, D. D., C. Voss, M. J. D. Taylor, A. Delextrat, A. A. Ogunleye, and G. R. H. Sandercock, 2011, “Ten-year secular changes in muscular fitness in English children,” Acta Paediatrica, 100(10): e175–e177.
  • Danielson, P., 1998, Modeling Rationality, Morality, and Evolution, Oxford: Oxford University Press.
  • Dabringer, G. (ed.), 2010, Ethica Themen: Ethical and Legal Aspects of Unmanned Systems, Interviews, Vienna, Austria: Austrian Ministry of Defence and Sports. [available online]
  • Denning, D., 2008, “The Ethics of Cyber Conflict,” in Himma and Tavani 2008.
  • Dodig-Crnkovic, G. and W. Hofkirchner, 2011, “Floridi’s ‘Open Problems in Philosophy of Information’, Ten Years Later,” Information, (2): 327–359. [available online]
  • Edgar, S. L., 2003, Morality and Machines, Sudbury, Massachusetts: Jones and Bartlett.
  • Epstein, R., 2007, “The Impact of Computer Security Concerns on Software Development,” in Himma 2007a, pp. 171–202.
  • Epstein, L. S., 1980, “Decision-making and the temporal resolution of uncertainty,” International Economic Review, 21(2): 269–283.
  • Ess, C., 2009, Digital Media Ethics, Massachusetts: Polity Press.
  • Facebook, Inc., 2012, Form S-1: Registration Statement, filed with the United States Securities and Exchange Commission, Washington, DC, available online.
  • Floridi, L., 1999, “Information Ethics: On the Theoretical Foundations of Computer Ethics,” Ethics and Information Technology, 1(1): 37–56.
  • –––, 2002, “What is the Philosophy of Information?” in Metaphilosophy, 33(1/2): 123–145.
  • –––, 2003, “On the Intrinsic Value of Information Objects and the Infosphere,” Ethics and Information Technology, 4(4): 287–304.
  • –––, 2004, The Blackwell Guide to the Philosophy of Computing and Information, Blackwell Publishing.
  • ––– (ed.), 2010a, The Cambridge Handbook of Information and Computer Ethics, Cambridge: Cambridge University Press.
  • –––, 2010b, Information: A Very Short Introduction, Oxford: Oxford University Press.
  • –––, 2011a, “Enveloping the World for AI,” The Philosopher’s Magazine, 54: 20–21.
  • –––, 2011b, The Philosophy of Information, Oxford: Oxford University Press.
  • –––, 2016, “Should We be Afraid of AI?”, Nigel Warburton (ed.), Aeon, 09 May 2016, available online.
  • Floridi, L. and J. W. Sanders, 1999, “Entropy as Evil in Information Ethics,” Etica & Politica, special issue on Computer Ethics, I(2). [available online]
  • –––, 2001, “Artificial evil and the foundation of computer ethics,” Ethics and Information Technology, 3(1): 55–66. doi:10.1023/A:1011440125207
  • –––, 2004, “On the Morality of Artificial Agents,” Minds and Machines, 14(3): 349–379. [available online]
  • Ferguson, C. J., 2007, “The Good The Bad and the Ugly: A Meta-analytic Review of Positive and Negative Effects of Violent Video Games,” Psychiatric Quarterly, 78(4): 309–316.
  • Gelernter, D., 2007, “Artificial Intelligence Is Lost in the Woods,” Technology Review, July/August, pp. 62–70. [available online]
  • Gibb, G. D., J. R. Bailey, T. T. Lambirth, and W. Wilson, 1983, “Personality Differences Between High and Low Electronic Video Game Users,” The Journal of Psychology, 114(2): 159–165.
  • Grim, P., 2004, “Computational Modeling as a Philosophical Methodology,” in Floridi 2004.
  • Grim, P., G. Mar, and P. St. Denis, 1998, The Philosophical Computer: Exploratory Essays in Philosophical Computer Modeling, MIT Press.
  • Grodzinsky, F. S. and H. T. Tavani, 2002, “Ethical Reflections on Cyberstalking,” Computers and Society, 32(1): 22–32.
  • Hansen, R. and J. Grossman, 2008, “Clickjacking,” SecTheory: Internet Security. [available online]
  • Hickman, L. A., 1990, John Dewey’s Pragmatic Technology, Bloomington, Indiana: Indiana University Press.
  • Himma, K. E. (ed.), 2007a, Internet Security, Hacking, Counterhacking, and Society, Sudbury, Massachusetts: Jones and Bartlett Publishers.
  • Himma, K. E., 2007b, “Hacking as Politically Motivated Digital Civil Disobedience: Is Hacktivism Morally Justified?” in Himma 2007a, pp. 73–98.
  • Himma, K. E. and H. T. Tavani (eds.), 2008, The Handbook of Information and Computer Ethics, Wiley-Interscience, 1st edition.
  • Hongladarom, S., 2008, “Privacy, Contingency, Identity and the Group,” Handbook of Research on Technoethics, Vol. II, R. Luppicini and R. Adell (eds.), Hershey, PA: IGI Global, pp. 496–511.
  • IEEE, 2018, “Ethically Aligned Design: The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems,” IEEE. [available online]
  • Ivory, J. D., 2006, “Still a Man’s Game: Gender Representation in Online Reviews of Video Games,” Mass Communication and Society, 9(1): 103–114.
  • Johansson, M., 2009, “Why unreal punishments in response to unreal crimes might actually be a really good thing,” Ethics and Information Technology, 11(1): 71–79.
  • Johnson, D. G., 1985, Computer Ethics, Englewood Cliffs, New Jersey: Prentice Hall. (2nd ed., 1994; 3rd ed., 2001; 4th ed., 2009).
  • Johnson, D. G. and T. Powers, 2008, “Computers and Surrogate Agents,” in van den Hoven and Weckert 2008.
  • Jones, T., 2011, “Techno-toddlers: A is for Apple,” The Guardian, Friday November 18. [available online]
  • Kaspersky Lab, 2017, Kaspersky Security Bulletin: Kaspersky Lab Threat Predictions for 2018, Moscow, Russia: Kaspersky Lab ZAO. [available online]
  • Keen, A., 2011, “Your Life Torn Open, Essay 1: Sharing is a trap,” Wired, 03 Feb 2011, available online.
  • Kurzweil, R., 2006, The Singularity is Near, New York: Penguin Press.
  • Langton, C. G. (ed.), 1989, Artificial Life: the Proceedings of an Interdisciplinary Workshop on the Synthesis and Simulation of Living Systems, Redwood City: Addison-Wesley.
  • Ledin, G., 2005, “Not Teaching Viruses and Worms is Harmful,” Communications of the ACM, 48(1): 144.
  • Lessig, L., 1999, Code and Other Laws of Cyberspace, New York: Basic Books.
  • Levy, S., 1984, Hackers: Heroes of the Computer Revolution, New York: Anchor Press.
  • Lin, P., 2010, “Ethical Blowback from Emerging Technologies,” Journal of Military Ethics, 9(4): 313–331.
  • Lin, P., K. Abney, and R. Jenkins, 2017, Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, Oxford: Oxford University Press.
  • Lin, P., K. Abney, and G. Bekey, 2012, Robot Ethics: The Ethical and Social Implications of Robotics, Cambridge, MA: MIT Press.
  • –––, 2013, “Ethics, War, and Robots,” Ethics and Emerging Technologies, London: Palgrave–Macmillan.
  • Lin, P., G. Bekey, and K. Abney, 2008, Autonomous Military Robotics: Risk, Ethics, and Design, Washington, DC: U.S. Department of the Navy, Office of Naval Research. [available online]
  • Lovely, E., 2010, “Cyberattacks explode in Congress,”Politico, March 5, 2010. [available online]
  • Lü, Yao-Hui, 2005, “Privacy and Data Privacy Issues inContemporary China,”Ethics and Information Technology,7(1): 7–15
  • Ludlow, P. and M. Wallace, 2007,The Second Life Herald: TheVirtual Tabloid that Witnessed the Dawn of the Metaverse,Cambridge, MA: MIT Press.
  • Luck, M., 2009, “The gamer’s dilemma: An analysis of thearguments for the moral distinction between virtual murder and virtualpaedophilia,”Ethics and Information Technology, 11(1):31–36.
  • Luppicini, R. and R. Adell (eds.), 2009,Handbook of Researchon Technoethics, Idea Group Inc. (IGI).
  • Magnani, L., 2007, Morality in a Technological World: Knowledge asDuty, Cambridge, Cambridge University Press.
  • Mason, R. O., 1986, Four ethical issues of the information age.MIS Quarterly, 10(1): 5–12.
  • Markoff, J., 2005,What the Dormouse Said: How the 60sCounterculture Shaped the Personal Computer Industry, New York:Penguin.
  • Manion, M. and A. Goodrum, 2007, “Terrorism or CivilDisobedience: Toward a Hacktivist Ethic,” in Himma 2007a, pp.49–59.
  • McAfee, 2018,Economic Impact of Cybercrime: No Slowing Down, Report [available online]
  • McMahon, J. M. and R. Cohen, 2009, “Lost in cyberspace:ethical decision making in the online environment,”Ethicsand Information technology, 11(1): 1–17.
  • Midgley, M., 1992,Science as Salvation: a modern myth and itsmeaning, London: Routledge.
  • Moor, J. H., 1985, “What is Computer Ethics?”Metaphilosophy, 16(4): 266–275.
  • –––, 2005, “Why We Need Better Ethics forEmerging Technologies,”Ethics and InformationTechnology, 7(3): 111–119. Reprinted in van den Hoven andWeckert 2008, pp. 26–39.
  • Moore, Gordon E. 1965. “Cramming more components ontointegrated circuits”.Electronics, 38(8):114–117. [available online]
  • Neumann, P. G., 2004, “Computer security and humanvalues,” ComputerEthics and ProfessionalResponsibility, Malden, MA: Blackwell
  • Nissenbaum, H., 1997, “Toward an Approach to Privacy in Public: Challenges of Information Technology,” Ethics and Behavior, 7(3): 207–219. [available online]
  • –––, 1998, “Values in the Design of Computer Systems,” Computers and Society, March: pp. 38–39. [available online]
  • –––, 1999, “The Meaning of Anonymity in anInformation Age,”The Information Society, 15:141–144.
  • –––, 2009,Privacy in Context: Technology,Policy, and the Integrity of Social Life, Stanford Law Books:Stanford University Press.
  • Northcutt, S. and C. Madden, 2004,IT Ethics Handbook: Rightand Wrong for IT Professionals, Syngress.
  • O’Neill, O., 2002, “Trust is the first casualty of the cult of transparency,” The Telegraph, 24 April, available online.
  • O’Reilly, T., 2007 [2005], “What is Web 2.0: Design Patternsand Business Models for the Next Generation of Software,”Communications & Strategies, 65(1): 17–37;available online.[The earlier, 2005 version, is linked into the Other InternetResources section below.]
  • Parrish, J., 2010, “PAPA knows best: Principles for theethical sharing of information on social networking sites,”Ethics and Information Technology, 12(2): 187–193.
  • Pettit, P., 2009, “Trust, Reliance, and the Internet,” in van den Hoven and Weckert 2008.
  • Plaw, A., 2013, “Counting the Dead: The Proportionality of Predation in Pakistan,” in Strawser 2013.
  • Planned Parenthood, 2017, “New Study Shows Virtual RealityCan Move People’s Views on Abortion and Clinic Harassment,” [available online]
  • Plato, “Phaedrus,” in Plato: The Collected Dialogues, E. Hamilton and H. Cairns (eds.), Princeton: Princeton University Press, pp. 475–525.
  • Poushter, J., 2016, “Smartphone Ownership and Internet Usage Continues to Climb in Emerging Economies: But advanced economies still have higher rates of technology use,” Pew Research Center, 22 February 2016. [available online]
  • Powers, T., 2011, “Prospects for a Kantian Machine,”in Anderson and Anderson 2011.
  • Purcell, M., 2008, “Pernicious virtual communities:Identity, polarisation and the Web 2.0,”Ethics andInformation Technology, 10(1): 41–56.
  • Reynolds, G., 2009,Ethics in Information Technology,(3rd ed.), Course Technology.
  • Russell, S. and P. Norvig, 2010, Artificial Intelligence: A Modern Approach, (3rd ed.), Upper Saddle River, NJ: Prentice Hall.
  • Schmidt, C. T. A., 2007, “Children, Robots and… the Parental Role,” Minds and Machines, 17(3): 273–286.
  • Schulzke, M., 2010, “Defending the Morality of Violent VideoGames,”Ethics and Information Technology, 12(2):127–138.
  • Searle, J., 1980, “Minds, Brains, and Programs,”Behavioral and Brain Sciences, 3: 417–57.
  • Shannon, C.E., 1948, “A Mathematical Theory ofCommunication”,Bell System Technical Journal, 27(July,October): 379–423, 623–656. [available online]
  • Shannon, C. E. and W. Weaver, 1949,The Mathematical Theory ofCommunication, University of Illinois Press.
  • Sharkey, N. E., 2011, “The automation and proliferation of military drones and the protection of civilians,” Journal of Law, Innovation and Technology, 3(2): 229–240.
  • Singer, P. W., 2009, Wired for War: The Robotics Revolution and Conflict in the 21st Century, New York: Penguin (reprint edition).
  • Siraj-Blatchford, J., 2010, “Analysis: ‘ComputersBenefit Children’,”Nursery World, October 6. [available online]
  • Soukup, C., 2007, “Mastering the Game: Gender and theEntelechial Motivational System of Video Games,”Women’sStudies in Communication, 30(2): 157–178.
  • Søraker, Johnny Hartz, 2012, “How Shall I CompareThee? Comparing the Prudential Value of Actual VirtualFriendship,”Ethics and Information technology, 14(3):209–219. doi:10.1007/s10676-012-9294-x [available online]
  • Spafford, E. H., 1992, “Are computer hacker break-ins ethical?” Journal of Systems and Software, 17(1): 41–47.
  • –––, 2007, “Are Computer Hacker Break-insEthical?” in Himma 2007a, pp. 49–59.
  • Spinello, R. A., 2001,Cyberethics, Sudbury, MA: Jonesand Bartlett Publishers. (2nd ed., 2003; 3rd ed., 2006;4th ed., 2010).
  • –––, 2002,Case Studies in InformationTechnology Ethics, Prentice Hall. (2nd ed.).
  • Sprenger P., 1999, “Sun on Privacy: ‘Get Over It’,”Wired, January 26, 1999. [available online]
  • Statista, 2018, “Number of social media users worldwide from2010 to 2021 (in billions)”, [available online].
  • Strawser, B. J., 2013, Killing by Remote Control: The Ethics of an Unmanned Military, Oxford: Oxford University Press.
  • Sullins, J. P., 2000, “Transcending the meat: immersivetechnologies and computer mediated bodies,”Journal ofExperimental and Theoretical Artificial Intelligence, 12(1):13–22.
  • –––, 2005, “Ethics and Artificial life:From Modeling to Moral Agents,”Ethics and Informationtechnology, 7(3): 139–148. [available online]
  • –––, 2006, “When Is a Robot a MoralAgent?”International Review of Information Ethics,6(12): 23–30. [available online]
  • –––, 2009a, “Artificial Moral Agency inTechnoethics,” in Luppicini and Adell 2009.
  • –––, 2009b, “Telerobotic weapons systemsand the ethical conduct of war,”APA Newsletter onPhilosophy and Computers, P. Boltuc (ed.) 8(2): 21.
  • –––, 2010, “Rights and ComputerEthics,” in Floridi 2010a.
  • –––, forthcoming, “Deception and Virtue in Robotic and Cyber Warfare,” presentation for the Workshop on the Ethics of Informational Warfare, University of Hertfordshire, UK, July 1–2, 2011.
  • Symantec, 2018, Internet Security Threat Report (ISTR),Symantec Security Response, [available online]
  • Tavani, H. T., 2007, “The Conceptual and Moral Landscape ofComputer Security,” in Himma 2007a, pp. 29–45.
  • –––, 2010,Ethics and Technology:Controversies, Questions, and Strategies for Ethical Computing,(3rd ed.), Wiley.
  • Tavani, H. and J. Moor, 2004, “Privacy Protection, Control of Information, and Privacy-Enhancing Technologies,” in Readings in Cyberethics, second edition, Spinello, R. and Tavani, H. (eds.), Sudbury, MA: Jones and Bartlett.
  • Taylor, N., J. Jenson, and S. de Castell, 2009, “Cheerleaders/booth babes/Halo hoes: pro-gaming, gender and jobs for the boys,” Digital Creativity, 20(4): 239–252.
  • Turing, A. M., 1948, “Machine Intelligence,” in B. Jack Copeland (ed.), The Essential Turing: The ideas that gave birth to the computer age, Oxford: Oxford University Press.
  • –––, 1950, “Computing Machinery andIntelligence”,Mind, 59(236): 433–460. doi:10.1093/mind/LIX.236.433
  • Vallor, S., 2010, “Social Networking Technology and the Virtues,” Ethics and Information Technology, 12(2): 157–170.
  • –––, 2011, “Flourishing on Facebook: Virtue Friendship and New Social Media,” Ethics and Information Technology, pp. 1–15, Netherlands: Springer.
  • –––, 2016,Technology and the Virtues: APhilosophical Guide to a Future worth Wanting, Oxford: OxfordUniversity Press.
  • Van den Hoven, J. and J. Weckert (eds), 2008,InformationTechnology and Moral Philosophy, Cambridge: Cambridge UniversityPress.
  • Von Neumann, J., 1966, Theory of Self-Reproducing Automata, edited and completed by A. Burks, Urbana-Champaign: University of Illinois Press.
  • Wallach, W., 2011, “From Robots to Techno Sapiens: Ethics, Law and Public Policy in the Development of Robotics and Neurotechnologies,” Law, Innovation and Technology, 3(2): 185–207.
  • Wallach, W. and C. Allen, 2010,Moral Machines: TeachingRobots Right from Wrong, Oxford: Oxford University Press.
  • Warschauer, M., 2003,Technology and Social Inclusion:Rethinking the Digital Divide, Cambridge: MIT Press.
  • Weckert, John, 2007, “Giving and Taking Offence in a GlobalContext,”International Journal of Technology and HumanInteraction, 3(3): 25–35.
  • Westin, A., 1967,Privacy and Freedom, New York:Atheneum.
  • Wiener, N., 1950,The Human Use of Human Beings,Cambridge, MA: The Riverside Press (Houghton Mifflin Co.).
  • –––, 1961,Cybernetics: Or Control andCommunication in the Animal and the Machine, 2nd revised ed.,Cambridge: MIT Press. First edition, 1948.
  • Woodbury, M. C., 2010,Computer and Information Ethics,2nd edition; 1st edition, 2003, Champaign, IL:Stipes Publishing LLC.

Copyright © 2018 by
John Sullins <john.sullins@sonoma.edu>

Library of Congress Catalog Data: ISSN 1095-5054