Artificial Intelligence (AI) is already having a major impact on society. The key questions are how, where, when, and by whom the impact of AI will be felt. As a result, many organizations have launched a wide range of initiatives to establish ethical principles for the adoption of socially beneficial AI. Unfortunately, the sheer volume of proposed principles threatens to become overwhelming and confusing, posing two potential problems.
How might this problem of "principle proliferation" be solved? In this paper, we report the results of a fine-grained analysis of several of the highest-profile sets of ethical principles for AI. We assess whether these principles are convergent, with a set of agreed-upon principles, or divergent, with significant disagreement over what constitutes "ethical AI." Our analysis finds a high degree of overlap among the sets of principles we analyze. We then identify an overarching framework consisting of five core principles for ethical AI: four of them (beneficence, non-maleficence, autonomy, and justice) are familiar from bioethics, while a fifth, explicability, is needed to capture the novel demands that AI places on intelligibility and accountability.
AI has been defined in many ways. Today, it comprises several techno-scientific branches, well summarized in Figure 1 (see also the articles by Dick and Jordan in this issue for enlightening analyses).
Altogether, AI paradigms still satisfy the classic definition provided by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon in their seminal 1955 proposal for the Dartmouth Summer Research Project on Artificial Intelligence (McCarthy et al., 2006):
For the present purpose the artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving. (Quotation from the 2006 re-issue; see McCarthy et al., 2006.)
This is a counterfactual definition: it asks whether the behaviour in question would be called intelligent were a human to behave in the same way, not whether the machine itself is intelligent or thinking.
The classic definition enables one to conceptualize AI as a growing resource of interactive, autonomous, and often self-learning agency (in the machine learning sense, see Figure 1), that can deal with tasks that would otherwise require human intelligence and intervention to be performed successfully. In short, AI is defined on the basis of engineered outcomes and actions and so, in what follows, we shall treat AI as a reservoir of smart agency on tap.
The establishment of artificial intelligence as a field of academic research dates back to the 1950s (McCarthy et al., 2006; Turing, 1950).
The time has come for a comparative analysis of these documents, including an assessment of whether they converge or diverge and, if the former, whether a unified framework may therefore be synthesised. For this comparative analysis, we identified six high-profile initiatives established in the interest of socially beneficial AI:
The Asilomar AI Principles, developed under the auspices of the Future of Life Institute, in collaboration with attendees of the high-level Asilomar conference of January 2017 (hereafter "Asilomar"; Asilomar AI Principles, 2017)
The Montreal Declaration for Responsible AI, developed under the auspices of the University of Montreal, following the Forum on the Socially Responsible Development of AI of November 2017 (hereafter "Montreal"; Montreal Declaration, 2017)
The General Principles offered in the second version of Ethically Aligned Design, published by the IEEE Initiative on Ethics of Autonomous and Intelligent Systems (hereafter "IEEE"; IEEE, 2017)
The Ethical Principles offered in the statement published by the European Group on Ethics in Science and New Technologies (hereafter "EGE"; European Group on Ethics in Science and New Technologies, 2018)
The "five overarching principles for an AI code" offered in the UK House of Lords Artificial Intelligence Committee's report (hereafter "AIUK"; House of Lords Artificial Intelligence Committee, 2018)
The Tenets of the Partnership on AI, a multi-stakeholder organization consisting of academics, researchers, civil society organisations, companies building and utilising AI technology, and other groups (hereafter "the Partnership"; Partnership on AI, 2018).
Each set of principles meets three basic criteria: it is recent, published within the last few years; it is directly relevant to AI and its impact on society as a whole, rather than to a single domain, industry, or sector; and it is highly reputable, having been issued by an authoritative, multi-stakeholder body with at least national scope.
The principle of creating AI technology that is beneficial to humanity is expressed in different ways across the six documents, but is perhaps the easiest of the four traditional bioethics principles to observe. The Montreal and IEEE principles both use the term "well-being": for Montreal, "the development of AI should ultimately promote the well-being of all sentient creatures," while IEEE states the need to "prioritize human well-being as an outcome in all system designs." AIUK and Asilomar both characterise this principle as the "common good": AI should "be developed for the common good and the benefit of humanity," according to AIUK. The Partnership describes the intention to "ensure that AI technologies benefit and empower as many people as possible," while the EGE emphasizes the principles of both "human dignity" and "sustainability." Its principle of "sustainability" articulates perhaps the widest of all interpretations of beneficence, arguing that "AI technology must be in line with … ensur[ing] the basic preconditions for life on our planet, continued prospering for mankind and the preservation of a good environment for future generations." Taken together, the prominence of beneficence firmly underlines the central importance of promoting the well-being of people and the planet with AI.
Though "do only good" (beneficence) and "do no harm" (non-maleficence) may seem logically equivalent, they are not, and represent distinct principles. While the six documents all encourage the creation of beneficent AI, each one also cautions against various negative consequences of overusing or misusing AI technologies.
When we adopt AI and its smart agency, we willingly cede some of our decision-making power to technological artefacts. Thus, affirming the principle of autonomy in the context of AI means striking a balance between the decision-making power we retain for ourselves and that which we delegate to artificial agents. The risk is that the growth in artificial autonomy may come at the expense of human autonomy, as ever more decisions are delegated to machines.
The decision to make or delegate decisions does not take place in a vacuum. Nor is this capacity distributed equally across society. The consequences of this disparity in autonomy are addressed in the principle of justice. The importance of "justice" is explicitly cited in the Montreal Declaration, which argues that "the development of AI should promote justice and seek to eliminate all types of discrimination," while the Asilomar Principles include the need for both "shared benefit" and "shared prosperity" from AI. Under its principle named "Justice, equity and solidarity," the EGE argues that AI should "contribute to global justice and equal access to the benefits" of AI technologies. It also warns against the risk of bias in datasets used to train AI systems, and, unique among the documents, argues for the need to defend against threats to "solidarity," including "systems of mutual assistance such as in social insurance and healthcare." Elsewhere "justice" has still other meanings (especially in the sense of fairness).
The short answer to the question of whether "we" are the patient or the doctor is that we could be either, depending on the circumstances and on who "we" are in everyday life. The situation is inherently unequal: a small fraction of humanity is currently engaged in the development of a set of technologies that are already transforming the everyday lives of almost everyone else. This stark reality is not lost on the authors whose documents we analyze. All of them refer to the need to understand and hold to account the decision-making processes of AI. Different terms express this principle: "transparency" in Asilomar and EGE; both "transparency" and "accountability" in IEEE; "intelligibility" in AIUK; and "understandable and interpretable" in the Partnership's tenets. Each of these principles captures something seemingly novel about AI: that its workings are often invisible or unintelligible to all but (at best) the most expert observers.
The addition of the principle of "explicability," incorporating both the epistemological sense of "intelligibility" (as an answer to the question "how does it work?") and the ethical sense of "accountability" (as an answer to the question "who is responsible for the way it works?"), is the crucial missing piece of the AI ethics jigsaw. It complements the other four principles: for AI to be beneficent and non-maleficent, we must be able to understand the good or harm it is actually doing to society, and in which ways; for AI to promote and not constrain human autonomy, our "decision about who should decide" must be informed by knowledge of how AI would act instead of us; and for AI to be just, we must know whom to hold accountable in the event of a serious, negative outcome, which would in turn require an adequate understanding of why this outcome arose.
Taken together, these five principles capture every one of the 47 principles contained in the six high-profile, expert-driven documents we analyzed. Moreover, each principle features in almost every one of the six documents (see Table 1 below). The five principles therefore form an ethical framework within which policies, best practices, and other recommendations may be made. This framework of principles is shown in Figure 2.
It is important to note that each of the six sets of ethical principles for AI that we analyzed emerged either from initiatives with global scope or from within Western liberal democracies. For the framework to be more broadly applicable, it would undoubtedly benefit from the perspectives of regions and cultures presently un- or under-represented in our sample. Of particular interest in this respect is the role of China, which is already home to the world's most valuable AI start-up (Jezard, 2018).
An executive at the major Chinese technology firm Tencent recently suggested that the European Union should focus on developing AI which has "the maximum benefit for human life, even if that technology isn't competitive to take on [the] American or Chinese market" (Boland, 2018).
If the framework presented in this article provides a coherent and sufficiently comprehensive overview of the central ethical principles for AI, it may serve as the architecture within which specific laws, rules, technical standards, and best practices can be developed for particular sectors and jurisdictions.
Table 1. Coverage of the five principles (beneficence, non-maleficence, autonomy, justice, and explicability) across the six sets of ethical principles analyzed.
The development and use of AI hold the potential for both positive and negative impact on society: to alleviate or to amplify existing inequalities, to cure old problems or to cause new ones. Charting the course that is socially preferable will depend not only on well-crafted regulation and common standards, but also on the use of a framework of ethical principles within which concrete actions can be situated. We believe that the framework presented here, which emerges from the current debate, will serve as valuable architecture for securing positive social outcomes from AI technology and for moving from good principles to good practices (Floridi, 2019b; Morley et al., 2019).
Floridi's work was supported by (i) the Privacy and Trust Stream (Social lead) of the PETRAS Internet of Things research hub; PETRAS is funded by the UK Engineering and Physical Sciences Research Council (EPSRC), grant agreement no. EP/N023013/1; (ii) Facebook; and (iii) Google. Cowls is the recipient of a Doctoral Studentship from the Alan Turing Institute.
Floridi chaired the AI4People project and Cowls was the rapporteur. Floridi is also a member of the European Commission's High-Level Expert Group on Artificial Intelligence (HLEGAI).
Beauchamp, T. L., & Childress, J. F. (2012).
Boland, H. (2018, October 14). Tencent executive urges Europe to focus on ethical uses of artificial intelligence.
China State Council (2017, July 8).
Corea, F. (2019). AI knowledge map: How to classify AI technologies, a sketch of a new AI technology landscape. First appeared in
Cowls, J., Floridi, L., & Taddeo, M. (2018). The challenges and opportunities of ethical AI.
Cowls, J., King, T. C., Taddeo, M., & Floridi, L. (2019). Designing AI for social good: Seven essential factors.
Cowls, J., Png, M.-T., & Au, Y. (n.d.). Foundations for geographic representation in algorithmic ethics. Unpublished.
Delcker, J. (2018, March 3). Europe's silver bullet in global AI battle: Ethics.
Ding, J. (2018, March). Deciphering China's AI dream.
European Group on Ethics in Science and New Technologies (2018, March).
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People – An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations.
Floridi, L., Taddeo, M., & Turilli, M. (2009). Turing's imitation game: Still an impossible challenge for all machines and some judges – an evaluation of the 2008 Loebner contest.
Floridi, L. (2013).
Floridi, L. (2019a). What the near future of artificial intelligence could be.
Floridi, L. (2019b). Translating principles into practices of digital ethics: Five risks of being unethical.
Hagendorff, T. (2019). The ethics of AI ethics – An evaluation of guidelines.
HLEGAI [High Level Expert Group on Artificial Intelligence], European Commission (2018, December 18).
HLEGAI [High Level Expert Group on Artificial Intelligence], European Commission (2019, April 8).
House of Lords Artificial Intelligence Committee (2018, April 16).
The IEEE Initiative on Ethics of Autonomous and Intelligent Systems (2017). Ethically Aligned Design, v2.
Jezard, A. (2018, April 11).
King, T., Aggarwal, N., Taddeo, M., & Floridi, L. (2018, May 22). Artificial intelligence crime: An interdisciplinary analysis of foreseeable threats and solutions.
Lee, K., & Triolo, P. (2017, December). China's artificial intelligence revolution: Understanding Beijing's structural advantages.
McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955.
Montreal Declaration for a Responsible Development of Artificial Intelligence (2017, November 3). Announced at the conclusion of the Forum on the Socially Responsible Development of AI.
Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2019). From what to how. An overview of AI ethics tools, methods and research to translate principles into practices.
OECD (2019). Recommendation of the Council on Artificial Intelligence.
Partnership on AI (2018). Tenets.
Samuel, A. L. (1960). Some moral and technical consequences of automation – A refutation.
Taddeo, M., & Floridi, L. (2018a). How AI can be a force for good.
Taddeo, M., & Floridi, L. (2018b). Regulate artificial intelligence to avert cyber arms race.
Turing, A. M. (1950). Computing machinery and intelligence.
Vodafone Institute for Society and Communications (2018).
Webster, G., Creemers, R., Triolo, P., & Kania, E. (2017, August 1). China's plan to 'Lead' in AI: Purpose, prospects, and problems.
Wiener, N. (1960). Some moral and technical consequences of automation.
Yang, G. Z., Bellingham, J., Dupont, P. E., Fischer, P., Floridi, L., Full, R., Jacobstein, N., Kumar, V., McNutt, M., Merrifield, R., Nelson, B. J., Scassellati, B., Taddeo, M., Taylor, R., Veloso, M., Wang, Z. L., & Wood, R. (2018). The grand challenges of Science Robotics.
©2019 Luciano Floridi and Josh Cowls. This article is licensed under a Creative Commons Attribution (CC BY 4.0) license.