The Trolley Problem and Isaac Asimov's First Law of Robotics. Erik Persson & Maria Hedlund - 2024 - Journal of Science Fiction and Philosophy 7.
How to make robots safe for humans is intensely debated, within academia as well as in industry, the media, and the political arena. Hardly any discussion of the subject fails to mention Isaac Asimov's Three Laws of Robotics. We find it curious that a set of fictional laws can have such a strong impact on discussions about a real-world problem, and we think this needs to be looked into. Probably the most common phrase in connection with robot and AI ethics, second only to "The Three Laws of Robotics", is "The Trolley Problem". Asimov's laws and the Trolley Problem are usually discussed separately, but there is a connection: the Trolley Problem poses a seemingly unsolvable problem for Asimov's First Law, which states: A robot may not injure a human being or, through inaction, allow a human being to come to harm. That is, the law contains an active and a passive clause and obliges the robot to obey both, while the Trolley Problem forces us to choose between these two options. The object of this paper is therefore to investigate whether and how Asimov's First Law of Robotics can handle a situation where we are forced to choose between the active and the passive clauses of the law. We discuss four possible solutions to the challenge, explicitly or implicitly used by Asimov. We conclude that all four suggestions would solve the problem, but in different ways and with different implications for other dilemmas in robot ethics. We also conclude that, considering the urgency of finding ways to secure a safe coexistence between humans and robots, we should not let the Trolley Problem stand in the way of using the First Law of Robotics for this purpose. If we want to use Asimov's laws for this purpose, we also recommend discarding the active clause of the First Law.
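The dilemma the paper addresses is structural: the First Law conjoins an active clause (do not injure) and a passive clause (do not allow harm through inaction), and a trolley-style case makes every available option violate one of them. The following sketch, with hypothetical names and harm counts chosen only for illustration, shows how treating both clauses as hard constraints leaves a robot with no permissible option:

```python
# A minimal sketch (names and harm counts are hypothetical, purely for
# illustration) of the structural conflict the paper identifies: in a
# trolley-style case, every available option violates either the active
# or the passive clause of Asimov's First Law.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    harms_by_action: int    # humans the robot would injure by acting
    harms_by_inaction: int  # humans the robot would allow to come to harm

def violates_first_law(o: Option) -> bool:
    # Active clause: "may not injure a human being".
    # Passive clause: "or, through inaction, allow a human being to come to harm".
    return o.harms_by_action > 0 or o.harms_by_inaction > 0

trolley_case = [
    Option("divert the trolley", harms_by_action=1, harms_by_inaction=0),
    Option("do nothing", harms_by_action=0, harms_by_inaction=5),
]

permissible = [o.name for o in trolley_case if not violates_first_law(o)]
print(permissible)  # [] -- with both clauses in force, no option is permissible
```

That the permissible set comes out empty is one way of stating why the paper ends up recommending that the active clause be discarded if the First Law is to be used for real-world robot safety.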
Expert responsibility in AI development. Maria Hedlund & Erik Persson - 2022 - AI and Society:1-12.
The purpose of this paper is to discuss the responsibility of AI experts for guiding the development of AI in a desirable direction. More specifically, the aim is to answer the following research question: To what extent are AI experts responsible in a forward-looking way for effects of AI technology that go beyond the immediate concerns of the programmer or designer? AI experts, in this paper conceptualised as experts on the technological aspects of AI, have knowledge and control of AI technology that non-experts do not have. Drawing on responsibility theory, theories of the policy process, and critical algorithm studies, we discuss to what extent this capacity, and the positions these experts occupy to influence AI development, make AI experts responsible in a forward-looking sense for the consequences of the use of AI technology. We conclude that, as a professional collective, AI experts are, to some extent, responsible in a forward-looking sense for consequences of the use of AI technology that they could foresee, but with the risk of increased influence of AI experts at the expense of other actors. It is crucial that a diversity of actors is included in democratic processes on the future development of AI, but for this to be meaningful, AI experts need to take responsibility for how the AI technology they develop affects public deliberation.
Epigenetic Responsibility. Maria Hedlund - 2011 - Medicine Studies 3 (3):171-183.
The purpose of this article is to argue that epigenetic responsibility should primarily be a political, not an individual, responsibility. Epigenetics is a rapidly growing research field studying regulations of gene expression that do not change the DNA sequence. Knowledge about these mechanisms is still uncertain in many respects, but the main presumptions are that they are triggered by environmental factors and lifestyle and are, to a certain extent, heritable by subsequent generations, thereby recalling aspects of Lamarckism. Epigenetic research advances give rise to intriguing challenges for responsibility relations between society and the individual. Responsibility is commonly understood in a backward-looking manner, identifying causally responsible actors to blame for a bad outcome. If only a backward-looking responsibility model is applied, epigenetics might give rise to arduous responsibility ascriptions to individuals for their own health and the health of their future descendants. This would place heavy responsibility burdens on actors constrained by unequal social and economic structures. In contrast, a forward-looking notion of responsibility takes account of structural conditions and pays attention to who is best placed to do something about the conditions contributing to bad outcomes. A forward-looking responsibility notion would partly free disadvantaged individuals from responsibility and identify actors with the power and capacity to do something about the structural factors constraining genuine choice.
The future of AI in our hands? To what extent are we as individuals morally responsible for guiding the development of AI in a desirable direction? Erik Persson & Maria Hedlund - 2022 - AI and Ethics 2:683-695.
Artificial intelligence (AI) is becoming increasingly influential in most people's lives. This raises many philosophical questions. One is what responsibility we have as individuals to guide the development of AI in a desirable direction. More specifically, how should this responsibility be distributed among individuals, and between individuals and other actors? We investigate this question from the perspective of five principles of distribution that dominate the discussion about responsibility in connection with climate change: effectiveness, equality, desert, need, and ability. Since much is already written about these distributions in that context, we believe much can be gained if we can make use of this discussion in connection with AI as well. Our most important findings are: (1) Different principles give different answers depending on how they are interpreted, but in many cases, different interpretations and different principles agree and even strengthen each other. If, for instance, 'equality-based distribution' is interpreted in a consequentialist sense, effectiveness, and through it ability, will play important roles in the actual distribution, but so will an equal distribution as such, since we foresee that an increased responsibility of underrepresented groups will make the risks and benefits of AI more equally distributed. The corresponding reasoning holds for need-based distribution. (2) If we acknowledge that someone has a certain responsibility, we also have to acknowledge a corresponding degree of influence for that someone over the matter in question. (3) Independently of which distribution principle we prefer, ability cannot be dismissed. Ability is not fixed, however, and if one of the other distributions is morally required, we are also morally required to increase the ability of those less able to take on the required responsibility.
Who Should Obey Asimov's Laws of Robotics? A Question of Responsibility. Maria Hedlund & Erik Persson - 2024 - In Spyridon Stelios & Kostas Theologou (eds.), The Ethics Gap in the Engineering of the Future. Emerald Publishing. pp. 9-25.
The aim of this chapter is to explore the safety value of implementing Asimov's Laws of Robotics as a future general framework that humans should obey. Asimov formulated the laws to make explicit the safeguards of the robots in his stories: (1) A robot may not injure or harm a human being or, through inaction, allow a human being to come to harm; (2) A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law; (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. In Asimov's stories, it is always assumed that the laws are built into the robots to govern their behaviour. As his stories clearly demonstrate, the laws can be ambiguous. Moreover, the laws are not very specific. General rules as a guide for robot behaviour may not be a very good method for achieving robot safety – if we expect the robots to follow them. But would it work for humans? In this chapter, we ask whether it would make as much, or more, sense to implement the laws in human legislation with the purpose of governing the behaviour of people or companies that develop, build, market or use AI, embodied in robots or in the form of software, now and in the future.
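The three laws as quoted carry an explicit priority ordering: the Second Law yields to the First, and the Third to both. One way to make that ordering concrete is as a lexicographic ranking over candidate actions; the sketch below uses hypothetical fields and candidate actions, and is only an illustration of the priority structure, not of anything the chapter itself proposes:

```python
# A toy sketch (the fields and candidate actions are hypothetical, an
# assumption of this illustration) of the priority ordering built into the
# Three Laws: candidate actions are ranked lexicographically, so avoiding a
# First Law violation always outweighs obedience (Second Law), which in turn
# outweighs self-preservation (Third Law).

def law_violations(action: dict) -> tuple:
    return (
        action.get("harms_human", False),     # First Law
        action.get("disobeys_order", False),  # Second Law
        action.get("destroys_self", False),   # Third Law
    )

def choose(actions: list) -> dict:
    # Lexicographic comparison of the violation tuples encodes the
    # "except where such orders would conflict" clauses.
    return min(actions, key=law_violations)

candidates = [
    {"name": "obey the order", "harms_human": True},
    {"name": "refuse the order", "disobeys_order": True},
]
print(choose(candidates)["name"])  # refuse the order: the First Law outranks the Second
```

Ranking rather than hard filtering is what captures the "except where such orders would conflict" clauses: a lower law is only decisive when all higher laws are tied.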
Ethicisation and Reliance on Ethics Expertise. Maria Hedlund - 2024 - Res Publica 30 (1):87-105.
Ethicisation refers to the tendency to frame issues in ethical terms and can be observed in different areas of society, particularly in relation to policy-making on emerging technologies. The turn to ethics implies increased use of ethics expertise, or at least an expectation that this is the case. Calling for experts on ethics when ethically complicated questions need to be handled helps us to uphold central virtues, but there are also problems connected with ethicisation. In policy-making processes, the turn to ethics may not always be a sign of a sincere aspiration to moral performance, but a strategic move to gain acceptance for controversial or sensitive activities, and ethicisation may depoliticise questions and constrain the room for democratic participation. Nevertheless, ethicisation, and the ensuing call for ethics experts, suggests an expectation of confidence in ethics and ethics expertise, and that ethical guidance is an effective way of governing people's behaviour in a morally desirable way. The purpose of this article is to explore democratic and epistemic challenges of ethicisation in the context of emerging technologies, with a specific focus on how the notions of _under-reliance_ and _over-reliance_ on ethics expertise can unpack the processes at play. Using biotechnology and the EU process on bio-patents, together with the publication of ethical guidelines for AI development, as illustrations, it is demonstrated how ethicisation may give rise to democratic and epistemic challenges that are not explicitly addressed in discussions on the political use of ethics expertise.
Distribution of responsibility for AI development: expert views. Maria Hedlund & Erik Persson - forthcoming - AI and Society.
The purpose of this paper is to increase the understanding of how different types of experts with influence over the development of AI reflect, in this role, upon the distribution of forward-looking responsibility for AI development with regard to safety and democracy. Forward-looking responsibility refers to the obligation to see to it that a particular state of affairs materialises. In the context of AI, actors somehow involved in AI development have the potential to guide AI development in a safe and democratic direction. This study is based on qualitative interviews with such actors in different roles at research institutions, private companies, think tanks, consultancy agencies, parliaments, and non-governmental organisations. While reflections on the distribution of responsibility differ among the respondents, one observation is that influence is seen as an important basis for the distribution of responsibility. Another observation is that several respondents think of responsibility in terms of what it would entail in concrete measures. By showing how actors involved in AI development reflect on the distribution of responsibility, this study contributes to a dialogue between the field of AI governance and the field of AI ethics.
How will the emerging plurality of lives change how we conceive of and relate to life? Erik Persson, Jessica Abbott, Christian Balkenius, Anna Cabak Redei, Klara Anna Čápová, Dainis Dravins, David Dunér, Markus Gunneflo, Maria Hedlund, Mats Johansson, Anders Melin & Petter Persson - 2019 - Challenges 10 (1).
The project "A Plurality of Lives" was funded and hosted by the Pufendorf Institute for Advanced Studies at Lund University, Sweden. The aim of the project was to better understand how a second origin of life, whether in the form of a discovery of extraterrestrial life, life developed in a laboratory, or machines equipped with abilities previously ascribed only to living beings, will change how we understand and relate to life. Because of the inherently interdisciplinary nature of the project aim, the project took an interdisciplinary approach, with a research group made up of 12 senior researchers representing 12 different disciplines. The project resulted in a joint volume, an international symposium, several new projects, and a network of researchers in the field, all continuing to communicate about and advance the aim of the project.