Moral sensitivity and moral reasoning are essential competencies that biomedical researchers must develop to make ethical decisions in their daily practice. Previous research has shown that these competencies can be developed through ethics education. However, it is unclear which underlying mechanisms best support their development. In this article we argue that the development of moral sensitivity and moral reasoning can be fostered through teaching strategies that tap into students’ moral imagination. We describe how moral imagination can stimulate the development of these competencies through three of its merits. Moral imagination can help students to 1) transfer and apply abstract moral concepts to concrete situations and contexts, 2) explore the perspectives of others, and 3) explore and foresee the moral consequences of different decisions and actions. We explain these three merits of moral imagination in the context of biomedical research and present a theoretical model for how they can be used to stimulate the development of moral sensitivity and moral reasoning. Furthermore, we describe multiple teaching strategies for biomedical curricula that tap into the three merits of moral imagination. These teaching strategies can inspire teachers to design ethics education that activates students’ moral imagination for the development of moral sensitivity and moral reasoning.
Knowing that technologies are inherently value-laden and systemically interwoven with society, the question is how individual engineers can take up the challenge of accepting responsibility for their work. This paper argues that engineers have no institutional structure at the level of society that allows them to recognize, reflect upon, and actively integrate the value-laden character of their designs. Instead, engineers have to draw on the different institutional realms of market, science, and state, making their work a ‘hybrid’ activity that combines elements from these different realms. To deal with this institutional hybridity, engineers develop routines and heuristics in their professional networks, which do not allow societal values to be expressed in a satisfactory manner. To allow forms of ‘active’ responsibility, there have to be so-called ‘accountability forums’ that guide the moral reflections of individual actors. The paper subsequently looks at the methodologies of value-sensitive design (VSD) and constructive technology assessment (CTA) and explores whether and how these methodologies allow engineers to integrate societal values into the design of technological artifacts and systems. As VSD and CTA are methodologies that look at the process of technological design, whereas the focus of this paper is on the designer, they can only be used indirectly, namely as frameworks that help to identify the contours of a framework for the active responsibility of engineers.
In contemporary Science, Technology and Society (STS) studies, Bruno Latour’s Actor Network Theory (ANT) is often used to study how social change arises from the interaction between people and technologies. Though Latour’s approach is rich in the sense that it enables scholars to appreciate the complexity of the many relevant technological, environmental, and social factors in their studies, it is poor from an ethical point of view: the doings of things and people are couched in one and the same behaviorist vocabulary without giving due recognition to the ethical relevance of human intelligence, sympathy, and reflection in making responsible choices. This article argues that two other naturalist projects, the non-teleological virtue ethics of Charles Darwin and the pragmatist instrumentalism of John Dewey, can enrich ANT-based STS studies in both a descriptive and a normative sense.
Bernard Williams’ integrity objection poses a significant challenge to utilitarianism, one that utilitarians have largely answered. This paper recasts the integrity objection to show that utilitarian agents could be committed to producing the overall best states of affairs and yet not positively act to bring them about. I introduce the ‘Moral Pinch Hitter’ – someone who performs actions at the behest of another agent – to demonstrate that utilitarianism cannot distinguish between cases in which an agent maximizes utility by positively acting in response to her duty, and cases in which an agent fails morally by relying upon someone else to perform the obligatory act. The inability to distinguish among these cases establishes a new, reloaded integrity objection to utilitarianism: utilitarianism cannot explain why it would be wrong to have someone else make difficult moral decisions, and act on those decisions, for me.
This article builds upon previous discussion of social and technical determinisms as implicit positions in the biofuel debate. To ensure these debates are balanced, it has been suggested that they should be designed to contain a variety of deterministic positions. Whilst it is agreed that determinism does not feature strongly in the contemporary academic literature, it is found that deterministic positions have generally been superseded by an absence of any substantive conceptualisation of how the social shaping of technology may be related to, or occur alongside, an objective or autonomous reality. The problem of determinism emerges at an ontological level and must be resolved in situ. A critical realist approach to technology is presented which may provide a more appropriate framework for debate. In dialogue with previous discussion, the distribution of responsibility is revisited with reference to the role of scientists and engineers.
Technologies fulfill a social role in the sense that they influence the moral actions of people, often in unintended and unforeseen ways. Scientists and engineers already accept much responsibility for the technological, economic, and environmental aspects of their work. This article asks them to take an extra step and also consider the social role of their products. The aim is to enable engineers to take prospective responsibility for the future social roles of their technologies by providing them with a matrix that helps to explore in advance how emerging technologies might plausibly affect the reasons behind people’s (moral) actions. On the horizontal axis of the matrix, we distinguish the three basic types of reasons that play a role in practical judgment: what is the case, what can be done, and what should be done. On the vertical axis, we distinguish the morally relevant classes of issues: stakeholders, consequences, and the good life. To illustrate how this matrix may work in practice, the final section applies it to the case of the Google PowerMeter.
The overall aim of this thesis is to examine some philosophical issues surrounding autonomous systems in society and war. These issues can be divided into three main categories. The first, discussed in papers I and II, concerns ethical issues surrounding the use of autonomous systems – the focus in this thesis being on military robots. The second, discussed in paper III, concerns how to make sure that advanced robots behave in an ethically adequate way. The third, discussed in papers IV and V, has to do with agency and responsibility. Another issue, somewhat aside from the philosophical, has to do with coping with future technologies and developing methods for dealing with potentially disruptive technologies. This is discussed in papers VI and VII. Paper I systematizes some ethical issues surrounding the use of UAVs in war, with the laws of war as a backdrop. It is suggested that the laws of war are too wide and might be interpreted differently depending on which normative moral theory is used. Paper II is about future, more advanced autonomous robots, and whether the use of such robots can undermine the justification for killing in war. The suggestion is that this justification is substantially undermined if robots are used to replace humans to a high extent. Papers I and II both suggest revisions or additions to the laws of war. Paper III provides a discussion of one normative moral theory – ethics of care – connected to care robots. The aim is twofold: first, to provide a plausible and ethically relevant interpretation of the key term care in ethics of care, and second, to discuss whether ethics of care may be a suitable theory to implement in care robots. Paper IV discusses robots in connection with agency and responsibility, with a focus on consciousness. The paper takes a functionalist approach, and it is suggested that robots should be considered agents if they can behave as if they are, in a moral Turing test. Paper V is also about robots and agency, but with a focus on free will. The main question is whether robots can have free will in the same sense in which we consider humans to have free will when holding them responsible for their actions in a court of law. It is argued that autonomy with respect to norms is crucial for the agency of robots. Paper VI investigates the assessment of socially disruptive technological change. The coevolution of society and potentially disruptive technologies makes decision guidance on such technologies difficult. Four basic principles are proposed for such decision guidance, involving interdisciplinary and participatory elements. Paper VII applies the results from paper VI – and a workshop – to autonomous systems, a potentially disruptive technology. A method for dealing with potentially disruptive technologies is developed in the paper.