PhilPapers
  1. Schrödinger’s Fetus and Relational Ontology: Reconciling Three Contradictory Intuitions in Abortion Debates. Stephen R. Milford & David Shaw - 2024 - Ethical Theory and Moral Practice 27 (3):389-406.
    Pro-life and pro-choice advocates battle for rational dominance in abortion debates. Yet public polling (and general legal opinion) demonstrates the public’s preference for the middle ground: that abortions are acceptable in certain circumstances and during early pregnancy. Implicit in this are two contradictory intuitions: (1) that we were all early fetuses, and (2) that abortion kills no one. To hold these positions together, Harman and Räsänen have argued for the Actual Future Principle (AFP), which distinguishes between fetuses that will develop into persons and those that will never develop into persons. However intellectually ingenious their solutions are, they fail to account for a third intuition: that the death of a wanted fetus – e.g. through termination or miscarriage – is of moral significance. Not only is this practically important, but it is also supported by public opinion. The authors of this paper argue that relational ontology can modify the AFP to better account for all three intuitions. It also emphasizes the pivotal role of the pregnant person, who relates to their own fetus in either personal or impersonal ways. Addressing the fundamental challenges of relational ontology, the authors defend the position that human personal identity is ultimately relational.
  2. Playing Brains: The Ethical Challenges Posed by Silicon Sentience and Hybrid Intelligence in DishBrain. Stephen R. Milford, David Shaw & Georg Starke - 2023 - Science and Engineering Ethics 29 (6):1-17.
    The convergence of human and artificial intelligence is currently receiving considerable scholarly attention. Much debate about the resulting _Hybrid Minds_ focuses on the integration of artificial intelligence into the human brain through intelligent brain-computer interfaces as they enter clinical use. In this contribution we discuss a complementary development: the integration of a functional in vitro network of human neurons into an _in silico_ computing environment. To do so, we draw on a recent experiment reporting the creation of silico-biological intelligence as a case study (Kagan et al., 2022b). In this experiment, multielectrode arrays were plated with stem cell-derived human neurons, creating a system which the authors call _DishBrain_. By embedding the system into a virtual game-world, neural clusters were able to receive electrical input signals from the game-world and to respond appropriately with output signals from pre-assigned motor regions. Using this design, the authors demonstrate how the _DishBrain_ self-organises and successfully learns to play the computer game ‘Pong’, exhibiting ‘sentient’ and intelligent behaviour in its virtual environment. The creation of such hybrid, silico-biological intelligence raises numerous ethical challenges. Following the neuroscientific framework embraced by the authors themselves, we discuss the arising ethical challenges in the context of Karl Friston’s Free Energy Principle, focusing on the risk of creating synthetic phenomenology. Following the _DishBrain_ creators’ neuroscientific assumptions, we highlight how DishBrain’s design may risk bringing about artificial suffering and argue for a congruently cautious approach to such synthetic biological intelligence.
  3. AI Through Ethical Lenses: A Discourse Analysis of Guidelines for AI in Healthcare. Laura Arbelaez Ossa, Stephen R. Milford, Michael Rost, Anja K. Leist, David M. Shaw & Bernice S. Elger - 2024 - Science and Engineering Ethics 30 (3):1-21.
    While the technologies that enable Artificial Intelligence (AI) continue to advance rapidly, there are increasing promises regarding AI’s beneficial outputs and concerns about the challenges of human–computer interaction in healthcare. To address these concerns, institutions have increasingly resorted to publishing AI guidelines for healthcare, aiming to align AI with ethical practices. However, guidelines as a form of written language can be analyzed to recognize the reciprocal links between their textual communication and underlying societal ideas. From this perspective, we conducted a discourse analysis to understand how these guidelines construct, articulate, and frame ethics for AI in healthcare. We included eight guidelines and identified three prevalent and interwoven discourses: (1) AI is unavoidable and desirable; (2) AI needs to be guided with (some forms of) principles; and (3) trust in AI is instrumental and primary. These discourses signal an over-spillage of technical ideals into AI ethics, such as over-optimism and resulting hyper-criticism. This research provides insights into the underlying ideas present in AI guidelines and how guidelines influence the practice and alignment of AI with the ethical, legal, and societal values expected to shape AI in healthcare.
  4. Accuracy is inaccurate: Why a focus on diagnostic accuracy for medical chatbot AIs will not lead to improved health outcomes. Stephen R. Milford - 2025 - Bioethics 39 (2):163-169.
    Since its launch in November 2022, ChatGPT has become a global phenomenon, sparking widespread public interest in chatbot artificial intelligences (AIs) generally. While not approved for medical use, it is capable of passing all three United States medical licensing exams and offers diagnostic accuracy comparable to a human doctor. It seems inevitable that it, and tools like it, are and will be used by the general public to provide medical diagnostic information or treatment plans. Before we are taken in by the promise of a golden age for chatbot medical AIs, it would be wise to consider the implications of using these tools as either supplements to, or substitutes for, human doctors. With the rise of publicly available chatbot AIs, there has been a keen focus on research into the diagnostic accuracy of these tools. This, however, has left a notable gap in our understanding of the implications of these tools for health outcomes. Diagnostic accuracy is only part of good health care. For example, the doctor–patient relationship is crucial to positive health outcomes. This paper challenges the recent focus on diagnostic accuracy by drawing attention to the causal relationship between doctor–patient relationships and health outcomes, arguing that chatbot AIs may even hinder outcomes in numerous ways, including subtracting the elements of perception and observation that are crucial to clinical consultations. The paper offers brief suggestions to improve chatbot medical AIs so as to positively impact health outcomes.
  5. Integrating ethics in AI development: a qualitative study. Laura Arbelaez Ossa, Giorgia Lorenzini, Stephen R. Milford, David Shaw, Bernice S. Elger & Michael Rost - 2024 - BMC Medical Ethics 25 (1):1-11.
    Background: While the theoretical benefits and harms of Artificial Intelligence (AI) have been widely discussed in the academic literature, empirical evidence remains elusive regarding the practical ethical challenges of developing AI for healthcare. Bridging the gap between theory and practice is an essential step in understanding how to ethically align AI for healthcare. Therefore, this research examines the concerns and challenges perceived by experts in developing ethical AI that addresses the healthcare context and its needs. Methods: We conducted semi-structured interviews with 41 AI experts and analyzed the data using reflective thematic analysis. Results: We developed three themes that expressed the considerations perceived by experts as essential for ensuring AI aligns with ethical practices within healthcare. The first theme explores the ethical significance of introducing AI with a clear and purposeful objective. The second theme focuses on experts’ concerns about the tension that exists between economic incentives and the importance of prioritizing the interests of doctors and patients. The third theme illustrates the need to develop context-sensitive AI for healthcare that is informed by its underlying theoretical foundations. Conclusions: The three themes collectively emphasize that, beyond being innovative, AI must genuinely benefit healthcare and its stakeholders, meaning that AI must also align with intricate and context-specific healthcare practices. Our findings signal that, instead of narrow product-specific AI guidance, ethical AI development may need a systemic, proactive perspective that includes the ethical considerations (objectives, actors, and context) and focuses on healthcare applications. Ethically developing AI involves a complex interplay between AI, ethics, healthcare, and multiple stakeholders.
  6. Mobile homes in the land of illness: the hospitality and hostility of language in doctor-patient relations. Stephen R. Milford - 2023 - Philosophy, Ethics, and Humanities in Medicine 18 (1):1-7.
    Illness has a way of disorientating us, as if we are cast adrift in a foreign land. Like strangers in a desert, we seek an oasis to recollect ourselves, find refuge, and learn to build our own shelters. Using the philosophy of Levinas and Derrida, we can interpret health care providers (HCPs), and the sites from which they act (e.g. hospitals), as _dwelling hosts_ that offer hospitality to strangers in this foreign land. While the dwellings are often physical (e.g. hospitals), this is not always the case. Language represents a mobile home of refuge to the sick. Using language, the HCP has built a shelter so as to dwell in the land of illness. However, while hospitality is an inviting concept, it also implies hostility. The door that opens may also be slammed shut. This article explores the paradox of the linguistic mobile home offered to patients. It highlights the power of language to construct a safe place in a strange land, but also explores its inherent violence. It ends with an exploration of the ways language can be used by HCPs to assist patients in constructing their own mobile shelters.