  • Respect. Robin S. Dillon - 2018 - Stanford Encyclopedia of Philosophy. (1 other version; 45 citations)
  • Artificial intelligence and human autonomy: the case of driving automation. Fabio Fossa - 2024 - AI and Society: 1-12. (1 citation)
    The present paper aims at contributing to the ethical debate on the impacts of artificial intelligence (AI) systems on human autonomy. More specifically, it intends to offer a clearer understanding of the design challenges to the effort of aligning driving automation technologies to this ethical value. After introducing the discussion on the ambiguous impacts that AI systems exert on human autonomy, the analysis zooms in on how the problem has been discussed in the literature on connected and automated vehicles (CAVs). On this basis, it is claimed that the issue has been mainly tackled on a fairly general level, and mostly with reference to the controversial issue of crash-optimization algorithms, so that only limited design insights have been drawn. However, integrating ethical analysis and design practices is critical to pursue the implementation of such an important ethical value into CAV technologies. To this aim, it is argued, a more applied approach targeted at examining the impacts on human autonomy of current CAV functions should also be explored. As an example of the intricacy of this task, the case of automated route planning is discussed in some detail.
  • Decentralized Governance of AI Agents. Tomer Jordi Chaffer, Charles von Goins II, Bayo Okusanya, Dontrail Cotlage & Justin Goldston - manuscript.
    Autonomous AI agents present transformative opportunities and significant governance challenges. Existing frameworks, such as the EU AI Act and the NIST AI Risk Management Framework, fall short of addressing the complexities of these agents, which are capable of independent decision-making, learning, and adaptation. To bridge these gaps, we propose the ETHOS (Ethical Technology and Holistic Oversight System) framework—a decentralized governance (DeGov) model leveraging Web3 technologies, including blockchain, smart contracts, and decentralized autonomous organizations (DAOs). ETHOS establishes a global registry for AI agents, enabling dynamic risk classification, proportional oversight, and automated compliance monitoring through tools like soulbound tokens and zero-knowledge proofs. Furthermore, the framework incorporates decentralized justice systems for transparent dispute resolution and introduces AI-specific legal entities to manage limited liability, supported by mandatory insurance to ensure financial accountability and incentivize ethical design. By integrating philosophical principles of rationality, ethical grounding, and goal alignment, ETHOS aims to create a robust research agenda for promoting trust, transparency, and participatory governance. This innovative framework offers a scalable and inclusive strategy for regulating AI agents, balancing innovation with ethical responsibility to meet the demands of an AI-driven future.
  • The linguistic dead zone of value-aligned agency, natural and artificial. Travis LaCroix - 2024 - Philosophical Studies: 1-23.
    The value alignment problem for artificial intelligence (AI) asks how we can ensure that the “values”—i.e., objective functions—of artificial systems are aligned with the values of humanity. In this paper, I argue that linguistic communication is a necessary condition for robust value alignment. I discuss the consequences that the truth of this claim would have for research programmes that attempt to ensure value alignment for AI systems—or, more loftily, those programmes that seek to design robustly beneficial or ethical artificial agents.
  • Do Men Have No Need for “Feminist” Artificial Intelligence? Agentic and Gendered Voice Assistants in the Light of Basic Psychological Needs. Laura Moradbakhti, Simon Schreibelmayr & Martina Mara - 2022 - Frontiers in Psychology 13. (1 citation)
    Artificial Intelligence is supposed to perform tasks autonomously, make competent decisions, and interact socially with people. From a psychological perspective, AI can thus be expected to impact users’ three Basic Psychological Needs, namely autonomy, competence, and relatedness to others. While research highlights the fulfillment of these needs as central to human motivation and well-being, their role in the acceptance of AI applications has hitherto received little consideration. Addressing this research gap, our study examined the influence of BPN Satisfaction on Intention to Use an AI assistant for personal banking. In a 2×2 factorial online experiment, 282 participants watched a video of an AI finance coach with a female or male synthetic voice that exhibited either high or low agency. In combination, these factors resulted either in AI assistants conforming to traditional gender stereotypes or in non-conforming conditions. Although the experimental manipulations had no significant influence on participants’ relatedness and competence satisfaction, a strong effect on autonomy satisfaction was found. As further analyses revealed, this effect was attributable only to male participants, who felt their autonomy need significantly more satisfied by the low-agency female assistant, consistent with stereotypical images of women, than by the high-agency female assistant. A significant indirect effects model showed that the greater autonomy satisfaction that men, unlike women, experienced from the low-agency female assistant led to higher ITU. The findings are discussed in terms of their practical relevance and the risk of reproducing traditional gender stereotypes through technology design.
  • Reconstructing AI Ethics Principles: Rawlsian Ethics of Artificial Intelligence. Salla Westerstrand - 2024 - Science and Engineering Ethics 30 (5): 1-21.
    The popularisation of Artificial Intelligence (AI) technologies has sparked discussion about their ethical implications. This development has forced governmental organisations, NGOs, and private companies to react and draft ethics guidelines for future development of ethical AI systems. Whereas many ethics guidelines address values familiar to ethicists, they seem to lack ethical justifications. Furthermore, most tend to neglect the impact of AI on democracy, governance, and public deliberation. Existing research suggests, however, that AI can threaten key elements of western democracies that are ethically relevant. In this paper, Rawls’s theory of justice is applied to draft a set of guidelines for organisations and policy-makers to guide AI development towards a more ethical direction. The goal is to contribute to the broadening of the discussion on AI ethics by exploring the possibility of constructing AI ethics guidelines that are philosophically justified and take a broader perspective of societal justice. The paper discusses how Rawls’s theory of justice as fairness and its key concepts relate to the ongoing developments in AI ethics and gives a proposition of what principles that offer a foundation for operationalising AI ethics in practice could look like if aligned with Rawls’s theory of justice as fairness.
  • Institutions, Automation, and Legitimate Expectations. Jelena Belic - 2024 - The Journal of Ethics 28 (3): 505-525.
    Debates concerning digital automation are mostly focused on the question of the availability of jobs in the short and long term. To counteract the possible negative effects of automation, it is often suggested that those at risk of technological unemployment should have access to retraining and reskilling opportunities. What is often missing in these debates are the implications that all of this may have for individual autonomy, understood as the ability to make and develop long-term plans. In this paper, I argue that if digital automation becomes rapid, it will significantly undermine the legitimate expectation of stability and, consequently, the ability to make and pursue long-term plans in the sphere of work. I focus on what is often taken to be one of the main long-term plans, i.e. the choice of profession, and I argue that this choice may be undermined by the pressure to continuously acquire new skills while at the same time facing a diminishing range of professions that one can choose from. Given that the choice of profession is significant for non-work-related spheres of life, its undermining can greatly affect individual autonomy in these other spheres too. I argue that such undermining of individual planning agency constitutes a distinctive form of harm that necessitates a proactive institutional response.
  • Autonomy and Automation: the Case of Connected and Automated Vehicles. Fabio Fossa - 2022 - In P. Kommers & M. Macedo, Proceedings of the International Conferences on ICT, Society, and Human Beings 2022; Web Based Communities and Social Media 2022; and E-Health 2022. IADIS Press. pp. 244-248.
    This short paper offers a preliminary inquiry into the impacts of driving automation on personal autonomy. Personal autonomy is a key ethical value in western culture, and one that buttresses fundamental components of the moral life such as the exercise of responsible behaviour and the full enjoyment of human dignity. Driving automation simultaneously enhances and constrains it in significant ways. Hence, its moral profile with reference to the value of personal autonomy is uncertain. Ethical analysis shows that such uncertainty is due not just to the complexity of the technology, but also to the multifaceted normative profile of personal autonomy, which offers reasons to support both conditional and full driving automation. The paper sheds light on this duplicity, underlines the challenges this poses to the ethics of driving automation, and advocates for further research aimed at providing practitioners with more fine-grained guidelines on such a delicate issue.