  1. Moral Judgements on the Actions of Self-Driving Cars and Human Drivers in Dilemma Situations From Different Perspectives. Noa Kallioinen, Maria Pershina, Jannik Zeiser, Farbod Nosrat Nezami, Gordon Pipa, Achim Stephan & Peter König - 2019 - Frontiers in Psychology 10.
  2. Owning Decisions: AI Decision-Support and the Attributability-Gap. Jannik Zeiser - 2024 - Science and Engineering Ethics 30 (4):1-19.
    Artificial intelligence (AI) has long been recognised as a challenge to responsibility. Much of this discourse has been framed around robots, such as autonomous weapons or self-driving cars, where we arguably lack control over a machine's behaviour and therefore struggle to identify an agent that can be held accountable. However, most of today's AI is based on machine-learning technology that does not act on its own, but rather serves as a decision-support tool, automatically analysing data to help human agents make better decisions. I argue that decision-support tools pose a challenge to responsibility that goes beyond the familiar problem of finding someone to blame or punish for the behaviour of agent-like systems. Namely, they pose a problem for what we might call "decision ownership": they make it difficult to identify human agents to whom we can attribute value-judgements that are reflected in decisions. Drawing on recent philosophical literature on responsibility and its various facets, I argue that this is primarily a problem of attributability rather than of accountability. This particular responsibility problem comes in different forms and degrees, most obviously when an AI provides direct recommendations for actions, but also, less obviously, when it provides mere descriptive information on the basis of which a decision is made.
  3. Emergent Discrimination: Should We Protect Algorithmic Groups? Jannik Zeiser - forthcoming - Journal of Applied Philosophy.
    Discrimination is usually thought of in terms of socially salient groups, such as race or gender. Some scholars argue that the rise of algorithmic decision-making poses a challenge to this notion. Algorithms are not bound by a social view of the world. Therefore, they may not only inherit pre-existing social biases and injustices but may also discriminate based on entirely new categories that have little or no meaning to humans at all, such as 'being born on a Tuesday'. Should this prospect change how we theorize about discrimination, and should we protect these algorithmic groups, as some have suggested? I argue that the phenomenon is adequately described as 'discrimination' when a group is systematically disadvantaged. At present, we lack information about whether any algorithmic group meets this criterion, so it is difficult to protect such groups. Instead, we should prevent algorithms from disproportionately disadvantaging certain individuals, and I outline strategies for doing so.