
Is There a Trade-Off Between Human Autonomy and the ‘Autonomy’ of AI Systems?

In Conference on Philosophy and Theory of Artificial Intelligence. Springer International Publishing. pp. 67-71 (2022)

Abstract

Autonomy is often considered a core value of Western society that is deeply entrenched in moral, legal, and political practices. The development and deployment of artificial intelligence (AI) systems to perform a wide variety of tasks has raised new questions about how AI may affect human autonomy. Numerous guidelines on the responsible development of AI now emphasise the need for human autonomy to be protected. In some cases, this need is linked to the emergence of increasingly ‘autonomous’ AI systems that can perform tasks without human control or supervision. Do such ‘autonomous’ systems pose a risk to our own human autonomy? In this article, I address the question of a trade-off between human autonomy and system ‘autonomy’.

Other Versions

No versions found

Links

PhilArchive


Analytics

Added to PP: 2023-08-07

Downloads: 1,754 (#8,964)

Downloads (6 months): 501 (#3,137)



Citations of this work

No citations found.


