  • From human resources to human rights: Impact assessments for hiring algorithms. Josephine Yam & Joshua August Skorburg - 2021 - Ethics and Information Technology 23 (4):611-623.
    Over the years, companies have adopted hiring algorithms because they promise wider job candidate pools, lower recruitment costs and less human bias. Despite these promises, they also bring perils. Using them can inflict unintentional harms on individual human rights. These include the five human rights to work, equality and nondiscrimination, privacy, free expression and free association. Despite the human rights harms of hiring algorithms, the AI ethics literature has predominantly focused on abstract ethical principles. This is problematic for two reasons. First, AI principles have been criticized for being vague and not actionable. Second, the use of vague ethical principles to discuss algorithmic risks does not provide any accountability. This lack of accountability creates an algorithmic accountability gap. Closing this gap is crucial because, without accountability, the use of hiring algorithms can lead to discrimination and unequal access to employment opportunities. This paper makes two contributions to the AI ethics literature. First, it frames the ethical risks of hiring algorithms using international human rights law as a universal standard for determining algorithmic accountability. Second, it evaluates four types of algorithmic impact assessments in terms of how effectively they address the five human rights of job applicants implicated in hiring algorithms. It determines which of the assessments can help companies audit their hiring algorithms and close the algorithmic accountability gap.
  • Just data? Solidarity and justice in data-driven medicine. Matthias Braun & Patrik Hummel - 2020 - Life Sciences, Society and Policy 16 (1):1-18.
    This paper argues that data-driven medicine gives rise to a particular normative challenge. Against the backdrop of a distinction between the good and the right, harnessing personal health data towards the development and refinement of data-driven medicine is to be welcomed from the perspective of the good. Enacting solidarity drives progress in research and clinical practice. At the same time, such acts of sharing could—especially considering current developments in big data and artificial intelligence—compromise the right by leading to injustices and affecting concrete modes of individual self-determination. In order to address this potential tension, two key elements for ethical reflection on data-driven medicine are proposed: the controllability of information flows, including technical infrastructures that are conducive towards controllability, and a paradigm shift towards output-orientation in governance and policy.
  • Which Framework to Use? A Systematic Review of Ethical Frameworks for the Screening or Evaluation of Health Technology Innovations. Tijs Vandemeulebroucke, Yvonne Denier, Evelyne Mertens & Chris Gastmans - 2022 - Science and Engineering Ethics 28 (3):1-35.
    Innovations permeate healthcare settings on an ever-increasing scale. Health technology innovations (HTIs) impact our perceptions and experiences of health, care, disease, etc. Because of the fast pace at which these HTIs are being introduced in different healthcare settings, there is a growing societal consensus that they need to be governed by ethical reflection. This paper reports a systematic review of argument-based literature, focused on articles reporting on ethical frameworks to screen or evaluate HTIs. To do this, a four-step methodology was followed: literature search conducted in five electronic literature databases; identification of relevant articles; development of a data-extraction tool to analyze the included articles; analysis, synthesis of data and reporting of results. Fifty-seven articles were included, each reporting on a specific ethical framework. These ethical frameworks consisted of characteristics which were grouped into five common ones: motivations for development and use of frameworks; objectives of using frameworks; specific characteristics of frameworks; ethical approaches and concepts used in the frameworks; methods to use the frameworks. Although this multiplicity of ethical frameworks shows an increasing importance of ethically analyzing HTIs, it remains unclear what the specific role of these analyses is. An ethics of caution, on which ethical frameworks rely, guides HTIs in their design, development and implementation, without questioning their technological paradigm. An ethics of desirability questions this paradigm, without guiding HTIs. In the end, a place needs to be found in between, to critically assess HTIs.
  • Cross-Sectoral Big Data: The Application of an Ethics Framework for Big Data in Health and Research. Graeme T. Laurie - 2019 - Asian Bioethics Review 11 (3):327-339.
    Discussion of uses of biomedical data often proceeds on the assumption that the data are generated and shared solely or largely within the health sector. However, this assumption must be challenged because increasingly large amounts of health and well-being data are being gathered and deployed in cross-sectoral contexts such as social media and through the internet of things and wearable devices. Cross-sectoral sharing of data thus refers to the generation, use and linkage of biomedical data beyond the health sector. This paper considers the challenges that arise from this phenomenon. If we are to benefit fully, it is important to consider which ethical values are at stake and to reflect on ways to resolve emerging ethical issues across ecosystems where values, laws and cultures might be quite distinct. In considering such issues, this paper applies the deliberative balancing approach of the Ethics Framework for Big Data in Health and Research to the domain of cross-sectoral big data. Please refer to that article for more information on how this framework is to be used, including a full explanation of the key values involved and the balancing approach used in the case study at the end.
  • The paradox of the artificial intelligence system development process: the use case of corporate wellness programs using smart wearables. Alessandra Angelucci, Ziyue Li, Niya Stoimenova & Stefano Canali - forthcoming - AI and Society:1-11.
    Artificial intelligence systems have been widely applied to various contexts, including high-stakes decision processes in healthcare, banking, and judicial systems. Some developed AI models fail to offer a fair output for specific minority groups, sparking comprehensive discussions about AI fairness. We argue that the development of AI systems is marked by a central paradox: the less participation one stakeholder has within the AI system’s life cycle, the more influence they have over the way the system will function. This means that the impact on the fairness of the system is in the hands of those who are less impacted by it. However, most of the existing works ignore how different aspects of AI fairness are dynamically and adaptively affected by different stages of AI system development. To this end, we present a use case to discuss fairness in the development of corporate wellness programs using smart wearables and AI algorithms to analyze data. The four key stakeholders throughout this type of AI system development process are presented: the service designer, the algorithm designer, the system deployer, and the end-user. We identify three core aspects of AI fairness, namely, contextual fairness, model fairness, and device fairness. We propose a relative contribution of the four stakeholders to the three aspects of fairness. Furthermore, we propose the boundaries and interactions between the four roles, from which we draw our conclusion about the possible unfairness in such an AI development process.
  • The “black box” at work. Ifeoma Ajunwa - 2020 - Big Data and Society 7 (2).
    An oversized reliance on big data-driven algorithmic decision-making systems, coupled with a lack of critical inquiry regarding such systems, combine to create the paradoxical “black box” at work. The “black box” simultaneously demands a higher level of transparency from the worker in regard to data collection, while shrouding the decision-making in secrecy, making employer decisions even more opaque to the worker. To access employment, the worker is commanded to divulge highly personal information, and when hired, must submit further still to algorithmic processes of evaluation which will make authoritative claims as to the worker’s productivity. Furthermore, in and out of the workplace, the worker is governed by an invisible data-created leash deploying wearable technology to collect intimate worker data. At all stages, the worker is confronted with a lack of transparency, accountability, or explanation as to the inner workings or even the logic of the “black box” at work. This data revolution of the workplace is alarming for several reasons: the “black box at work” not only serves to conceal disparities in hiring, but could also allow for a level of “data-laundering” that beggars any notion of equal opportunity in employment, and there exists the danger of a “mission creep” attitude to data collection that allows for pervasive surveillance, contributing to the erosion of both the personhood and autonomy of workers. Thus, the “black box at work” not only enables worker domination in the workplace, it also deprives the worker of Rawlsian justice.
  • The ethical use of artificial intelligence in human resource management: a decision-making framework. Sarah Bankins - 2021 - Ethics and Information Technology 23 (4):841-854.
    Artificial intelligence is increasingly inputting into various human resource management functions, such as sourcing job applicants and selecting staff, allocating work, and offering personalized career coaching. While the use of AI for such tasks can offer many benefits, evidence suggests that without careful and deliberate implementation its use also has the potential to generate significant harms. This raises several ethical concerns regarding the appropriateness of AI deployment to domains such as HRM, which directly deal with managing sometimes sensitive aspects of individuals’ employment lifecycles. However, research at the intersection of HRM and technology continues to largely center on examining what AI can be used for, rather than focusing on the salient factors relevant to its ethical use and examining how to effectively engage human workers in its use. Conversely, the ethical AI literature offers excellent guiding principles for AI implementation broadly, but there remains much scope to explore how these principles can be enacted in specific contexts-of-use. By drawing on ethical AI and task-technology fit literature, this paper constructs a decision-making framework to support the ethical deployment of AI for HRM and guide determinations of the optimal mix of human and machine involvement for different HRM tasks. Doing so supports the deployment of AI for the betterment of work and workers and generates both scholarly and practical outcomes.
  • Big Data in the workplace: Privacy Due Diligence as a human rights-based approach to employee privacy protection. Jeremias Adams-Prassl, Isabelle Wildhaber & Isabel Ebert - 2021 - Big Data and Society 8 (1).
    Data-driven technologies have come to pervade almost every aspect of business life, extending to employee monitoring and algorithmic management. How can employee privacy be protected in the age of datafication? This article surveys the potential and shortcomings of a number of legal and technical solutions to show the advantages of human rights-based approaches in addressing corporate responsibility to respect privacy and strengthen human agency. Based on this notion, we develop a process-oriented model of Privacy Due Diligence to complement existing frameworks for safeguarding employee privacy in an era of Big Data surveillance.
  • The Ethics of Workplace Health Promotion. Eva Kuhn, Sebastian Müller, Ludger Heidbrink & Alena Buyx - 2020 - Public Health Ethics 13 (3):234-246.
    Companies increasingly offer their employees the opportunity to participate in voluntary Workplace Health Promotion programmes. Although such programmes have come into focus through national and regional regulation throughout much of the Western world, their ethical implications remain largely unexamined. This article maps the territory of the ethical issues that have arisen in relation to voluntary health promotion in the workplace against the background of asymmetric relationships between employers and employees. It addresses questions of autonomy and voluntariness, discrimination and distributive justice, as well as privacy and responsibility. Following this analysis, we highlight the inadequacy of currently established ethical frameworks to sufficiently cover all aspects of workplace health promotion. Thus, we recommend the consideration of principles from all such frameworks in combination, in a joint reflection of an Ethics of Workplace Health Promotion.
