An Analysis of the Interaction Between Intelligent Software Agents and Human Users. Christopher Burr, Nello Cristianini & James Ladyman - 2018 - Minds and Machines 28 (4): 735-774.
Interactions between an intelligent software agent (ISA) and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user's access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality, we frame these interactions as instances of an ISA whose reward depends on actions performed by the user. Such agents benefit by steering the user's behaviour towards outcomes that maximise the ISA's utility, which may or may not be aligned with that of the user. Video games, news recommendation aggregation engines, and fitness trackers can all be instances of this general case. Our analysis facilitates distinguishing various subcases of interaction, as well as second-order effects that might include the possibility for adaptive interfaces to induce behavioural addiction, and/or change in user belief. We present these types of interaction within a conceptual framework, and review current examples of persuasive technologies and the issues that arise from their use. We argue that the nature of the feedback commonly used by learning agents to update their models and subsequent decisions could steer the behaviour of human users away from what benefits them, and in a direction that can undermine autonomy and cause further disparity between actions and goals as exemplified by addictive and compulsive behaviour. We discuss some of the ethical, social and legal implications of this technology and argue that it can sometimes exploit and reinforce weaknesses in human beings.
The ethics of digital well-being: a thematic review. Christopher Burr, Mariarosaria Taddeo & Luciano Floridi - 2020 - Science and Engineering Ethics 26 (4): 2313–2343.
This article presents the first thematic review of the literature on the ethical issues concerning digital well-being. The term 'digital well-being' is used to refer to the impact of digital technologies on what it means to live a life that is good for a human being. The review explores the existing literature on the ethics of digital well-being, with the goal of mapping the current debate and identifying open questions for future research. The review identifies major issues related to several key social domains: healthcare, education, governance and social development, and media and entertainment. It also highlights three broader themes: positive computing, personalised human–computer interaction, and autonomy and self-determination. The review argues that these three themes will be central to ongoing discussions and research by showing how they can be used to identify open questions related to the ethics of digital well-being.
Can Machines Read our Minds? Christopher Burr & Nello Cristianini - 2019 - Minds and Machines 29 (3): 461-494.
We explore the question of whether machines can infer information about our psychological traits or mental states by observing samples of our behaviour gathered from our online activities. Ongoing technical advances across a range of research communities indicate that machines are now able to access this information, but the extent to which this is possible and the consequent implications have not been well explored. We begin by highlighting the urgency of asking this question, and then explore its conceptual underpinnings, in order to help emphasise the relevant issues. To answer the question, we review a large number of empirical studies, in which samples of behaviour are used to automatically infer a range of psychological constructs, including affect and emotions, aptitudes and skills, attitudes and orientations (e.g. values and sexual orientation), personality, and disorders and conditions (e.g. depression and addiction). We also present a general perspective that can bring these disparate studies together and allow us to think clearly about their philosophical and ethical implications, such as issues related to consent, privacy, and the use of persuasive technologies for controlling human behaviour.
Digital psychiatry: ethical risks and opportunities for public health and well-being. Christopher Burr, Jessica Morley, Mariarosaria Taddeo & Luciano Floridi - 2020 - IEEE Transactions on Technology and Society 1 (1): 21–33.
Common mental health disorders are rising globally, creating a strain on public healthcare systems. This has led to a renewed interest in the role that digital technologies may have for improving mental health outcomes. One result of this interest is the development and use of artificial intelligence for assessing, diagnosing, and treating mental health issues, which we refer to as 'digital psychiatry'. This article focuses on the increasing use of digital psychiatry outside of clinical settings, in the following sectors: education, employment, financial services, social media, and the digital well-being industry. We analyse the ethical risks of deploying digital psychiatry in these sectors, emphasising key problems and opportunities for public health, and offer recommendations for protecting and promoting public health and well-being in information societies.
The ethics of digital well-being: a multidisciplinary perspective. Christopher Burr & Luciano Floridi - 2020 - In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-Being: A Multidisciplinary Approach. Springer.
This chapter serves as an introduction to the edited collection of the same name, which includes chapters that explore digital well-being from a range of disciplinary perspectives, including philosophy, psychology, economics, health care, and education. The purpose of this introductory chapter is to provide a short primer on the different disciplinary approaches to the study of well-being. To supplement this primer, we also invited key experts from several disciplines—philosophy, psychology, public policy, and health care—to share their thoughts on what they believe are the most important open questions and ethical issues for the multi-disciplinary study of digital well-being. We also introduce and discuss several themes that we believe will be fundamental to the ongoing study of digital well-being: digital gratitude, automated interventions, and sustainable co-well-being.
Empowerment or Engagement? Digital Health Technologies for Mental Healthcare. Christopher Burr & Jessica Morley - 2020 - In Christopher Burr & Silvia Milano (eds.), The 2019 Yearbook of the Digital Ethics Lab. Springer Nature. pp. 67-88.
We argue that while digital health technologies (e.g. artificial intelligence, smartphones, and virtual reality) present significant opportunities for improving the delivery of healthcare, key concepts that are used to evaluate and understand their impact can obscure significant ethical issues related to patient engagement and experience. Specifically, we focus on the concept of empowerment and ask whether it is adequate for addressing some significant ethical concerns that relate to digital health technologies for mental healthcare. We frame these concerns using five key ethical principles for AI ethics (i.e. autonomy, beneficence, non-maleficence, justice, and explicability), which have their roots in the bioethical literature, in order to critically evaluate the role that digital health technologies will have in the future of digital healthcare.
The body as laboratory: Prediction-error minimization, embodiment, and representation. Christopher Burr & Max Jones - 2016 - Philosophical Psychology 29 (4): 586-600.
In his paper, Jakob Hohwy outlines a theory of the brain as an organ for prediction-error minimization (PEM), which he claims has the potential to profoundly alter our understanding of mind and cognition. One manner in which our understanding of the mind is altered, according to PEM, stems from the neurocentric conception of the mind that falls out of the framework, which portrays the mind as "inferentially-secluded" from its environment. This in turn leads Hohwy to reject certain theses of embodied cognition. Focusing on this aspect of Hohwy's argument, we first outline the key components of the PEM framework such as the "evidentiary boundary," before looking at why this leads Hohwy to reject certain theses of embodied cognition. We will argue that although Hohwy may be correct to reject specific theses of embodied cognition, others are in fact implied by the PEM framework and may contribute to its development. We present the metaphor of the "body as a laboratory" in order to highlight wha...
Ethics of Digital Well-Being: A Multidisciplinary Approach. Christopher Burr & Luciano Floridi (eds.) - 2020 - Springer.
This chapter serves as an introduction to the edited collection of the same name, which includes chapters that explore digital well-being from a range of disciplinary perspectives, including philosophy, psychology, economics, health care, and education. The purpose of this introductory chapter is to provide a short primer on the different disciplinary approaches to the study of well-being. To supplement this primer, we also invited key experts from several disciplines—philosophy, psychology, public policy, and health care—to share their thoughts on what they believe are the most important open questions and ethical issues for the multi-disciplinary study of digital well-being. We also introduce and discuss several themes that we believe will be fundamental to the ongoing study of digital well-being: digital gratitude, automated interventions, and sustainable co-well-being.
Embodied Decisions and the Predictive Brain. Christopher Burr - 2017 - Philosophy and Predictive Processing.
A cognitivist account of decision-making views choice behaviour as a serial process of deliberation and commitment, which is separate from perception and action. By contrast, recent work in embodied decision-making has argued that this account is incompatible with emerging neurophysiological data. We argue that this account has significant overlap with an embodied account of predictive processing, and that both can offer mutual development for the other. However, more importantly, by demonstrating this close connection we uncover an alternative perspective on the nature of decision-making, and the mechanisms that underlie our choice behaviour. This alternative perspective allows us to respond to a challenge for predictive processing, which claims that the satisfaction of distal goal-states is underspecified. Answering this challenge requires the adoption of an embodied perspective.
The debate on the ethics of AI in health care: a reconstruction and critical review. Jessica Morley, Caio C. V. Machado, Christopher Burr, Josh Cowls, Indra Joshi, Mariarosaria Taddeo & Luciano Floridi - manuscript.
Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs and computer and data scientists argue that a key part of the solution will be 'Artificial Intelligence' (AI) – particularly Machine Learning (ML). This argument stems not from the belief that all healthcare needs will soon be taken care of by "robot doctors." Instead, it is an argument that rests on the classic counterfactual definition of AI as an umbrella term for a range of techniques that can be used to make machines complete tasks in a way that would be considered intelligent were they to be completed by a human. Automation of this nature could offer great opportunities for the improvement of healthcare services and ultimately patients' health by significantly improving human clinical capabilities in diagnosis, drug discovery, epidemiology, personalised medicine, and operational efficiency. However, if these AI solutions are to be embedded in clinical practice, then at least three issues need to be considered: the technical possibilities and limitations; the ethical, regulatory and legal framework; and the governance framework. In this article, we report on the results of a systematic analysis designed to provide a clear overview of the second of these elements: the ethical, regulatory and legal framework. We find that ethical issues arise at six levels of abstraction (individual, interpersonal, group, institutional, sectoral, and societal) and can be categorised as epistemic, normative, or overarching. We conclude by stressing how important it is that the ethical challenges raised by implementing AI in healthcare settings are tackled proactively rather than reactively and map the key considerations for policymakers to each of the ethical concerns highlighted.
Ethical assurance: a practical approach to the responsible design, development, and deployment of data-driven technologies. Christopher Burr & David Leslie - forthcoming - AI and Ethics.
This article offers several contributions to the interdisciplinary project of responsible research and innovation in data science and AI. First, it provides a critical analysis of current efforts to establish practical mechanisms for algorithmic auditing and assessment to identify limitations and gaps with these approaches. Second, it provides a brief introduction to the methodology of argument-based assurance and explores how it is currently being applied in the development of safety cases for autonomous and intelligent systems. Third, it generalises this method to incorporate wider ethical, social, and legal considerations, in turn establishing a novel version of argument-based assurance that we call 'ethical assurance.' Ethical assurance is presented as a structured method for unifying the myriad practical mechanisms that have been proposed. It is built on a process-based form of project governance that enlists reflective innovation practices to operationalise normative principles, such as sustainability, accountability, transparency, fairness, and explainability. As a set of interlocutory governance mechanisms that span across the data science and AI lifecycle, ethical assurance supports inclusive and participatory ethical deliberation while also remaining grounded in social and technical realities. Finally, this article sets an agenda for ethical assurance, by detailing current challenges, open questions, and next steps, which serve as a springboard to build an active (and interdisciplinary) research programme as well as contribute to ongoing discussions in policy and governance.
Bayesian Learning Models of Pain: A Call to Action. Abby Tabor & Christopher Burr - 2019 - Current Opinion in Behavioral Sciences 26: 54-61.
Learning is fundamentally about action, enabling the successful navigation of a changing and uncertain environment. The experience of pain is central to this process, indicating the need for a change in action so as to mitigate potential threat to bodily integrity. This review considers the application of Bayesian models of learning in pain that inherently accommodate uncertainty and action, which, we shall propose, are essential in understanding learning in both acute and persistent cases of pain.
Building machines that learn and think about morality. Christopher Burr & Geoff Keeling - 2018 - In Christopher Burr & Geoff Keeling (eds.), Proceedings of the Convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB 2018). Society for the Study of Artificial Intelligence and Simulation of Behaviour.
Lake et al. propose three criteria which, they argue, will bring artificial intelligence (AI) systems closer to human cognitive abilities. In this paper, we explore the application of these criteria to a particular domain of human cognition: our capacity for moral reasoning. In doing so, we explore a set of considerations relevant to the development of AI moral decision-making. Our main focus is on the relation between dual-process accounts of moral reasoning and model-free/model-based forms of machine learning. We also discuss how work in embodied and situated cognition could provide a valuable perspective on future research.
Embodied Decisions and the Predictive Brain. Christopher Burr - 2016 - Dissertation, University of Bristol.
Decision-making has traditionally been modelled as a serial process, consisting of a number of distinct stages. The traditional account assumes that an agent first acquires the necessary perceptual evidence, by constructing a detailed inner representation of the environment, in order to deliberate over a set of possible options. Next, the agent considers her goals and beliefs, and subsequently commits to the best possible course of action. This process then repeats once the agent has learned from the consequences of her actions and subsequently updated her beliefs. Under this interpretation, the agent's body is considered merely as a means to report the decision, or to acquire the relevant goods. However, embodied cognition argues that an agent's body should be understood as a proper part of the decision-making process. Accepting this principle challenges a number of commonly held beliefs in the cognitive sciences, but may lead to a more unified account of decision-making. This thesis explores an embodied account of decision-making using a recent framework known as predictive processing. This framework has been proposed by some as a functional description of neural activity. However, if it is approached from an embodied perspective, it can also offer a novel account of decision-making that extends the scope of our explanatory considerations out beyond the brain and the body. We explore work in the cognitive sciences that supports this view, and argue that decision theory can benefit from adopting an embodied and predictive perspective.
Normative folk psychology and decision theory. Joe Dewhurst & Christopher Burr - 2022 - Mind and Language 37 (4): 525-542.
Our aim in this paper is to explore two possible directions of interaction between normative folk psychology and decision theory. In one direction, folk psychology plays a regulative role that constrains practical decision-making. In the other direction, decision theory provides novel tools and norms that shape folk psychology. We argue that these interactions could lead to the emergence of an iterative "decision-theoretic spiral," where folk psychology influences decision-making, decision-making is studied by decision theory, and decision theory influences folk psychology. Understanding these interactions is important both for the theoretical study of social cognition and decision theory, and also for thinking about how to implement practical interventions into real-world decision-making.
The 2019 Yearbook of the Digital Ethics Lab. Christopher Burr & Silvia Milano (eds.) - 2020 - Springer Nature.
This edited volume presents an overview of cutting-edge research areas within digital ethics as defined by the Digital Ethics Lab of the University of Oxford. It identifies new challenges and opportunities of influence in setting the research agenda in the field. The yearbook presents research on the following topics: conceptual metaphor theory, cybersecurity governance, cyber conflicts, anthropomorphism in AI, digital technologies for mental healthcare, data ethics in the asylum process, AI's legitimacy and democratic deficit, digital afterlife industry, automatic prayer bots, foresight analysis and the future of AI. This volume appeals to students, researchers and professionals.
Unifying the Mind: Cognitive Representations as Graphical Models. [REVIEW] Christopher Burr - 2016 - Philosophical Psychology 29 (5): 789-791.
Book review of Danks, D. (2014), Unifying the Mind: Cognitive Representations as Graphical Models.