It is common to think that well-being has a temporal dimension: that people can be benefited and harmed at times. However, accounting for when a person benefits is not always straightforward for desire satisfactionism, which holds that a person benefits if their desires are satisfied. This is because there are many cases where a person desires something at some time, but the desired state of affairs only obtains at some other time, when the person lacks the desire. What should desire satisfactionists say about such cases, where the person's desire and the desired state of affairs do not overlap temporally? When does the person benefit (if at all)? To address this, I advance a new view called No-Future Time-of-Desire: in cases where a person's desire and the desired state of affairs do not overlap temporally, (a) a person cannot benefit if the desired state of affairs obtains after the desire, but (b) a person can benefit if the desired state of affairs obtains prior to the desire, and they do so at the time they have the desire. I argue that this view is superior to other views in the literature, such as Unrestricted Time-of-Desire, Time-of-Object, Later-Time, Fusion, and Concurrentism.
Value-sensitive design theorists propose a range of values that should inform how future social robots are engineered. This article explores a new value, digital well-being, and proposes that the next generation of social robots should be designed to facilitate this value in those who use or come into contact with these machines. To do this, I explore how the morphology of social robots is closely connected to digital well-being. I argue that a key decision is whether social robots are designed as embodied or disembodied. After exploring the merits of both approaches, I conclude that, on balance, there are persuasive reasons why disembodied social robots may well fare better with respect to the value of digital well-being.