Conference Proceedings

1. Thinking Unwise: a Relational U-Turn | RoboPhilosophy (2022) | 489-497 | Paper | Slides


In this paper, I add to the recent flurry of research concerning the moral patiency of artificial beings. Focusing on David Gunkel’s adaptation of Levinas, I argue that the Relationist’s extrinsic, case-by-case approach to ascribing artificial moral status fails on two counts. First, despite Gunkel’s effort to avoid anthropocentrism, I argue that Relationism is itself anthropocentric, in virtue of how its case-by-case approach is necessarily assessed from a human perspective. Second, interpreting Gunkel’s Relationism as a case-by-case approach, I argue that it cannot give sufficient action guidance.

Book Reviews

1. Mark Coeckelbergh's Robot Ethics for The Journal of Applied Philosophy (2023) | Accepted Text


Presentations

1. Experience Machines and Reality: the Finishing Touch (with Patrick Haggard) | Slides

Association for the Scientific Study of Consciousness 26* (New York | June 2023).


With the emergence of virtual and augmented reality technologies (VR, AR, XR; collectively, *R), the capabilities of Nozick’s ‘experience machine’ thought experiment are within our grasp. *R allows for first-person experiences that seem real, enabling us to view situations from other perspectives and providing opportunities for innovative therapeutic benefits such as phobia confrontation, pain management, and the treatment of anxiety disorders, whilst also being useful for military and medical training (Slater et al., 2020).

However, although *R allows for a degree of realism, some suspension of disbelief is still required. That is, although experiences in *R seem real, they do not feel real. While *R allows for a degree of visual immersion, it cannot yet deliver all of the physical sensations associated with interacting with the real world. Our sense of immersion is immediately and abruptly interrupted when we, for example, reach to touch something but cannot actually feel it. The next logical step in virtual realism, therefore, is to facilitate the sense of touch: haptic technology.

We outline three ways in which touch plays a particular role in our sense of immersion, presence, and realism: touch (1) facilitates our sense of agency; (2) complements and confirms our other sensory modalities (such as vision); and (3) provides privileged information that corresponds to our sense of self and body ownership. We also consider the limitations of current haptic technology – such as haptic gloves and other devices – in delivering realism, and identify future research agendas.

An important implication of Nozick’s experience machine is its illustration that we seem to value real experiences over virtual ones. However, although we might not value touch-enabled *R experiences as much as real experiences, they clearly still matter to some degree. Regardless of their source, for example, they will still evoke real emotions. The implications of tactile realism for consciousness science, therefore, are particularly important because digital touch technology raises several ethical concerns.

We identify the key concerns as tactile unreliability, tactile selfhood, and tactile autonomy. First, because touch provides an ultimate arbiter of presence in the real physical world, we ought to be cautious about how digital touch disrupts our ordinary understanding of touch’s confirmatory role. As an example, we consider ‘tactile deepfakes’ and how these might produce particularly problematic experiences. Second, touch is deeply involved with key aspects of self-consciousness, such as bodily self-awareness and agency. Touch technologies could generate the experience of trying to touch your own body but finding nothing there; we suggest that this could provoke a crisis of self-consciousness. Third, a distinctive feature of our sensory experiences is the degree of autonomous control we have over them. Although touch is ‘always on’ (and perhaps because it is always on), we carefully control what (and particularly who) we allow to touch us. Tactile technologies risk subjecting individuals to unwanted experiences, with implications for both autonomy and privacy.

2. The Ethics of Digital Touch (with Patrick Haggard) 

UCL, Institute of Cognitive Neuroscience, Action and Body Lab† (March 2023)

3. Thinking Unwise: a Relational U-Turn

RoboPhilosophy 2022* (Helsinki | August 2022)

The University of York Graduate Conference† (June 2022)



In this short essay I identify a seemingly underappreciated line of argument available to the mereological nihilist: that we may as well be nihilists. I derive this from the fact that there is, arguably, little practical difference between the nihilist explanation of compositional objects and that of the more costly common-sense view. However, despite introducing a new way of appreciating the simplicity of the nihilist view, I ultimately show that the way in which the nihilist achieves this simplicity – through the vehicle of paraphrase – remains flawed.

In Progress

How to Make Intuitions about Welfare Intuitive


Some lives go better than others. Can this be because of things we do not consciously experience or are in any way aware of? Experientialists say no. Most theories of welfare, however – such as brute list theories, subjective list theories, and desire satisfaction theories – say yes. They endorse extra-experientialism: the claim that there is an extra class of welfare goods beyond those that are experienced. In this paper, I consider intuition-based arguments that pull, pro tanto, in either direction. In particular, I argue that such arguments are only intuitive if they are viewed from the ‘correct’ point of view. I show that, once understood this way, we can explain why some think experientialism is obviously true whilst others think extra-experientialism is obviously true: it is a matter of perspective. I then consider what, ironically, this might mean for the intuitiveness of each theory.

The Ethics of Digital Touch (with Patrick Haggard) | Preprint


This paper outlines the foundations for an ethics of digital touch. Digital touch refers to new hardware and software technologies that provide somatic sensations such as touch and kinaesthesis, either as a stand-alone interface to users or as part of a wider immersive experience. A common feature of all digital touch is the direct interaction between a designed stimulus and the human skin. Digital touch is therefore proximal. In contrast, other interface sensory technologies, such as graphics and sound, are distal, since they rely on exteroceptive senses. The proximity of touch underlies the potential value of digital touch systems, for example in applications such as communication, affective computing, medicine, and education. At the same time, proximity raises a distinctive set of ethical considerations, which we here bring together for the first time.

We first consider the distinctive physiology of human somatic sensations and the various functions that digital technologies can deliver via these sensations. A systems-neurophysiology understanding of touch leads us to identify several ethical issues for future digital touch technology. Digital touch technologies directly impact a user’s personal space, raising important questions about control, transparency, and epistemic procedures. First, because human somatosensation is “always on”, digital touch technologies that take advantage of this (e.g., alerting systems) threaten our sensory autonomy (the right to choose what sensations we experience). Second, users may reasonably want to know who or what is touching them, and for what purpose. Consent for digital touch will therefore need to be carefully and transparently transacted; we consider how this might be done. Third, because touch gives us a special, direct sense of interacting with our physical environment, digital touch technologies that manipulate this interaction could pose a major epistemic challenge, changing a user’s basic understanding of reality and their relation to it. The benefits of creating novel technology-mediated touch experiences will need to be balanced against the ethical risks of unmanageable cognitive and socio-affective challenges. Interestingly, most research effort in digital touch has focused on a user’s haptic interaction with external objects. However, our analyses suggest that the strongest and most immediate ethical risks surrounding digital touch technologies arise when interacting with other agents, rather than passive objects, and when users are being passively touched, rather than during active haptic exploration.

Robot Moral Standing, Practically 


This paper argues that any account of robot moral standing should be practically adequate. On my understanding, an approach to robot moral standing is practically adequate if the moral agents bound by its principles are able to follow them in practice. I defend the importance of practical adequacy for ethical principles in general, and make a more concrete claim about its importance when such principles are applied to artificial entities such as robots. I then consider two ways in which current theories of robot moral standing fail to meet this requirement. First, I argue that relational approaches fail to be practically adequate because they are unable to provide agent-neutral action guidance. Second, I reinterpret the epistemic objection to views that hold (phenomenal) consciousness to be necessary for robot moral standing as a challenge of practicality, rather than a mere sceptical observation. Once it is understood in this way, I argue that the epistemic objection is a real and troubling problem for such views. I conclude by considering what a practically adequate approach to robot moral standing might look like.

Should You Want to Live Forever?


Most (but not all) people think that living forever wouldn’t be such a good thing. In fact, many think that death is what gives life its meaning. Most (if not all) people, however, think that the death of a loved one is bad. No one blames the deceased for dying; in many cases, it would be both odd and inappropriate to do so. Nevertheless, the harm to others caused by someone’s dying is often particularly acute, and, much like physical pain, the pain felt by those going through grief is instinctively undesirable and contributes negatively to someone’s level of welfare. With this in mind, suppose you (and you alone) had the opportunity to live forever, such that your loved ones would never experience grief over your death. Should you take up this opportunity? The aim of this paper is to sketch out how to answer this question.