Research Output 

Journal Articles

ABSTRACT

As humans, we have an innate tendency to ascribe human-like qualities to non-human entities. Whilst sometimes helpful, such anthropomorphic projections are often misleading. This commentary considers how anthropomorphising AI contributes to its misrepresentation and hype. First, I outline three manifestations (terminology, imagery, and morality). Then, I consider the extent to which we ought to mitigate it.

Conference Proceedings

Thinking Unwise: a Relational U-Turn | RoboPhilosophy (2022) | 489-497 | Paper | Slides

ABSTRACT

In this paper, I add to the recent flurry of research concerning the moral patiency of artificial beings. Focusing on David Gunkel’s adaptation of Levinas, I argue that the Relationist’s extrinsic, case-by-case approach to ascribing artificial moral status fails on two counts. Firstly, despite Gunkel’s effort to avoid anthropocentrism, I argue that Relationism is itself anthropocentric, in virtue of the fact that its case-by-case approach is necessarily assessed from a human perspective. Secondly, interpreting Gunkel’s Relationism as a case-by-case approach, I reiterate that it cannot give sufficient action guidance.

Book Reviews

Mark Coeckelbergh's Robot Ethics for the Journal of Applied Philosophy (2023) | Accepted Text

Talks

Experience Machines and Reality: the Finishing Touch (with Patrick Haggard) | Slides

Association for the Scientific Study of Consciousness* (ASSC 26 | New York | June 2023).

ABSTRACT

With the emergence of virtual and augmented reality (VR, AR, XR, collectively *R) technologies, the capabilities of Nozick’s ‘experience machine’ thought experiment are within our grasp. *R allows for first-person experiences that seem real, enabling us to view situations from other perspectives and providing opportunities for innovative therapeutic benefits such as confronting phobias, pain management, and anxiety disorders, whilst also being useful for military and medical training (Slater et al., 2020).

However, although *R allows for a degree of realism, some suspension of disbelief is still required. That is, although experiences in *R seem real, they do not feel real. While *R allows for a degree of visual immersion, it cannot yet deliver all of the physical sensations associated with interacting with the real world. Our sense of immersion is immediately and abruptly interrupted when we, for example, reach to touch something but cannot actually feel it. The next logical step in virtual realism, therefore, is to deliver genuine tactile sensation – haptic technology.

We outline three ways in which touch plays a particular role in our sense of immersion, presence, and realism: touch (1) facilitates our sense of agency; (2) complements and confirms our other sensory modalities (such as vision); and (3) provides privileged information that corresponds to our sense of self and body ownership. We also consider the limitations that current haptic technology – such as haptic gloves and other devices – has in delivering realism, and identify future research agendas.

An important implication of Nozick’s experience machine is its illustration that we seem to value real experiences over virtual ones. However, although we might not value touch-enabled *R experiences as much as real experiences, they clearly still matter to some degree. Regardless of their source, for example, they will still evoke real emotions. The implications of tactile realism for consciousness science, therefore, are particularly important because digital touch technology raises several ethical concerns.

We identify the key concerns as tactile unreliability, tactile selfhood, and tactile autonomy respectively. First, because touch provides an ultimate arbiter of presence in the real physical world, we ought to be cautious about how digital touch disrupts our ordinary understanding of touch’s confirmatory role. As an example, we consider ‘tactile deepfakes’ and how these might produce particularly problematic experiences. Second, touch is deeply involved with key aspects of self-consciousness such as bodily self-awareness and agency. Touch technologies could generate the experience of trying to touch your own body but finding nothing there. We suggest that this could provoke a crisis of self-consciousness. Third, a distinctive feature of our sensory experiences is the degree of autonomous control we have over them. Although touch is "always on" (and perhaps because it is always on) we carefully control what (and particularly who) we allow to touch us. Tactile technologies risk individuals being subjected to unwanted experiences, with implications for both autonomy and privacy.

The Ethics of Digital Touch (with Patrick Haggard) 

UCL Institute of Cognitive Neuroscience, Action and Body Lab† (March 2023)

Thinking Unwise: a Relational U-Turn

RoboPhilosophy 2022* (Helsinki | August 2022)

The University of York Graduate Conference† (June 2022)

Public Philosophy + misc

* 'Why be transparent about the use of AI?' for Philosophy2u (2024)

* Through my work with We and AI and the Better Images of AI project, I helped contribute to 'Better Images of AI: A Guide for Users and Creators' (2023).

* A research summary of Carissa Véliz's 'Moral zombies: why algorithms are not moral agents' for the Montreal AI Ethics Institute (2022).

* I published a piece called 'Is it the case we may as well be Nihilists?' in the University of York's Dialectic Undergraduate Philosophy journal (2022, 15(3)). 

In Progress

The Ethics of Digital Touch (with Patrick Haggard) | PrePrint

ABSTRACT

This paper aims to outline the foundations for an ethics of digital touch. Digital touch refers to hardware and software technologies, often collectively referred to as ‘haptics’, that provide somatic sensations including touch and kinaesthesis, either as a stand-alone interface to users, or as part of a wider immersive experience. Digital touch has particular promise in application areas such as communication, affective computing, medicine, and education. However, as with all emerging technologies, potential value needs to be considered against potential risk. We therefore identify some areas where digital touch raises ethical concerns, and we identify why these concerns arise, based on the distinctive physiological and functional properties of the human somatosensory system. Most scientific research in digital touch has focused on user interaction with external objects (active touch). However, the most pressing ethical concerns with digital touch technologies arise when users are being passively touched. Our analysis identifies several important questions about control, transparency, and epistemic procedure in digital touch scenarios. First, human somatosensation is “always on”, and many digital touch technologies take advantage of this (e.g., alerting systems). As a result, digital touch technologies can undermine individuals’ sensory autonomy (i.e., the right to choose what sensations one experiences). Second, users may reasonably want to know who or what is touching them, and for what purpose. Consent for digital touch will therefore need to be carefully and transparently transacted. Third, because touch gives us a special, direct experience of interacting with our physical environment, digital touch technologies that manipulate this interaction could pose a major epistemic challenge, by changing a user’s basic understanding of reality and their relation to it. Informed by this discussion, we conclude by suggesting a basis for an ethical design framework for digital touch systems.

Why the epistemic objection is neither inconsistent nor irrelevant 

ABSTRACT

To many, (phenomenal) consciousness is ethically significant. It seems to matter, when ascribing morally charged concepts (like moral standing, moral agency, or welfare) to entities, whether there is something it is like to be that entity. Infamously, however, we are unable to externally verify this capacity, suggesting to some that we ought to look elsewhere for a suitable marker of important moral concepts like moral standing. This is the epistemic objection. In this paper, I defend the epistemic objection, in the context of determining robot moral standing, from two recent challenges: inconsistent scepticism and metaphysical irrelevance. Although I concede that metaphysical irrelevance poses a real threat to the epistemic objection’s reasoning, practical concerns around verifiability still seem to matter. I, therefore, conclude by providing a middle ground, whereby the intuitiveness of consciousness being ethically significant remains, but we free ourselves from its practical inadequacy.


Should you want to live forever?

ABSTRACT

Most (but not all) people think that living forever wouldn’t be such a good thing. In fact, many think that death is what gives life its meaning. Most (if not all) people, however, think that the death of a loved one is bad. No one blames the deceased for dying; in many cases, it would be both odd and inappropriate to do so. Nevertheless, the harm to others caused by someone’s death is often particularly acute, and, much like physical pain, the pain of grief is instinctively undesirable and contributes negatively to someone’s level of welfare. With this in mind, suppose you (and you alone) had the opportunity to live forever. Your loved ones would never experience any grief over your death. Should you take up this opportunity? The aim of this paper is to sketch out how to answer this question.


How to Make Intuitions about Welfare Intuitive

ABSTRACT

Some lives go better than others. Can this be because of things we do not consciously experience or are in any way aware of? Experientialists say no. Most theories of welfare, however – such as brute list theories, subjective list theories, and desire satisfaction theories – say yes. They endorse extra-experientialism: the claim that there is an extra class of welfare goods beyond those that are experienced. In this paper, I consider intuition-based arguments that pull, pro tanto, in either direction. In particular, I argue that such arguments are only intuitive if they are viewed from the ‘correct’ point of view. I show that, once understood this way, we can explain why some think experientialism is obviously true whilst others think extra-experientialism is obviously true: it is a matter of perspective. I then consider what, ironically, this might mean for the intuitiveness of each theory.