Data-Driven Healthcare: Empirical Encounters with Computational ‘Intelligence’

On Wednesday 3rd December, 2025, three of us on the DARE team presented dispatches from our fieldwork at the University of Edinburgh’s AI, Ethics and Society (AIES) Seminar. Drawing from our individual work package research over the past year and a half, each of us shared accounts of the ways data is shaping UK healthcare and biomedical research. This seminar was the first time the three of us presented together. It was a fun experience to integrate the ideas we’ve been thinking about—across our multi-sited ethnography and different scales of attention—into a generative dialogue.  

------ 

Abby: I shared some early findings and thoughts from my fieldwork on data collection in the home, remote clinical trials, and the development of electronic clinical outcome measures. One question that’s been nagging me recently is how something becomes data in the context of a clinical trial. The trials I’m tracing involve patients recording a video of themselves doing a clinical assessment in their homes, then uploading these videos to a secure platform where data scientists and clinical researchers can access them for analysis. These videos are rich in incidental detail; they reveal a lot more about a patient’s daily life than just their ability to complete a clinical assessment: whether they’ve got dressed for the day or are still in pyjamas, whether they have children or pets, if they’re able to keep their home tidy. I ask: do patients’ surroundings in these videos come to bear on the analysis? Often, the answer is no. These details aren’t considered by the data scientists and researchers, except in a few cases where data scientists have noticed that some part of a patient’s environment may influence their performance of an assessment task. This has included chair type and height, the shoes the patient was wearing during the assessment, and the location of filming.

I wonder what triggers the shift from something in the background of a video to a point of data included in the analysis, and in the ultimate understanding of a patient’s condition. I draw on Leonelli and Tempini’s (2020) data journeys, and in particular their notion that something becomes data when it is ‘mobilised’: when it is produced or stored, sent or received, managed or cleaned, analysed or depended upon. How and why do things like chairs, shoes and place become mobilised in these trials? And what were they before they were mobilised? I hope to explore these questions further as I wrap up my fieldwork and dive into deeper analysis over the next months.

 

Max: I am increasingly interested in the history of databases and their role in shaping our current discursive moment. As such, I presented a series of as-yet-incomplete reflections on the ‘data’ that constitutes not just the bedrock of AI, but its operating milieu. That is, I wanted to propose that a very particular conception of data, as an abstracted and independent manifestation of empirical experience, has shaped an environment within which we can call a collection of automation machines ‘intelligent’. I started the talk with three texts spanning nearly 200 years, each concerned with medicine’s bureaucratic functioning, and interwove these with Anglo-American post-war computer science and cybernetics. My intention was to build these into a sociology of bureaucracy that includes Max Weber (2013) and Andrew Abbott (1988), and then to ask: understood within the context of a history of computational data, what are the epistemological conditions within which medical decisions are made?

As is perhaps clear from this short summary, such a task was overambitious, both regarding my own capabilities as a writer, speaker, and thinker, and regarding the time available in the room. 

Weber, M. (2013) From Max Weber: Essays In Sociology. Hoboken: Taylor and Francis. 

Abbott, A. (1988) The System of Professions: An Essay on the Division of Expert Labour. London: University of Chicago Press. 

 

Nicola: With my DARE fieldwork only just beginning, I instead took this opportunity to think about AI and ethics by discussing an urgent issue sitting at the intersection of my areas of expertise and lived experience: AI chatbots and mental health crisis. With this talk, I wanted to face the empirical reality that many, many people are using generic AI chatbots as pseudo-therapists, and that this can go badly wrong because of a series of fundamental clashes: between the epistemic frameworks of generative AI and of psychotherapy; between the anthropomorphising experience of using chatbots and their non-human status; and between supply-and-demand framings of the mental health crisis and its realities as experienced by those of us living with psychiatric diagnoses.

I argued that the same essential features that make LLM-based AI chatbots ‘bullshit machines’ (Hicks et al., 2024) also make it impossible for them to function as therapists, and make them dangerous companions for people who have a serious mental health condition or are in crisis. Nevertheless, users are taking their suffering to generic chatbots instead of other humans, engaging in forms of pseudo-therapy that range from the gradual evolution of a therapy-like relationship to prompt engineering and custom coding that seeks explicitly to transform chatbots into therapists.

2025 has been a disastrous year for people using AI during mental health crisis, with several deaths by suicide following interactions with chatbots. I concluded my talk by calling for an ethics of AI chatbots that prioritises safety for those who may be most vulnerable. From the first few years of AI chatbots being available to the public, the most vulnerable appear to be people with serious mental health conditions or experiencing a crisis; people with learning disabilities; and children and adolescents. Research and legislation have already fallen behind, and there is an urgent need to develop understandings of human–chatbot interaction and its psychosocial effects; to secure regulation; and to develop tools for promoting public understanding.

------ 

While our presentations spanned different topics, methodologies, and ways of thinking, the subsequent discussion helped us think through the themes underlying our projects and reflect on the ways in which our individual work packages have already influenced one another.

Some of the questions and comments we will consider as we move forward with our work are: 

  • Methodologically, how do we plan to work and think across our scales of attention? 
  • With what type of futures in mind is data generated? How can we account for the recursive relationships between the purposes of data collection, and the purposes that can be imagined from data that has been collected (e.g. as resource, as ‘independent’ of context, as empirical)? 
  • In which ways do ideas of expertise come into our work? 

After a year and a half working together on DARE, we’ve all come a long way, and we still have lots of exciting work ahead of us. Looking forward to 2026!

DARE Postdocs, Abby, Nicola, and Max (left to right) standing in front of their presentation title slide.