AI standards essential to protect doctor-patient relationships and human rights


Thursday 26th May 2022, 5.33pm

Dr Mittelstadt is Director of Research at the Oxford Internet Institute and a leading data ethicist. He says, ‘I hope the report will make people think about how AI might disrupt the core practices involved in healthcare.’

But he is concerned that AI could be used as a way to reduce budgets or save costs rather than to improve patient care, and says, ‘If you’re going to introduce new technology into the clinical space, you need to think about how that will be done. Too often it is seen solely as a cost-saving or efficiency exercise, and not one which can radically transform healthcare itself.’

Dr Mittelstadt was commissioned to write the report by the Council of Europe’s Steering Committee for Human Rights in the fields of Biomedicine and Health to investigate the potential impact of AI on the doctor-patient relationship. The report advises that the use of AI remains ‘unproven’ and could undermine the ‘healing relationship’.

The ethicist points out that AI is already being used in clinical settings – though not in the form of robot doctors, which ‘don’t exist’. He says, ‘Systems which use AI go a long way back…they might identify tumours in scans or recommend diagnosis.’ But, Dr Mittelstadt says, ‘They are not autonomously making decisions about patient care.’

His report states, ‘A radical reconfiguration of the doctor-patient relationship of the type imagined by some commentators, in which artificial systems diagnose and treat patients directly with minimal interference from human clinicians, continues to seem far in the distance.’

However, Dr Mittelstadt says, there are already robot systems being used in surgery, and his report is predicated on the idea that automation is creeping forward in clinical settings – creating the need for clear standards.

The report warns, ‘The doctor-patient relationship is the foundation of ‘good’ medical practice, and yet it is seemingly being transformed into a doctor-patient-AI relationship. The challenge facing AI providers, regulators and policymakers is to set robust standards and requirements for this new clinical relationship to ensure patients’ interests and the moral integrity of medicine as a profession are not fundamentally damaged by the introduction of AI.’

He explains that AI could change healthcare by entering the relationship as a third actor – bringing new information and new dynamics. In turn, this could change how care is delivered.

According to the report, the introduction of AI could shift the relationship, though not the patient: ‘The patient’s vulnerability [is] not changed by the introduction of AI…what changes is the means of care delivery, how it can be provided, and by whom. The shift of expertise and care responsibilities to AI systems can be disruptive in many ways.’

The report also warns that AI can introduce bias into a situation, but that detecting biases is not straightforward. Dr Mittelstadt writes, ‘AI systems are widely recognised as suffering from bias… Biased and unfair decision-making often…reflects underlying social biases and inequalities. For example, samples in clinical trials and health studies have historically been biased towards white male subjects meaning results are less likely to apply to women and people of colour.’

It also underlines the challenges and potential problems in the relationship between physicians and patients, including in terms of patients’ safety and protection: ‘If AI is used to heavily augment or replace human clinical expertise, its impact on the caring relationship is more difficult to predict…. The impact of AI on the doctor-patient relationship nonetheless remains highly uncertain.’ And, it points out, ‘AI poses several unique challenges to the human right to privacy.’

The need for binding standards for how AI is deployed and governed is clear, the report adds, ‘Recommendations should focus on a higher positive standard of care with regards to the doctor-patient relationship to ensure it is not unduly disrupted by the introduction of AI in care settings.’