Mitigating the impact of biased artificial intelligence in emergency decision-making

Source: Nature Portfolio


AI- and machine learning (ML)-based decision support tools are becoming increasingly common in healthcare.

However, the datasets used to train these tools can be biased. As a result, the recommendations these tools make to practitioners or non-practitioners can be flawed, reducing the quality of treatment decisions.

The researchers conducted a study on mitigating the harm caused by discriminatory algorithms, recruiting 438 clinicians and 516 lay people.

Participants were given a series of eight summaries of calls to a fictitious crisis hotline, each describing a person in a mental health emergency.

The participants' task was to decide who should respond to each patient: a medical team or the police. Some of the participants received algorithmic decision support recommendations to guide their choice. These recommendations were generated by either a biased or an unbiased language model.

