Who is afraid of black box algorithms?
On the epistemological and ethical basis of trust in medical AI

Source: BMJ Journals


The use of algorithms and artificial intelligence in healthcare can make the diagnosis and treatment of patients more efficient, and can also speed up the processing of large amounts of data.

This technological progress, however, also raises a number of questions and uncertainties.

The learning methods used to train these algorithms range from simple, interpretable models to complex self-learning systems whose inner workings are not comprehensible to health professionals.

Yet in this sector, doctors must be able to justify the results and reports produced by self-learning algorithms.

The analysis by researchers J.M. Durán and K.R. Jongsma responds to this concern about the opacity of self-learning algorithms.

It explains the notions of methodological and epistemological opacity, as well as the ethical concerns raised by the use of these algorithms.

