As artificial intelligence enters medicine, doctors should learn from mistakes in other fields

March 14, 2018

AI can improve diagnostics and help doctors make better decisions, but researchers say it also raises ethical concerns.

As machine learning enters the high-stakes world of medicine, a designer of machine-learning systems, a bioethicist, and a clinician are working together to learn from mistakes made in other industries. Publishing in the New England Journal of Medicine, the authors don’t deny the tremendous benefits of machine learning and artificial intelligence to medicine. But they call for an examination of ethical concerns like bias in algorithms and data, privacy, and commercial incentives. We speak to lead author Danton Char of Stanford University about the piece.

ResearchGate: What motivated you to write this?

Danton Char: Because of the big data analytics involved, machine learning is critical to realizing the benefits of precision health. However, in non-medical contexts, ethical challenges associated with machine learning have already started to emerge. Systems designed to aid judges in sentencing have demonstrated unintended discriminatory biases. Commercial algorithms have been intentionally designed to perform in ethically wrongful ways. There is an urgent need to examine these ethical challenges before machine-learning systems are fully implemented in clinical care. Otherwise, potential harms will eclipse the potential benefits of machine learning.

I think my concerns are similar to those about AI implementations in non-medical fields, but the stakes are higher. If Netflix gets its personalized film recommendations wrong, it's not as big a deal as an AI system making clinical judgements that lead to someone's death.

RG: Do you think these concerns can ever be completely alleviated?

Char: That's certainly what we – a machine learning designer, a bioethicist, and a clinician working together as a team – are hoping to accomplish with our research.

RG: To what extent should physicians understand the AI applications they use?

Char: Physicians, as a group, don't currently know the limitations of machine learning. There is a danger in treating machine learning and AI like a "black box" and not working closely with medical algorithm designers. Not understanding how these systems work could lead to over-reliance on their recommendations without the discerning use of our clinical judgement, or to under-reliance that forgoes their potential benefits.

RG: Are there parts of medicine that should always remain outside of AI’s influence?

Char: I think there are things humans will prove better at than AI and machine-learning systems. While we all have a tendency to become enamored with our own ideas, the ability to step back and ask questions like "is this a good idea?" is something humans can do that machine systems currently can't.

RG: Is there a specific AI application in medicine that you consider to be particularly problematic ethically?

Char: The most immediate problems for AI implementation in clinical medicine come from biases in the underlying data used to train the AI system, which can then be reflected in the recommendations the AI makes [a minimal sketch after this answer illustrates the mechanism]. Designer intent is also a potential concern. AI systems developed to maximize profit may be at odds with efforts to achieve the best health outcomes for the greatest number of patients.

A larger problem is the erosion of the fiduciary relationship between physician and patient. This isn't new with AI, but we've seen core values of care, like confidentiality, begin to weaken. There's been a shift from visiting your doctor to visiting a healthcare system with a care team and electronic records about you. The role an AI system would play in this fiduciary relationship, and how it could change the relationship, is still unclear.
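To make the data-bias mechanism Char describes concrete, here is a minimal illustrative sketch, not drawn from the NEJM piece: the synthetic cohort, the severity and group features, and the referral task are all invented for illustration. A classifier trained on historically skewed referral decisions reproduces that skew in its recommendations, even when true clinical need is identical across groups:

```python
# Illustrative only: synthetic data and invented feature names.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic cohort: two demographic groups with identical clinical need.
group = rng.integers(0, 2, n)        # demographic attribute (0 or 1)
severity = rng.normal(0.0, 1.0, n)   # true clinical need, same distribution in both groups

# Historical labels encode a disparity: at equal severity, group 1 was
# referred for treatment less often. This is the bias the model trains on.
referred = (severity + rng.normal(0.0, 0.5, n) - 0.8 * group) > 0

X = np.column_stack([severity, group])
model = LogisticRegression().fit(X, referred)

# At identical severity, the trained model now recommends referral
# less often for group 1 -- the historical bias is baked in.
for g in (0, 1):
    X_test = np.column_stack([np.zeros(100), np.full(100, g)])
    p = model.predict_proba(X_test)[:, 1].mean()
    print(f"group {g}: mean predicted referral probability = {p:.2f}")
```

Running this prints a noticeably lower mean referral probability for group 1 at identical severity: the model has learned the historical disparity rather than clinical need.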

RG: What would you like physicians to take away from your study?

Char: Physicians need to think critically about the implications of AI in clinical medicine and recognize that clinicians must assist in the design and implementation of these systems. They should also be aware of potential problems: bias in training data, designer intent, ignorance of how these systems work, and the still-unclear role of AI in the fiduciary relationship between physician and patient, for which physicians remain responsible.

RG: Should patients also inform themselves about the role of AI in medicine?

Char: We all need to be paying attention.

Featured image courtesy of Ars Electronica.