A Hippocratic Oath for your AI doctor

A broad new report from the World Health Organization (WHO) lays out ethical principles for the use of artificial intelligence in medicine.

Health is one of the most promising areas of expansion for AI, and the pandemic has only accelerated the adoption of machine learning tools. But adding algorithms to health care will require that AI follow the most basic rule of human medicine: "Do no harm." That won't be simple.

After nearly two years of consultations with international experts, the WHO report argues that AI in medicine holds great promise for rich and poor countries alike, but "only if ethics and human rights are put at the heart of its design, deployment and use," the authors write.

AI is already being used in medicine to detect tumors in radiological scans, predict how outbreaks will unfold and analyze doctors’ case notes and patient conversations. In the future, it could help speed the process of drug discovery, give real-time diagnosis from better health wearables and even act as “virtual nurses” to remote patients.

To get the most out of AI in medicine while minimizing harm, the WHO report lays out a kind of "Hippocratic Oath" for artificial practitioners of the medical arts. Among its principles: humans, both clinicians and patients, remain the ultimate decision-makers in medicine; AI in health first "does no harm"; and any recommendation or action by an AI remains transparent and explainable. AI technologies should also be clearly accountable for patient outcomes, engineered to be usable by the widest possible population, and designed to work in real-world conditions, not just in trials.

Read the full article here.