WHO Issues Guidelines for Ethical AI in Health Sector

July 16, 2021

Eighteen months after member states called on the World Health Organization (WHO) to create a global strategy for the establishment of digital health systems, the UN's public health agency has issued guidelines for the ethical use of artificial intelligence (AI) technologies in the health sector. The recently published guidance document acknowledges the benefits AI can provide while urging that ethics and human rights be at the core of its design and use.

Though often grouped under a single label, AI systems vary widely in autonomy: some are formulaic and predictable, while highly autonomous systems use neural networks that adapt to changes in the world around them. Autocorrect is an AI system, for example, but so are the programs that steer self-driving cars.

Given this wide range of abilities, the WHO endorsed the claim that AI systems are positioned to augment, and in some cases replace and surpass, the capabilities of humans in various areas of health care: diagnosis, drug development, clinical care, and public health monitoring. Specifically, the WHO reported that AI systems can scan images for cancers, interpret X-rays for tuberculosis, provide telemedicine for rural patients, refine the function of artificial limbs, and even predict how likely a patient is to show up late to an appointment, among other applications.

These functions will only be possible, however, provided that humans remain at the center of an ethical decision-making process.

Due to the varying degrees of autonomy inherent in AI systems, the WHO recognized the need to confront and regulate the ethical challenges of implementing these technologies in the health sector.

As the report pointed out, although many principles and guidelines have been developed for the ethical application of AI in the health sector, there has been no consensus on a shared definition, agreed best practices, or ethical requirements.

To provide a universal basis for health AI, the WHO organized its guidance under a set of six core ethical principles that seek to protect human rights and comply with existing legal obligations: (1) protect autonomy; (2) promote human wellbeing, human safety, and the public interest; (3) ensure transparency, explainability, and intelligibility; (4) foster responsibility and accountability; (5) ensure inclusiveness and equity; and (6) promote AI that is responsive and sustainable.

Protect autonomy

The WHO advised that all AI systems for health should maintain a degree of human control, with systems designed to ultimately leave medical decisions to the discretion of humans.

“In practice,” suggested the WHO, “this could include deciding whether to use an AI system for a particular health-care decision, to vary the level of human discretion and decision-making, and to develop AI technologies that can rank decisions when appropriate (as opposed to a single decision).”

Another dimension of human autonomy is protecting the privacy of individuals during data collection. AI systems benefit from training with large quantities of data. To use the example of AI that scans images for cancers, the more cancer-positive images that a system can reference, the more successful it will be in detecting cancer in complex or nuanced cases.
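
The relationship between data volume and performance is easy to demonstrate. The toy sketch below, written in Python on synthetic data (nothing here comes from the WHO document or any real diagnostic model), trains a trivial classifier on progressively larger samples and shows test accuracy tending to rise as the training set grows.

```python
# Illustrative sketch only: a toy nearest-centroid classifier on synthetic
# "image feature" vectors, showing how accuracy on ambiguous cases tends to
# rise with more training data. Not a real diagnostic model.
import random
import statistics

random.seed(0)

def make_case(positive: bool) -> tuple[list[float], bool]:
    # Two overlapping feature distributions stand in for scan features;
    # the overlap creates the "complex or nuanced" cases.
    center = 1.0 if positive else 0.0
    return [random.gauss(center, 1.0) for _ in range(4)], positive

def centroid(cases: list[list[float]]) -> list[float]:
    return [statistics.mean(col) for col in zip(*cases)]

def train_and_test(n_train: int, n_test: int = 500) -> float:
    train = [make_case(i % 2 == 0) for i in range(n_train)]
    pos_c = centroid([x for x, y in train if y])
    neg_c = centroid([x for x, y in train if not y])

    def predict(x: list[float]) -> bool:
        # Classify by whichever class centroid is closer.
        d_pos = sum((a - b) ** 2 for a, b in zip(x, pos_c))
        d_neg = sum((a - b) ** 2 for a, b in zip(x, neg_c))
        return d_pos < d_neg

    test = [make_case(i % 2 == 0) for i in range(n_test)]
    return sum(predict(x) == y for x, y in test) / n_test

for n in (10, 100, 1000):
    print(f"train size {n:5d} -> test accuracy {train_and_test(n):.2f}")
```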

But this medical data is sensitive and tied to individuals. The WHO therefore stated that governments should regulate the collection and storage of, and access to, medical data for AI systems through data-collection laws and public-private partnerships (PPPs). Specifically, the document encourages informed-consent agreements and the anonymization of health data.
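
As for what anonymization can look like in practice, the minimal sketch below pseudonymizes a hypothetical patient record before it enters a training set. All field names are invented, and real de-identification regimes (such as HIPAA's Safe Harbor rule) impose far more requirements than this.

```python
# Minimal sketch of pseudonymizing a health record before AI training.
# Field names are hypothetical; this is not a complete de-identification scheme.
import hashlib

SECRET_SALT = b"rotate-and-store-this-outside-the-dataset"

def pseudonymize(record: dict) -> dict:
    # Replace the direct identifier with a salted one-way hash so records from
    # the same patient can still be linked, but not re-identified from the
    # dataset alone.
    token = hashlib.sha256(SECRET_SALT + record["patient_id"].encode()).hexdigest()[:16]
    return {
        "patient_token": token,
        "birth_year": record["birth_date"][:4],      # generalize full date to year
        "diagnosis_code": record["diagnosis_code"],  # keep the clinical payload
        # name, address, and other direct identifiers are dropped entirely
    }

record = {
    "patient_id": "MRN-001234",
    "name": "Jane Doe",
    "birth_date": "1980-05-17",
    "diagnosis_code": "C44.91",
}
print(pseudonymize(record))
```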

Promote human wellbeing, human safety, and the public interest

In a twist on the Hippocratic Oath for the digital age, the WHO reaffirmed that AI technologies should not cause harm. The guidance document defines harm in various forms.

With respect to mental and physical harm to individuals, the WHO stressed that AI decisions for patient health should include quality control measures to identify and correct errors. “Such an error, if fixed in an algorithm, could cause irreparable harm to thousands of people in a short time if the technology is used widely,” predicted the WHO. 
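
One plausible quality-control measure, sketched below with entirely hypothetical names and thresholds, is a release gate that blocks deployment unless the algorithm passes a labeled validation suite, so a faulty model is caught before it can reach thousands of patients.

```python
# Sketch of one quality-control measure: a release gate that refuses to deploy
# a model unless it clears a labeled validation suite. All names and the
# threshold are hypothetical illustrations.

def release_gate(predict, validation_cases, min_accuracy=0.95):
    hits = sum(predict(features) == label for features, label in validation_cases)
    accuracy = hits / len(validation_cases)
    if accuracy < min_accuracy:
        raise RuntimeError(f"blocked: validation accuracy {accuracy:.0%}")
    print(f"released: validation accuracy {accuracy:.0%}")

# Toy stand-ins for a model and its validation suite.
cases = [((x,), x > 5) for x in range(10)]
release_gate(lambda f: f[0] > 5, cases)    # passes and is released
# release_gate(lambda f: True, cases)      # would raise and block rollout
```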

With respect to the public interest, the WHO identified cybersecurity as an area of concern. As digital health services increasingly rely on AI systems, it is possible that cybercrime organizations will target these systems with malware or ransomware, as in the attack carried out against Colonial Pipeline in April 2021.

Ensure transparency, explainability, and intelligibility

For developers, users, patients, and regulators, the WHO asserted that AI systems for health must be understandable. This requirement would prohibit what are known as “black-box” algorithms: neural networks whose complexity makes their decision-making process incomprehensible, even to their developers.

In fact, the WHO instructed AI developers to prioritize the transparency of an AI system over its accuracy. In other words, although black-box AI systems can make highly accurate predictions through opaque, autonomous reasoning processes, the WHO advised that the inherent risks of abandoning human oversight outweigh any potential benefits.
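
For a sense of what an intelligible alternative looks like, the sketch below uses a simple linear risk score whose per-feature contributions can be displayed to a clinician. The features and weights are invented for illustration and are in no way clinically validated.

```python
# Sketch of the transparency idea: an interpretable linear risk score whose
# per-feature contributions can be shown to a clinician, in contrast to a
# black-box network. Features and weights are invented, not clinically valid.
import math

# Hypothetical model: log-odds = bias + sum(weight * feature)
WEIGHTS = {"age_over_60": 1.1, "smoker": 0.9, "abnormal_scan": 1.6}
BIAS = -2.5

def risk_with_explanation(patient: dict) -> float:
    log_odds = BIAS
    print("contribution breakdown:")
    for feature, weight in WEIGHTS.items():
        contribution = weight * patient[feature]
        log_odds += contribution
        print(f"  {feature:14s} -> {contribution:+.2f}")
    risk = 1 / (1 + math.exp(-log_odds))  # logistic link
    print(f"estimated risk: {risk:.1%}")
    return risk

risk_with_explanation({"age_over_60": 1, "smoker": 0, "abnormal_scan": 1})
```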

Foster responsibility and accountability

Supporting the WHO’s call for intelligibility is its request that governments implement mechanisms to assign responsibility when an AI system produces adverse effects. The document advocated a mechanism that identifies the relative roles of all actors involved in a system’s development and deployment.

As an example of a legal measure for accountability, the WHO referenced a U.S. bill from 2019, the “Algorithmic Accountability Act.” Though never enacted, the bill appeared in the document as an example of government regulation: it would have required companies to study and fix flawed AI algorithms by conducting impact assessments and government-regulated clinical trials.

Ensure inclusiveness and equity

AI systems should ensure equity and inclusion by eliminating biases and removing barriers to distribution, instructed the WHO. Such biases include those related to age, sex, gender, income, race, ethnicity, sexual orientation, and ability.

“Societal bias and discrimination are often replicated by AI technologies,” warned the WHO in the guidance document. Training data sets may exclude certain groups, and preexisting biases in health care are likely to carry over into the output of AI systems trained on that data.

For example, the guidance document cited an AI system designed to detect skin cancers, which produced racially biased outcomes. The training data given to the AI system was largely limited to “fair-skinned” populations and therefore was not accurate or relevant in detecting cancerous skin lesions for people of color.
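
One standard audit technique that would surface this kind of failure is disaggregated evaluation: measuring a detector’s accuracy per skin-tone group rather than reporting a single aggregate number. The sketch below illustrates the idea with entirely fabricated records.

```python
# Sketch of a disaggregated evaluation: compute a detector's accuracy per
# skin-tone group instead of one aggregate figure. The records are fabricated.
from collections import defaultdict

# (skin_tone_group, model_prediction, ground_truth) -- fabricated audit data
results = [
    ("lighter", True, True), ("lighter", False, False), ("lighter", True, True),
    ("darker", False, True), ("darker", True, True), ("darker", False, True),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, predicted, actual in results:
    total[group] += 1
    correct[group] += predicted == actual

for group in total:
    print(f"{group}: accuracy {correct[group] / total[group]:.0%} "
          f"on {total[group]} cases")
# A large gap between groups is a signal to rebalance the training data.
```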

Additionally, an AI system may be inequitable if it excludes considerations for underrepresented groups, such as rural communities or ethnic minorities. Generally, explained the WHO, this problem can be avoided by soliciting diverse perspectives during the development of AI.

A final source of bias identified in the document is the unequal distribution of health AI across demographic groups, known as the “digital divide.” The WHO acknowledged that some low- and middle-income countries lack the foundational technological infrastructure or regulatory capacity to support AI systems, and the document recommended that technology providers offer those countries infrastructure and affordable devices.

Promote AI that is responsive and sustainable

In its last ethical principle, the WHO directed AI designers to promote systems that are both responsive and sustainable, two attributes that carry wide implications.

First, for an AI system to be responsive, its designers and users must be able to assess its performance throughout its lifespan and determine what alterations, if any, are needed to keep it functioning properly.
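
As an illustration of what such lifecycle assessment could involve, the sketch below (with arbitrary window and threshold values) tracks a deployed model’s rolling accuracy and flags drift for human review.

```python
# Sketch of lifecycle monitoring: track a deployed model's rolling accuracy
# and flag when it drifts below a threshold so humans can review and adjust
# the system. Window size and threshold are arbitrary illustrations.
from collections import deque

WINDOW, THRESHOLD = 50, 0.80
recent = deque(maxlen=WINDOW)

def record_outcome(prediction_was_correct: bool) -> bool:
    """Log one prediction's outcome; return True if performance has drifted."""
    recent.append(prediction_was_correct)
    return len(recent) == WINDOW and sum(recent) / WINDOW < THRESHOLD

# Simulate a stream in which the model degrades partway through deployment.
for i in range(200):
    if record_outcome(i < 120 or i % 3 != 0):  # correctness dips after case 120
        print(f"case {i}: rolling accuracy below {THRESHOLD:.0%}, review needed")
        break
```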

Second, the WHO uses the term “sustainable” in two ways. With regard to the environment, the AI system must not further environmental damage. Carbon emissions can be lowered by minimizing the amount of data that the AI processes, suggested the WHO. With regard to sustaining employment, the WHO recommended that institutions prepare employees for AI integration by offering sufficient technological training and anticipating potential job loss.

Altogether, the WHO offered these six ethical principles to governments and ministries of health to aid in the creation of national and international regulatory frameworks. But the WHO acknowledged that the guidance document is just that––a starting point, from which countries will create context-specific provisions for the use of AI systems in their respective health sectors.

About Thomas Plant:
Thomas Plant is an analyst at Valens Global and supports the organization’s work on domestic extremism. He is also an incoming Fulbright research scholar to Estonia and the co-founder of William & Mary’s DisinfoLab, the nation’s first undergraduate disinformation research lab.
The views presented in this article are the author’s own and do not necessarily represent the views of any other organization.
