
Ethics of Digital Health Tools and AI in Health

'As technologies such as AI and machine learning continue to make an impact on the healthcare space, startups will play an increasing role in such transformation.'

The use of artificial intelligence (AI) in health, through digital health tools such as diagnostic and radiological image-analysis tools, is showing booming success. AI is also used to help develop new medicines and to assist health and medical research. AI can benefit regions and populations with significant gaps in healthcare delivery and services. For instance, an AI tool developed by Zebra Medical Vision has begun rolling out in hospitals in India to speed up the diagnosis of people with signs of tuberculosis. Just in September 2021, U.S. health regulators authorized the first AI tool to help diagnose prostate cancer. These tools are promising: they improve the efficiency of the healthcare system, offer better access to scarce specialist resources and skills, and can handle the large amounts of health data available.


With the expansion of AI in healthcare and digital health tools, awareness of the ethical issues they raise is growing. These issues include fairness and bias when collecting data; privacy and data protection once the data has been collected; and the safety, accountability and transparency of the machine-learning algorithms involved in the tools. Discussing and working on these issues proactively is key to the long-term success of AI in digital health tools and in health systems.


Fairness and bias when collecting data

Datasets used to train AI algorithms should be as complete as possible, meaning the data should cover all racial, gender and age groups. If this is not handled carefully, the tools risk deepening existing inequalities. Black patients in U.S. emergency rooms, for instance, are 40% less likely than white patients to receive pain medication. Existing medical datasets contain information mostly from white adult men, while data from other genders, ethnic groups and age groups is under-represented. This lack of diversity can lead to biased algorithms. To address the problem, the US National Institutes of Health (NIH) created a research program, All of Us, to establish a database focused on previously under-represented groups and expand the datasets available for developing better-quality healthcare for everyone.
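As a minimal illustration of how such bias can surface, the sketch below audits a training set's demographic balance and compares a model's accuracy across subgroups. The records, group labels and threshold are invented for demonstration and are not drawn from any tool discussed in this article.

```python
# Illustrative sketch: audit demographic representation and per-group
# performance for a hypothetical training dataset. All data, labels and
# thresholds here are invented assumptions for demonstration.
from collections import Counter

# Hypothetical records: (demographic group, true label, model prediction)
records = [
    ("white_male", 1, 1), ("white_male", 0, 0), ("white_male", 1, 1),
    ("white_male", 0, 0), ("white_male", 1, 1), ("white_male", 0, 0),
    ("white_male", 1, 1),
    ("black_female", 1, 0),
]

# 1) Representation audit: flag groups far below an equal share.
groups = Counter(g for g, _, _ in records)
total = sum(groups.values())
equal_share = 1 / len(groups)
for group, n in groups.items():
    share = n / total
    flag = "  <-- under-represented" if share < 0.5 * equal_share else ""
    print(f"{group}: {share:.0%} of dataset{flag}")

# 2) Per-group accuracy: a large gap suggests a biased model.
for group in groups:
    outcomes = [(y, p) for g, y, p in records if g == group]
    accuracy = sum(y == p for y, p in outcomes) / len(outcomes)
    print(f"{group}: accuracy {accuracy:.0%}")
```

In this toy example, the under-represented group also gets the worst accuracy, which is exactly the failure mode that programs like All of Us aim to prevent by diversifying the underlying data.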


Privacy and data protection

The collection, use, analysis and sharing of health data constantly raise concerns about individual privacy, as a lack of privacy can harm individuals through discrimination based on their health status, or even damage a person's dignity if the health data is inappropriately broadcast. Unfortunately, even in areas with strong data-protection legislation, it is difficult to keep personal data private, especially given the temptation to give up privacy in return for medical care or financial reward, a trade-off that further separates socio-economic groups.

Health-related data can also be used for targeted advertising of merchandise and services, and by companies such as insurance firms to predict which products individuals will use. In early 2021, the Singapore Government admitted that data collected for COVID-19 proximity tracing had been used for criminal investigations.


Safety, accountability and transparency of the machine-learning algorithms involved in the digital tools

As research shows, most people are not comfortable with AI algorithms acting autonomously without a human doctor's monitoring and supervision. When a digital tool fails, mechanisms for accountability should be in place so that questions can be raised and action taken. For example, healthcare institutions and public-health agencies can regularly publish information on how an AI digital health tool was approved and how the related technology will be re-evaluated over time. Additionally, AI technologies should be intelligible to the developers of the digital tools as well as to the medical professionals, patients and regulators. If information about an AI technology is documented and published before deployment, it encourages meaningful public consultation and debate about how the technology was developed and whether it should be used in digital health tools. As a result, while AI may not replace human clinical decision-making, if used appropriately it can improve decisions made by clinicians.
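One concrete form this pre-deployment documentation can take is a "model card" published alongside the model. The sketch below shows what such a record might contain; the fields and values are illustrative assumptions, not a description of any specific tool or regulatory requirement.

```python
# Illustrative "model card": a structured, publishable record of how a
# hypothetical AI health tool was built and how it will be re-evaluated.
# All names and values here are invented for demonstration.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data: str                       # provenance of the training set
    known_limitations: list[str] = field(default_factory=list)
    evaluation_schedule: str = "re-evaluated every 6 months"
    human_oversight: str = "outputs reviewed by a clinician before use"

card = ModelCard(
    name="example-tb-screening-model",       # hypothetical name
    intended_use="flag chest X-rays for radiologist review, not diagnosis",
    training_data="de-identified X-rays from three partner hospitals",
    known_limitations=["under-represents pediatric patients"],
)
print(card)  # publishing this record supports public scrutiny and debate
```

Making such a record public before deployment gives patients, clinicians and regulators a shared, inspectable basis for the consultation and periodic evaluation described above.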


AI for digital health tools is rapidly evolving. For AI to succeed in digital health tools, these ethical issues must be addressed. Digital health companies need to make AI ethics part of their culture, and it would behoove them to run cross-functional teams and hire dedicated ethicists. The business of digital health tools is built and run by highly trained business leaders and AI researchers who not only produce innovative work, but also care about the system and the people it serves. The ethical concerns discussed in this article are not meant to discourage the use of AI in digital health tools, but to support AI in fulfilling its potential and promise in health.

About the author

Jingyi Ran

Jingyi Ran is a chemistry PhD candidate at the University of Southern California focusing on the research of heterogeneous catalytic CO2 conversions using solid-state materials. She is a board member of the USC chapter of the Materials Research Society and a member of Women in Chemistry at USC. Jingyi is enthusiastic about incorporating her knowledge of chemistry and data science, and her passion for AI, to create a more sustainable environment and a more positive society.
