The AI LA 2020 Life Summit did more than showcase AI tools for the life sciences. Several sessions took a deep dive into AI’s wide-ranging impact on science, research, and development at large: the ethics of creating AI models, the appropriate applications of different types of data, how to use AI for healthcare interventions at both the population and individual levels, and the surprising ways AI can affect healthcare operations.
George Tolomiczenko, the Director of Medical Innovations at UC Irvine, used his keynote to push for an ethical approach to AI in healthcare. He warned that allowing purely business-sponsored development could have disastrous consequences for AI’s future. If we treat AI only as a tool to make money or improve a corporation’s return on investment, we can too easily get wrapped up in expanding AI’s capabilities and handing it control over our day-to-day processes without asking what the consequences might be. That kind of expansion becomes AI development for its own sake, and it loses sight of AI’s impact on people. Instead of this doomsday scenario, he urged a partnership between industry and academia to ensure AI is used to help humanity.
The AI in Healthcare panel echoed this sentiment and explored the challenges facing product development in healthcare versus the consumer industry. Aditya Khosla, Co-founder and CTO at PathAI, pointed out that consumer products can be shipped quickly and iterated on constantly, but healthcare’s process will always be slower, and that’s a good thing. He highlighted that healthcare has necessary regulations in place because of the higher stakes involved when dealing with human life. When AI is influencing people’s decisions about their health, a false positive or a false negative can have life-altering consequences. Additionally, many AI-driven tools in healthcare operate at a population level, so a mistaken result could affect thousands of people.
Laura Li, the founder of Breakthrough Genomics, pointed out that before AI can make correct evaluations of data in the healthcare space, significant work is required to clean the data prior to ingesting it. The other panelists agreed, noting that much of healthcare data is unstructured, sitting in electronic medical records (EMRs) without much context. That lack of structure makes healthcare data difficult to use, even though there is a large amount of it.
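The kind of pre-ingestion cleanup the panelists describe can be sketched in a few lines. The record fields, patterns, and the `clean_emr_note` function below are purely illustrative assumptions, not taken from any panelist’s actual pipeline; real EMR normalization is far more involved.

```python
import re
from datetime import datetime

def clean_emr_note(raw: str) -> dict:
    """Turn a free-text, EMR-style note into a few structured fields.
    Field names and patterns here are hypothetical examples only."""
    text = re.sub(r"\s+", " ", raw).strip()  # collapse stray whitespace/newlines
    # Pull a blood-pressure reading like "BP 120/80" if one is present
    bp = re.search(r"BP\s*(\d{2,3})/(\d{2,3})", text, re.IGNORECASE)
    # Pull an ISO-style date like "2020-03-14" if one is present
    date = re.search(r"\d{4}-\d{2}-\d{2}", text)
    return {
        "text": text,
        "systolic": int(bp.group(1)) if bp else None,
        "diastolic": int(bp.group(2)) if bp else None,
        "date": datetime.strptime(date.group(), "%Y-%m-%d").date() if date else None,
    }

record = clean_emr_note("  Pt seen 2020-03-14.\n BP 120/80,   feels well. ")
```

Even a toy version like this shows why the work matters: the structured fields, not the raw note, are what a downstream model can actually learn from.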
During the discussion of difficulty in using healthcare data, Anant Madabhushi, Professor at Case Western Reserve University, identified another issue with much of the data: it does not take into account underrepresented communities. He discussed a recent study where the AI was able to identify differences at the cellular level between black and white men with prostate cancer. For Dr. Madabhushi and the rest of the panelists, this pointed to a need to identify certain risk categories and compile data specifically for those populations in order to bring the right types of care to patients.
In the Quantified Self and AI panel, Gary Wolf and Matthew Markert offered a related but distinct take on the challenges around data. Wolf, the founder of Quantified Self and a contributing editor at Wired, argued that healthcare decisions based on top-down, large-scale clinical trials have real limitations: such trials support certain conclusions about how to administer healthcare, but they don’t always reveal what is true for an individual person.
Matthew Markert, a neurologist with Sutter Health - Palo Alto Medical Foundation, agreed with Wolf on the necessity of interventions at the individual level and of focusing on environmental or social factors affecting health, but pushed for more AI intervention. Markert argued that while we can be good at recognizing straightforward stressors that affect our health, we need to offload some of that pattern recognition onto something or someone smarter than us. That could be consensus review or other existing non-AI means, but for Markert, AI held the most promise for identifying previously unknown health-related variables.
For Wolf, the solution is empowering individuals to perform n=1 experiments where each person identifies patterns and connections that are unique to themselves, then makes healthcare decisions based on that data. He argued for taking a step back and asking what we can do with the tools we currently have at our disposal to improve people’s health and not just focusing on what AI will be able to do for healthcare in the future.
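An n=1 experiment of the kind Wolf describes can be as simple as tracking two personal variables and checking whether they move together. The sketch below uses hypothetical self-tracked data (the sleep and headache numbers are invented for illustration) and a hand-rolled Pearson correlation; it stands in for the pattern-finding step, not for any specific Quantified Self tool.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two weeks of hypothetical self-tracked data for one person
sleep_hours    = [7.5, 6.0, 8.0, 5.5, 7.0, 6.5, 8.5, 6.0, 7.5, 5.0, 8.0, 7.0, 6.5, 7.5]
headache_score = [1,   3,   0,   4,   2,   3,   0,   3,   1,   5,   1,   2,   3,   1]

r = pearson(sleep_hours, headache_score)
# A strongly negative r suggests this person's headaches track short sleep,
# a pattern a population-level trial might never surface for them.
```

The point is not statistical rigor (fourteen points prove nothing on their own) but that the question and the answer both belong to a single individual, which is exactly what top-down trials cannot provide.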
The Promise of AI Beyond Machine Learning
Wolf highlighted our ability to work with data in ways that don’t necessarily rely on AI but ask the same kinds of questions we would ask of an AI application. He urged us to learn to ask thoughtful questions about our own health and to find productive ways to answer them. For him, how AI has changed the way we ask questions about our health is just as important as the technology itself.
The AI in Healthcare panel concluded with a similar line of thought. Ron Li, Assistant Professor and Medical Informatics Director for AI Clinical Integration at Stanford, wrapped up the panel by pointing out that one of the major benefits of AI has been increasing the efficiency of care delivery. Machine learning may be a critical component of AI, but its real power lies in changing the way providers communicate with each other, streamlining workflows, and reshaping team structures.
Toward a Human-Centric AI
The ideas in both the AI in Healthcare and Quantified Self and AI panels aligned with Tolomiczenko’s initial call for human-centric AI development. Looking at individuals’ data through QS methods, working within necessary regulatory structures, and using AI to improve care-delivery processes are all ways to keep the patient in mind while pushing AI forward.