Tuesday, June 16, 2020

Using artificial intelligence in primary care: progress and challenges

- Kenny Lin, MD, MPH

As applications of artificial intelligence (AI) in health care multiply, AI-enabled clinical decision support is coming to primary care. For example, a recent article in the Journal of Family Practice discussed applications of machine learning (ML) software to screening for diabetic retinopathy (DR) and colorectal cancer, and a study in the Journal of the American Board of Family Medicine used ML to create a new clinical prediction tool for unhealthy drinking in adults. Although research on primary care AI remains limited in scope and in diversity of authorship, Drs. Winston Liaw and Ioannis Kakadiaris argued in a Family Medicine commentary that, appropriately guided, such research could help preserve the parts of primary care that physicians and patients value most:

The digital future is not a passing trend. We will not return to paper charts. The volume of information we are expected to manage will not decline. Without a strategy for our digital present and future, our specialty risks being paralyzed by data, overwhelmed by measures, and more burned out than we already are.

We can define our future, by embracing AI and using it to preserve our most precious resource—time with patients. Adaptation to this new reality is key for our continued evolution, and AI has the potential to make us better family physicians. ... For AI to elevate the practice of family medicine, family medicine needs to participate in relevant design, policy, payment, research, and delivery decisions.

Evaluating and implementing AI-based clinical approaches is challenging. Beyond being externally validated and corrected for biases, ML models should be transparent about their data sources and assumptions and should quantify and communicate uncertainty. Involving clinicians in model building and in adoption into clinical decision support systems is also essential.
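Because "quantify and communicate uncertainty" can sound abstract, a toy example may help. The sketch below is generic scikit-learn code, not any of the models cited here; it shows one simple way a prediction tool can report uncertainty alongside its point estimate, by exposing the spread of an ensemble's votes:

```python
# Generic illustration (not any cited model): report the spread of an
# ensemble's predictions as a crude uncertainty estimate for one patient.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                          # toy patient features
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)   # toy binary outcome

# Each tree in the forest is one ensemble member; disagreement among the
# trees' probability estimates signals how uncertain the prediction is.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
patient = rng.normal(size=(1, 5))
votes = np.array([tree.predict_proba(patient)[0, 1] for tree in model.estimators_])
print(f"Estimated risk {votes.mean():.2f} (ensemble SD {votes.std():.2f})")
```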

In the Diagnostic Tests feature in the March 1 issue of AFP, Dr. Margot Savoy reviewed an application that seemingly adheres to all of these best practices for AI in primary care. IDx-DR, a software program that uses AI to analyze retinal images from an automated nonmydriatic camera, is approved by the U.S. Food and Drug Administration for DR screening in adults 22 years and older. In a prospective study of 819 adults with diabetes recruited from 10 primary care practices, IDx-DR correctly identified 173 of the 198 patients with more than mild DR according to the reference standard, a sensitivity of about 87 percent.
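Those counts pin down the headline number. As a quick check (the Wilson score interval below is a common textbook choice for a proportion, not necessarily the method the investigators used), the sensitivity and a 95% confidence interval can be computed directly:

```python
# Worked check of the IDx-DR sensitivity figure above. The point estimate
# follows directly from the study's counts; the Wilson score interval is
# an assumed choice for illustration.
from math import sqrt

def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    margin = z * sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return center - margin, center + margin

true_positives = 173    # patients with more than mild DR flagged by IDx-DR
disease_positive = 198  # patients with more than mild DR per reference standard

sensitivity = true_positives / disease_positive
low, high = wilson_interval(true_positives, disease_positive)
print(f"Sensitivity: {sensitivity:.1%} (95% CI {low:.1%}-{high:.1%})")
# Sensitivity: 87.4% (95% CI 82.0%-91.3%)
```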

In a separate project, Google Health researchers evaluated the implementation of a deep learning algorithm for DR detection in 11 clinics in Thailand, a country with low screening and early treatment rates due to a shortage of ophthalmologists. Unexpected issues arose, according to an article in the MIT Technology Review:

When it worked well, the AI did speed things up. But it sometimes failed to give a result at all. Like most image recognition systems, the deep-learning model had been trained on high-quality scans; to ensure accuracy, it was designed to reject images that fell below a certain threshold of quality. With nurses scanning dozens of patients an hour and often taking the photos in poor lighting conditions, more than a fifth of the images were rejected.

Patients whose images were kicked out of the system were told they would have to visit a specialist at another clinic on another day. If they found it hard to take time off work or did not have a car, this was obviously inconvenient. Nurses felt frustrated, especially when they believed the rejected scans showed no signs of disease and the follow-up appointments were unnecessary.
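The mechanism behind those rejections is easy to picture. Below is a deliberately simplified sketch of that kind of quality gate; the names, threshold value, and referral logic are hypothetical stand-ins, not Google's actual pipeline:

```python
# Hypothetical sketch of the quality gate the article describes: a score is
# computed for each image, and images below a fixed cutoff are rejected
# rather than graded. Every name and value here is illustrative.
from dataclasses import dataclass
from typing import Optional

QUALITY_THRESHOLD = 0.8  # assumed cutoff, tuned on high-quality training scans

@dataclass
class ScreeningResult:
    gradable: bool
    referable_dr: Optional[bool]  # None when the image was rejected
    note: str

def screen(quality_score: float, model_grade: bool) -> ScreeningResult:
    """Gate the model's grade behind an image-quality check."""
    if quality_score < QUALITY_THRESHOLD:
        # Poor lighting in the Thai clinics pushed more than a fifth of
        # images below the cutoff, triggering a specialist referral.
        return ScreeningResult(False, None, "ungradable: refer to specialist")
    return ScreeningResult(True, model_grade, "graded on site")

# A dim, blurry photo is rejected even if the model would have graded it.
print(screen(quality_score=0.55, model_grade=False))
# ScreeningResult(gradable=False, referable_dr=None, note='ungradable: refer to specialist')
```

The gate protects the model's reported accuracy, but as the Thai clinics found, every rejected image becomes a follow-up burden borne by the patient.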


As with any primary care tool, how AI-enabled decision support is implemented in real life will contribute as much to its success or failure as its test results under optimal conditions.