Artificial intelligence (AI) is poised to revolutionize healthcare by enabling advanced diagnostics, improving risk prediction through data analysis, and personalizing treatment plans based on individual patient data. These advancements promise to enhance overall care quality and outcomes. Yet, there is a risk that biases inherent in these systems could exacerbate existing disparities in healthcare. Of particular concern is the impact this might have on patient care across diverse populations.
AI’s potential lies in its ability to process vast amounts of data, enabling more accurate and timely medical insights. However, these systems are only as reliable as the data they are trained on. A recent article in The Lancet Digital Health underscores this critical issue: many AI algorithms are trained on datasets that do not adequately represent all patient demographics. The authors highlight that women and minority groups are frequently underrepresented in cardiac diagnostic imaging datasets, leaving too little relevant data to develop accurate AI models for those populations.
The Lancet article provides compelling examples of how these biases manifest in real-world applications. In a 2022 study, for instance, an ECG deep learning algorithm was less accurate at detecting valvular heart disease in Black patients than in White patients, identifying the disease in 4.4% of Black patients versus 10.0% of White patients. Similarly, the authors cite a 2023 retrospective study in which sicker Black patients received risk scores similar to those of healthier White patients, leading to the under-referral of Black patients to complex care programs.
The challenge extends beyond data representation and encompasses biases in the design and testing of algorithms. The Lancet article points out that measurement bias can occur when training datasets include erroneous data or when technologies, such as pulse oximeters, perform sub-optimally for certain skin tones. Similarly, variable selection bias arises when important factors are omitted from the training data, leading to incorrect associations or mislabeling of outcomes.
To address these challenges and promote health equity, several strategies must be implemented. The Lancet article emphasizes the importance of transparent reporting on the sources of training data, variable selection, and algorithm testing. Ensuring diverse representation in training datasets is crucial. By incorporating multi-region and multicenter data, researchers can better capture the variability needed for fair AI models.
It is equally important to build diverse development teams that bring together healthcare experts and data scientists. These teams must ensure that AI models are tested across a range of demographic groups and actively audited for bias. Monitoring real-world performance also depends on regulatory scrutiny, such as the FDA’s action plans to reduce AI bias.
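Such subgroup testing can be made concrete with a small audit routine. The sketch below (Python, using entirely hypothetical toy data; `audit_by_group` and its `max_gap` threshold are illustrative names I have introduced, not from the article or any specific library) computes a model’s sensitivity separately for each demographic group and flags groups that lag the best performer:

```python
# Minimal sketch of a subgroup bias audit for a binary diagnostic model.
# All group labels, patient records, and the max_gap threshold below are
# hypothetical toy values, not taken from the article.

def sensitivity(y_true, y_pred):
    """True-positive rate: of the patients who truly have the disease,
    what fraction did the model flag?"""
    flagged = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(flagged) / len(flagged) if flagged else float("nan")

def audit_by_group(records, max_gap=0.05):
    """Compute per-group sensitivity and flag any group whose rate falls
    more than `max_gap` below the best-performing group."""
    groups = {}
    for group, y_true, y_pred in records:
        labels, preds = groups.setdefault(group, ([], []))
        labels.append(y_true)
        preds.append(y_pred)
    rates = {g: sensitivity(t, p) for g, (t, p) in groups.items()}
    best = max(rates.values())
    flagged = sorted(g for g, r in rates.items() if best - r > max_gap)
    return rates, flagged

# Toy records: (demographic group, has_disease, model_flagged)
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 1, 0), ("B", 0, 0),
]
rates, flagged = audit_by_group(records)
# Group B's sensitivity trails group A's, so it is flagged for review.
```

The same pattern extends to other metrics the article implicates, such as false-negative rates or risk-score calibration, and the disparity threshold would in practice be set by clinical and regulatory judgment rather than a fixed constant.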
AI could greatly improve healthcare outcomes, but we must be vigilant in addressing its potential biases. By prioritizing diverse datasets, building inclusive development teams, and adhering to rigorous regulatory guidelines, we can reduce the risks while maximizing AI’s benefits. In the end, eliminating bias in AI is about more than better medical care: it is essential to ensuring that technological advances contribute to global health equity, delivering fair and accurate care across diverse populations.
Read the original article from The Lancet Digital Health here
With extensive leadership experience in the provider and payer spaces, Peter W. McCauley leads a national team of Medical and Nurse Executives. They offer clinical support to health plan matrix partners in U.S. commercial markets, including new business medical client sales and retention, supplemental health benefits, and customized medical solutions for large national employer clients. They also provide value-based care solutions that translate improved clinical outcomes and cost reductions into total medical cost savings for employer clients. Dr. McCauley continues to practice pediatrics on Chicago’s far south side.
Connect with me on LinkedIn