People of color (POC) have been disproportionately affected by COVID-19, in part due to bias in clinical care. In the midst of the pandemic, the Journal of the American Medical Informatics Association (JAMIA) published an article arguing that biases in AI have had a disproportionately negative impact on people of color. While some thought the pandemic would be the great equalizer, it instead brought to the forefront the disparities people of color face in healthcare, owing in part to biased AI.
“The profile of patients infected and hospitalized with COVID-19 highlights stark health disparities as racial and ethnic minority groups suffer a disproportionate burden of illness and death.1 Two main factors are thought to underlie this burden. First, the economic and social circumstances of many minorities limit their ability to socially distance and increase their odds of infection. Second, the existing health disparities experienced by many minorities negatively affect the disease progression. Furthermore, implicit racial biases existing in clinical care are exaggerated by acute stressors such as the risk of personal infection among healthcare providers.”2
The healthcare industry has readily embraced AI in the belief that it helps guide unbiased clinical decision making. In fact, according to one industry report, nine out of 10 hospitals now have an AI strategy in place and 75% of healthcare executives count AI initiatives as more critical than ever. But AI doesn't always mean unbiased results: existing algorithms are often biased because the data they are built from is not representative of the people they analyze.
AI algorithms created in research labs with access to less diverse populations may be limited in their applicability to a broader, more diverse population. For example, Maine has the highest percentage of people over 65 (20.6%), and almost 95% of its residents identify as White. An algorithm trained only on data from Maine to diagnose medical conditions will therefore skew toward outcomes in White patients over 65, and its accuracy is likely to be lower when applied to a non-White patient under 65. Obtaining a larger, more diverse data set with which to retrain diagnostic algorithms has historically been difficult because of HIPAA and the many challenges of obtaining or pooling patient data across geographic and organizational boundaries.
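The Maine example can be made concrete with a minimal, entirely synthetic sketch. Here a one-parameter "diagnostic model" (a single biomarker threshold) is fit to training data dominated by one demographic group; because the minority group's biomarker distribution is shifted, the learned threshold transfers poorly to it. All numbers, group names, and distributions below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, mean_pos, mean_neg):
    """Synthetic biomarker values: n diseased and n healthy patients,
    with group-specific distributions (hypothetical)."""
    x = np.concatenate([rng.normal(mean_pos, 1.0, n), rng.normal(mean_neg, 1.0, n)])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    return x, y

# Group A dominates the training data (as Maine's population would);
# group B's biomarker distribution is shifted, so A's threshold fits B badly.
xa, ya = make_group(950, mean_pos=2.0, mean_neg=0.0)
xb, yb = make_group(50, mean_pos=3.5, mean_neg=1.5)

x_train = np.concatenate([xa, xb])
y_train = np.concatenate([ya, yb])

# "Train" the model: pick the threshold that maximizes overall training accuracy.
thresholds = np.linspace(x_train.min(), x_train.max(), 200)
accs = [((x_train > t) == y_train).mean() for t in thresholds]
best_t = thresholds[int(np.argmax(accs))]

# Evaluate per group: accuracy on the underrepresented group is markedly lower.
acc_a = ((xa > best_t) == ya).mean()
acc_b = ((xb > best_t) == yb).mean()
print(f"threshold={best_t:.2f}  group A accuracy={acc_a:.2f}  group B accuracy={acc_b:.2f}")
```

The point of the sketch is that nothing in the training procedure is malicious: the model simply optimizes for the population it was shown, and the 5% minority group barely moves the objective.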
Every industry that deals with AI will have its own biases to address. However, there are steps that data scientists can take to decrease bias, including:
- Collaborate with other healthcare institutions that serve different populations, gaining access to more diverse patients and therefore more diverse data
- Improve model accuracy by training on real data rather than synthetic data
- Incorporate third-party data that can mitigate inherent biases in first-party data, supporting fairer, more ethical and responsible use of data
- Avoid anonymizing data, which can strip out information critical to generating high-quality, accurate insights
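As a rough illustration of the first strategy above, pooling records across institutions that serve different populations shifts the combined data set toward the broader population. The institution labels and percentages here are hypothetical:

```python
import numpy as np

# Hypothetical demographic labels for two institutions' patient records
rural = np.array(["white"] * 95 + ["nonwhite"] * 5)   # skewed, as in the Maine example
urban = np.array(["white"] * 55 + ["nonwhite"] * 45)  # more diverse partner institution

pooled = np.concatenate([rural, urban])

print(f"rural nonwhite share:  {(rural == 'nonwhite').mean():.0%}")   # 5%
print(f"pooled nonwhite share: {(pooled == 'nonwhite').mean():.0%}")  # 25%
```

Even one partner with different demographics substantially improves representation, which is why cross-institution collaboration is the first item on the list.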
Until recently, the process of accessing third-party data has been painful, expensive, slow and broken. To overcome the bias created by limited diversity in a given data set, a research lab typically has to complete complex legal agreements; deploy external-data integration technologies that are slow or degrade the compute performance of its networks; or anonymize and manually de-identify its data before it can interact with other entities, which can cost upward of $1 million per data set per collaborator. Luckily, we've identified a solution that addresses these ethical challenges of AI in healthcare below.
TripleBlind addresses all of the strategies listed above to reduce risk of bias created in AI models. The TripleBlind privacy enhancing computation solution enables data collaboration that does not involve moving or sharing the raw data. It allows companies to collaborate around sensitive information while enabling enforcement of HIPAA, GDPR and other data privacy and data residency standards. As a result, TripleBlind arms healthcare systems with a solution that enables them to use real data when creating diagnostic algorithms, eliminates the need for anonymization, and facilitates working with other healthcare systems to safely share patient data sets.
Interested in learning more? Contact us to schedule a free demo at firstname.lastname@example.org.
1Pan D, Sze S, Minhas JS, et al. The impact of ethnicity on clinical outcomes in COVID-19: a systematic review. EClinicalMedicine 2020; 23: 100404.