AI pitfalls and what not to do: mitigating bias in AI
Various forms of artificial intelligence (AI) applications are being deployed and used in many healthcare systems. As the use of these applications increases, we are learning how these models fail and how they can perpetuate bias. With these new lessons, we need to prioritize bias evaluation and mitigation for radiology applications, while not ignoring the impact of changes in the larger enterprise AI deployment that may have a downstream effect on model performance. In this paper, we provide an updated review of known pitfalls causing AI bias and discuss strategies for mitigating these biases within the context of AI deployment in the larger healthcare enterprise. We describe these pitfalls by framing them within the larger AI lifecycle, from problem definition through data set selection and curation, model training, and deployment, emphasizing that bias exists across a spectrum and is a sequela of a combination of both human and machine factors.
You can access the publication here.
Judy Wawira Gichoya, MD, Kaesha Thomas, MD, Leo Anthony Celi, MD, Nabile Safdar, MD, MPH, Imon Banerjee, PhD, John D Banja, PhD, Laleh Seyyed-Kalantari, PhD, Hari Trivedi, MD, Saptarshi Purkayastha, PhD
How to access
1. Click on the link above
2. Register and set up a password. (If you are already registered, log in with your email address and password).
3. Follow the instructions to access the article
Please note: before you can use BIR content, you must complete a one-time free registration with the BIR. During registration, indicate that you are a DRG member.