Explainable AI for Diabetic Retinopathy Detection: A Systematic Review of Machine Learning Approaches
Abstract
The rising number of preventable blindness cases caused by Diabetic Retinopathy (DR) outstrips the capacity of manual screening, creating a need for fast, efficient automated methods. Deep learning models achieve expert-level accuracy in DR image analysis, but their opaque decision-making motivates Explainable AI (XAI) methods such as Grad-CAM, SHAP, and LIME to provide transparency. This review evaluated 52 studies published between 2015 and 2025 that integrated deep learning with XAI techniques for DR detection. ResNet and EfficientNet variants were the most common deep learning backbones, and Grad-CAM was the leading XAI method. The findings indicate that XAI integration preserves high model accuracy and shows promise for enhancing human-AI collaboration, yet the evaluation of XAI explanations remains insufficient: most studies rely on informal visual inspection rather than established quantitative metrics or validation by ophthalmologists. The field possesses advanced classifiers, but safe clinical adoption urgently requires standardized explanation-assessment tools, detailed lesion-specific validation, and user testing in real clinical environments.
Keywords:
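To make the review's leading XAI method concrete, the sketch below shows the core Grad-CAM computation: the gradient of a target class logit with respect to a convolutional feature map yields per-channel importance weights, whose weighted, ReLU-rectified sum forms a saliency heatmap. The tiny CNN, the 5-class output (mirroring common DR severity grades), and all shapes are illustrative assumptions, not details taken from any reviewed study.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    """Stand-in for a DR classifier backbone (e.g., a ResNet variant).
    Architecture is a hypothetical minimal example."""

    def __init__(self, num_classes: int = 5):  # 5 = common DR severity grades
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):
        fmap = self.features(x)          # (B, 16, H, W) last conv feature map
        pooled = fmap.mean(dim=(2, 3))   # global average pooling
        return self.head(pooled), fmap


def grad_cam(model: nn.Module, x: torch.Tensor, target_class: int) -> torch.Tensor:
    """Grad-CAM: weight feature-map channels by the gradient of the target
    class logit, sum, rectify, and normalize to a [0, 1] heatmap."""
    model.eval()
    logits, fmap = model(x)
    fmap.retain_grad()                   # keep gradient of the non-leaf tensor
    logits[0, target_class].backward()
    weights = fmap.grad.mean(dim=(2, 3), keepdim=True)  # per-channel importance
    cam = F.relu((weights * fmap).sum(dim=1))           # (B, H, W), non-negative
    cam = cam / (cam.max() + 1e-8)                      # normalize to [0, 1]
    return cam.detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyCNN()
    image = torch.randn(1, 3, 32, 32)    # placeholder for a fundus image
    heatmap = grad_cam(model, image, target_class=2)
    print(heatmap.shape)                 # (1, 32, 32), same spatial size as input
```

In practice the heatmap is upsampled to the input resolution and overlaid on the fundus photograph so a reader can check whether high-saliency regions coincide with lesions such as microaneurysms or exudates, which is precisely the lesion-level validation the review finds lacking.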



