Modular Deep Learning Framework for Ovarian Ultrasound Image Segmentation Using Enhanced U-Net Architecture and BCE–Dice Optimization

Authors

  • Mrs. Shilpa Sharma, Dr. Ritu Tandon

Keywords

Ovarian Ultrasound, U-Net, Dice Coefficient, BCE Loss, Image Segmentation, MMOTU Dataset, Deep Learning, Medical Imaging.

Abstract

Ovarian cancer diagnosis relies heavily on precise lesion segmentation from ultrasound images, which remains challenging due to speckle noise, low contrast, and anatomical variability. This study proposes a Modular Deep Learning Framework for Ovarian Ultrasound Image Segmentation using an enhanced U-Net architecture and composite Binary Cross-Entropy (BCE)–Dice optimization, designed to improve segmentation accuracy, computational efficiency, and generalization. The framework integrates five modular algorithms covering dataset preprocessing, splitting, model definition, training, and evaluation. The methodology begins by preprocessing 1,469 MMOTU ultrasound image–mask pairs, resizing them to 256×256 pixels and normalizing intensities for consistency. The dataset is divided into 80% for training and 20% for validation using efficient PyTorch DataLoaders. The enhanced U-Net model employs convolutional, batch normalization, and ReLU layers with skip connections to preserve spatial detail; it is trained with the Adam optimizer (learning rate = 1×10⁻³) and a composite BCE–Dice loss function that improves overlap precision. Quantitative results demonstrate a Dice coefficient of 0.96, IoU of 0.90, precision of 0.94, and recall of 0.92, outperforming existing GAN-based and hybrid segmentation models such as MGGAN + U-Net. Furthermore, image quality evaluation achieved SSIM = 0.9834, FID = 12.85, and LPIPS = 0.03457, confirming improved perceptual fidelity and structural consistency. The proposed framework significantly reduces computational complexity and training time while maintaining superior accuracy.
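
As an illustration of the composite objective described above, the following PyTorch sketch combines binary cross-entropy with a soft Dice term on sigmoid outputs. The class name BCEDiceLoss and the equal 0.5/0.5 term weighting are illustrative assumptions, not details taken from the paper's released implementation.

# Minimal sketch of a composite BCE–Dice loss, assuming equal weighting of the two terms.
import torch
import torch.nn as nn


class BCEDiceLoss(nn.Module):
    """Combines binary cross-entropy with a soft Dice term on sigmoid outputs."""

    def __init__(self, bce_weight: float = 0.5, dice_weight: float = 0.5, eps: float = 1e-6):
        super().__init__()
        self.bce = nn.BCEWithLogitsLoss()
        self.bce_weight = bce_weight
        self.dice_weight = dice_weight
        self.eps = eps

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        bce_loss = self.bce(logits, targets)
        probs = torch.sigmoid(logits)
        # Soft Dice per sample: 2|P ∩ G| / (|P| + |G|), then averaged over the batch
        intersection = (probs * targets).sum(dim=(1, 2, 3))
        union = probs.sum(dim=(1, 2, 3)) + targets.sum(dim=(1, 2, 3))
        dice = (2.0 * intersection + self.eps) / (union + self.eps)
        dice_loss = 1.0 - dice.mean()
        return self.bce_weight * bce_loss + self.dice_weight * dice_loss


if __name__ == "__main__":
    # Toy forward pass on 256×256 single-channel masks, matching the
    # preprocessing resolution reported in the abstract.
    model_output = torch.randn(2, 1, 256, 256)                 # raw logits from a U-Net head
    ground_truth = torch.randint(0, 2, (2, 1, 256, 256)).float()
    criterion = BCEDiceLoss()
    loss = criterion(model_output, ground_truth)
    print(f"BCE–Dice loss: {loss.item():.4f}")

In such a combination the BCE term provides stable pixel-wise gradients, while the Dice term directly rewards region overlap, which is why composite losses of this form are widely used for medical image segmentation.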


Published

2025-12-04