A Secure and Scalable DDPG-Based Framework for Dynamic Hospital Occupancy Management in Cloud-Enabled Healthcare Networks
Keywords:
Reinforcement Learning, Deep Deterministic Policy Gradient (DDPG), Hospital Occupancy Management, Cloud Computing, Attribute-Based Encryption (ABE)
Abstract
Hospital overcrowding and inefficient bed allocation remain persistent challenges in multispecialty healthcare systems, particularly during pandemics and peak admission periods. Traditional scheduling methods fail to adapt dynamically to fluctuating patient inflows, resulting in long waiting times and reduced quality of care. This study introduces a secure and adaptive hospital occupancy management framework that integrates Deep Deterministic Policy Gradient (DDPG) reinforcement learning with cloud-based deployment and Ciphertext-Policy Attribute-Based Encryption (CP-ABE). Patient data are preprocessed through normalization and imputation, then grouped into clinically homogeneous cohorts using Mahalanobis distance clustering. The DDPG agent learns optimized allocation strategies that minimize wait times, improve fairness, and maximize bed utilization. Deployed on AWS cloud infrastructure, the system ensures scalability and real-time integration across hospital networks, while CP-ABE enforces fine-grained access control for data security. Experimental evaluation on a dataset of 50,000 patient records demonstrates superior performance compared to conventional machine learning and rule-based methods, achieving 87.2% bed utilization, an average 12.3-minute reduction in wait time, and faster convergence with a runtime of 41.3 seconds. The results establish the proposed framework as a robust, secure, and scalable solution for real-time hospital occupancy management in cloud-enabled healthcare ecosystems.
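As a point of reference for the cohort-formation step named in the abstract, the following is a minimal sketch (not the authors' implementation) of assigning patients to clinically homogeneous cohorts by Mahalanobis distance. The feature set, cohort count, and centroid initialization shown here are illustrative assumptions.

```python
# Illustrative sketch of Mahalanobis-distance cohort assignment.
# Feature names, cohort count, and centroids are assumptions, not the paper's settings.
import numpy as np

def mahalanobis_assign(X, centroids):
    """Assign each row of X (patients x features) to the nearest centroid
    under the Mahalanobis metric induced by the data covariance."""
    cov = np.cov(X, rowvar=False)
    VI = np.linalg.pinv(cov)  # inverse covariance (pseudo-inverse for numerical stability)
    assignments = np.empty(len(X), dtype=int)
    for i, x in enumerate(X):
        diffs = centroids - x  # (k, d) differences to each of the k centroids
        # Squared Mahalanobis distance: diff^T * VI * diff for each centroid
        d2 = np.einsum("kd,de,ke->k", diffs, VI, diffs)
        assignments[i] = int(np.argmin(d2))
    return assignments

# Toy usage with synthetic, normalized patient features
# (e.g. age, acuity score, expected length of stay, comorbidity index -- assumed).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
centroids = X[rng.choice(len(X), size=3, replace=False)]
print(mahalanobis_assign(X, centroids)[:10])
```

In such a pipeline, the resulting cohort label would typically be folded into the state representation consumed by the DDPG agent alongside occupancy and queue statistics; the exact state design used in the paper is not specified in the abstract.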



