TY - GEN
A1 - Zhu, T
A1 - Li, K
A1 - Georgiou, P
TI - Personalized Dual-Hormone Control for Type 1 Diabetes Using Deep Reinforcement Learning
T3 - Studies in Computational Intelligence
SP - 45
EP - 53
N2 - We introduce a dual-hormone control algorithm for people with Type 1 Diabetes (T1D) which uses deep reinforcement learning (RL). Specifically, double dilated recurrent neural networks are used to learn the control strategy, trained by a variant of Q-learning. The inputs to the model include the real-time sensed glucose and meal carbohydrate content, and the outputs are the actions necessary to deliver dual-hormone (basal insulin and glucagon) control. Without prior knowledge of the glucose-insulin metabolism, we develop a data-driven model using the UVA/Padova Simulator. We first pre-train a generalized model using long-term exploration in an environment with average T1D subject parameters provided by the simulator, then adopt importance sampling to train personalized models for each individual. In silico, the proposed algorithm largely reduces adverse glycemic events and achieves a time in range (the percentage of time in normoglycemia) of 93% for adults and 83% for adolescents, significantly outperforming previous approaches. These results indicate that deep RL has great potential to improve the treatment of chronic diseases such as diabetes.
N1 - This version is the author accepted manuscript. For information on re-use, please refer to the publisher's terms and conditions.
AV - public
CY - Cham, Switzerland
PB - Springer
UR - https://doi.org/10.1007/978-3-030-53352-6_5
Y1 - 2020///
ID - discovery10117791
ER -