TY  - CONF
SP  - 2481
N2  - Generalization error bounds are essential to understanding machine learning algorithms. This paper presents novel expected generalization error upper bounds based on the average joint distribution between the output hypothesis and each input training sample. Multiple generalization error upper bounds based on different information measures are provided, including Wasserstein distance, total variation distance, KL divergence, and Jensen-Shannon divergence. Due to the convexity of the information measures, the proposed bounds in terms of Wasserstein distance and total variation distance are shown to be tighter than their counterparts based on individual samples in the literature. An example is provided to demonstrate the tightness of the proposed generalization error bounds.
ID  - discovery10175497
AV  - public
A1  - Aminian, Gholamali
A1  - Bu, Yuheng
A1  - Wornell, Gregory W.
A1  - Rodrigues, Miguel R. D.
Y1  - 2022///
EP  - 2486
UR  - https://doi.org/10.1109/ISIT50566.2022.9834474
N1  - This version is the author accepted manuscript. For information on re-use, please refer to the publisher's terms and conditions.
TI  - Tighter Expected Generalization Error Bounds via Convexity of Information Measures
PB  - Institute of Electrical and Electronics Engineers (IEEE)
ER  -