TY  - JOUR
KW  - AutoML
KW  -  Computer Science
KW  -  Artificial Intelligence
KW  -  Computer vision
KW  -  Multi-armed bandits
KW  -  Neural architecture search
KW  -  Object recognition
KW  -  Science & Technology
KW  -  Technology
N1  - This version is the author accepted manuscript. For information on re-use, please refer to the publisher's terms and conditions.
IS  - 1
EP  - 96
SN  - 0885-6125
TI  - Manas: multi-agent neural architecture search
AV  - public
A1  - Lopes, Vasco
A1  - Carlucci, Fabio Maria
A1  - Esperança, Pedro M
A1  - Singh, Marco
A1  - Yang, Antoine
A1  - Gabillon, Victor
A1  - Xu, Hang
A1  - Chen, Zewei
A1  - Wang, Jun
JF  - Machine Learning
PB  - Springer Verlag
VL  - 113
SP  - 73
N2  - The Neural Architecture Search (NAS) problem is typically formulated as a graph search problem where the goal is to learn the optimal operations over edges in order to maximize a graph-level global objective. Due to the large architecture parameter space, efficiency is a key bottleneck that prevents NAS from being used in practice. In this work, we address this issue by framing NAS as a multi-agent problem where agents control a subset of the network and coordinate to reach optimal architectures. We provide two distinct lightweight implementations, with memory requirements reduced to 1/8th of the state of the art and performance above that of much more computationally expensive methods. Theoretically, we demonstrate vanishing regrets of the form O(√T), with T being the total number of rounds. Finally, we perform experiments on CIFAR-10 and ImageNet and, aware that random search and random sampling are effective (if often ignored) baselines, we conduct additional experiments on 3 alternative datasets and 2 network configurations, with complexity constraints, achieving competitive results in comparison with these baselines and other methods.
ID  - discovery10194864
UR  - http://dx.doi.org/10.1007/s10994-023-06379-w
Y1  - 2024/01//
ER  -