TY  - CONF
N1  - This version is the author accepted manuscript. For information on re-use, please refer to the publisher's terms and conditions.
TI  - A software-defined architecture and prototype for disaggregated memory rack scale systems
EP  - 307
AV  - public
SP  - 300
Y1  - 2018/04/23/
CY  - Pythagorion, Greece
KW  - disaggregation
KW  - extended memory
KW  - serverless computing
KW  - pooled computing
KW  - rack scale systems
KW  - rack scale datacenters
KW  - software-defined systems
KW  - cloud datacenters
KW  - internet-scale computer
A1  - Syrivelis, D
A1  - Reale, A
A1  - Katrinis, K
A1  - Syrigos, I
A1  - Bielski, M
A1  - Theodoropoulos, D
A1  - Pnevmatikatos, DN
A1  - Zervas, G
ID  - discovery10069632
N2  - Disaggregation and rack-scale systems have the potential of drastically decreasing TCO and increasing utilization of cloud datacenters, while maintaining performance. In this paper, we present a novel rack-scale system architecture featuring software-defined remote memory disaggregation. Our hardware design and operating system extensions enable unmodified applications to dynamically attach to memory segments residing on physically remote memory pools and to use such remote segments in a byte-addressable manner, as if they were local to the application. Our system also features a control plane that automates the software-defined dynamic matching of compute to memory resources, as driven by datacenter workload needs. We prototyped our system on the commercially available Zynq UltraScale+ MPSoC platform. To our knowledge, this is the first time a software-defined disaggregated system has been prototyped on commercial hardware and evaluated through industry-standard software benchmarks. Our initial results, using benchmarks that are artificially highly adversarial in terms of memory bandwidth, show that disaggregated memory access exhibits a round-trip latency of only 134 clock cycles and a throughput penalty as low as 55% relative to locally attached memory. We also discuss estimates of how our findings may translate to applications with pragmatically milder memory aggressiveness, as well as the innovation avenues across the stack opened up by our work.
PB  - IEEE
UR  - https://doi.org/10.1109/SAMOS.2017.8344644
ER  -