August 28, 2022 to September 1, 2022
Serbian Academy of Sciences and Arts – SASA
Europe/Belgrade timezone

Neural network distributed training and optimization library (NNLO)

Aug 30, 2022, 6:00 PM
1h 30m
Lobby (Faculty of Physics)


Board: S05-HEP-216
Poster presentation: S05 High Energy Physics (Particles and Fields), Poster session


Primary author: Irena Veljanovic (CERN)


With deep learning becoming very popular among LHC experiments, it is expected that speeding up network training and optimization will soon be an issue. To this end, we are developing a dedicated tool at CMS, the Neural Network Learning and Optimization library (NNLO). NNLO aims to support both widely used deep learning libraries, TensorFlow and PyTorch, and should help engineers and scientists scale neural network training and hyperparameter optimization more easily. Supported training configurations are a single GPU, a single node with multiple GPUs, and multiple nodes with multiple GPUs. One advantage of the NNLO library is the seamless transition between resources, enabling researchers to quickly scale up from workstations to HPC and cloud resources. Compared to manual distributed training, NNLO facilitates the transition from a single GPU to multiple GPUs without losing performance.
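The abstract does not show NNLO's API, but the kind of boilerplate it abstracts away can be sketched with plain PyTorch `DistributedDataParallel`. The example below is a minimal, hypothetical illustration (toy linear model, random data, single-machine `gloo` rendezvous), not NNLO code; in a real multi-GPU or multi-node run, one such process would be launched per GPU with the appropriate rank and world size.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def train(rank: int, world_size: int) -> float:
    # Single-machine rendezvous; a tool like NNLO would manage this setup
    # (addresses, ranks, launch) across nodes instead of the user.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    model = torch.nn.Linear(4, 1)       # toy network standing in for a real one
    ddp_model = DDP(model)              # wraps the model for gradient all-reduce
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.1)
    loss_fn = torch.nn.MSELoss()

    for _ in range(5):
        x = torch.randn(8, 4)           # random stand-in for a real data shard
        y = torch.randn(8, 1)
        optimizer.zero_grad()
        loss = loss_fn(ddp_model(x), y)
        loss.backward()                 # DDP synchronizes gradients across ranks here
        optimizer.step()

    dist.destroy_process_group()
    return float(loss)

if __name__ == "__main__":
    # world_size=1 runs the same code on a workstation; scaling out only
    # changes the launch configuration, which is the transition NNLO targets.
    final_loss = train(rank=0, world_size=1)
    print(f"final loss: {final_loss:.4f}")
```

Because the training step is identical at every scale, moving from one GPU to many is mostly a matter of process launch and rendezvous configuration, which is exactly the part a library can take over.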
