Description
With deep learning becoming increasingly popular among the LHC experiments, speeding up network training and optimization is expected to become a pressing issue. To this end, we are developing a dedicated tool at CMS, the Neural Network Learning and Optimization library (NNLO). NNLO aims to support both of the widely used deep learning libraries, TensorFlow and PyTorch, and to help engineers and scientists scale neural network training and hyperparameter optimization more easily. Supported training configurations are a single GPU, a single node with multiple GPUs, and multiple nodes with multiple GPUs. A key advantage of NNLO is the seamless transition between resources, enabling researchers to scale up quickly from workstations to HPC and cloud resources. Compared to manual distributed training, NNLO facilitates the move from a single GPU to multiple GPUs without loss of performance.
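The abstract does not show NNLO's own API. As a point of comparison, the sketch below illustrates the manual boilerplate that standard PyTorch DistributedDataParallel training requires on a single node with multiple GPUs, i.e. the kind of setup code a tool like NNLO is meant to hide. The toy linear model, the data, and the train function are purely illustrative assumptions; only standard torch.distributed and torchrun facilities are used.

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def train():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for every spawned process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy model and random data standing in for a real physics workload.
    model = torch.nn.Linear(16, 1).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    dataset = TensorDataset(torch.randn(1024, 16), torch.randn(1024, 1))

    # DistributedSampler shards the data so each GPU sees a distinct slice.
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.MSELoss()
    for epoch in range(3):
        sampler.set_epoch(epoch)  # reshuffle the shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            # DDP averages gradients across all GPUs during backward().
            loss_fn(model(x), y).backward()
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    train()

Launching this script with torchrun --nproc_per_node=4 train.py runs it on four GPUs of one node; going to multiple nodes additionally requires rendezvous flags such as --nnodes and --rdzv_endpoint. Keeping this launch and sharding logic out of the user's code, so that the same training script moves from a workstation to an HPC cluster unchanged, is the kind of seamless transition the abstract attributes to NNLO.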