Jun 25 – 28, 2024
ETH Zurich
Europe/Zurich timezone

Contrastive Learning for Robust Representations of Neutrino Data

Jun 27, 2024, 1:25 PM
25m
HCI J4 (ETH Zurich)

ETH Zürich, Hönggerberg campus, Stefano-Franscini-Platz 5, 8093 Zurich, Switzerland.

Speaker

Alex Wilkinson (University College London)

Description

A key challenge in applying deep learning to neutrino experiments is overcoming the discrepancy between data and the Monte Carlo simulation used in training. To mitigate bias when deep learning models are used as part of an analysis, they must be made robust to mismodelling in the detector simulation. We demonstrate that contrastive learning can be applied as a pre-training step to minimise the dependence of downstream tasks on the detector simulation. This is achieved with a self-supervised algorithm that contrasts different simulated views and augmentations of the same event during training. The contrastive model can then be frozen, and the representations it generates used as training inputs for classification and regression tasks. We use sparse tensor networks to apply this method to 3D LArTPC data. We show that the contrastive pre-training produces representations that are robust to shifts in the detector simulation, both in and out of sample.
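To illustrate the pipeline described above, here is a minimal PyTorch sketch of contrastive pre-training followed by a frozen-encoder downstream classifier. The abstract does not specify the contrastive objective, so a SimCLR-style NT-Xent loss is assumed; the dense MLP Encoder is a placeholder standing in for the sparse tensor network applied to the 3D LArTPC data, and all names here (Encoder, ProjectionHead, nt_xent_loss, pretrain_step, downstream_step, the 5-class output) are hypothetical, not the authors' implementation.

# Hypothetical sketch: SimCLR-style contrastive pre-training, then a frozen
# encoder feeding a downstream classifier. Dense tensors stand in for the
# sparse tensor networks used on real 3D LArTPC data.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Placeholder event encoder producing a fixed-size representation."""
    def __init__(self, in_dim=256, rep_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(),
                                 nn.Linear(512, rep_dim))
    def forward(self, x):
        return self.net(x)

class ProjectionHead(nn.Module):
    """Small MLP mapping representations into the contrastive embedding space."""
    def __init__(self, rep_dim=128, proj_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(rep_dim, rep_dim), nn.ReLU(),
                                 nn.Linear(rep_dim, proj_dim))
    def forward(self, h):
        return self.net(h)

def nt_xent_loss(z1, z2, temperature=0.1):
    """NT-Xent loss: two views of the same event are positives,
    all other events in the batch act as negatives."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D)
    sim = z @ z.t() / temperature                        # cosine similarities
    sim.fill_diagonal_(float('-inf'))                    # drop self-similarity
    n = z1.size(0)
    # The positive partner of view i is i + n (and vice versa).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# --- contrastive pre-training on two views / augmentations of each event ---
encoder, head = Encoder(), ProjectionHead()
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)

def pretrain_step(view1, view2):
    z1, z2 = head(encoder(view1)), head(encoder(view2))
    loss = nt_xent_loss(z1, z2)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# --- downstream task: freeze the encoder, train on its representations ---
for p in encoder.parameters():
    p.requires_grad = False
classifier = nn.Linear(128, 5)          # e.g. 5 hypothetical event classes
clf_opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)

def downstream_step(events, labels):
    with torch.no_grad():
        reps = encoder(events)          # frozen representations
    loss = F.cross_entropy(classifier(reps), labels)
    clf_opt.zero_grad(); loss.backward(); clf_opt.step()
    return loss.item()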

Type of contribution: Talk (30 minutes)

Primary authors

Alex Wilkinson (University College London)
Radi Radev (CERN)
Dr Saul Alonso Monsalve (ETH Zurich)

Presentation materials