Exploring the interpretability of deep neural networks used for gravitational lens finding with a sensitivity probe

C. Jacobs, K. Glazebrook, A. K. Qin, T. Collett

Research output: Contribution to journal › Article › peer-review


Abstract

Artificial neural networks are finding increasing use in astronomy, but understanding the limitations of these models can be difficult. We utilize a statistical method, a sensitivity probe, designed to complement established methods for interpreting neural network behavior by quantifying the sensitivity of a model's performance to various properties of its inputs. We apply this method to neural networks trained to classify images of galaxy-galaxy strong lenses in the Dark Energy Survey. We find that the networks are highly sensitive to color, to the simulated PSF used in training, and to occlusion of light from the lensed source, but are insensitive to Einstein radius; performance degrades smoothly with source and lens magnitudes. From this we identify weaknesses in the networks' training sets, particularly the over-sensitivity to PSF, and constrain the selection function of the lens-finder as a function of galaxy photometric magnitudes, with accuracy decreasing significantly where the g-band magnitude of the lensed source is greater than 21.5 and the r-band magnitude of the lens is less than 19.
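
As a rough illustration of the kind of sensitivity probe described in the abstract, the sketch below perturbs one property of a batch of input images to increasing degrees (here, dimming a central region to mimic a fainter lensed source) and records how the mean classifier score shifts relative to the unperturbed baseline. This is a minimal, hypothetical sketch and not the authors' implementation: the names sensitivity_probe, dim_centre, and the stand-in model are assumptions, and the only requirements are a NumPy image array and any callable mapping images to scores.

```python
import numpy as np

def sensitivity_probe(model_score, images, perturb, levels):
    """Measure how a classifier's mean score changes as one input
    property is perturbed to increasing degrees.

    model_score : callable mapping a batch of images -> scores in [0, 1]
    images      : array of shape (N, H, W, bands)
    perturb     : callable (images, level) -> perturbed images
    levels      : sequence of perturbation strengths to test
    """
    baseline = model_score(images).mean()
    results = []
    for level in levels:
        score = model_score(perturb(images, level)).mean()
        results.append((level, score, score - baseline))
    return baseline, results

# Example perturbation (illustrative only): dim the central region of each
# image by a given magnitude offset, mimicking a fainter lensed source.
def dim_centre(images, mag_offset):
    out = images.copy()
    h, w = out.shape[1:3]
    cy, cx = h // 2, w // 2
    r = min(h, w) // 6
    factor = 10 ** (-0.4 * mag_offset)  # magnitude offset -> flux ratio
    out[:, cy - r:cy + r, cx - r:cx + r, :] *= factor
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_images = rng.random((32, 44, 44, 3)).astype(np.float32)
    # Stand-in for a trained lens-finder: score rises with central flux.
    fake_model = lambda x: x[:, 15:29, 15:29, :].mean(axis=(1, 2, 3))
    base, rows = sensitivity_probe(fake_model, fake_images, dim_centre,
                                   levels=[0.5, 1.0, 2.0])
    print(f"baseline score: {base:.3f}")
    for level, score, delta in rows:
        print(f"dim by {level:.1f} mag -> score {score:.3f} ({delta:+.3f})")
```

In this framing, a property to which the model is insensitive (such as Einstein radius in the paper) would show little change in score across levels, while a property the model over-relies on (such as the simulated PSF) would show a sharp drop.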
Original language: English
Article number: 100535
Number of pages: 16
Journal: Astronomy and Computing
Volume: 38
Early online date: 23 Dec 2021
DOIs
Publication status: Published - 1 Jan 2022

Keywords

  • astro-ph.IM
  • methods: statistical
  • gravitational lensing
  • neural networks
