In any imaging survey, accurately measuring the astronomical background light is crucial for good photometry. This paper introduces BKGnet, a deep neural network that predicts the background and its associated error. BKGnet has been developed for data from the Physics of the Accelerating Universe Survey (PAUS), an imaging survey using a camera with 40 narrow-band filters (PAUCam). Images obtained with PAUCam are affected by scattered light: an optical effect in which multiply reflected light deposits energy in specific detector regions, affecting the science measurements. Fortunately, scattered light is not a random effect: it can be predicted and corrected for. We find that BKGnet background predictions are very robust to distorting effects, while remaining statistically accurate. On average, BKGnet improves the photometric flux measurements by 7 per cent, and by up to 20 per cent at the bright end. BKGnet also removes a systematic trend with magnitude in the i-band background error estimate that is present with the current PAU data management method. With BKGnet, we reduce the photometric redshift outlier rate by 35 per cent for the best 20 per cent of galaxies selected with a photometric quality parameter.
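The abstract states that BKGnet predicts both the background level and its associated error. A common way to train a network for a value together with its uncertainty is a Gaussian negative log-likelihood loss over a predicted mean and (log-)variance; the abstract does not specify BKGnet's loss or architecture, so the sketch below is only an illustration of that general idea, with hypothetical variable names and toy data:

```python
import numpy as np

def gaussian_nll(bkg_obs, bkg_pred, log_var_pred):
    """Per-sample Gaussian negative log-likelihood (up to a constant),
    averaged over samples. A network minimising this is pushed to predict
    both an accurate background and a well-calibrated error.
    NOTE: illustrative formulation only; not taken from the paper."""
    var = np.exp(log_var_pred)  # predict log-variance so var stays positive
    return 0.5 * np.mean(log_var_pred + (bkg_obs - bkg_pred) ** 2 / var)

# Toy data: background counts with a smooth scattered-light-like gradient,
# observed with Gaussian noise of sigma = 2 counts.
rng = np.random.default_rng(0)
true_bkg = 100.0 + np.linspace(0.0, 5.0, 64)
observed = true_bkg + rng.normal(0.0, 2.0, size=64)

# A predictor reporting the correct variance (sigma^2 = 4) scores a lower
# NLL than one that grossly overestimates its uncertainty (sigma^2 = 100).
loss_calibrated = gaussian_nll(observed, true_bkg, np.full(64, np.log(4.0)))
loss_inflated = gaussian_nll(observed, true_bkg, np.full(64, np.log(100.0)))
```

The point of the toy comparison is that the loss rewards calibrated error bars, not just small ones: underestimating the variance is also penalised, through the residual term.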
- Instrumentation: photometers
- Light pollution
- Techniques: photometric
Data availability statement for 'The PAU survey: background light estimation with deep learning techniques'.
Cabayol-Garcia, L. (Creator), Eriksen, M. (Creator), Alarcon, A. (Creator), Amara, A. (Creator), Carretero, J. (Creator), Casas, R. (Creator), Castander, F. J. (Creator), Fernández, E. (Creator), García-Bellido, J. (Creator), Gaztanaga, E. (Creator), Hoekstra, H. (Creator), Miquel, R. (Creator), Neissner, C. (Creator), Padilla, C. (Creator), Sánchez, E. (Creator), Serrano, S. (Creator), Sevilla-Noarbe, I. (Creator), Siudek, M. (Creator), Tallada, P. (Creator) & Tortorelli, L. (Creator), Oxford University Press, 23 Nov 2019