Abstract
Imaging systems’ performance at low light intensity is limited by shot noise, which becomes increasingly strong as the power of the light source decreases. In this Letter, we experimentally demonstrate the use of deep neural networks to recover objects illuminated with weak light, achieving better performance than the classical Gerchberg-Saxton phase retrieval algorithm at an equivalent signal-to-noise ratio. The deep neural network leverages the prior contained in the training image set to detect features with a signal-to-noise ratio close to one. We apply this principle to a phase retrieval problem and show successful recovery of the object’s most salient features with as little as one photon per detector pixel on average in the illumination beam. We also show that the phase reconstruction is significantly improved by training the neural network with an initial estimate of the object, as opposed to training it with the raw intensity measurement.
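For readers unfamiliar with the classical baseline the abstract compares against, the following is a minimal sketch of the Gerchberg-Saxton iteration: alternate between the object plane and the measurement plane, enforcing the known amplitude in each while keeping the current phase estimate. The function name, the plain-FFT propagation model, and the parameters are illustrative assumptions, not the exact experimental setup of the Letter.

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, iterations=200, seed=0):
    """Sketch of the classical Gerchberg-Saxton loop: recover a phase
    profile consistent with known amplitudes in two Fourier-conjugate
    planes (here modeled by a plain 2D FFT, an assumption for brevity)."""
    rng = np.random.default_rng(seed)
    # Start from a random phase guess in the object plane
    phase = rng.uniform(0.0, 2.0 * np.pi, source_amp.shape)
    for _ in range(iterations):
        # Impose the known amplitude in the object (source) plane
        field = source_amp * np.exp(1j * phase)
        # Propagate to the measurement plane
        far_field = np.fft.fft2(field)
        # Replace the amplitude with the measured one, keep the phase
        far_field = target_amp * np.exp(1j * np.angle(far_field))
        # Propagate back and update the object-plane phase estimate
        phase = np.angle(np.fft.ifft2(far_field))
    return phase
```

At very low photon counts, `target_amp` is dominated by shot noise, which is precisely the regime where this amplitude-substitution step degrades and where the Letter's learned prior provides an advantage.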
Received 25 June 2018
DOI: https://doi.org/10.1103/PhysRevLett.121.243902
© 2018 American Physical Society