Penn State scientists have created a device that mimics the eye's red-, green-, and blue-sensitive cone cells and pairs them with a neural network to process visual information and produce images.
Photodetectors, which convert light into electrical signals, are fundamental to optics. Narrowband photodetectors respond only to specific bands of the spectrum, such as the red, green, and blue that compose visible light. The silicon photodetectors in cameras, by contrast, detect light indiscriminately, without differentiating color, so a filter must divide the incoming light into red, green, and blue, with each color reaching only one section of the sensor. Roughly two-thirds of the light is lost in the process.
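The two-thirds figure follows from simple arithmetic: each filtered pixel site passes only one of the three color bands and discards the rest. A minimal back-of-the-envelope sketch (the function name and values are illustrative, not from the paper):

```python
# Illustrative only: estimate light lost when each pixel site passes
# a single color band out of three (as in a filtered silicon sensor).

def surviving_fraction(bands_passed: int, bands_total: int = 3) -> float:
    """Fraction of broadband light that survives the color filter."""
    return bands_passed / bands_total

surviving = surviving_fraction(1)   # each site passes one band: 1/3 survives
lost = 1.0 - surviving              # ~2/3 is absorbed by the filter

print(f"light lost to filtering: {lost:.0%}")  # → light lost to filtering: 67%
```

A filter-free narrowband detector array sidesteps this loss because each detector is itself color-selective.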
Taking inspiration from nature, the Penn State team built a sensor array of narrowband perovskite photodetectors that mimic the eye's cone cells, and paired it with a neuromorphic algorithm that plays the role of the neural network behind the retina, producing accurate images.
The researchers suggest that the design avoids the information loss caused by filtering and could enable future camera sensing with higher spatial resolution. Because the perovskite devices generate power while absorbing light, much like solar cells, they could also enable battery-free cameras. In that respect they resemble our eyes, which capture information from light without an external energy supply. When light strikes a perovskite, it creates electron-hole pairs; driving the electrons and holes in opposite directions generates an electrical current. The team achieved narrowband response by making the perovskites deliberately unbalanced, with faster hole transport, and by tailoring the device architecture.
A sensor array made with the perovskite detectors collected information from a projected image, and a neuromorphic algorithm, one that mimics how the human brain processes signals, reconstructed the image from three sub-layers corresponding to red, green, and blue. When the team instead merged the signals from the color layers directly, the result lacked clarity; the neuromorphic processing, modeled on the neural network of the human retina, produced a markedly sharper image. Pairing the device with the algorithm underscores how important neural processing is to human vision.
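The paper's actual algorithm is not described here, but the idea of processing each color sub-layer separately before merging, rather than stacking the raw detector signals directly, can be mocked up with NumPy. The per-channel normalization below stands in for whatever learned processing the real algorithm performs; all names, sizes, and values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Mock sensor readouts: three 4x4 sub-layers (red, green, blue), each with
# its own response scale and offset, as a narrowband detector array might give.
true_rgb = rng.uniform(0.0, 1.0, size=(3, 4, 4))
gains = np.array([0.5, 1.8, 1.1]).reshape(3, 1, 1)
offsets = np.array([0.05, -0.02, 0.10]).reshape(3, 1, 1)
raw = gains * true_rgb + offsets

# Direct merge: stack the raw channels as-is -> channel imbalance remains.
direct = np.stack([raw[0], raw[1], raw[2]], axis=-1)

def normalize(layer):
    """Rescale one sub-layer to [0, 1]; a stand-in for per-channel processing."""
    lo, hi = layer.min(), layer.max()
    return (layer - lo) / (hi - lo)

# Process each color sub-layer independently, then merge into an RGB image.
processed = np.stack([normalize(raw[c]) for c in range(3)], axis=-1)

print(direct.shape, processed.shape)  # both (4, 4, 3)
```

The point of the sketch is structural: treating each color layer with its own processing stage before fusion is what distinguishes the three-sub-layer approach from a naive merge.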
The scientists suggest that this technology could lead to advancements in artificial retina biotechnology, potentially replacing damaged cells and restoring vision.
Reference: Yuchen Hou et al., "Retina-inspired narrowband perovskite sensor array for panchromatic imaging," Science Advances (2023). DOI: 10.1126/sciadv.ade2338