Researchers Create Photo-Sensor that Behaves like the Human Eye

26-01-2021 | By Robin Mitchell

Researchers have recently used a perovskite semiconductor to create a photo-sensor that behaves much like the photoreceptors in the human eye. How does the human eye work, what challenges do typical camera systems face, and how could this new sensor revolutionise imaging technology?

How does the human eye work?

When asked which organ in the human body is the most complex, many would point to the eye, yet in evolutionary terms the eye is actually a rather simple organ. It works much like a camera: light sensors on the retina capture the image, a lens focuses the incoming light, and the pupil acts as an aperture that controls how much light enters the eye.

The light sensors at the back of the eye come in different types depending on their function (light-level detection or colour detection): humans have three types of colour-sensing cells, called cones, and generic light-sensing cells called rods. The eye has only around 6 million cones, concentrated at the centre of vision, but over 100 million rods. This means our eyes are built to detect light and motion rather than colour, and colour information matters mainly for whatever we are looking at directly.

Of all these light sensors, the brain only deals with around one million signals at any time, so how does the eye determine which signals to send? What makes the eye rather clever is that it pre-processes the information before passing signals on to the brain.

Specifically, the eye does not respond to absolute light levels, but to changes in light (i.e. potential motion). This is how the brain can respond so quickly to movement at the outermost edge of our peripheral vision.
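To illustrate the idea of responding to changes rather than absolute levels, here is a minimal sketch in Python. The frame data and threshold are made up purely for illustration and have nothing to do with the biological mechanism itself; the point is simply that only pixels whose brightness changes produce a signal.

```python
import numpy as np

def change_response(prev_frame, curr_frame, threshold=0.05):
    """Report only changes in light, not absolute brightness.

    Frames are 2-D arrays of normalised intensity (0.0 to 1.0).
    Pixels whose intensity changed by more than `threshold`
    produce a signal; everything else stays silent.
    """
    delta = curr_frame - prev_frame                      # change in light per pixel
    return np.where(np.abs(delta) > threshold, delta, 0.0)

# A static scene produces no output...
prev = np.full((4, 4), 0.5)
curr = prev.copy()
print(change_response(prev, curr).any())                 # False: nothing to report

# ...but a pixel that suddenly brightens is flagged immediately.
curr[1, 2] = 0.9
print(change_response(prev, curr))
```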


What challenges do typical camera systems face?

The human eye operates on changes in light, whereas traditional electronic cameras operate on absolute light levels. As light hits the sensor, each pixel generates a voltage or current, which is then converted to a digital value on a scale of increasing brightness.

While this allows a camera to capture every detail in its field of view, it also means there is a great deal of data to process. This causes complications for AI systems: objects in the image that are not of interest (such as the background) either have to be removed before the image is fed to the AI algorithm, or the algorithm has to learn to ignore that data.
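As a rough back-of-the-envelope illustration of that data burden, the sketch below compares the readout of an absolute-level sensor with a change-only one. The resolution, frame rate, and "2% of the scene is changing" figure are assumptions chosen purely for this example, not measurements from any real system.

```python
# Illustrative numbers only: a 1080p greyscale camera at 30 fps.
width, height, fps = 1920, 1080, 30
pixels_per_frame = width * height

# A conventional sensor reports every pixel's absolute level in every frame.
absolute_values_per_second = pixels_per_frame * fps

# If only, say, 2% of the scene changes between frames (a person walking
# past a static background), a change-driven sensor reports just those pixels.
changing_fraction = 0.02                     # assumed, purely for illustration
change_values_per_second = int(absolute_values_per_second * changing_fraction)

print(f"Absolute-level readout: {absolute_values_per_second:,} values/s")
print(f"Change-only readout:    {change_values_per_second:,} values/s")
```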

Researchers Develop “Human-Eye-Like” Sensor

Recently, researchers from Oregon State University have developed a semiconductor photo-sensor that operates more like the human eye than a standard camera. The new sensor is based on a perovskite and reacts only to changes in light level rather than to absolute levels.

The sensor has a capacitive structure instead of a diode structure, but its semiconductor material allows that capacitive structure to react to light. The bottom layer of the device is standard silicon dioxide, chosen for its insulating properties. The top layer is the perovskite methylammonium lead iodide, a semiconductor that gives the capacitor its light-sensitive behaviour. When the sensor is suddenly exposed to light, a large voltage spike appears across it, but this quickly decays even though the light level remains constant.
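The reported behaviour resembles a first-order high-pass response: the output jumps when the light changes and then relaxes back towards zero even while the light stays on. The toy model below reproduces that spike-and-decay shape for a single sensor exposed to a step of light; the gain and decay constants are arbitrary and are not taken from the published work.

```python
def sensor_voltage(light, gain=1.0, decay=0.8):
    """Toy model of the reported spike-and-decay behaviour.

    The output spikes when the light level changes, then decays
    towards zero even if the light remains on. `gain` and `decay`
    are made-up parameters, not values from the paper.
    """
    v = 0.0
    out = []
    for t in range(1, len(light)):
        v = decay * v + gain * (light[t] - light[t - 1])
        out.append(v)
    return out

# Light switched on at t = 5 and held constant afterwards.
light = [0.0] * 5 + [1.0] * 15
print([round(v, 3) for v in sensor_voltage(light)])
# The output spikes to ~1.0 at the step, then decays while the light stays on.
```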

Using the data gathered from the sensor, the researchers built a simulation model of how a camera made from such sensors would operate. They then fed video into the simulated sensor system and found that only moving objects appeared at the output, while static parts of the scene remained black.
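Applying the same toy spike-and-decay model to every pixel of a short synthetic clip shows the behaviour the researchers describe: static regions settle to black and only changing pixels produce output. Again, this is an illustrative sketch under assumed parameters, not the researchers' actual simulation model.

```python
import numpy as np

def reactive_camera(frames, gain=1.0, decay=0.8):
    """Run the toy spike-and-decay model over every pixel of a video.

    `frames` is a list of 2-D intensity arrays. Static regions settle
    to zero (black); only pixels whose brightness changes produce output.
    Parameters are illustrative, not from the published model.
    """
    state = np.zeros_like(frames[0], dtype=float)
    outputs = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        state = decay * state + gain * (curr - prev)
        outputs.append(np.abs(state))        # magnitude of the response
    return outputs

# A three-frame clip: a bright dot moves one pixel to the right each frame.
f0 = np.zeros((5, 5)); f0[2, 1] = 1.0
f1 = np.zeros((5, 5)); f1[2, 2] = 1.0
f2 = np.zeros((5, 5)); f2[2, 3] = 1.0
for out in reactive_camera([f0, f1, f2]):
    print(out.round(2), "\n")   # only pixels that changed appear; the rest stays black
```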

If a reactive camera using this technology could be developed, it would allow camera-based systems to remove a whole stage of image processing, much as the human eye does. The sensor would send image data only for moving objects, reducing the processing load between the camera and the image-processing software. For example, security camera systems could focus on moving objects in real time, while AI-driven robots could react to movement far faster than they currently can.

Such a camera could also help imaging systems ignore moving backgrounds. Humans can ignore background objects even when they are moving, but this new sensor would still react to the background whenever the camera itself moves. If the system could also be made to ignore detections caused by the camera's own motion, object detection and avoidance would become far simpler.



By Robin Mitchell

Robin Mitchell is an electronic engineer who has been involved in electronics since the age of 13. After completing a BEng at the University of Warwick, Robin moved into the field of online content creation, developing articles, news pieces, and projects aimed at professionals and makers alike. Currently, Robin runs a small electronics business, MitchElectronics, which produces educational kits and resources.