
Image sensors 2022: A comprehensive look at current image sensors, image processing and the future of the image sensing industry

First, we'll look at the two most widely used sensor types in consumer products and get to know their advantages and disadvantages. Then we'll see where the industry is heading.

What are image sensors?

Image sensors can be found in a wide range of electronic imaging devices, including digital cameras, camera phones, camera modules, medical imaging equipment, and night vision equipment such as thermal imaging devices, radar, and sonar, to name a few. With the advancement of technology, chemical and analog imaging is being replaced by electronic and digital imaging.

Let's look at two of the most popular image sensor types around:

CCD and CMOS image sensors

The charge-coupled device (CCD) and the complementary metal-oxide-semiconductor (CMOS) are the two main types of digital image sensors. MOS technology is used in both CMOS and CCD sensors, with MOS capacitors serving as the building blocks of the CCD and MOSFET amplifiers serving as the building blocks of the CMOS sensor.

Silicon atom

Most image sensors in use today are made of silicon, which has some remarkable properties. When a photon with sufficient energy strikes a silicon atom, it frees an electron, creating what is called an electron-hole pair. So silicon does most of the work in image sensing: you hit it with light and it generates electrons.

What remains to be done is to read out these electrons, convert them to a voltage, and digitize that voltage. And remember, you are not looking at a single pixel or one lattice of silicon; you have millions of pixels whose charges you want to read out. That is where most of the engineering effort in image sensors has gone.
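
As a back-of-the-envelope illustration of what "sufficient energy" means here: a photon can only create an electron-hole pair if its energy exceeds silicon's band gap of roughly 1.12 eV, which corresponds to a cutoff wavelength of about 1100 nm. A minimal sketch of that calculation (the constants are standard textbook values, not from this article):

```python
# Photon energy vs. silicon band gap: a back-of-the-envelope check.
H = 6.626e-34         # Planck constant, J*s
C = 2.998e8           # speed of light, m/s
EV = 1.602e-19        # joules per electron-volt
SI_BANDGAP_EV = 1.12  # silicon band gap at room temperature

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of a photon of the given wavelength, in eV."""
    return H * C / (wavelength_nm * 1e-9) / EV

cutoff_nm = H * C / (SI_BANDGAP_EV * EV) * 1e9
print(f"Cutoff wavelength: {cutoff_nm:.0f} nm")            # ~1107 nm
print(f"550 nm (green): {photon_energy_ev(550):.2f} eV")   # ~2.25 eV -> detected
print(f"1500 nm (SWIR): {photon_energy_ev(1500):.2f} eV")  # ~0.83 eV -> below the gap, not detected by Si
```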

CCD working principle

CCD readout (animation)

Let's talk about the first technology used to build image sensors, the CCD, or charge-coupled device. All the blocks you see here are pixels. Each pixel acts like a bucket: a photon arrives and gets converted into an electron. So in a CCD, photon-to-electron conversion happens in every pixel.

Each row then shifts its charge down to the row below, which passes it to the next row, and so on, until the charge reaches the final row, where it is read out horizontally one pixel at a time. The charge in each pixel is converted into a voltage, and that analog voltage is then passed through an analog-to-digital converter to produce the digital output. That's the readout process for CCDs.
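
To make the bucket-brigade readout concrete, here is a toy simulation of CCD-style readout (the array size, full-well value, and function name are invented for the example; a real CCD of course does this in the analog domain):

```python
import numpy as np

def ccd_readout(electrons: np.ndarray, full_well: int = 1000, adc_bits: int = 10) -> np.ndarray:
    """Toy CCD readout: the whole frame is shifted down row by row, the bottom row
    enters a serial register and is read out one pixel at a time through a single
    charge-to-voltage converter and ADC (collapsed here into one quantization step)."""
    rows, cols = electrons.shape
    frame = electrons.astype(float)
    out = np.zeros((rows, cols), dtype=int)
    levels = 2 ** adc_bits - 1
    for r in range(rows - 1, -1, -1):          # one vertical shift per image row
        serial = frame[-1, :].copy()           # bottom row moves into the serial register
        frame[1:, :] = frame[:-1, :]           # every remaining row shifts down one step
        frame[0, :] = 0.0
        for c in range(cols):                  # serial (horizontal) readout, pixel by pixel
            out[r, c] = min(int(serial[c] / full_well * levels), levels)
    return out

counts = np.random.randint(0, 900, size=(4, 4))  # hypothetical collected electron counts
print(ccd_readout(counts))
```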

CMOS working principle

CMOS sensor

And then there are Complementary Metal-Oxide-Semiconductor, or CMOS, sensors. Here the same photon-to-electron conversion happens in each pixel, but sitting next to each pixel is a circuit that converts its electrons to a voltage, so each pixel's value can be read out independently.
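
By contrast, a CMOS sensor can be sketched as every pixel carrying its own charge-to-voltage amplifier, so the whole frame (or any single pixel) can be read without shifting charge around. The gain-spread parameter below is a made-up illustration of why per-pixel amplifiers lead to the fixed-pattern noise discussed later:

```python
import numpy as np

rng = np.random.default_rng(0)

def cmos_readout(electrons: np.ndarray, full_well: int = 1000, adc_bits: int = 10,
                 gain_spread: float = 0.02) -> np.ndarray:
    """Toy CMOS readout: every pixel has its own charge-to-voltage amplifier, so the
    array is digitized in place with no bucket-brigade shifting. The small random
    gain difference between amplifiers is what shows up as fixed-pattern noise."""
    levels = 2 ** adc_bits - 1
    pixel_gains = 1.0 + gain_spread * rng.standard_normal(electrons.shape)  # per-pixel amplifiers
    voltages = electrons * pixel_gains / full_well
    return np.clip((voltages * levels).astype(int), 0, levels)

counts = rng.integers(0, 900, size=(4, 4))  # hypothetical collected electron counts
frame = cmos_readout(counts)
print(frame)        # the whole frame at once
print(frame[2, 1])  # any pixel is individually addressable, no serial shift needed
```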

Image sensors and image processing

Now, in this picture you can see three different colors on the pixels; these are color filters. A bare sensor can only measure luminance, that is, how bright or dark the light is. To recover color, image processing techniques such as interpolation and demosaicing are used: the separately filtered red, green, and blue measurements are combined to reconstruct the full-color picture.

RGB color filters

Typical image sensors, such as those found in digital cameras, are made up of millions of individual photosensors that capture light. Out of the box, these photosensors can capture the intensity of light but not its wavelength (color). As a result, image sensors usually have a “color filter array” or “color filter mosaic” overlaid on them. This overlay is made up of many small filters, one covering each pixel, which allow the pixels to record color data.

By averaging the color data recorded under neighboring filters together with the relative brightness registered by the pixels, the digital image processor can estimate the color of an area. The Bayer filter is one of the most common filter arrangements used in modern devices.

Bayer Filter Design and the Demosaicing Process

The Bayer filter, named after its creator Bryce Bayer, is a microfilter overlay for image sensors that allows photosensors to record light wavelength in addition to light intensity. It is the most common color filter array and can be found in almost every modern digital camera.

To interpret the color information arriving at the sensor, this filter employs a mosaic pattern consisting of two parts green, one part red, and one part blue. For the Bayer pattern to be converted into full color data, digital algorithms must interpolate, or “demosaic”, it.

Did you know that in daytime vision, the human retina is naturally more sensitive to green light? In an attempt to mimic our visual perception, Bayer used this knowledge when he chose his filter proportions, which favor green light.

Bayer also proposed a version of the filter using cyan, magenta, and yellow; it was only manufactured later because the dyes were not available at the time. The CMY version has a higher quantum efficiency, and some newer digital cameras use it.

Each pixel sits under a filter for only one of the three primary colors, so on its own it cannot output complete color information. Software and camera firmware therefore use a variety of algorithms to convert the “Bayer pattern” image into a true-color image with all three color values at each pixel. This procedure is called demosaicing.

To get an idea of the full color, the simplest demosaicing algorithms average the input of nearby pixels. Here's an illustration (a minimal code sketch follows the list):

  • A pixel recording green may be surrounded by two pixels recording blue and two recording red.
  • Together, these five pixels provide enough information to estimate full red, green, and blue values at the green pixel's location.
  • Similarly, the missing green value at each of the surrounding blue and red pixels is estimated from its green neighbors.
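
Here is a minimal sketch of that nearest-neighbor averaging on an RGGB Bayer mosaic (the raw data is random and the helper names are invented; real demosaicing algorithms are considerably more sophisticated):

```python
import numpy as np

def bayer_masks(h, w):
    """Boolean masks for an RGGB Bayer pattern: R at (even, even), B at (odd, odd),
    and G at the remaining two positions of each 2x2 block."""
    y, x = np.mgrid[0:h, 0:w]
    r = (y % 2 == 0) & (x % 2 == 0)
    b = (y % 2 == 1) & (x % 2 == 1)
    g = ~(r | b)
    return r, g, b

def demosaic_bilinear(raw):
    """Naive demosaicing: each colour value at each pixel is the average of the known
    samples of that colour within the surrounding 3x3 neighbourhood."""
    h, w = raw.shape
    out = np.zeros((h, w, 3))
    padded = np.pad(raw, 1)
    for ch, mask in enumerate(bayer_masks(h, w)):
        pmask = np.pad(mask, 1).astype(float)
        pvals = padded * pmask
        # sum of known samples and their count in each 3x3 window
        sums = sum(pvals[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3))
        cnts = sum(pmask[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3))
        out[..., ch] = sums / np.maximum(cnts, 1)
    return out

raw = np.random.randint(0, 256, size=(6, 6)).astype(float)  # hypothetical raw Bayer frame
rgb = demosaic_bilinear(raw)
print(rgb.shape)  # (6, 6, 3): full red, green, blue estimates at every pixel
```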

This demosaicing technique works well in large areas of constant color or smooth changes, but it may lose detail in high-contrast areas where colors change abruptly, resulting in color bleeding and other color artifacts like zippering.

To render colors more accurately, more sophisticated algorithms make complex sets of assumptions about how color values correlate or about the content of the image.

Advantages and disadvantages (CCD vs. CMOS)

System Integration:

  • CCD is an older technology, so peripheral circuitry such as timing generators and analog-to-digital converters sits on separate chips, increasing the overall size of the sensor.
  • CMOS sensors are fabricated with much the same process as standard integrated circuits, so these peripheral components can be included on the same chip, making CMOS sensors quite compact.

Power Consumption:

  • CCD sensors require multiple power supplies for the different timing clocks, with typical voltages of 7 to 10 volts.
  • CMOS sensors require a single power supply and run at a lower voltage than CCDs, typically 3.3 to 5 volts.

So for applications where power consumption is the main criterion, CMOS sensors are preferred over CCDs.

Processing speed:

As described in the working-principle sections above, in a CCD the charge generated in each pixel is converted into a voltage one pixel at a time, so the overall readout speed is slower than that of CMOS sensors.

Noise and sensitivity:
  • In a CMOS sensor, the charge-to-voltage converter and amplifier are built into each pixel. This circuitry takes up space and leaves less area for the light-sensitive region, leading to a low fill factor.
  • Because of that, sensitivity is lower in a CMOS sensor than in a CCD, and the dynamic range of a CCD tends to be higher.
  • A CMOS sensor also creates more noise: its many amplifiers are never perfectly identical, so the amplification is not uniform across the array (fixed-pattern noise).

Image distortion:

CCD

Blooming

In terms of distortion, both sensor types have their problems. For example, if you expose a CCD for too long, you may see an effect known as “blooming”, where charge overflows from saturated pixels into their neighbors. Anti-blooming techniques do exist, though.

CMOS

A helicopter photographed with a rolling-shutter CMOS sensor

In CMOS sensors, on the other hand, a common distortion problem is known as “rolling shutter”. Look at this picture of a helicopter for a second: the real rotor blades are straight, but in the picture they look bent. This happens because the image is read out line by line from top to bottom while the blades keep moving. The problem can be avoided with a “global shutter”.

Rolling shutter
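
A rough way to see why line-by-line readout bends moving objects is to sample each row of a moving scene at a slightly later time. Everything in this sketch (the scene, the speed, the line time) is made up for illustration:

```python
import numpy as np

def render_scene(t: float, size: int = 12, speed: float = 3.0) -> np.ndarray:
    """A toy scene: a vertical bright bar that drifts to the right over time."""
    img = np.zeros((size, size), dtype=int)
    img[:, int(t * speed) % size] = 1
    return img

def rolling_shutter(size: int = 12, line_time: float = 0.1) -> np.ndarray:
    """Each row is captured at a later time, so the moving bar appears slanted."""
    return np.array([render_scene(row * line_time, size)[row] for row in range(size)])

def global_shutter(size: int = 12) -> np.ndarray:
    """All rows are captured at the same instant: the bar stays straight."""
    return render_scene(0.0, size)

print(rolling_shutter())  # the single bright column drifts sideways as you go down the rows
print(global_shutter())
```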

The future of image sensors

This is what an image sensor looks like: millions of pixels, each around 1 micron across. With today's technology you can fit 100 million pixels on an image sensor. This isn't quite like Moore's law, though. In computing, Moore's law says that roughly every 18 months you can double the computing power you get from the same chip area.

That doesn't happen with image sensors. Here you eventually come down to around the wavelength of light, which is, let's say, half a micron. Once your pixel is in that region, further reducing its size doesn't buy you anything, because resolution is then limited by diffraction itself.
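
As a back-of-the-envelope illustration of that diffraction limit (using the standard Airy-disk formula; the wavelength and f-number below are assumed example values, not from the article):

```python
# Diameter of the Airy disk (the diffraction-limited spot) projected onto the sensor:
# d ~ 2.44 * wavelength * f_number
wavelength_um = 0.55   # green light, in microns
f_number = 2.0         # a reasonably fast lens

airy_disk_um = 2.44 * wavelength_um * f_number
print(f"Airy disk diameter: {airy_disk_um:.2f} um")  # ~2.7 um

# A 1 um pixel is already smaller than this spot, so shrinking pixels further
# adds pixel count without adding real resolving power.
```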

So image resolution will continue to grow a little in the future, but at some point the only way to increase resolution is to make the chip itself larger and larger.

Hybrid image sensors: A hybrid approach extends spectral sensitivity into the SWIR region by adding a light-absorbing layer, made of either organic semiconductors or quantum dots, on top of a CMOS read-out circuit. This new technology promises a significant price reduction and thus the adoption of SWIR imaging, currently dominated by expensive InGaAs sensors, for new applications such as autonomous vehicles.

Extended-range silicon: Given the high cost of InGaAs sensors, there is a strong incentive to develop low-cost alternatives that can detect light at the lower end of the SWIR spectral range. Because SWIR light is scattered less, such sensors could be used in vehicles to provide better vision through fog and dust.

Thin-film photodetectors: For acquiring biometric data and, if the detector is flexible, imaging through the skin, detecting light over a large area rather than with a single small detector is highly desirable. Large-area image sensors are currently prohibitively expensive due to the high cost of silicon.

Emerging approaches based on solution-processable semiconductors, on the other hand, offer a compelling way to make large-area conformal photodetectors. The most developed approach is printed organic photodetectors (OPDs), with under-display fingerprint detection being actively investigated.

Event-based vision: Image sensing with a high temporal resolution is required for autonomous vehicles, drones, and high-speed industrial applications. High temporal resolution, on the other hand, produces vast amounts of data that require computationally intensive processing in traditional frame-based imaging. Dynamic vision sensing (DVS), also known as event-based vision, is a new technology that solves this problem.

Each sensor pixel reports timestamped events that correspond to intensity changes, which is a completely different way of thinking about acquiring optical information. As a result, event-based vision can combine high temporal resolution for rapidly changing image regions with significantly less data transfer and processing.
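
A minimal sketch of the idea behind a single DVS pixel: it emits a timestamped ON/OFF event whenever the log intensity it sees changes by more than a threshold. The threshold and the sample brightness values are invented for the example:

```python
import math

def dvs_pixel(intensities, timestamps, threshold=0.2):
    """Emit (timestamp, polarity) events when the log-intensity change exceeds a
    threshold, mimicking how an event-based (DVS) pixel reports changes, not frames."""
    events = []
    ref = math.log(intensities[0])                 # last level at which an event fired
    for t, i in zip(timestamps[1:], intensities[1:]):
        delta = math.log(i) - ref
        while abs(delta) >= threshold:             # large changes produce several events
            polarity = 1 if delta > 0 else -1
            events.append((t, polarity))
            ref += polarity * threshold
            delta = math.log(i) - ref
    return events

# A pixel watching a light source brighten and then dim (arbitrary sample values)
times = [0, 1, 2, 3, 4, 5]
brightness = [100, 105, 180, 400, 350, 120]
print(dvs_pixel(brightness, times))  # only changes produce output; constant light produces none
```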

Hyperspectral imaging: For applications requiring object identification, it is extremely beneficial to obtain as much information from the incident light as possible, so that classification algorithms have as many variables to work with as possible. Hyperspectral imaging, which uses a dispersive optical element and an image sensor to acquire a complete spectrum at each pixel and produce an (x, y, λ) data cube, is a well-established technology that has gained traction in precision agriculture and industrial process inspection.

However, due to the high cost of InGaAs sensors, most hyperspectral cameras currently work on a line-scan principle, and SWIR hyperspectral imaging is limited to relatively niche applications. Both constraints look set to be disrupted: snapshot imaging offers an alternative to line-scan cameras, while the new SWIR sensing technologies outlined above promise cost reduction and adoption across a wider range of applications.
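
To make the (x, y, λ) data cube concrete, here is a tiny sketch of how such a cube might be stored and indexed (the dimensions, wavelength range, and random values are invented for the example):

```python
import numpy as np

# A hypothetical hyperspectral cube: 64 x 64 pixels, 100 spectral bands from 900 to 1700 nm
height, width, bands = 64, 64, 100
cube = np.random.rand(height, width, bands)
wavelengths_nm = np.linspace(900, 1700, bands)

spectrum = cube[10, 20, :]      # the full spectrum recorded at pixel (10, 20)
band_image = cube[:, :, 42]     # a single-wavelength image of the whole scene
print(wavelengths_nm[42], band_image.shape, spectrum.shape)
```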

Flexible x-ray sensors: X-ray sensors are well established and extremely useful in medical and security applications. However, because x-rays are difficult to focus, the sensors must cover a large area. Furthermore, a scintillator layer is commonly used because silicon cannot absorb x-rays effectively. Both of these factors increase the size and weight of the sensor, making x-ray detectors bulky and unwieldy.

Flexible x-ray sensors with an amorphous silicon backplane are a viable alternative because they are lighter and more conformal (especially useful for imaging curved body parts). Direct x-ray sensors based on solution-processable semiconductors have the potential to reduce weight and complexity while also increasing spatial resolution.

Wavefront imaging: Wavefront imaging allows phase information from incident light to be extracted that would otherwise be lost by a conventional sensor. Optical component design/inspection and ophthalmology are two applications where this technique is currently used. Recent advancements, however, have resulted in significant resolution improvements, allowing this technology to be used more widely.

Biological imaging is one of the more promising new applications: collecting phase information alongside intensity reduces the effect of scattering and allows for more sharply defined images.
