It was suggested during the early 1990s that Charge Coupled Devices (CCDs) were slowly becoming extinct and could be regarded as ‘technological dinosaurs’1. Then, in 2015, the Sony Corporation officially announced the end of mass CCD production, which, although expected, caused a stir within the professional imaging community2. Although it is commonly assumed that most industrial and professional imaging is now performed with CMOS Image Sensor (CIS) technology, a large share of it remains based on CCD sensors.
In the early years, both CCD and CIS technologies coexisted; however, CCDs quickly emerged as the superior technology thanks to their ability to meet stringent image quality requirements, while CMOS was still a newly developed technology limited by its inherent noise and pixel complexity. Since most architectures were analog at that time, integrating image processing features on the chip itself, the System-on-Chip (SOC) approach, was not yet considered a realistic possibility.
Following Moore’s Law, the shrinking of technology nodes, together with the rapid expansion of CMOS manufacturing in the early 2000s, has made CIS technology increasingly competitive. CIS electro-optical performance is currently improving rapidly and, in many respects, is now shown to be better than that of CCDs.
CCD and CMOS: Two Different Branches With a Common Origin
CCD technology involves the conversion of photonic signals into electron packets that are then transferred to a common output structure to convert the electric charge into a voltage. From here, the signal is buffered and carried off-chip. Most functions within the CCD technology occur on the camera’s printed circuit board; however, when a given application has specialized requirements, the designer can change the electronics without redesigning the imager.
In contrast, the charge-to-voltage conversion of a CMOS imager occurs in each pixel, thereby allowing most functions to be integrated directly into the chip. A CMOS imager can be operated with a single power supply and exhibits a unique flexibility in its readout, with region-of-interest or windowing capability. CCDs are generally made in an NMOS technology equipped with various process-specific features, including overlapping double polysilicon, anti-blooming structures, metal shields and a specified starting material.
CMOS technologies are typically consumer-oriented and based on the standard CMOS process technology used for digital ICs, with some adaptations for imaging, such as the addition of a pinned photodiode. Manufacturing CMOS sensors is typically considered much cheaper than manufacturing CCDs, although the performance is lower as well. This assertion holds for high-volume markets; in specialist business sectors, the two technologies can be equivalent in cost, or a CCD may even emerge as the more economical choice3.
For example, most space programs are still based on CCD components. These programs benefit from performance optimization at the process level on limited quantities and at controlled cost, as well as from the ability to ensure long-term supply. Similarly, the science imaging market remains an avid user of primarily high-end CCD-based solutions, with several new product developments currently in progress.
The system complexity of CMOS technology has improved, especially with the embedding of SOC architectures, which can include analog-to-digital conversion, correlated double sampling, clock generation, voltage regulators and other features such as image post-processing, most of which were previously handled at the application system level.
Modern CIS technologies are commonly made in 1-poly, 4-metal (1P4M) processes, from 180 nm down to the current 65 nm node, allowing pixels with a very high conversion gain to be combined with column-level gain amplification. As a result, both the photo-response and the sensitivity to light of a CMOS sensor can be far better than those of commonly used CCDs. Note that CCDs do retain noise advantages over CMOS imagers thanks to the improved stability of their substrate biasing, which comes with less on-chip circuitry and little to no fixed pattern noise.
Figure 1. CCD and CMOS architecture comparison
On the other hand, since CIS sampling frequencies can be lower, the bandwidth required to read out a pixel is reduced and the temporal noise is thereby lower. Global shuttering exposes all pixels of the array simultaneously; in CMOS technology, this approach consumes pixel area because it requires extra transistors within each pixel. Since each pixel has an open-loop output amplifier whose offset and gain fluctuate considerably with wafer processing variations, both dark and illuminated non-uniformities often end up being much worse than those produced by CCDs.
Additionally, CMOS imagers have a lower power dissipation than equivalent CCDs, whose companion chips, even when derived from an optimized analog system, dissipate more than circuits integrated on the CMOS chip itself. Depending on the delivery volume, CMOS technology may be less expensive at the system level than CCDs once the cost of externally implementing the related circuit functions is considered. A summary of CCD and CMOS characteristics is presented in Table 1.
Table 1. CCD-CMOS Characteristics comparison
| Characteristic | CCD | CMOS |
|---|---|---|
| Signal from pixel | Electron packet | Voltage |
| Signal from chip | Analog voltage | Digital bits |
| Power consumption | — | Lower at equivalent frame rate |
| — | Moderate or low | Moderate to high |
| — | Moderate to high | Moderate to high |
| — | Moderate to high | Low to moderate |
| — | Moderate to high | High to none |
| Typical artifacts | Smearing, charge transfer inefficiency | FPN, Motion (ERS), PLS |
| Biasing and clocking | Multiple, higher voltage | — |
| Relative R&D cost | — | Lower or higher depending on series |
Some system features clearly allow one technology to exhibit advantages over the other, without the overall performance or cost of the imager being affected in the same way. Nevertheless, CMOS imagers have been shown to offer greater implementation flexibility through the SOC approach, along with lower power consumption, compared to CCDs.
Noise Performance: A Common Misconception
For both CCD and CMOS technologies, the bandwidth of the video imaging chain must be carefully adjusted to minimize the read noise level entering the digitization stage; however, the bandwidth must remain sufficient to prevent the introduction of artifacts into the image. The minimum bandwidth can be determined from the time required for the sampled signal to settle to a level sufficiently close to the ideal signal, such that the induced error is negligible compared to the Least Significant Bit (LSB)4.
To determine the required bandwidth, the following criterion is applied:

fc ≥ N · ln(2) · fs

In this equation, fc represents the amplification chain bandwidth, fs represents the signal frequency, and N represents the ADC resolution. For example, if N = 12, then the proper value is fc ≈ 8.3 fs.
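As a quick numerical check of the criterion above (a sketch assuming a first-order amplification chain; the function name is chosen for illustration):

```python
import math

def required_bandwidth(fs, n_bits):
    """Minimum amplification-chain bandwidth fc so that the settling error of a
    first-order chain stays below the LSB of an n_bits ADC, using the
    criterion quoted in the text: fc >= N * ln(2) * fs."""
    return n_bits * math.log(2) * fs

fs = 1e6                              # 1 MHz signal frequency (illustrative)
fc = required_bandwidth(fs, 12)
print(fc / fs)                        # ~8.3, matching fc ≈ 8.3 fs for N = 12
```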
The noise is the result of two contributions, 1/f flicker noise and thermal noise, as shown in Figure 2. Flicker noise is frequently present in nature, and its spectral density plays a role in a variety of natural phenomena, including fluctuations in the earth’s rate of rotation, undersea currents, weather, climate changes and more. In fact, a recent study has shown that even the flickering of a common candle fluctuates as 1/f.
In the MOS devices that make up the amplification chain, flicker noise has been determined to be the consequence of electric charges trapped within the gate oxide. The traps arise from defects generated by the technological process. The filling and emptying of these traps lead to so-called random telegraph signal (RTS) noise, that is, fluctuations in the current flowing in the transistor channel6.
Each individual trap can be modeled by a Lorentzian spectrum that describes its characteristic trapping time constant. By summing the Lorentzians of the many traps present at the surface of the MOSFET channel, it can be shown that the resulting spectrum fits a 1/f noise spectral density. As a result, the 1/f magnitude is inversely proportional to the MOSFET channel area.
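This trap-summation argument can be sketched numerically; the trap corner frequencies and their log-uniform distribution below are illustrative assumptions, not values from the text:

```python
def lorentzian(f, f_trap):
    # Single-trap noise spectrum: flat below the trap corner frequency,
    # rolling off as 1/f^2 above it.
    return (1.0 / f_trap) / (1.0 + (f / f_trap) ** 2)

def summed_spectrum(f, traps):
    # Superpose the contributions of many independent traps.
    return sum(lorentzian(f, ft) for ft in traps)

# Assume trap corners spread log-uniformly over 1 Hz .. 1 MHz (10 per decade).
traps = [10 ** (k / 10) for k in range(0, 61)]
s100 = summed_spectrum(100.0, traps)
s1000 = summed_spectrum(1000.0, traps)
print(s100 / s1000)   # ≈ 10, i.e. the summed spectrum behaves as 1/f
```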
Figure 2. Spectral noise density
To eliminate, or at least reduce, potential variations of the amplifier common mode, the reset noise of the floating node and the technological dispersion of the transistors, CIS devices integrate a Correlated Double Sampling (CDS) stage in the video channel. The CDS stage transforms the video signal transfer function, which can be expressed mathematically by the following formula:
In this formula, the variables represent the following factors:
- fs: the sampling frequency
- n: CDS factor (typically n=2).
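The magnitude of the CDS transfer function can be sketched numerically; the form below, with the two samples assumed to be spaced T = 1/(n·fs) apart, is a common textbook expression and an assumption about the readout timing, not necessarily the exact formula used here:

```python
import math

def h_cds(f, fs, n=2):
    """Magnitude of a CDS transfer function, |H_CDS(f)| = 2*|sin(pi*f*T)|,
    assuming the two correlated samples are spaced T = 1/(n*fs) apart
    (an assumption about the timing convention)."""
    T = 1.0 / (n * fs)
    return 2.0 * abs(math.sin(math.pi * f * T))

# DC, offsets and slow 1/f drift near DC are rejected by the subtraction:
print(h_cds(0.0, 1e6))        # 0.0 -> low-frequency content removed
print(h_cds(1e6, 1e6, n=2))   # 2.0 -> maximum response at f = 1/(2T)
```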
As shown in Figure 3, depending on the sampling rate, filtering can be used to remove the sampling frequency when it is too high, since the trap capture-and-release mechanism is slow compared with the CDS frequency. The combination of this filtering with the low-pass filter of the amplification chain can be simplified into an equivalent band-pass filter, as depicted in Figure 3.
The eqBP1 corresponds to a first order band pass filter. To create an integrated noise power that is equivalent to the HCDS function, the noise spectral function of eqBP1 is divided by a factor of two. The eqBP2 is a frequency notch approximation of eqBP1. Similarly, to allow the integrated noise power to also be equivalent to the HCDS function, both the lower and upper limits of the eqBP2 filter are respectively multiplied by (π/2)-1 and π/2.
Figure 3. Noise filtering functions
Considering the general case built from Figure 2 and Figure 3, the noise expression can be simplified according to the following formula:
In fact, by combining equations (1) and (4), the total integrated readout noise is approximated as follows:
Each of these formulas has been verified to closely match with the numerical simulation.
The CCD read noise can be extremely low for certain applications, such as astronomy and other scientific areas in which the image is read out at a very low frequency. CCD system designs incorporate electronics with a minimized frequency bandwidth, which limits the integration of temporal fluctuations of the signal. When used for these applications, the 1/f component of the noise dominates.
In contrast, when used for high-speed video applications, noise levels are significantly higher, leading to a drastic degradation of the signal-to-noise ratio. This has been confirmed by measurements of the actual noise performance of different CCD video cameras5.
When considering these factors, the CMOS image sensor is particularly advantageous as a result of its column parallel readout scheme, which is also shown in Figure 1. The threshold readout frequency is therefore divided by the number of columns as compared to the CCD.
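The effect of column-parallel readout on the required readout frequency can be illustrated with assumed sensor dimensions (the geometry below is not from the text):

```python
# Sketch: why column-parallel readout lowers the per-channel frequency.
# Illustrative sensor geometry, chosen only for the example.
rows, cols, fps = 1080, 1920, 60

ccd_readout_rate = rows * cols * fps   # one output node reads every pixel
cis_column_rate = rows * fps           # each column ADC reads only its column

print(ccd_readout_rate)                       # 124416000 pixels/s through one node
print(cis_column_rate)                        # 64800 pixels/s per column amplifier
print(ccd_readout_rate // cis_column_rate)    # 1920 = cols: rate divided by column count
```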
Consequently, the readout noise of CIS is generally dominated by the 1/f contribution, which motivates a continuous effort to improve CMOS technology for imaging. It has recently been demonstrated that very good noise performance, in the range of 1 e- and even below, is achievable7,8.
Figure 4. Read out noise as a function of fs
MTF and QE: The Pillars of Image Quality
Quantum efficiency (QE) plays an important role in the electro-optical performance of image sensors, since any loss during the photon-to-electron conversion ultimately decreases the Signal-to-Noise Ratio (SNR). QE affects the numerator of the SNR directly and, whenever the dominant contribution is the shot noise, it also affects the denominator, because shot noise equals the square root of the signal.
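This shot-noise argument can be sketched with illustrative numbers:

```python
import math

def shot_limited_snr(photons, qe):
    """Shot-noise-limited SNR: signal = QE * photons electrons,
    noise = sqrt(signal), so SNR = sqrt(QE * photons)."""
    signal = qe * photons
    return signal / math.sqrt(signal)

# Halving QE does not halve the SNR: it scales as sqrt(QE).
print(shot_limited_snr(10_000, 0.8))   # ~89.4
print(shot_limited_snr(10_000, 0.4))   # ~63.2
```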
With respect to these factors, CCD and CMOS are in principle equal; however, the CCD has historically benefited from numerous years of technological process iterations aimed at enhancing QE, whereas such enhancement is a relatively recent advance in the CIS domain.
Based on the physical properties of silicon, in which longer wavelengths penetrate deeper into the photosensitive conversion zone, thick epitaxial material can be used to increase the QE at the upper red and NIR wavelengths. According to the Beer-Lambert law, the absorbed energy depends exponentially on the thickness of the material.
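The Beer-Lambert dependence on thickness can be sketched as follows; the absorption coefficient below is an illustrative order-of-magnitude assumption for NIR wavelengths in silicon, not a value from the text:

```python
import math

def absorbed_fraction(alpha_per_um, thickness_um):
    """Beer-Lambert law: fraction of incident light absorbed in a layer of
    given thickness, for absorption coefficient alpha."""
    return 1.0 - math.exp(-alpha_per_um * thickness_um)

alpha_nir = 0.03   # 1/um, assumed illustrative value around ~900 nm
print(absorbed_fraction(alpha_nir, 5))    # thin epi layer: ~14% absorbed
print(absorbed_fraction(alpha_nir, 50))   # thick epi layer: ~78% absorbed
```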
CCDs that are typically dedicated to high-end applications actually benefit from these technologies, particularly those that utilize silicon materials and back side illumination (BSI) to recover a high broadband QE and sensitivity in the near infrared (NIR).
Figure 5. QE benchmark
The interline transfer CCD (ITCCD) is based on the “vertical overflow drain (VOD)”, also referred to as “vertical anti-blooming (VAB)”, a specific manufacturing process originally developed during the early 1980s12. While the performance of the VAB is typically very good, one drawback of this technology is that it cuts the response in the red, ultimately rejecting the NIR part of the spectrum.
Figure 6. Deep depletion approach
As a result, such CCDs cannot benefit from BSI. However, high-end CCDs, as well as CMOS sensors, are not limited by this factor, since they often use horizontal anti-blooming (HAB) instead. Additionally, their thin detection layers are not subject to crosstalk, because the charges are unable to diffuse from pixel to pixel. This allows the spatial resolution, characterized by the modulation transfer function (MTF), of ITCCD and standard CIS to be good.
To gain sensitivity in the NIR domain, the material thickness must be increased significantly. However, when a material is too thick, MTF degradation can occur as a result of increased electro-optical crosstalk. Image quality is a combination of MTF and QE which, taken together, are referred to as the Detective Quantum Efficiency; therefore, consideration must be given to both the spatial and temporal domains. Figure 6 demonstrates how deep depletion photodiodes with adapted silicon doping methods are used to recover the MTF.
CIS are generally made on technologies inspired by those used for integrated circuits, particularly DRAM/memory processes, and therefore do not generally utilize the specific recipes described earlier. However, several recent publications have demonstrated that the implementation of specific processes can significantly improve the QE, to levels close to those obtained with high-end CCDs, as depicted in Figure 59,10. The most recent advancements of CMOS technology incorporate techniques including light guides, deep trench isolation (DTI), buried microlenses, and even stacked dies with the pixel transistors placed beneath the photosensitive area.
The “pinned photodiode” (PPD), or “hole accumulation diode” (HAD), was initially developed to remove the image lag and allow a full charge transfer from the photodiode to the ITCCD register12. One of the major recent developments of CMOS imagers has been the adoption of this ITCCD photodiode structure, which began in the early 2000s11, as shown in Figure 7.
Figure 7. ITCCD and 5T CMOS pixels side by side
In CMOS devices, the pixel architecture is typically referred to by the number of transistors per pixel. Most CMOS imagers use an electronic rolling shutter (ERS), which is simple to integrate and can be realized with as few as three transistors (3T). While commendable for its simplicity, the 3T pixel architecture suffers from a higher pixel-generated temporal noise, caused by the kT/C (thermal) noise of the circuit, and this thermal noise cannot be removed in a simple way.
The pinned photodiode was initially introduced into CIS technology in an effort to remove the noise from the reset of the floating diffusion. Its subsequent investigation led to the development of the four-transistor (4T) pixel.
The 4T architecture performs a Correlated Double Sampling (CDS) to remove the reset temporal noise, and allows transistor-sharing schemes between pixels that reduce the number of effective transistors per pixel to fewer than two. Evidently, fewer transistors in the pixel free up more area for the photosensitive part, or fill factor, to couple light more directly into the pixel. However, the ERS introduces image distortion when capturing fast-motion video or images containing moving objects, as shown in Figure 8.
Figure 8. Image artefacts: CMOS ERS distortion
The PPD was exploited in a second stage to perform global shutter (GS) capture, which removes the ERS artifacts as well as the temporal noise, dark current and fixed pattern noise. A fifth transistor adjacent to the PPD (5T) is used to drain the excess charges and to adjust the integration time in overlap mode (readout during integration). The GS mode is also used with the ITCCD, but is in some cases sensitive to smearing effects.
Smearing appears during charge transfer and produces vertical stripes in the image, as shown in Figure 9. This defect is particularly visible in high-contrast images and should not be confused with blooming, which potentially produces a similar artifact. To reduce this problem, the frame-interline-transfer (FIT) CCD architecture is generally implemented, which also has the advantage of a higher video rate. The CMOS parameter equivalent to smearing is the Global Shutter Efficiency (GSE), also referred to as parasitic light sensitivity (PLS), which corresponds to the ratio of the sensitivity of the sensing node to that of the photodiode.
Figure 9. Image artefacts: CCD smearing
GSE varies depending upon the application. For ITCCD, GSE levels are generally between -88 dB and -100 dB13, whereas for CMOS devices the GSE is often within the range of -74 dB to -120 dB, and as low as -160 dB when 3D stacked architectures are used14. The use of advanced and customized pixel micro-lenses (e.g. zero-gap) makes a significant difference to the sensitivity of devices over the wavelength response. These micro-lenses also limit the fill-factor loss caused by the transistors present within the CMOS pixel, making them a major contributor to the improvement of GSE performance.
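Assuming the usual 20·log10 amplitude convention for these dB figures (an assumption about the convention, not stated in the text), the quoted GSE values translate into linear sensitivity ratios as follows:

```python
def gse_ratio_from_db(gse_db):
    """Convert a GSE/PLS figure in dB to a linear sensitivity ratio
    (sensing node vs. photodiode), assuming the 20*log10 convention."""
    return 10 ** (gse_db / 20.0)

print(gse_ratio_from_db(-88))    # ~4e-5, i.e. ~1 parasitic electron per 25,000
print(gse_ratio_from_db(-160))   # 1e-8, the 3D-stacked figure from ref. 14
```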
Future Advances of CMOS Imaging Technology
CCD technology is particularly suitable for time delay integration (TDI), the integration and synchronous summation of electrons with the scanning of the scene, which is relatively straightforward with a charge transfer device. This technique is primarily used to maximize the SNR while preserving adequate image definition (MTF).
Recent research in this area has involved several attempts to reproduce the signal summation in both the analog (voltage) domain18 and the digital domain, in an effort to promote CMOS TDI. For space earth observation, as well as for machine vision, the CCD time delay integration architecture is in high demand for its low noise and high sensitivity. The most promising results, however, were obtained by taking the best of both technologies and combining charge transfer registers with column-wise ADC converters on the basis of a CMOS process17.
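The SNR benefit of TDI summation can be sketched with illustrative numbers, assuming shot-noise-limited operation: N stages add signal linearly while shot noise grows only as the square root.

```python
import math

def tdi_snr(signal_per_stage, n_stages):
    """Shot-noise-limited SNR after summing n_stages TDI stages:
    signal adds linearly, shot noise as sqrt(signal), so SNR ~ sqrt(n)."""
    signal = signal_per_stage * n_stages
    shot_noise = math.sqrt(signal)
    return signal / shot_noise

print(tdi_snr(100, 1))    # 10.0 for a single stage
print(tdi_snr(100, 64))   # 80.0 -> 64 stages give an 8x SNR gain
```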
Despite the progress in this area of research, the sensitivity of CMOS image sensors remains limited by the readout noise when used for extremely low light applications (i.e. a few tens of μlux). For this reason, the scientific market has become increasingly interested in the EMCCD15, which operates with electron multiplication and can therefore drastically reduce the effective noise level.
As CCDs are gradually replaced by CMOS imagers, the EMCCD may similarly transition to the electron-multiplying CMOS (EMCMOS)20. Like the EMCCD before it, this technology is expected to improve image quality in extremely low light conditions for both scientific and surveillance applications. CMOS technology enables lighter and smarter systems with lower power consumption, all while being a less expensive option, particularly at large volumes (the so-called SWaP-C approach).
The main principle of electron multiplication is to apply a gain to the signal before the noise of the readout chain is added. In this manner, the input-referred noise is divided by this gain, ultimately improving the SNR. Owing to the CCD principle, the signal is transferred in the form of electron packets, and the multiplication is applied to each pixel charge prior to readout. In CMOS, the signal is in the voltage domain, meaning that the multiplication must be applied before the transfer to the floating node, before the noise of the source follower transistor is added16.
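A minimal sketch of this input-referred noise reduction, with illustrative values (the read noise and gain figures below are assumptions for the example):

```python
def input_referred_noise(read_noise_e, em_gain):
    """Electron multiplication applied before the readout chain divides the
    input-referred read noise by the multiplication gain."""
    return read_noise_e / em_gain

print(input_referred_noise(50.0, 1))     # 50.0 e-: conventional readout
print(input_referred_noise(50.0, 500))   # 0.1 e-: effective sub-electron noise
```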
Three-dimensional (3D) imaging, which incorporates depth information measurement, is growing in popularity and commonly utilizes Time-of-Flight (TOF) techniques. Originally described in 1995 as a “lock-in imager” for CCDs, the main principle involves emitting a pulsed artificial light source synchronized with the sensor. The reflected wave is then used in a correlation function to extract the distance. The first attempt to implement TOF in CMOS was inspired by the CCD pixel22; a second method involves the use of Current Assisted Photonic Demodulators (CAPD).
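For the pulsed (direct) variant, the distance extraction reduces to a round-trip delay calculation; a minimal sketch with an illustrative delay value:

```python
# Time-of-flight sketch: distance from the round-trip delay of a light pulse,
# d = c * t / 2. In the indirect (lock-in) method, the delay is obtained from
# the correlation/phase measurement described above.
C = 299_792_458.0   # speed of light, m/s

def distance_from_delay(t_seconds):
    # Halve the path: the light travels out to the target and back.
    return C * t_seconds / 2.0

print(distance_from_delay(20e-9))   # 20 ns round trip -> ~3.0 m
```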
Both of these approaches have led to a mass production of industrial 3D sensors for a variety of applications including people counting, safety control, metrology, industrial robotics, gesture recognition and automotive Advanced Driver Assistance Systems (ADAS). This is a typical example of how a concept that was originally developed on CCD technology has been further utilized and perfected on CMOS for high-volume industrial scale.
CMOS technology deployment has also branched out into several novel application fields. For example, the Single-Photon Avalanche Diode (SPAD) is a solid-state solution originally developed to replace photomultiplier tubes (PMTs), similar to the way vidicon tubes were replaced by CCDs in professional cameras during the 1980s. A SPAD is basically a p-n junction reverse-biased above the breakdown voltage, in the so-called Geiger mode. This makes the structure highly unstable, so that any energetic disturbance can trigger an avalanche effect, which can be exploited for single-photon detection.
Once triggered, the avalanche is stopped by quenching. Passive quenching is achieved by implementing a simple resistive component between the SPAD and the supply voltage; active quenching uses an embedded MOSFET to produce a digital signal representation of the quantum event. In principle, the SPAD is a simple structure based on CMOS technology that does not require the complex processes utilized for image sensors.
On the other hand, operating a SPAD array is a much more complicated process, since it requires complex CMOS circuitry. SPAD triggering and event counting are, by definition, asynchronous, like the arrival of the photons themselves, which makes the choice of CMOS technology judicious. For example, it is possible to perform a very quick scan of the pixel array to determine which pixels have transitioned; the assembly of these frames produces a video sequence23.
The early announcement of the end of CCDs was, at the time, a prophecy1. While it has proven to be true, the transition is taking considerably longer than originally anticipated. Furthermore, the variety and inventiveness of the pixel structures developed for CMOS imagers have surpassed expectations, made achievable by the downscaling of transistor geometries and the evolution of CMOS fabrication technology, which is now completely adapted to CIS production.
Major industrial imaging manufacturers still compete on price and electro-optical performance. In consumer cameras, advancing technology has greatly expanded the scope from simply taking pictures to capturing the best moments of people’s lives under any lighting conditions, regardless of the factors affecting image quality.
Industrial applications have also greatly benefited from these advances. The development of vision systems is increasingly based on imagers that follow consumer-driven trends, including the shrinking of pixels. Speed is also an important economic factor, since it maximizes the throughput of expensive production machines and automated processes and inspection.
Novel applications are also finding ways to further propagate the development of sensors to achieve extreme capabilities without tolerating any additional noise in the image. As a result, imaging technology, particularly CMOS technology, has greatly expanded past simple image capture and display to the development of 3D augmented reality, particularly in an effort to achieve a different perception of space.
References and Further Reading
- Active pixel sensors: Are CCDs dinosaurs?, E. R. Fossum, IS&T/SPIE Symposium on Electronic Imaging: Science and Technology, 1993.
- Sony rumored to discontinue production of CCD sensors, Vision Systems Design, 2015. http://www.vision-systems.com/articles/2015/03/sony-rumored-to-discontinue-production-of-ccd-sensors.html
- CCD vs. CMOS, Dave Litwiller, Photonics Spectra, 2001
- Determination of the optimal electrical bandwidth in CCD- and CMOS-based image detector applications, Robert H. Philbrick, SPIE 5499, Optical and Infrared Detectors for Astronomy, 2004.
- CMOS vs. CCD: Changing Technology to Suit HDTV Broadcast, Lester J. Kozlowski, 2003
- Fundamental performance differences between CMOS and CCD imagers: Part 1, James Janesick et al., SPIE 6276, High Energy, Optical, and Infrared Detectors for Astronomy II, 62760M, 2006.
- A 0.7 e-rms Temporal Readout Noise CMOS Image Sensor for Low Light Level Imaging, Y. Chen et al., IEEE International Solid-State Circuits Conference (ISSCC), 2012.
- L2CMOS Image Sensor for Low Light Vision, Pierre Fereyre et al., International Image Sensor Workshop, 2011.
- Night Vision CMOS Image Sensors Pixel for Sub-millilux Light Conditions, Amos Fenigstein, International Image Sensor Workshop, 2015.
- A Review of the Pinned Photodiode for CCD and CMOS Image Sensors, Eric R. Fossum et al., IEEE Journal of the Electron Devices Society, Vol. 2, No. 3, May 2014.
- No image lag photodiode structure in the interline CCD image sensor, N. Teranishi et al., International Electron Devices Meeting, Vol. 28, 1982.
- A 3D stacked CMOS image sensor with 16Mpixel global-shutter mode and 2Mpixel 10000fps mode using 4 million interconnections, Symposium on VLSI Circuits (VLSI Circuits), T. Kondo et al., pages C90 - C91, 2015
- The noise performance of electron multiplying charge-coupled devices, M. S. Robbins and B. J. Hadwen, IEEE Transactions on Electron Devices, Vol. 50, No. 5, pp. 1227–1232, May 2003.
- Electron Multiplying Device Made on a 180 nm Standard CMOS Imaging Technology, Pierre Fereyre et al., International Image Sensor Workshop, June 2015.
- First Measurements of True Charge Transfer TDI (Time Delay Integration) Using a Standard CMOS Technology, F. Mayer et al., International Conference on Space Optics, 2012.
- CMOS long linear array for space application, G. Lepage, Proc. SPIE 6068, Sensors, Cameras, and Systems for Scientific/Industrial Applications VII, 606807, 2006.
- Time-Delay-Integration Architectures in CMOS Image Sensors, G. Lepage et al., IEEE Transactions on Electron Devices, Vol. 56, No. 11, November 2009.
- A Charge-Multiplication CMOS Image Sensor Suitable for Low-Light-Level Imaging, R. Shimizu et al., IEEE Journal of Solid-State Circuits, Vol. 44, No. 12, pp. 3603-3608, December 2009.
- The lock-in CCD-two-dimensional synchronous detection of light, T. Spirig, P. Seitz et al., IEEE Journal of Quantum Electronics, Vol. 31, Iss. 9, p. 1705 – 1708, Sep 1995.
- Demodulation pixels in CCD and CMOS technologies for time-of-flight ranging Robert Lange et al., Proc. SPIE 3965, Sensors and Camera Systems for Scientific, Industrial, and Digital Photography Applications, 177 (May 15, 2000).
- 320x240 Oversampled Digital Single Photon Counting Image Sensor, N. A. W. Dutton et al., VLSI Circuits Digest of Technical Papers, 2014.
This information has been sourced, reviewed and adapted from materials provided by Teledyne E2V.
For more information on this source, please visit Teledyne E2V.