# CCD Reduction Help Page

Chris Churchill
Professor of Astronomy

### Preliminary Message to the Reader

This page presents a brief yet comprehensive overview of the electronic artifacts of a CCD data acquisition system that pollute the information captured on the CCD. You will learn the terms gain (g), read-noise (RN), digital number (DN), and analog-to-digital unit (ADU). You will also learn the difference between "amplifier bias" and "electronic/read-time bias". In what follows, the term "data frame" refers to the image on the CCD.

### CCD Artifacts

The artifacts commonly superimposed on data obtained with CCDs are either multiplicative or additive. If they are multiplicative, one corrects for the error by a division process. If they are additive, one corrects by subtraction. These include, but certainly are not limited to:
1) Analog to Digital Conversion
2) Amplifier Bias
3) Electronic (Read-Time) Bias
4) Thermal (Dark) Current
5) Pixel Gain Variations (flat field)
6) Saturation
7) Charge Transfer Efficiency

### Analog to Digital Conversion

There are two distinct items to consider in analog-to-digital conversion. (1) For each pixel, the read-out amplifier must convert a measured amount of charge (an analog process) into an integer number of electrons and then store that number as a multi-bit binary code (the digitization process). Both a read-noise and a discretization error arise from this process. The discretization error, usually +/- 0.5 electrons resulting from integer roundoff, is normally negligible compared to the read-noise; if it is to be budgeted, it is usually treated as a contribution to the "effective" read-noise (see below). (2) The data values output by a CCD system for each pixel are in units called DN (digital number) or ADU (analog-to-digital units); the two terms are commonly used interchangeably. The gain factor

g = (number of electrons detected)/DN

gives the conversion of DN to electrons. The number of electrons (proportional to the number of incident photons) is the physical quantity being measured, not the DN. Never make signal-to-noise (S/N) calculations in units of DN; always convert to electrons using the gain factor. Each pixel is effectively a separate "detector" with its own gain, but in general an "average" gain factor for the CCD is quoted and used for the purposes of the noise budget.

The read-noise (in electrons) should always be greater than the average gain factor (in electrons per DN), so that the error due to read-noise is resolved. For example, if g = 10 e/DN and RN = 5 e, the RN would not be resolved in DN: 10 electrons are required to register 1 DN, so there would be no way to distinguish the contribution of the RN to the digital number.
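The DN-to-electron conversion described above can be sketched in a few lines. The gain, read-noise, and pixel count below are illustrative numbers, not values from any real detector:

```python
import numpy as np

# Hypothetical detector parameters (illustrative assumptions, not a real CCD):
gain = 2.5        # e-/DN, the quoted "average" gain factor g
read_noise = 5.0  # e-, read-noise RN

counts_dn = 4000.0           # signal measured in a pixel, in DN
signal_e = counts_dn * gain  # always convert DN to electrons first

# Poisson noise on the detected electrons plus read-noise, in quadrature
noise_e = np.sqrt(signal_e + read_noise**2)
snr = signal_e / noise_e
print(f"S/N = {snr:.1f}")
```

Computing `snr` directly from `counts_dn` would understate the Poisson noise by a factor of sqrt(g), which is why the conversion to electrons comes first.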

### Amplifier Bias Level

It takes current to read current. The amplifier bias is not a noise source, but rather a measure of the electronic "zero" level that physically indicates zero photons counted on the CCD. One should think of it as a pedestal to be removed from the data to assure that they are restored to the number of electrons actually detected by the CCD. Usually, a region of the data frame is dedicated to storing this bias level: the amplifier current is read and stored on several columns before the CCD counts are transferred (the prescan region) and/or after the CCD data have been transferred (the overscan region). Some systems do not provide an overscan or prescan region. One should determine the mean level of the overscan columns and subtract this value from all pixels. If a gradient is present across rows, then a linear function may be used to model the overscan data.
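Both overscan treatments above (a single mean level, or a linear fit down the rows) can be sketched on a synthetic frame. The frame layout, bias level, and noise values here are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 100x110 frame: 100 data columns plus a 10-column overscan region
# (layout and bias level are illustrative, not from any real instrument)
bias_level = 500.0
frame = rng.normal(bias_level, 3.0, size=(100, 110))
frame[:, :100] += 2000.0            # exposed region: signal on top of the bias

overscan = frame[:, 100:]           # the last 10 columns hold only the bias

# Option 1: subtract a single mean overscan level from all pixels
frame_mean_sub = frame[:, :100] - overscan.mean()

# Option 2: if a gradient runs down the rows, fit a linear function
# to the row-averaged overscan and subtract the model row by row
rows = np.arange(frame.shape[0])
coeffs = np.polyfit(rows, overscan.mean(axis=1), 1)
frame_fit_sub = frame[:, :100] - np.polyval(coeffs, rows)[:, None]
```

Either way, the exposed region is left near its true signal level with the electronic pedestal removed.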

### Electronic (Read-Time) Bias

This is an additive noise source that arises from the simple fact that it takes time to read the CCD. During read-out, certain voltage generating/decrementing processes may systematically introduce structure across the data frame. Note that this is quite different from the amplifier bias; much confusion arises from loose usage of the term "bias". Electronic bias should be readily noticeable in so-called "bias frames" (t = 0 integrations). To increase the S/N at such low counts, one should co-average many individual bias frames to obtain a "calibration bias" frame.

One should always convince oneself that this calibration frame really should be subtracted from the data frame, i.e. that the read-out process truly does produce a bias pattern on the data frames. Inspect the calibration frame for a lack of bias:

(1) the mean pixel count (in DN) should be zero (following overscan correction!)
(2) the count distribution function should be Gaussian with width equal to the read-noise
(3) a histogram of the number of pixels versus count level should provide a visual "verification" of the distribution, its width, and any possible skew of the bias calibration data.
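The three checks above can be sketched on a simulated, overscan-corrected bias frame. The gain and read-noise values are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
gain, read_noise = 2.0, 6.0    # assumed values: e-/DN and e-

# Simulate an overscan-corrected bias frame: pure read-noise about zero, in DN
bias = rng.normal(0.0, read_noise / gain, size=(512, 512))

mean_dn = bias.mean()          # check (1): should be ~0 DN after overscan correction
width_e = bias.std() * gain    # check (2): width should equal the read-noise in e-
hist, edges = np.histogram(bias, bins=50)   # check (3): inspect for Gaussian shape/skew
print(f"mean = {mean_dn:.3f} DN, width = {width_e:.2f} e-")
```

If the mean is nonzero or the width clearly exceeds the read-noise, the calibration frame carries real electronic-bias structure and subtraction is justified.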

### Thermal (Dark) Current

This is additive noise that arises from one or both of two sources: (1) a general background of thermally generated electrons that is quite low (a few electrons per hour), and (2) locally "hot" pixels that display vastly higher rates of thermal electron generation. For both, the accumulation rate of electrons is proportional to exposure time and strongly sensitive to temperature (roughly ~ exp[-E/kT]). "Super-hot" pixels (those which saturate on the time scale of the read-out process) will create "hot" columns, since the pixel values downstream in that column pass through the hot pixel during the read process. Theoretically, dark current is linear with time. The treatment for dark current is to take *many* data frames with the shutter closed, with integration times appropriate to one's program. A "calibration" dark frame is then created by co-averaging and cosmic ray removal.

There are a number of philosophies regarding dark current and the construction of one or several dark calibration frames with different integration times. One can assume time linearity and either (1) scale a single calibration dark frame to the data frame exposure time, or (2) interpolate between several calibration dark frames to the exposure time of the data frame before subtracting off this scaled calibration dark frame. Or, one can avoid the assumption of time linearity altogether and obtain dark calibration frames with integration times the same as the data frame exposure times. The main concern is whether dark current subtraction should really be performed, since this process invariably leads to a reduction of S/N.
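Option (1) above, scaling a single calibration dark frame under the time-linearity assumption, can be sketched as follows. The dark rate, hot-pixel value, and exposure times are illustrative assumptions:

```python
import numpy as np

# Assuming dark current is linear in time, scale a calibration dark frame
# from its integration time to the data frame's exposure time.
dark_rate = np.full((64, 64), 0.02)   # e-/s per pixel, illustrative
dark_rate[10, 10] = 5.0               # one synthetic "hot" pixel

t_cal, t_data = 1200.0, 600.0         # seconds, assumed exposure times
dark_cal = dark_rate * t_cal          # the calibration dark frame
dark_scaled = dark_cal * (t_data / t_cal)   # scaled to the data exposure

data = np.full((64, 64), 300.0) + dark_rate * t_data  # signal + dark
data_dark_sub = data - dark_scaled    # dark-subtracted data frame
```

Note that the subtraction removes the hot pixel's dark contribution exactly only because this toy example is perfectly linear in time; real hot pixels are precisely where the linearity assumption is weakest.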

### Pixel Gain Variations

Gain variation from pixel to pixel is a multiplicative noise source which is due to (1) differences in wavelength-dependent quantum efficiencies (QE = efficiency of conversion of incident photons into electrons) from pixel to pixel or across the CCD, (2) fringing, a wavelength-dependent interference effect that creates global patterns across the frame (a serious yet subtle problem), and (3) dust particles and the like in the optical path that cause discernible and repeatable patterns on the data frame. A fourth problem may be quantum efficiency hysteresis, where the QE of a pixel depends upon its exposure history. The technique for removing these "gain" variations involves generating a calibration flat field frame in an attempt to scale the gain in each pixel to the "average" gain of the CCD.

Making a calibration flat field frame is highly application dependent. If one is attempting to remove pixel-to-pixel variations, one usually divides the data frame by a normalized calibration flat field (mean of 1 DN), where the difference between the value in a pixel and unity gives the fractional variation from the mean gain.
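The normalize-and-divide step can be sketched on synthetic data. The gain-variation amplitude and count levels are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic pixel-to-pixel gain variations of a few percent about unity
pixel_gains = rng.normal(1.0, 0.03, size=(128, 128))
flat = pixel_gains * 20000.0       # a well-exposed flat field, in DN

flat_norm = flat / flat.mean()     # normalize the flat to a mean of 1
data = pixel_gains * 1500.0        # data frame carrying the same gain pattern
data_flattened = data / flat_norm  # the pixel gain variations divide out
```

Because the gain variations are multiplicative, they cancel in the division, leaving a frame scaled to the "average" gain of the CCD.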

### Saturation

This is a situation in which information is totally lost. There are two types of saturation: (1) The A/D converter, which converts the number of counted electrons into a binary representation of DN, has a maximum value limited by the number of bits used to store the value, DN(max) = (2^[# of bits]) - 1. Usually, the number of bits is 15. If the A/D converter saturates, it holds at its constant maximum output value. What one sees in the image is not the usual (2^15) - 1 = 32767, but roughly 29300 DN, since the amplifier bias level (as determined from the overscan column) is subtracted out on-line. (2) The "well depth", the total number of electrons that a single pixel can store for later read-out, has a maximum. When a pixel reaches this maximum, no more photo-generated electrons can be stored in that pixel (effectively QE = 0), though electrons are still liberated by the incident photons. In this case, the excess liberated charge from the saturated pixel may spill over to adjacent pixels up and down the column, since the potential barriers are smallest in that direction. If the pixels are being saturated at a very high rate, enough charge may spill out to flow between columns too. The result is a streaming along rows and columns, and the degradation of nearby signal.

Most systems are designed so that the A/D converter saturates before the CCD wells saturate (because the gain is set high enough to properly sample the read-noise), meaning that the CCD full well is much larger than the A/D converter's maximum count capacity. This has the added benefit that one remains within the linear response range of the CCD (meaning the gain is independent of the number of detected electrons) all the way until A/D saturation occurs.
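The A/D ceiling arithmetic above can be sketched directly, along with a simple saturation mask. The on-line bias value is an illustrative assumption chosen so the ceiling lands near the ~29300 DN quoted above:

```python
import numpy as np

# DN(max) for a 15-bit A/D converter
n_bits = 15
dn_max = 2**n_bits - 1           # 32767

# With an assumed on-line amplifier-bias subtraction of ~3467 DN, the
# saturation ceiling actually seen in the image is lower than DN(max):
bias_online = 3467.0
ceiling = dn_max - bias_online   # ~29300 DN

frame = np.array([[100.0, 32000.0],
                  [29400.0, 500.0]])
saturated = frame >= ceiling     # boolean mask of A/D-saturated pixels
```

Flagging pixels at or above this ceiling (rather than at DN(max)) is what catches on-line bias-subtracted saturation.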

### Charge Transfer Efficiency

The charge transfer efficiency is the fraction of electrons that are successfully passed on to the next position during the horizontal and vertical shifting associated with the read-out process. Many CCDs have CTE in the 0.99995 (!) range; still, this can leave up to 10% of the charge in "downstream" pixels of a 2048x2048 CCD. Bad CTE results in minor smearing, with the smears pointing away from the read-out edge of the CCD. If the CTE is low for a given row, or set of pixels, the smearing effect can be quite devastating. Though corrections may be applied, it is safest to keep the science away from these pixels.
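The "~10% left downstream" figure follows from compounding the per-transfer loss over the thousands of shifts a charge packet makes. Assuming roughly 2048 transfers along one axis of a 2048x2048 CCD:

```python
# Compounding CTE over many transfers: each shift keeps a fraction `cte`
# of the packet, so the fraction surviving n transfers is cte**n.
cte = 0.99995
n_transfers = 2048                    # shifts across one axis of a 2048x2048 CCD
fraction_kept = cte**n_transfers
fraction_lost = 1.0 - fraction_kept   # charge trailed into downstream pixels
print(f"{fraction_lost:.1%}")
```

A pixel far from the read-out corner makes transfers along both axes, so its total loss can be roughly twice this single-axis figure.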