Imagers and Imaging
a simple optical imager
Here's one on our 61-Inch Telescope.
The same imager with its main components labeled: the filter wheel, the dewar, and the preamplifier.
However, to get a large field we cannot afford to just buy more CCDs and keep mounting them at the Cassegrain focus - we have to provide a faster beam to shrink the projected pixel scale. Here is the 90-Prime camera as an example.
Most of the instrument volume is a series of lenses that map the field onto a mosaic of four 4K X 4K CCDs with 15 µm pixels at a pixel scale of 0.45″/pixel. The final f/ratio is 2.98 and the field of view is 1.16 X 1.16 degrees (when all the CCDs are working). The primary f/ratio is 2.67 - why does it have to be re-imaged?
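As a quick sanity check on those numbers, the pixel scale follows directly from the f/ratio, the aperture, and the pixel pitch. A minimal sketch (assuming the 90-inch, about 2.29 m, aperture of the Bok telescope):

```python
# Plate-scale check for 90-Prime (numbers from the slide above).
D_mm = 90 * 25.4        # 90-inch aperture in mm (assumed: the Bok telescope)
f_ratio = 2.98          # final f/ratio after the reimaging lenses
pixel_um = 15.0         # CCD pixel pitch in microns

focal_length_mm = f_ratio * D_mm                 # effective focal length
arcsec_per_mm = 206265.0 / focal_length_mm       # plate scale at the focal plane
arcsec_per_pixel = arcsec_per_mm * pixel_um / 1000.0

print(f"{arcsec_per_pixel:.2f} arcsec/pixel")    # ~0.45, matching the slide
```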
Here is a cutaway drawing of a dewar similar to the one on 90-Prime. The radiation shield is important to reduce thermal loads.
The requirement for cold baffling makes an infrared camera more complex. This is the Gemini near-infrared camera. The detector is a 1024 X 1024 InSb array with 25 µm pixels, with projected scales from 0.117″/pixel (shown) to 0.022″/pixel.
The optics are folded to fit within a compact dewar.
MIRI (and VISIR on the VLT) has an all-reflective optical design - a folded three-mirror anastigmat that forms an accessible pupil - which gives good images over a reasonably large field.
The optics are folded to be extremely compact.
NIRCam overall layout; there are two identical cameras, to give maximum reliability (no single failure within the camera can cause loss of the NIRCam capabilities). An optical wedge brings the coronagraphs into the field without sacrificing FOV.
Each camera has a short and long wavelength arm for efficiency. The first optics stage is shared between the arms and the optics are folded for compactness.
MegaCam - a Cassegrain imager on the MMT
The MegaCam focal plane. The pixels project to 0.1″ on the sky.
Now here's a BIG camera!
The optics look pretty familiar, though.
The images are very uniform in terms of encircled energy, but the detailed image shapes change substantially across the field.
Here is the focal plane. The extra, smaller CCDs are used for guiding and for out-of-focus image analysis.
Some CCD Specs:
QE > 85%
CTE > 0.99999
Read noise < 5 e-
Surface flatness < 7 µm peak-to-valley
f/ratio: 1.41
80% of the energy falls within one pixel
Pixel scale: 0.23″/pixel
FOV ~ 23 arcmin
Holmberg IX
OK, it's a lovely image, but how do we get the most out of it? Here is a menagerie of problems:
Pixel scale = 0.23″/pixel: how was it set? Here is the MTF for the array (this should be familiar). Pixels need to be about half the FWHM of the image, or information is lost. Some of the lost information can be recovered by dithering on a sub-pixel scale.
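To make the sampling criterion concrete, here is a minimal sketch (the 1.0 arcsec seeing FWHM is just an assumed example, not a number from the slide):

```python
# Rough Nyquist check: pixels should be about half the delivered image FWHM.
seeing_fwhm = 1.0      # arcsec, assumed image FWHM for illustration
pixel_scale = 0.23     # arcsec/pixel, from the slide

nyquist_pixel = seeing_fwhm / 2.0
if pixel_scale <= nyquist_pixel:
    print(f"{pixel_scale} arcsec/pixel samples a {seeing_fwhm} arcsec FWHM at or better than Nyquist")
else:
    print("Undersampled: some information is lost unless sub-pixel dithering is used")
```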
Because standard optical designs put the filters near the focal plane, they make stringent demands on filter uniformity; even if the filters are uniform, dirt (or bugs) on them and reflections can be problems. Such issues are best dealt with by a lot of dithering while taking the images.
Infrared cameras often provide a better solution by putting the filters at a pupil.
Imagers typically have distortion; here is the result for the MIRI imager. This is typically fitted with polynomials (often based on the optical design) and removed in software.
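As an illustration of how such a correction can be set up, here is a least-squares polynomial-fit sketch (the function names and the third-order choice are arbitrary, not from any particular pipeline):

```python
import numpy as np

def fit_distortion(x, y, x_ref, y_ref, order=3):
    """Fit 2-D polynomials mapping measured pixel positions (x, y) onto
    distortion-free reference positions (x_ref, y_ref)."""
    # Design matrix of monomials x**i * y**j with i + j <= order
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])
    cx, *_ = np.linalg.lstsq(A, x_ref, rcond=None)
    cy, *_ = np.linalg.lstsq(A, y_ref, rcond=None)
    return terms, cx, cy

def undistort(x, y, terms, cx, cy):
    """Apply the fitted polynomials to remove the distortion."""
    A = np.column_stack([x**i * y**j for i, j in terms])
    return A @ cx, A @ cy
```

In practice the starting coefficients often come from the optical design and are then refined with measured star positions.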
More Problems: Ghost images
Response gaps between pixels; response variations across a pixel.
Latent images, fringing, bleeding, and cross talk from one pixel to another.
Electronic ghosts, amplifier glow, the pedestal effect.
Hot and dead pixels
Fixed patterns due to readout and array growth processes (see the Orion 2K X 2K InSb arrays to the right)
Cosmic ray hits
Amplifier transients
Nonlinearity and soft saturation
Photon-emitting defects (PEDs) (see the Orion arrays to the right)
Thermal drifts
Freeze-out of charge carriers
Don't Despair!!
Repetition is important:
- Lets you remove transient signals like cosmic ray hits and amplifier transients
Not changing things is important:
- Many array artifacts like pedestal effects and MUX glow can be removed almost perfectly if the observing conditions are not changed
- Thermal drifts are minimized by keeping a constant cadence on the array
Dithering is also important - putting the signal on a variety of pixels:
- Lets you replace bad pixels with good data
- Allows generating calibration frames from the sky, which can be the best kind of calibration
- Can allow you to identify and remove latent images
Reference pixels (IR arrays) or overscan (CCDs) may help solve some problems:
- They let you track the behavior of the readout electronics and fix it in data reduction (a minimal overscan sketch follows this list)
- However, they are not a cure-all, and in many cases they behave sufficiently differently from the live pixels that they are not helpful
Pixel scale is important:
- Coarse pixel scales (relative to Nyquist) lose information irretrievably
- Coarse pixels make it difficult to remove intra-pixel sensitivity variations, make your measurements susceptible to inter-pixel gaps, make cross talk a more significant problem, and so forth
- Fine pixels limit the field of view and may increase the effective noise
Information loss with undersampled data:
- Intra-pixel sensitivity variations can make your results ambiguous; standard reductions will miss intra-pixel sensitivity variations (left) and may not give correct results when imaging in spectral lines with fringing in the detector (right)
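As an illustration of the overscan idea for CCDs, here is a minimal sketch (the 32-column overscan geometry is an assumed example; real controllers differ):

```python
import numpy as np

def overscan_correct(raw, overscan):
    """Remove the readout-electronics offset estimated from the overscan
    region.  `overscan` is a column slice of physically unexposed pixels."""
    bias_level = np.median(raw[:, overscan], axis=1)   # one offset per row
    corrected = raw - bias_level[:, None]              # subtract from every column
    return corrected[:, :overscan.start]               # trim the overscan columns

# Assumed geometry: a 2048-column image followed by 32 overscan columns
# science = overscan_correct(raw_frame, slice(2048, 2080))
```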
Assuming you have done everything right, you still need to calibrate your raw data. Calibration must take into account the differing properties of the detectors in the array:
- pixel-to-pixel variations in amplifier offset
- pixel-to-pixel variations in dark current
- pixel-to-pixel variations in responsivity
Three unknowns require three sets of calibration data:
- Offset frame (sometimes called a bias frame): very short exposure, no signals
- Dark current frame: long exposure, no signals
- Response frame (sometimes called a flat field): uniform illumination
Image data reduction then consists of:
1. Subtract the offset from the data, dark, and response frames to obtain offset-corrected data, dark, and response frames.
2. Scale the dark to the exposure times of the data and response frames and subtract it from each to get dark-corrected data and response frames.
3. Divide the data by the response.
The result: if the data frame has a uniform exposure, the product will be a uniform image at a level corresponding to the ratio of the exposure on the data frame to the exposure on the response frame (exposure = level of illumination multiplied by the exposure time). Sources will appear on top of this uniform background.
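That recipe translates almost directly into code. A sketch, assuming the frames are numpy arrays and the exposure times are known (all names here are placeholders):

```python
import numpy as np

def calibrate(data, offset, dark, flat, t_data, t_dark, t_flat):
    """Offset, dark, and response correction following the steps above."""
    data = np.asarray(data, dtype=float)
    dark = np.asarray(dark, dtype=float)
    flat = np.asarray(flat, dtype=float)

    # 1. Subtract the offset (bias) from the data, dark, and response frames
    data_c = data - offset
    dark_c = dark - offset
    flat_c = flat - offset

    # 2. Scale the dark current to each exposure time and subtract it
    data_c = data_c - dark_c * (t_data / t_dark)
    flat_c = flat_c - dark_c * (t_flat / t_dark)

    # 3. Divide the data by the response; a uniformly exposed data frame comes
    #    out flat at the ratio of the data exposure to the response exposure
    return data_c / flat_c
```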
For best results:
- Dark current and response frames may need to be obtained close in time to the data frames
- It may be necessary to use identical integration times for the dark, response, and data frames
- Response and data frames should be taken with illumination of identical spectral character
- You need a minimum of 3 frames on source; 5 or more is better, to be sure there are no transient bad pixels (e.g., cosmic ray hits)
- Permanent bad pixels can be masked out by replacing their values with the average of those from surrounding pixels. However, doing so can give bad data that looks good. It is better to take multiple exposures and move the source on the array between them, so that the source structure is filled in entirely with good data; bad pixels then just reduce the effective integration time at some points in the image.
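For completeness, here is a sketch of the neighbor-averaging patch mentioned above (keep its limitation in mind: patched pixels contain no real information about the source):

```python
import numpy as np

def patch_bad_pixels(image, bad_mask):
    """Replace flagged pixels with the mean of their good 3x3 neighbors."""
    patched = image.copy()
    ny, nx = image.shape
    for y, x in zip(*np.nonzero(bad_mask)):
        y0, y1 = max(y - 1, 0), min(y + 2, ny)
        x0, x1 = max(x - 1, 0), min(x + 2, nx)
        neighbors = image[y0:y1, x0:x1]
        good = ~bad_mask[y0:y1, x0:x1]
        if good.any():
            patched[y, x] = neighbors[good].mean()
    return patched
```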
A good strategy for imaging is:
- Take repeated exposures of the field, moving the source on the array between exposures
- Generate the response frame by a median average of these frames; the sources disappear because they do not appear at the same place on any two frames
- Obtain dark frames with the same exposure time as used for the data and response frames
- Subtract the dark from the data and response frames (this also takes out the offset); divide the corrected data by the corrected response
- Shift the frames to correct for frame-to-frame image motions
- Median average again to eliminate bad pixels and cosmic rays, while gaining signal to noise on the source image
A more detailed description has been posted on the course syllabus! Once you have gotten to this point, you can use the image to measure positions (astrometry - next lecture) or photometry (coming soon).
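Here is a sketch of that strategy in code. The dither offsets are assumed to be known, and the dark frame is assumed to have the same exposure time as the data, as described above:

```python
import numpy as np
from scipy.ndimage import shift

def reduce_dithered_stack(frames, dark, offsets):
    """Dark-subtract, flat-field from the dithered stack itself, register,
    and median-combine a set of dithered exposures of the same field."""
    frames = np.asarray(frames, dtype=float) - dark   # same exposure time assumed

    # Response (sky flat) from the dithered frames: the median removes the
    # sources because they never land on the same pixels in two frames
    response = np.median(frames, axis=0)
    response /= np.median(response)
    calibrated = frames / response

    # Shift each frame back to the reference pointing, then median-combine to
    # reject bad pixels and cosmic rays while building up signal to noise
    registered = [shift(f, off, order=1, mode="nearest")
                  for f, off in zip(calibrated, offsets)]
    return np.median(registered, axis=0)
```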