IFU: Data Reduction

Introduction
The pipeline reduction through ORAC-DR will produce a datacube from the IFU spectral images, which can be viewed in Gaia and manipulated using the ‘Datacube’ software developed for Starlink by Alasdair Allan, or with various KAPPA and FIGARO routines (described below). The template observing sequences discussed above will already contain the appropriate DR recipes. You should not normally need to change the recipe. If you do, please be careful, as many recipes have specific requirements (e.g. flat fields and standards) which must be acquired before a target observation is obtained and reduced on-line. Tables of available DR recipes, with links to detailed descriptions of them, are available.
Below we give an overview of the pipeline and offer some tips on post-pipeline reduction. The IFU version of ORAC-DR is also described in a Starlink User Note, which can be accessed by typing showme sun246 (if you’re at a Starlink site) or from the Starlink homepage: SUN/246
Running ORAC-DR
1. To run ORAC-DR at the telescope type the following:
oracdr_uist
oracdr -loop flag
2. To run ORAC-DR at your home institute type the following:
oracdr_uist 20071225
setenv ORAC_DATA_IN /home/cdavis/my/raw/data/
setenv ORAC_DATA_OUT /home/cdavis/my/reduced/data/
oracdr -list 1:100 &
When running ORAC-DR at your home institution, the pipeline needs to know when the data were taken (since files are labelled with the UT date), where the raw data are, and where reduced data are to be written. Obviously these directories need to exist on your machine, and the raw data need to be in the specified directory. In the above example, frames 1 to 100 from 20071225 will be reduced.
Note that the “array test” observations taken at the start of the night MUST be reduced BEFORE you try to reduce a block of your own data. Use the -list option to specify the data range. For example, if frames 1-19 were darks taken to measure the readnoise and set up a bad-pixel mask (in the log the DR recipe will be set to MEASURE_READNOISE and DARK_AND_BPM), and frames 38, 39 and 40-43 were the flat, the arc and an ABBA-style “quad” of observations of your standard star, you could type the following:
oracdr -list 1:19,38:43 &
Having reduced the array tests and flat/arc/standard star calibrations, you can then reduce your target observations, e.g.
oracdr -from 44 &
When running ORAC-DR, several windows will open as they are needed: an ORAC text display, GAIA windows and kapview 1-D spectral-plotting windows. If you are at the telescope, the pipeline will reduce the data as they are stored to disk, using the recipe name in the image header.
The DR recipe name stored in each file header will be used by the pipeline. However, this can be overridden if, for example, you decide you do not want to ratio the target spectra by a standard star, e.g.:
oracdr EXTENDED_SOURCE_NOSTD -list 31:38
where 31 to 38 were the eight observations of the target.
To exit (or abort) ORAC-DR, click on EXIT in the text log window, or type ctrl-c in the xterm. The command oracdr_nuke can be used to kill all DR-related processes, should you be having problems.
Help on this and other ORAC-DR topics is available by typing
oracdr -help
Notes on what the pipeline actually does are given below…
Flats, Arcs and Standard Stars…
Observers should always take a flat as the first IFU frame. If the flat is not observed, the DR will fail. The flat is used not only for flat-fielding but also to locate the slices on the array. An arc is required next so that subsequent frames can be scrunched to a common wavelength scale (note that the DR cannot currently handle arcs from the Krypton lamp; please use only the Argon lamp). It may sometimes be necessary or convenient to postpone observation of a standard star until after observing your object. In this case it is possible to run the _NOSTD versions of the recipes.
There are some tips on how to deal with bad wavelength calibration later on this page…
An Overview of IFU Data Reduction
The IFU pipeline DR will wavelength-calibrate and “scrunch” (align arc or sky lines in the spectral images) spectral images, extract spectral “slices”, and compile a datacube. At present the most important/useful displayed data products are the “white-light” image (the gu(UTdate)_(num)_im frame) and the scrunched spectral image (the gu(UTdate)_(num) frame). Both will appear in a Gaia window. Note, however, that by specifying an “extract.images” file, images across narrow wavelength ranges can also be extracted from the data (see below). For standard stars, a wavelength-calibrated spectrum will also be extracted and displayed.
The extracted spectra and spectral images will be continually updated as data are taken, so that the observer can follow the increase in signal-to-noise on their source with time. Note that extended, line-emission sources may be more clearly seen in the scrunched spectral image, while continuum and/or point sources should show up nicely in the white-light image. Don’t expect to see an emission-line object in a white-light image; there will probably be too much noise from the background, so you’ll need to set up the extract.images file to see an image of such a target (discussed below).
The pages listed below give more details, and show example images of what you might expect to see at each stage of the reduction of IFU data.
Reducing flat field frames
Reducing arc frames
Reducing observations of a standard star
Reducing observations of an extended source
Scrunched Spectral Images and White-Light Images
As already mentioned, arguably the most important data products from the DR are the “scrunched” spectral image and the white light image. By viewing these a user can be confident that s/he has (1) detected the source and/or its emission lines, and (2) centred the target in the 3.3″x6.0″ IFU FOV.
Because the spectra in a “raw” IFU spectral image are jumbled on the array (below-left), one of the first things the DR does is rearrange the spectral slices (below-centre). For a bright target, the standard star say, this “scrunched” image tells the observer that the target is well centred left-right within the available 3.3″, because the spectra in the scrunched image are distributed about the centre of the array. However, in many cases it’s impossible to see the edges of each IFU slice, so one can’t easily establish that the star is centred up-down. Observers should therefore consult the white-light image (below-right) to make sure that the target is well centred in both axes.
RAW image (left), SCRUNCHED image (centre) and WHITE-LIGHT image (right).
As a general rule of thumb – in the white-light image:
- if the target is TOO LOW, the telescope should be moved UP
- if the target is TOO FAR TO THE LEFT, the telescope should be moved LEFT
Further example spectral images, of flats, arcs, point sources and, in particular, extended emission-line targets, are given in the previous section.
Choosing what should be displayed – the “extract.images” file
The white-light image created automatically by the pipeline (the file with the suffix _im) simply represents the full datacube collapsed over its entire wavelength range. Consequently, it’s unlikely that it will show a pure line-emission object, since it includes the noise from the whole array.
Instead, users can “instruct” the pipeline to extract images across a narrow wavelength range by using an “extract.images” file. This simple ascii text file must be written to the reduced data directory before the DR is run. An example is shown below.
# An example extract.images file
# Extract a broad-band K image
K 2.1 2.3
# and a continuum-subtracted H_2 1-0 S(1) image
S1 2.1208 2.1228 2.1250 2.1270
Each line of the file should contain a suffix for the output image file and either two or four wavelengths (in microns). If a line contains two wavelengths then an image will be extracted from the cube between those wavelengths. If four wavelengths are given then two images will be extracted and the second will be subtracted from the first (for continuum-subtraction, say). Any lines in the file beginning with # are ignored.
When the DR is run with the extract.images file shown above present in the reduced data directory ($ORAC_DATA_OUT), the DR will produce files with the suffixes “_S1” and “_K”. The S1 image will be continuum-subtracted, since emission over the second wavelength range will be subtracted from the image over the first wavelength range. Examples of extracted images are shown in the previous section on reducing observations of an extended source.
To view the extracted images simply open them in Gaia – you’ll probably need to zoom in with the “Z” button…
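For example (the group file name here is purely hypothetical), the continuum-subtracted image could be opened from the command line with:
gaia gu20071225_100_S1 &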
Getting the right aspect ratio…
Note that at present the white-light and extracted images displayed by ORAC-DR will appear compressed by a factor of two in the X direction. This is due to the non-square (0.24 x 0.12 arcsec) pixels of the IFU.
Images can be displayed with the correct aspect ratio by running KAPPA:DISPLAY (type “kappa” then “kaphelp” for info) with the options xmagn=2 ymagn=1 specified on the command line, or of course by binning in one dimension to give images with square 0.24-arcsec pixels. Alternatively, try KAPPA:PIXDUPE. With expansion factors of 2,1, each pixel in X is copied to two pixels, so that when the processed image is displayed in Gaia: (1) the image dimensions are correct, (2) the dimensions of each pixel look correct (0.24″ x 0.12″), (3) resolution is not lost along the long axis (it remains 0.12″), and finally (4) flux calibration (on a pixel-by-pixel basis) is retained.
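As a rough sketch of both approaches (the image name gu20071225_100_im and the _imx output name are hypothetical, and the PIXDUPE parameters should be checked against the KAPPA documentation):
kappa
display gu20071225_100_im xmagn=2 ymagn=1
pixdupe in=gu20071225_100_im out=gu20071225_100_imx expand=[2,1]
The pixel-duplicated image can then be opened in Gaia as usual.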
Examining Datacubes in GAIA
Gaia is a very powerful image analysis tool routinely used at UKIRT with all of our instruments. It now (as of December ’05) includes a tool for examining data cubes.
If Gaia is already open, use “open cube” under the “File” menu; alternatively, open an IFU cube in Gaia from the command line (e.g. > gaia gu20050101_100_cube).
The cube control panel should open automatically. Any two of the three dimensions of the cube may be displayed, although typically axes one and two (the spatial dimensions) are displayed and axis three (wavelength) is used for the animation. With axis three selected (as is the case below), the cube control panel allows users to run through the cube between the wavelengths (coordinates) specified under Animation controls, or display (and save) a collapsed image extracted from the cube between the wavelengths entered under the Collapse tab.

(Screenshot of the Gaia cube control panel: 1.40 and 2.50 are the start and end wavelengths of the HK grism in UIST, and of course UIST has a 1024-pixel array.)
The Starlink DATACUBE library of routines may also be used to examine IFU cubes. This tool is briefly described below.
EXTENDED_SOURCE versus MAP_EXTENDED_SOURCE
Three observing methods are currently in use with the IFU. The first, and simplest, called “nod to blank sky”, uses the EXTENDED_SOURCE recipe, which is similar to a spectroscopy quad in that 4 frames are observed in the order object-sky-sky-object, the object frames being at the same position on the source and the sky frames being on blank regions well off the target. This quad can be repeated many times to build up signal-to-noise.
However, because essentially all of the array is used with the IFU, with long exposure and repeat observations, cosmic ray hits and bad pixels can become a nuisance. Therefore, a second, similar mode, called “nod to blank sky with jitters”, is now available. In this mode, the same object-sky-sky-object quad is repeated. However, the object frames are slightly jittered, by one or two pixels. Because the object frames are not all spatially coincident, a recipe that takes into account the jittering is required: MAP_EXTENDED_SOURCE. An example offset sequence is listed below:
0",0" 60",0" 60",0" 0",0.12" 0",-0.12" 60",0" 60",0" 0",0"
Finally, for targets that over-fill the 3.3″x6.0″ field-of-view of the IFU, a third observing mode, “map extended source”, is available. Here the IFU is stepped across the extended target so that the resulting cube is much larger than the nominal 3.3″x6.0″ field of view. Again, the MAP_EXTENDED_SOURCE recipe must be used. An example offset sequence is shown below:
60",60" 1.5",0" 1.5",0" 60",60" 60",60" -1.5",0" -1.5",0" 60",60"
Note that the first (“p”) offset is always orthogonal to the long-axis of the IFU, regardless of position angle. Consequently, with the above sequence the IFU will be offset 1.5″ either side of the nominal centre of the target, so that an almost square 6″x6″ field is observed.
By necessity, the MAP_EXTENDED_SOURCE recipe deals with the data in a slightly different manner to the EXTENDED_SOURCE recipe. A scrunched spectral image containing all of the data is NOT produced, since not all object frames are centred on the same position on the target (as is the case with the EXTENDED_SOURCE recipe and observing mode). However, scrunched images from individual object-sky pairs are produced and left on disk for you to view in Gaia (suffix _scr). These are wavelength-calibrated, so use these to check wavelength-ranges for your extract.images file. Also, by adding the appropriate _scr frames together, you should be able to see all the data at a given position on the target and establish the true depth of the data (are those faint lines there?). (Note however that these scrunched spectral images are NOT divided by a standard.)
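For example, a pair of _scr frames taken at the same jitter position could be co-added with KAPPA:ADD (the file names below are hypothetical):
add in1=gu20071225_50_scr in2=gu20071225_54_scr out=scr_coadd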
A data cube is of course produced by MAP_EXTENDED_SOURCE from all of the data (as is a cube that has been divided by the standard star cube and flux calibrated). The X and Y axes of the cube will obviously be larger than the usual 3.3″x6.0″ field, depending on the offsets used.
Finally, extracted images are also produced from the flux-calibrated data cube; again these will cover larger spatial areas on the sky, depending on the offsets used. Similar images, though over narrower wavelength ranges (for specific emission lines), can be acquired by running the recipe with an extract.images file in the reduced data directory, as described in the previous section.
The meaning of all suffixes used are given on the next page (data format).
Post-reduction of IFU data with Starlink Software
The pipeline DR probably goes most, if not all, of the way to reducing your data. However, there may be some additional steps that you wish to take. Starlink KAPPA and FIGARO routines are available which should facilitate this. Here we give some suggestions:
Creating your own Cubes
The pipeline flags deviant pixels (identified in the array tests run at the start of the night) as bad. However, it does not try to fill in these pixels, nor does it deal with cosmic-ray hits. Consequently, with long exposures and the IFU’s use of the full array, bad pixels can become a problem. If you find that you need to clean up a scrunched spectral image, Starlink software may then be used to create a datacube from the cleaned 2-D spectral image. The KAPPA routines ndfcopy, slide and paste can be used to do this, provided the locations of the slices in the 2-D image are known, along with the shifts needed to align columns in the final cube. Notes on how to flux-calibrate the cleaned cube with standard-star observations are given below.
An example script, provided by Kris Lowe, is available here. The script includes the slice locations and shifts mentioned above.
Division by a standard and flux-calibration
To do the flux-calibration yourself, first reduce the science target data with the EXTENDED_SOURCE_NOSTD recipe. This will yield a flat-fielded, sky-subtracted, and wavelength-calibrated data cube.
Next, create a similar-sized cube from the standard-star observations. Reduce the standard star data as normal with the ORAC-DR routine STANDARD_STAR. From the scrunched (wavelength-calibrated) spectral image, extract six or seven spectra (optimal extraction works best; e.g. FIGARO:PROFILE and FIGARO:OPTEXTRACT) and coadd these. This spectrum can then be cleaned (e.g. FIGARO:ISEDIT), flux-scaled and divided by a blackbody function (FIGARO:BBODY) before it is “grown” into a data cube.
The Figaro commands to grow a 1-D spectrum into a 3-D cube are:
growx spectrum=standard_spec new=true image=temp ystart=1 yend=54 ysize=54
growyt image=temp new=true cube=stdcube xstart=1 xend=14 xsize=14
“growx” will create an image by copying the spectrum to 54 adjacent rows. “growyt” then copies this image plane to 14 adjacent image planes to give a cube with dimensions 14 pixels by 54 pixels, matching the 0.24″ x 0.12″ pixel sampling of the nominal 3.3″ x 6.0″ IFU field-of-view. The standard star cube can then simply be divided into the science target data cube (KAPPA:DIV).
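The final division is then a single KAPPA command (the cube names below are hypothetical):
div in1=object_cube in2=stdcube out=object_cube_fc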
An example c-shell script is shown here.
Extracting images and spectra
By careful examination of extracted images, spectra covering a limited area across a target can be extracted from a fully-reduced data cube. Likewise, by examining a scrunched and therefore wavelength-calibrated spectral image of a target, images in individual emission lines or over limited wavelength ranges can be extracted. For the latter, note that the spectral resolution with the IFU is two pixels, so extraction over four pixels in wavelength space (twice the FWHM) will probably give best results.
To extract an image from a reduced data cube, use FIGARO:XYPLANE, e.g.
xyplane cube=data_cube image=data_cube_brgamma
The “xyplane” routine will prompt for the wavelength range for the extraction. By extracting images over adjacent continuum wavelengths and taking the mean of two images extracted from the red and blue sides of the emission line, a continuum image that takes into account the colour of the continuum slope can be created and subtracted from the line image.
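A sketch of that continuum subtraction using KAPPA (all file names are hypothetical):
add in1=cont_blue in2=cont_red out=cont_sum
cdiv in=cont_sum scalar=2 out=cont_mean
sub in1=line_image in2=cont_mean out=line_contsub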
To extract a spectrum from a reduced data cube, first use FIGARO:XTPLANE to extract a 2-D image plane, then FIGARO:YSTRACT to extract and coadd adjacent rows to give a 1-D spectrum, e.g.
xtplane cube=data_cube image=tempplane
ystract image=tempplane spectrum=nice_spectrum
Again, the two commands will prompt for the spatial ranges over which to extract data. Look at an image of the target with the 3.3″x6.0″ field orientated with the long axis in Y (as shown below): xtplane requires the x-axis range; ystract the y-axis range.

An example c-shell script for extracting images and/or spectra is shown here.
Alternatively, use the KAPPA routines COLLAPSE or NDFCOPY to collapse a cube into a 1-D spectrum. Note that KAPPA has been better supported than FIGARO in recent years.
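As a sketch, summing over both spatial axes with COLLAPSE would look something like this (the cube name is hypothetical, and the axis numbers assume the usual x, y, wavelength ordering; after the first collapse the remaining spatial axis becomes axis 1):
collapse in=data_cube out=temp_yw axis=1 estimator=sum
collapse in=temp_yw out=total_spectrum axis=1 estimator=sum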
Datacube – the IFU Data Handling Software
Finally, “DATACUBE”, Starlink software written specifically for use with IFU data, can be used to analyse a fully-reduced data cube. For example, the STEP routine can be used to step through a series of images extracted from the cube. Each image is also saved to disk as an individual sdf file (chunk_1.sdf, chunk_2.sdf, etc.), e.g.:
> datacube

   DATACUBE applications are now available -- (Version 1.0-4)

   Support is available by emailing datacube@star.rl.ac.uk
   Type cubehelp for help on DATACUBE commands.
   Type 'showme sun237' to browse the hypertext documentation
   or 'showme sc16' to consult the IFU data product cookbook.

> step -p
NDF input file: ifu_20031016_91_cube

Input NDF:
   File: ifu_20031016_91_cube.sdf
Shape:
   No. of dimensions: 3
   Dimension size(s): 14 x 53 x 1024
   Pixel bounds     : 1:14, 1:53, 1:1024
   Total pixels     : 759808
   Lambda bounds    : 1:2.4999

Lower lambda bound: 2.1
Upper lambda bound: 2.4
Lambda step size: 0.1

Stepping:
   Range: 2.1 - 2.4
   Step: 0.1
Collapsing:
   White light image: 14 x 53
   Wavelength range: 2.1 - 2.2
Output NDF:
   File: chunk_1.sdf
   Title: Setting to 2.1 - 2.2
Collapsing:
   White light image: 14 x 53
   Wavelength range: 2.2 - 2.3
Output NDF:
   File: chunk_2.sdf
   Title: Setting to 2.2 - 2.3
Collapsing:
   White light image: 14 x 53
   Wavelength range: 2.3 - 2.4
Output NDF:
   File: chunk_3.sdf
   Title: Setting to 2.3 - 2.4
Display:
   chunk_1.sdf chunk_2.sdf chunk_3.sdf
With the “-p” option the images should be displayed in a kapview display.
The COMPARE routine allows you to display a white-light image and select and display spectra from this image. As its name suggests, two spectra can be extracted and compared at the same time, as illustrated below.

The command “cubehelp” will list the available routines in DATACUBE.
Bad wavelength calibration
If the DR doesn’t do a great job at wavelength calibration and, in particular, if it doesn’t produce a scrunched arc spectrum in which arc lines run almost seamlessly from slice to slice (a “bad” frame is shown here), then you can try tweaking the parameters used by the DR with FIGARO/IARC. To do this you must: (1) copy the appropriate primitive to the directory you’re working in, (2) edit the parameters in the primitive that are used by IARC, and (3) set the environment variable so that ORAC-DR knows where to find the updated primitive. With step (3) in place, the pipeline will first look in your data directory for all primitives before resorting to files in the default primitive directory (in $ORAC_DIR).
For example:
> cp /ukirt_sw/oracdr/primitives/ifu/_WAVELENGTH_CALIBRATE_ .
> setenv ORAC_PRIMITIVE_DIR `pwd`
> xemacs _WAVELENGTH_CALIBRATE_
When editing the _WAVELENGTH_CALIBRATE_ primitive, look for the section that looks like:
# Do the Iarc, starting from the centre of a central slice
orac_print "Running IARC on $in, starting at row $row.\n";

my $param1;
my $param2;

if( starversion_gt('figaro', '5.6-1') ) {
  $param1 = "image=$in rstart=$row file=$in.iar chanshift=$shift";
  $param2 = "rwidth=1 rsigma=3 spread=t lock=f xcorr=f gap=1 sigmin=5";
} else {
  orac_warn "FIGARO is v5.6-1 or earlier. Will not use cross-correlation...";
  $param1 = "image=$in rstart=$row file=$in.iar";
  $param2 = "rwidth=1 rsigma=20 spread=t lock=f xcorr=f gap=1 sigmin=5";
}

$Mon{'figaro3'}->obeyw("iarc", "$param1 $param2");
In the above code, the input parameters used by FIGARO/IARC are collectively defined by $param1 and $param2. For FIGARO versions later than 5.6-1 the top two parameter definitions will be used. (Type “figaro” to see which version you have installed.)
The parameters rsigma and rwidth are the arc-line width and the number of consecutive rows to be binned and fitted. (These and all other parameters are defined in the FIGARO on-line documentation.) If, as in the above case, rwidth is set to 1, IARC will try to fit every row individually. Since bad pixels may skew the fit, increasing rwidth might help. However, adjusting rsigma is often a better option, since it can help stop the pipeline from mis-identifying arc lines in adjacent rows.
Try fiddling with these parameters and re-reducing just the arc frame; you should see a noticeable change in the arc spectral image displayed in Gaia. You should also edit the orac_print line slightly (or add a new line) so that you know for sure that the pipeline is using your updated primitive.
See the Starlink document SUN/232 and the section on “Customising Recipes” for further details on tailoring the pipeline to your specific needs.
Further reading…
There are further tips on how to reduce and analyse general spectroscopy data in these Starlink Cookbooks.