Reducing the Wavefront Sensing Observations

Introduction

The aim is to analyse the 120 frames obtained during an ordinary Wavefront Sensing (WFS) run. Guidelines for acquiring the data are given on a separate page. Note also that some theory and background information is given on the WFS home page.

Note about data location: when running curve_dev on KIKI, raw images are written to /ukirtdata/raw/curve/UTdate/. The data are converted from FITS to NDF and displayed on KAUWA by ORAC; reduced data from ORAC-DR go into /ukirtdata/reduced/curve/UTdate/. If the observer has also processed the data with “reduceall_new”, those results are likewise written to /ukirtdata/reduced/curve/UTdate/.
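For example, for a UT date of, say, 2001 January 24 the two areas can be inspected with:

  > ls /ukirtdata/raw/curve/20010124/        (raw curvature frames)
  > ls /ukirtdata/reduced/curve/20010124/    (ORAC-DR and reduceall_new products)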

The Basic Idea…

Defocussed stellar images are used to measure misalignments in the optical system as well as aberrations in the primary mirror figure. The WFS comprises a filter wheel, placed at the telescope focus, which contains a number of lenses. Two of these lenses, which give images that are out of focus by +1 and -1 metre, are used on a typical WFS run to image 60 stars scattered across the accessible sky (+/- 4 hours in H.A.; -40 to +60 degrees in Declination). Zernike polynomials are extracted from each pair of images after they have been aligned, subtracted and the result normalised. This is done with two packages, reduceall_new and T-point, as described below. Values derived in this way are entered into a telescope look-up table, which is read at the beginning of each night and used to shape the primary and align the secondary (to correct for coma). The primary mirror support system is described in the main Telescope web pages. Aberrations are also corrected as a function of telescope position on the sky.
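Loosely speaking (see the Roddier references at the end of this page), the normalised difference of the intra- and extra-focal images, (I1 - I2)/(I1 + I2), is to first order a map of the local wavefront curvature, and the reconstructed wavefront is expanded in Zernike terms,

   W(r, theta)  ~  sum over j of  a_j * Z_j(r, theta)

It is the coefficients of the low-order terms (Z5 upwards) that are fitted below and fed into the look-up table.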

T-point is an interactive telescope pointing and analysis system (see SUN/100.9) that allows you to input and fit the telescope pointing data and view residuals in a graphical format. By adding and removing terms one can adjust the model for systematic errors.

Reducing the Data with “Reduceall_new”

Log in to KAUWA as curve (with the normal “observer” password).

The 120 “raw” frames output by ORAC-DR may already have been “reduced” in parallel with the data acquisition. This should be done on KAUWA, in /ukirtdata/reduced/curve/UTdate/. If it was not done during data acquisition, run the reduction software by typing:

  > reduceall_new 1

If the UT date is anything other than today’s date, then add the UT date to the command line; also, if half of the data have already been reduced, start from the next frame number, e.g.

  > reduceall_new 17 20010124

The frame number should be odd, since data are reduced in pairs.

The reduceall_new routine produces a number of additional files (“.zer”, “.wav”, “.int”, “.log” and “.pha”) for every other frame obtained (it actually works on the raw FITS files rather than the NDF files produced by ORAC). SAOImage will display the images as the data are being analysed.
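If a run was interrupted, or you want to confirm that every pair produced its full set of products, a quick check along the following lines (sh/bash syntax; file names of the form f20010124_00001.zer, as in the example further down this page) will list any frames whose companion products are missing:

  > cd /ukirtdata/reduced/curve/20010124
  > for f in f*.zer; do b=`basename $f .zer`; for e in wav int log pha; do test -f $b.$e || echo missing: $b.$e; done; done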


NB. The following section can be skipped by following the instructions in the staff wiki.

Preparing the data for “T-point”

To analyse the files produced by reduceall_new interactively we use the Starlink package T-point. Having logged into KAUWA as curve, first create a sub-directory for the results under /home/curve/results, named after the UT date of the observations. For example, for data taken on 24 January 2001:

  > cd /home/curve/results
> mkdir 24jan01

You will notice a number of other sub-directories in /home/curve/results; each contains the results from a previous WFS run. Now change directory to the ORAC-reduced directory for the current WFS run; in this case it would be:

  > cd /ukirtdata/reduced/curve/20010124

Here, create an input file for T-point using the “concat2” command. “concat2” is a shell script that extracts values from each “.zer” file and concatenates them into a single list. The resulting ASCII text file (let’s call it “jan01.dat”) should then be copied to the results directory created above, /home/curve/results/24jan01/. You will work on the results in this directory, so finally cd back to this area, e.g.:

  > concat2 *.zer > jan01.dat
> cp jan01.dat /home/curve/results/24jan01/.
> cd /home/curve/results/24jan01/

The text file “jan01.dat” is not quite in the correct format for T-point: you must edit this file and add the H.A. and Declination information. This is a tedious process; you may want to develop a cleverer way of transferring the information into the T-point input file (at present the FITS headers do not contain any HA/Dec information, so it has to be taken from the log sheets; a hypothetical helper script for this step is sketched at the end of this subsection). The first few characters of each line in the T-point file (jan01.dat in this example),

     " " , " Z05", " Z06", " Z07",...
"f20010124_00001.zer" , -0.01,...
"f20010124_00003.zer" , -0.13,...
"f20010124_00005.zer"...

must be converted to:

     " " , " Z05", " Z06", " Z07",...
0001 00 03 00 -39 42 00, -0.01....
0003 00 04 59 -15 00 00, -0.13....
0005 .... etc.

where the first figure is the frame number, the 2nd-4th figures the H.A. and the 5th-7th the Declination. Once this task is complete, the information pertaining to the four aberrations you will consider (astigmatism, spherical, coma and trefoil) should be transferred to four separate T-point files using the simple shell script “make_tpoint_files.sh”. Run this from the current directory with

  > ../make_tpoint_files.sh

Give the name of the input file (jan01.dat in this example) and a title, e.g. January 2001, when prompted to do so. The four data files produced by this step are called astig.dat, spher.dat, tref.dat and coma.dat.
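For reference, the HA/Dec editing described above could be semi-automated. The sketch below is a hypothetical helper, not part of the standard curve tools: it assumes you have typed the log-sheet values into a lookup file (one line per frame, in exactly the target format shown above) and it then swaps the “.zer” filename field at the start of each line of the concat2 output for the matching “frame HA Dec” entry.

  #!/bin/sh
  # merge_hadec.sh -- hypothetical helper, not part of the standard curve tools.
  # Usage:   ./merge_hadec.sh hadec.txt jan01.dat > jan01_tpoint.dat
  # where hadec.txt holds one line per frame, typed in from the log sheets, e.g.
  #    0001 00 03 00 -39 42 00
  #    0003 00 04 59 -15 00 00
  awk '
    NR == FNR { pos[$1] = $0; next }     # lookup file: frame number -> full line
    FNR == 1  { print; next }            # pass the Zernike header line through
    {
      # pull the frame number out of e.g. "f20010124_00001.zer"
      if (match($0, /_[0-9]+\.zer/)) {
        frame = sprintf("%04d", substr($0, RSTART + 1, RLENGTH - 5) + 0)
        if (frame in pos)
          sub(/^[^,]*/, pos[frame])      # replace the filename field
      }
      print
    }
  ' "$1" "$2"

The output should still be checked by eye against the log sheets before running make_tpoint_files.sh.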


Analysing the data with “T-point”

You are now ready to run T-point on the data. Type:

  > tpoint
* inpro ../procs.dat
* call xw

Here the parameters to be plotted are read in from the “procs.dat” file; the “call xw” command opens a plotting window.

From the original “jan01.dat” file the “make_tpoint_files.sh” script produces four files, each containing a table of parameters associated with each of the four aberrations you will be looking for in the data, namely, astigmatism, trefoil, spherical and coma. These parameters are measured for each star and thus for each telescope position on the sky. The four files are opened individually and analysed using a number of T-point commands, for example:

   * indat astig.dat
* call w9
* gmap
* slist
* mask 52
* unmask 52
* ...
* use ch id
* fit
* call w9
* use hxsh hxsd
* fit
* lose hxsh
* fit

In the above example the file containing the astigmatism parameters is opened. The “call w9” command plots nine graphs showing the errors in X and D (H.A. and Declination) with respect to the other parameters. The “gmap” command lets you focus on the plot showing the magnitude of the aberration (in this case astigmatism) measured at each position on the sky (the x and y axes of this plot are H.A. and Declination). Bad data values will have large vectors that are offset from the surrounding points; these bad points may also be evident in the table of input values, which you can list with the “slist” command. Remove bad values from the data with the “mask” command (restore them with “unmask” if necessary), although note that one or two bad values out of 60 will have very little effect on the fitting.

When fitting the data, use the CH and ID parameters in addition to the standard parameters for equatorial telescopes that were read in when T-point started up; these are the Zernike zero-points (see the table below). T-point will give the values of CH and ID that best fit the data. What does the population standard deviation (Popn S.D.) look like? Plot the nine graphs: is there an obvious sine or cosine dependence of the errors on the telescope H.A. (X) or Declination (D)? In other words, do the Zernikes have an H.A. or Dec dependence? In a recent data set the “dx vs H” plot for coma (the top-left plot in the “w9” display) showed a very strong sine dependence; adding HXSH to the fit removed the gradient altogether. Use or lose fit parameters to obtain the best Popn S.D.
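For interpreting the harmonic terms, note that T-point’s generic naming convention (SUN/100) is H + affected coordinate + sine/cosine + argument, so the fitted model is, schematically (ignoring the geometrical factors T-point attaches to some terms):

   X error  ~  CH + HXSH*sin(H) + HXCH*cos(H) + HXSD*sin(Dec) + HXCD*cos(Dec)
   D error  ~  ID + HDSH*sin(H) + HDCH*cos(H) + HDSD*sin(Dec) + HDCD*cos(Dec)

In other words, adding HXSH soaks up a sin(H) trend in the X channel, which is exactly what removed the gradient in the coma example above.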

Results from the analysis of each of the four data sets (astig.dat, coma.dat, etc.) should be cut and pasted into a text file, although T-point does produce a text log of all the steps taken. The plots themselves should be printed out for reference and PostScript copies kept in the reduced directory.

Changing the Look-up Model Parameters

If necessary, the telescope look-up files may be modified. The files primary.cfg and secondary.cfg (the latter for coma only) are in /jac_sw/itsroot/install/configs/. It is VERY IMPORTANT that you make a copy of the old files BEFORE you make any changes, e.g.:

  > cp primary.cfg primary_20010124.cfg
> cp secondary.cfg secondary_20010124.cfg

The suggested changes to the figures in primary.cfg and secondary.cfg are given in the “value” column in T-point. Only make changes if these values are significant with respect to the sigma (also given in T-point). The table below gives a translation of T-point terms to those used in the “.cfg” files.

Aberration     In telescope look-up table   In T-point   In telescope look-up table   In T-point
Astigmatism    IZ5                          CH           IZ6                          ID
               HZ5SH                        HXSH         HZ6SH                        HDSH
               HZ5CH                        HXCH         HZ6CH                        HDCH
               HZ5SD                        HXSD         HZ6SD                        HDSD
               HZ5CD                        HXCD         HZ6CD                        HDCD
Coma*          Z7                           CH           Z8                           ID
Trefoil        IZ9                          CH           IZ10                         ID
               HZ9SH                        HXSH         HZ10SH                       HDSH
               HZ9CH                        HXCH         HZ10CH                       HDCH
               HZ9SD                        HXSD         HZ10SD                       HDSD
               HZ9CD                        HXCD         HZ10CD                       HDCD
Spherical**    Z11                          CH           Z22 (not in look-up table)   ID

*In the “secondary.cfg” file.
**Z11 in “primary.cfg” is given by -CH.
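Before installing, it can be worth confirming against the copies made earlier that only the intended terms have changed, e.g.:

  > diff primary_20010124.cfg primary.cfg
  > diff secondary_20010124.cfg secondary.cfg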

To install the new changes, you must use the “pvload” command:

  > pvload primary.cfg
> pvload secondary.cfg

To see the changes made to the primary support system and the secondary position, you can also look at the relevant EPICS displays. Try

  > dm topend.dl &

to see the active mirror support settings. Note that the Spherical set point is probably already set to its limit of +300. The primary mirror control settings used for astigmatism and trefoil correction are available under “more displays”.

What to expect from the data

UKIRT is thought to suffer from some astigmatism, although the 12 peripheral actuators on the primary are used to correct for this. The telescope also suffers from spherical aberration, which is beyond the scope of the mirror support and is therefore not accounted for. You should expect the fitting to recommend a change of the order of 200-300 nm in Z11 (spherical).

In general, a good WFS run should yield relatively small Popn S.D. errors, of the order of:

   ASTIG :-  Popn S.D. ~ 250   (i.e. 150 x sqrt(2) for the two axes)
   COMA  :-  Popn S.D. ~ 150   (100 x sqrt(2))
   TREF  :-  Popn S.D. ~  75   (50 x sqrt(2))
   SPHER :-  Popn S.D. ~  35

Reference Material

  • A. Chrysostomou et al., ??, “Active Optics at UKIRT”, SPIE.
  • R.J. Noll, 1976, “Zernike Polynomials and Atmospheric Turbulence”, J. Opt. Soc. Am., 66, p.207.
  • C. Roddier, J.E. Graves, M.J. Northcott & F. Roddier, 1994, “Testing optical telescopes from defocused images”, SPIE, 2199, p.1127.
  • C. Roddier & F. Roddier, 1993, “Wavefront reconstruction from defocussed stellar images and the testing of ground-based optical telescopes”, J. Opt. Soc. Am. A, Vol. 10, No. 11, p.2277.