Tuesday, October 6, 2015

Digital Data from GSSP 2015

During GSSP 2015, I used the QSI camera with the SV4 to image a familiar target.  I knew there would be a limited amount of night to use, so I wanted to get the most from the super-dark skies of the area.  Thus, my target was the Crescent Nebula.

I'd imaged this target before at GSSP in 2012.  At the time, it was a real challenge and I struggled with chasing squiggly lines all night.  Last year, I didn't worry about it and went after different targets.

With the new camera, I wanted to return to the target and see what I could get with a different tactic.

The weather posed a challenge, of course.  Green data is a little thin, with only a few subs for integration.  On the nights I wanted to get Ha Continuum data, the clouds didn't cooperate.  In the end, I haven't used the Ha Continuum data yet, so I may end up removing that filter from the mix.  At the moment, though, it's a good filter to use for testing star images.

Here is what I ended up with:

11 subs for Luminance
11 Red
4 Green
7 Blue

Looking at the original data, there are issues with something in the setup: softness and problems with matching focus.

  I remember fighting suckerholes on some of the nights, where part of an exposure would be blurred by clouds.  I can attribute some of these problems to those occurrences.

  Another problem with focus is the way that the focuser must rack back and forth to find the proper spot.  I'll eventually need to figure out the proper backlash setting.  As I remember the evening under the stars, I may have entered a setting that caused too much backlash, forcing the focuser further out of range.

  Additionally, there may be problems with tilt or off-centering, even though I'd worked on this aspect quite a bit before the event.  Another contributing factor for tilt is the focuser drawtube.  I can push and pull on the camera body and see the drawtube flex.  While this is not great, whether it actually makes a difference in the field remains to be seen.  The movement is most likely the reason for the decentering.  If I can figure out where the proper point is, then I'll remember to "shove" the tube into place after setting up.  One usual fix for this kind of flexible focuser is to rack it nearly all the way in and use an extension tube.  Part of the challenge with this approach is that the focuser is only deployed about 42 mm, which means I should use an extension shorter than this to preserve the ability to focus across a range of wavelengths.  So far, the shortest 2-inch extension I've found is 35 mm, which should work.  To test whether this really makes a difference, I ran the drawtube fully to the stop and tried pulling on it - and it still deflects.  Thus, using an extension may not make much of a difference.
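The extension-tube reasoning above can be sketched as quick arithmetic.  The travel figures are from my notes and are approximate, not measured precisely:

```python
# Rough check: does a 35 mm extension tube leave enough inward travel
# when the drawtube normally sits about 42 mm out?  (Approximate values.)
drawtube_deployed_mm = 42.0   # typical in-focus drawtube extension
extension_mm = 35.0           # shortest 2-inch extension I've found

remaining_travel_mm = drawtube_deployed_mm - extension_mm
print(f"Drawtube travel left for focusing: {remaining_travel_mm:.1f} mm")
```

Any filter-to-filter focus shift has to fit inside that remaining margin, which is why the extension must be shorter than the deployed travel.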

  Lastly, and most likely, there could be a spacing adjustment required between the flattener and the imaging surface.  Based on the general rule that inserting glass into the optical path moves the focal point back by about one-third of the glass thickness, plus knowing the dimensions and tolerances of the QSI product, it's worth experimenting to see if adding a 0.5 mm spacer to the length would tighten up the edge stars.
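The "1/3 modifier" rule comes from the optics of a plane-parallel glass element: a plate of thickness t and refractive index n shifts the focal point back by t(1 - 1/n), which is exactly t/3 when n = 1.5.  A minimal sketch, with the thickness values below as placeholders rather than measured figures for the QSI's filters and cover glass:

```python
def focus_shift_mm(glass_thickness_mm, n=1.5):
    """Back-shift of the focal point caused by a plane-parallel glass
    element of index n inserted in a converging beam: t * (1 - 1/n)."""
    return glass_thickness_mm * (1.0 - 1.0 / n)

# Hypothetical glass in the imaging train (e.g. a filter, a cover slip).
for t_mm in (1.0, 3.0):
    shift = focus_shift_mm(t_mm)
    print(f"{t_mm:.1f} mm of glass -> focus moves back {shift:.2f} mm")
```

For n = 1.5 the shift is exactly one-third of the thickness, which is where the rule of thumb for backspacing adjustments comes from.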

  Processing LRGB

  I am finally getting around to processing this data.  A number of things have conspired against getting to the task.  I was hurting from injuring my back unpacking the car from the camping trip.  I was also distracted by a lot of life issues and this removed most of my motivation for getting past the calibration step.

  Once a few of these barriers lifted, I was ready to start working on the data again.  I knew that it was good information, worth trying to see what could be seen.  The individual channels show promise, even with softness in the corners or other problems.  Once integrated, the softness evens out and the stars show decent values.

  The next question was what to do with the calibrated, registered, and stacked sets.  I knew the process up to this point, but was unimpressed with what I was getting out of the RGB combine.  I was missing something.  Back to the PixInsight forum to find out what was new for me to learn.

  I stumbled onto the Light Vortex Astronomy website, which showed the step-by-step process of using the tools in PI to do a basic RGB combine.


  I followed the example shown on the page, modifying the settings a bit to suit my personal taste.

  The example provided by LVA shows a portion of the Veil Nebula.  This is a good example as it features bright and not-so-bright gas features, plus a ton of distracting stars.  For my image of the Crescent Nebula, I knew that preserving the extended gas clouds would be important, plus managing the myriad star images.

  A difference between the example and the Crescent was the amount of gas appearing across the whole frame.  The Crescent Nebula sits in the middle of a large gas cloud in Cygnus, near the bright star Sadr.  This made DBE a bit of a challenge.  I used a tolerance setting of 0.2 in DBE, plus moved selection boxes around, to fix the slight gradient that appeared across the image.

  Next came the background and color calibration step.  This is one of the parts I was missing in my previous work.  In the past, I used the histogram tool to accomplish a lot of the color calibration by matching peaks during the stretch.  In the LVA example, the background neutralization and ColorCalibration steps, followed by SCNR green removal, left a very nicely balanced file without going through the stretching process.

  I'm not fond of too much noise reduction, as I find that it robs some information from the data.  Maybe it's all in my mind, but sometimes I can see structure in the noise and this makes me not want to remove these hints with a smoothing blur.

  As an example, at the linear Noise Reduction stage, instead of the suggested settings, I used:

Layer 1: Threshold 1.0, Iterations 3
Layer 2: Threshold 0.1, Iterations 1

  The LVA example does not show the value of using previews to test NR settings.  I use two samples for testing and evaluation.  I use a sample in an area of bright nebulosity to test how well the mask is performing and a second sample in a dim starless area to watch the background.  I usually inspect at 2:1 and 4:1 view to ensure that I'm not losing information.  Lastly, I review at 1:1 to confirm that the examples look good.

  The LVA steps suggest doing a three-part stretch with the histogram tool.  In the past, I've used the MaskedStretch tool to slowly iterate towards a goal.  It's a blind script in the sense that I don't know what I'll get out of it until it runs.  Still, it seems to keep the OSC stars from bloating too much as well as retaining star color.  For the current process, I stuck with the LVA steps.

  At this point, the Crescent Nebula does show, but it's not popping like I'd prefer.  There are a lot of clouds hiding in the background.
ACDNR Settings

StdDev 1.2
Amount 1
Iterations 1
Structure size 5

Blobby StarMask

  I didn't like the look of the StarMask when following the example that LVA provided.  It looked too blobby to my eyes.  Still, I followed the example and was amazed at how well it did the job.  Note the way that the brightest stars seem to have a tighter flare around them.  I'm not too happy with how this looks at 1:1, but at a non-pixel-peeping viewing distance, it's not really noticeable.  Also, I reduced some of the strength of the MorphologicalTransformation by using only 3 iterations instead of 4.  I like to see stars!
Pre Morph
Post Morph

For the non-linear LUM NR in ACDNR, I used the same settings as above, but instead applied 2 iterations.

For Morph transform on the LUM, I used 4 iterations (never mind what the screenshot shows).

L Combine - used 0.3 for saturation to boost the level of color.
Luminance Combine Settings

Final step, I used curves to apply a touch of contrast, a little more lightness, and more saturation in one step.  These weren't very strong changes, just enough to give a little more contrast and pop to the overall view.
Curves Adjustments

I had forgotten to plate solve the source files while they were still linear - a big oops.  In the past, I ran into issues when trying to plate solve non-linear images, so I wasn't sure if the ImageSolver script would make sense of the final file.  In the end, it did work, and the results are listed below.  Note that the SV4 ends up working at f/6.3 given the pixel size and the resolution of 1.7 arcseconds per pixel.

Image Plate Solver script version 4.0
Referentiation Matrix (Gnomonic projection = Matrix * Coords[x,y]):
           -0.000473902       +6.82833e-006           +0.755614
          -6.83618e-006        -0.000473868           +0.593975
                     +0                  +0                  +1
Projection origin.. [1612.177879 1230.203688]pix -> [RA:+20 12 03.04 Dec:+38 22 13.99]
Resolution ........ 1.706 arcsec/pix
Rotation .......... -0.824 deg
Focal ............. 652.83 mm
Pixel size ........ 5.40 um
Field of view ..... 1d 31' 40.7" x 1d 9' 57.2"
Image center ...... RA: 20 12 03.060  Dec: +38 22 14.34
Image bounds:
   top-left ....... RA: 20 15 56.234  Dec: +38 57 37.72
   top-right ...... RA: 20 08 04.778  Dec: +38 56 17.75
   bottom-left .... RA: 20 15 57.608  Dec: +37 47 41.26
   bottom-right ... RA: 20 08 13.683  Dec: +37 46 22.57
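The solver's image scale can be cross-checked directly from the focal length and pixel size it reports, using the standard 206,265 arcsec-per-radian conversion.  The f-ratio line assumes the SV4's 102 mm aperture, which is my assumption and not part of the solver log:

```python
# Cross-check of the plate solver output above.
focal_mm = 652.83   # focal length from the solve
pixel_um = 5.40     # pixel size from the solve

# Image scale: 206265 arcsec/radian, pixel size converted from um to mm.
scale_arcsec = 206265.0 * (pixel_um / 1000.0) / focal_mm
print(f"Image scale: {scale_arcsec:.3f} arcsec/pix")  # 1.706, matching the solve

# Working f-ratio, assuming a 102 mm aperture for the SV4 (my assumption).
aperture_mm = 102.0
print(f"Working f-ratio: f/{focal_mm / aperture_mm:.1f}")
```

The computed scale agrees with the solver's 1.706 arcsec/pix, and the f-ratio comes out close to the f/6.3 quoted above.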

At this point, I saved the file to TIFF (and XISF) as a final set of images.  The TIFF file allows easy import to Lightroom for upload to Flickr and other online places.

In the end, the whole LVA system works well, and I believe I will use it again.  It would be nice if I could change the default settings on all the tools to the ones I used.  I will read through the PI documents to see if I can do this.  It would save headaches in the future.

I would rather have the reds be richer and fuller, like the reds I see from the DSLR.  Maybe I'll play with that option in the future.

Lastly, I see little value in the Ha Continuum filter as long as I stick to LRGB combines.  When I start doing more narrowband work, it will become valuable - and since I don't expect to get out to the dark sites anytime soon, that may happen sooner than later.  Working in the backyard means capturing data under skyglow, so narrowband is the only way to go there.

For my next project I will be going back to some data that I gathered in July on M13.  Since this object is a straightforward globular cluster, the challenge will be to keep the star colors, enhance sharpness, and reduce background skyglow.