  #21  
Old 07-02-2014, 01:33 AM
alistairsam
Registered User

Join Date: Nov 2009
Location: Box Hill North, Vic
Posts: 1,837
Hi Ray

Your method is actually easy; I guess we're just trying to understand what's happening, why we're aiming for the optimum ADU, and how.
Your earlier post explains this.
It would be great, as an illustration, if you could post a low-res version of your 5 min sub meeting the target ADU and another, say a 15 min sub with a higher ADU, to see what the differences are.
As for binning, it's common to use 2x2 for RGB data and reduce its exposure to 1/3rd of L.
But the 8300 sensor has a known issue with horizontal blooming and saturates quickly, so 2x2 may not yield good results.
Alistair
  #22  
Old 07-02-2014, 05:58 AM
Peter.M
Registered User

Join Date: Sep 2011
Location: Adelaide
Posts: 947
Quote:
Originally Posted by alistairsam View Post
As for binning, it's common to use 2x2 for RGB data and reduce its exposure to 1/3rd of L.
Generally I try to keep my binned subs at the same length as the L, and the reason for this is that each colour sub gets less light than the luminance. I have absolutely no evidence to show that this is better, but it seems to make sense in my head.
  #23  
Old 07-02-2014, 08:12 AM
Octane (Humayun)
IIS Member #671

Join Date: Dec 2005
Location: Canberra
Posts: 11,159
Alistair,

Why 1/3rd and not 1/4? I thought it was a quarter because you're using four times as many pixels?

H
  #24  
Old 07-02-2014, 09:22 AM
alistairsam
Registered User

Join Date: Nov 2009
Location: Box Hill North, Vic
Posts: 1,837
Quote:
Originally Posted by Octane View Post
Alistair,

Why 1/3rd and not 1/4? I thought it was a quarter because you're using four times as many pixels?

H
Hi H
I believe that's the theoretical gain, but there are some losses, so the sweet spot is somewhere between 1/4 and 1/3.
It's to do with the fact that the area of four binned pixels includes a small path for circuitry, so the effective gain is not exactly 4x.
This loss is lower for CCDs as opposed to CMOS, where the fill factor is smaller, though the microlens compensates to a certain extent.
That's what I've read.
Better to get a bit more data than less; the overall increase in imaging time between the two would not be much.
Peter, if you keep your binned sub length the same as unbinned, you're imaging longer for RGB, which doesn't contribute much to detail. From what I understand, it would be better to use that extra time for L and reduce noise.
cheers
Alistair
  #25  
Old 07-02-2014, 11:08 AM
rally
Registered User

Join Date: Sep 2007
Location: Australia
Posts: 896
Hi Ray,

Always good to have another tool in the arsenal.

I am not sure that this is greatly simpler than some of the calculators around.

I spent a few minutes one day just taking the calculations out of John Smith's (CCDWare) calculator, plus assumptions based on some of Stan Moore's work and his recommendations, so I can use that in the field.
I just grab a couple of superfluous junk images while I am setting up and taking a few focus and T-Point mapping images - since half the time I have no internet out there anyway, I needed a PC-based calculator.

I take one of the rubbish images or a test image, whack the exposure time and the average background ADU into my little spreadsheet and it gives me the ideal minimum sub exposure time for reducing my read noise to within a fraction of the sky noise - usually about 5%

Noting that I had previously entered the other values for my cameras into a table - camera gain (available from metadata in the FITS header), pedestal, read noise and preferred % of read noise to sky flux noise (background).
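The spreadsheet logic described here can be sketched roughly as below. This is only a sketch under my own assumptions, not the CCDWare calculator itself: the function and variable names are mine, and the 5% criterion is implemented as "sky electrons per sub must reach RN²/((1+p)²−1), which for p = 5% is about 10×RN²".

```python
def min_sub_exposure(test_exp_s, bg_adu, bias_adu, gain_e_per_adu,
                     read_noise_e, pct=0.05):
    """Minimum sub length that buries read noise within the sky noise."""
    # Sky flux in electrons/second, from the test frame
    # (bias/pedestal must be removed from the background ADU first)
    sky_rate = (bg_adu - bias_adu) * gain_e_per_adu / test_exp_s
    # Sky electrons needed so total noise rises only `pct` above sky noise alone
    required_e = read_noise_e**2 / ((1 + pct)**2 - 1)
    return required_e / sky_rate

# Example with made-up numbers: 60 s test frame, background 1500 ADU,
# bias + pedestal 1000 ADU, gain 0.5 e-/ADU, read noise 8 e-
print(round(min_sub_exposure(60, 1500, 1000, 0.5, 8)))  # 150 (seconds)
```

Any test or junk frame taken while setting up will do as the input, which is the point of the method.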

The philosophy of this method is: the noise I have control over is lost within the noise I have no control over!
But it's certainly not a perfect tool for determining ideal exposure times for a given subject.
It only gives me a recommended 'minimum sub exposure time' to reduce the effects of read noise; if I really want to capture deep fine detail then I need to go much, much longer, and if I want to capture a high dynamic range then I will need some much shorter exposures as well so I don't have overexposed data.
So the imaging needs to be broken up into a series of exposure blocks for faint, normal and bright data.

Maxim and CCDSoft have a pedestal value of 100 that they add to the signal, so we need to remove this artificial 100 from the average background signal (it's there to avoid the possibility of negative values in the final signal calculations).
Since 100 out of 1000 is 10%, it's not insignificant.
So just wondering if that is relevant to your calculation?

Cheers

Rally
  #26  
Old 07-02-2014, 12:19 PM
Shiraz (Ray)
Registered User

Join Date: Apr 2010
Location: ardrossan south australia
Posts: 4,918
thanks for feedback Alistair - sorry I misunderstood the basis for your questions. If you want to understand the underlying maths for the method, have a look at the Starizona/Smith stuff, but instead of solving for exposure time, it has been solved to provide a target ADU level.

Re binning, there are many unknowns in the ways the chip makers implement the process. Am coming to the conclusion that the best way may be to expose at 1x1 for colour and then bin in software if you want to get better SNR in the colour (at the expense of colour resolution). If you use the targetADU method for keeping the read noise insignificant, you should get just as much SNR advantage as hardware binning (probably more) and get 4x better dynamic range to keep the bright star colours.

regards ray

Last edited by Shiraz; 07-02-2014 at 01:00 PM.
  #27  
Old 07-02-2014, 12:40 PM
Shiraz (Ray)
Registered User

Join Date: Apr 2010
Location: ardrossan south australia
Posts: 4,918
Quote:
Originally Posted by rally View Post
Hi Ray,

Always good to have another tool in the arsenal.
...
Maxim and CCDSoft have a pedestal value of 100 that they add to the signal ... So just wondering if that is relevant to your calculation?
Same philosophy Rally and the maths is the same as the Smith/Moore stuff only arranged for a different solution. The major difference between their approach and the suggested targetADU approach is that, with the suggested method, you will never again need to use a calculator/spreadsheet or take junk images - just do the target ADU calculation once for your camera and use that single result forevermore with any scope/filter/sky. I found that, even though I knew the theory and could use the calculators, most of the time I was winging it - I now have a systematic methodology that only requires me to remember one number and with that I can always get the exposures right, regardless of which scope/filter/sky I am using. I can see at a glance what is going on and immediately work out the best exposure in my head - doesn't get any simpler than that. It is not going to be for everyone though and if you already have a procedure that you trust, go for it.

I don't think that you will get much more detail in the final image by using longer subs than recommended by Smith. The individual longer subs will certainly look better - but if you add a larger number of the recommended shorter subs to get to the same total exposure, the final stacked images will be identical - except that the one with shorter subs will have better dynamic range.

As I understand it, the software packages you mention apply pedestals to calibrated data after bias/dark removal. The method I am suggesting is based on raw data without any processing - you don't need to save/process anything. I guess it could be applied to pre-processed data if the software does that on the fly - in which case the Bias term in the calculation would be the pedestal value - or whatever you would normally see if there was no signal (as in posts 1 or 10).

regards Ray

Last edited by Shiraz; 07-02-2014 at 07:52 PM.
  #28  
Old 07-02-2014, 09:55 PM
rally
Registered User

Join Date: Sep 2007
Location: Australia
Posts: 896
Ray,

MaxIm and CCDSoft both add 100 to each pixel in the image at the time of acquisition - it's not part of any calibration. (Mira is different.)

I still think the ideal exposure times are very dependent on what you are trying to achieve and level of detail in your chosen target, as opposed to purely the maths of the CCD and sky background.

If you are trying to capture really faint detail (or lack of light - e.g. dark dust cloud structure), then there is no substitute for long exposure.
If the photon count per minute or per hour is low enough, the signal will be buried within the read noise when the exposure time is short.
A longer exposure time is still required to get that data up and out of the noise, but also to bring it up to levels that aren't going to suffer from quantisation problems during post-processing.
That is where I think a single exposure time isn't going to provide the same level of high-dynamic-range information as multiple stacked exposures of varied times combined together.

Rally
  #29  
Old 08-02-2014, 07:00 AM
Shiraz (Ray)
Registered User

Join Date: Apr 2010
Location: ardrossan south australia
Posts: 4,918
thanks for that info Rally. If the software adds 100 to everything, then that just forms part of the "Bias" bit and drops out in the wash. It should not affect the operation of the targetADU method at all.

I agree that there is no substitute for long overall exposure. We seem to disagree when it comes to the idea that, if subs are long enough to bury read noise, the end result from the combination of many such subs is the same as that from a smaller number of much longer subs that results in the same overall exposure.

regards Ray

Last edited by Shiraz; 08-02-2014 at 08:02 AM.
  #30  
Old 08-02-2014, 07:47 AM
Shiraz (Ray)
Registered User

Join Date: Apr 2010
Location: ardrossan south australia
Posts: 4,918
Probably time to summarise.

The following steps should deal with practical issues that have been raised. It assumes that the acquisition software allows you to sample pixel values under or near the cursor during acquisition. If you can think of any way to simplify/enhance this process, please chime in.


1. Take a short dark sub with your camera in whatever mode you normally use for acquisition. Using your acquisition software, move the cursor to sample the ADU levels at a few points around the centre of the image - estimate the average signal. This is "Bias" in the calculations.

2. Obtain the Read Noise (electrons RMS) and gain (electrons/ADU) data for your camera, either from the manufacturer's data or by measuring.
Calculate the target ADU using:

targetADU = Bias + 10*RN*RN/gain

You only need to do this once for your camera - remember the number, it's all you need to know.

3. From now on, when imaging with the calibrated camera, sample a few background sky regions (in a sub) with your acquisition software and adjust your sub exposure times so that the average background sky ADU levels are fairly close to the targetADU. You use the same targetADU for any scope, filter or sky conditions.

That's all you need to do.

If you use your camera in a binned mode, you will need to use the Bias, RN and gain data for the binned mode in the calculation (as you would for the online calculators). The targetADU for binned mode will be different from that for 1x1.
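For anyone who prefers code to mental arithmetic, steps 2 and 3 above can be sketched like this. It is a rough sketch only (the function names are mine, not from any published tool); the worked figures are the STL-6303 numbers quoted elsewhere in the thread.

```python
def target_adu(bias_adu, read_noise_e, gain_e_per_adu):
    """Step 2: the one-off target background level for raw subs."""
    return bias_adu + 10 * read_noise_e**2 / gain_e_per_adu

def adjust_exposure(current_exp_s, measured_adu, bias_adu, target):
    """Step 3: rescale sub length so the sky background lands on the target.
    The sky signal above bias grows linearly with exposure time."""
    return current_exp_s * (target - bias_adu) / (measured_adu - bias_adu)

# Worked example using the STL-6303 figures from posts #31/#32:
t = target_adu(3829, 15, 1.35)                 # ~5496 ADU, i.e. "around 5500"
new_exp = adjust_exposure(300, 4200, 3829, t)  # 5 min subs reading ~4200 ADU
print(round(t), round(new_exp / 60))           # ~22 min, close to "about 20 minutes"
```

As the summary says, target_adu is computed once per camera (and once per binned mode); only adjust_exposure changes night to night.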

Last edited by Shiraz; 08-02-2014 at 12:19 PM.
  #31  
Old 08-02-2014, 03:55 PM
DJT (David)
Registered User

Join Date: Nov 2011
Location: Sydney
Posts: 1,452
Hi Ray
Great thread. Am new to CCD imaging but am plugging in the numbers from my STL-6303 for this, and find the average ADU value I am picking up from my bias frames is around the 3800 mark - kind of interesting given Fred's 1000-ish with his STXL, so am thinking user error. I use CCDOps to grab the bias and to look at the pixel value.

Method of measuring was basically taking a bias frame and popping the cursor around in various places to get an average under the cursor. Since the value made no sense, I also whipped up the histogram and got a similar mean value.

I am assuming I am doing something wrong here. Attached is a screen grab of what I am looking at.

Based on the data I would be looking for a target ADU of around 5500. My 5 minute subs are pulling down 4200-ish, so longer subs? Am in the close suburbs of Sydney... odd.

Bias: 3829, RON: 15 e-, Gain: 1.35 e-/ADU

Any hints? Am hoping this is an embarrassing gaffe on my part.
Attached Thumbnails: bias adu.jpg
  #32  
Old 08-02-2014, 04:27 PM
Shiraz (Ray)
Registered User

Join Date: Apr 2010
Location: ardrossan south australia
Posts: 4,918
Hi David. I know nothing about the STL6303, but if that is the bias you get by doing a short dark sub, then I guess that is what it is - the screen shot you posted suggests that is the case. Might be worth asking SBIG if that is in the normal range, since you could possibly lose a little bit of dynamic range if the bias is set too high - probably not a big deal though.

Anyway, since it looks like the bias is 3800, aim for about 5500 as you suggest, based on the data you provided. You are getting about 4200 at 5 minutes, so going to about 20 minutes sub length should get you close under the same conditions.

Out of interest, other SBIG data for this camera shows RN of 13.5 and gain of 1.4 - if that were the case for your camera, you would be looking at a targetADU of about 5100, which would need about 15 minute subs. In any case, significantly longer subs would be worthwhile if possible - your mount should handle it OK.

very interested to know how you get on.

Regards ray

Last edited by Shiraz; 08-02-2014 at 10:35 PM.
  #33  
Old 09-02-2014, 11:34 AM
DJT (David)
Registered User

Join Date: Nov 2011
Location: Sydney
Posts: 1,452
Cheers Ray
I had been steering away from longer subs as I don't have an LP filter in the train. The Astrodon filters, though, seem to do a fairly good job of cutting out some of the worst of it, so I will push the subs longer and see where it gets me.

More likely to be limited by blooming with this CCD before I get anywhere near 15 minutes of broadband imaging.

Will let you know how this goes

Thanks again
  #34  
Old 14-05-2014, 03:16 PM
DJT (David)
Registered User

Join Date: Nov 2011
Location: Sydney
Posts: 1,452
Quote:
Originally Posted by Shiraz View Post
Hi David. I know nothing about the STL6303, but if that is the bias you get by doing a short dark sub, then I guess that is what it is - the screen shot you posted suggests that is the case. Might be worth asking SBIG if that is in the normal range, since you could possibly lose a little bit of dynamic range if the bias is set too high - probably not a big deal though....

...
very interested to know how you get on.

Regards ray
Hi Ray
I eventually followed up with SBIG on this as I slowly wade my way through innumerable learning curves, and the CCD came up as next in the queue.

Emailing with tech support, there was indeed an issue, and the ADU levels I reported were outside of normal parameters. A quick response from David Morrow included instructions on how to do a reset, and it's now operating at normal levels, around 1000.

I had a go at this sub-length technique last night but ran out of time to do any real sanity checks, plus the full moon wasn't helping. Looks like I will be rebuilding a library of darks and biases... the reset made quite a difference and the old ones are now defunct.

So thumbs up for the initial thread as it triggered a realisation that all was not as it should be with the CCD.

cheers
  #35  
Old 15-05-2014, 09:14 AM
Shiraz (Ray)
Registered User

Join Date: Apr 2010
Location: ardrossan south australia
Posts: 4,918
at least one good result then - thanks for the feedback.
  #36  
Old 06-06-2014, 09:20 AM
Octane (Humayun)
IIS Member #671

Join Date: Dec 2005
Location: Canberra
Posts: 11,159
Hi Ray,

I just wanted to say that I finally got around to having a go with this last night.

I was using my FSQ-106N at f/8 (with the Extender-Q 1.6x) on an STL-11000M.

I took a ten second dark, and, my background (bias) was reading around the 830 mark.

The read noise for the camera is 13 electrons RMS and the gain is 0.8 electrons/ADU. Plugging those values in:

830 + ((10 * 13 * 13) / 0.8) = ~2943
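(As a throwaway sanity check of that arithmetic - nothing camera-specific, just the target-ADU formula from earlier in the thread:)

```python
# Quick check of the target ADU figure above
bias, read_noise, gain = 830, 13, 0.8
target = bias + 10 * read_noise**2 / gain
print(target)  # 2942.5, i.e. ~2943
```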

I was trying to shoot through clouds, so couldn't nail down the appropriate exposure time correctly, but from my experimentation, pointed at Antares (near zenith) using the luminance filter, somewhere between 6 and 12 minutes would give me the appropriate background of ~2950 (after dark calibration). I then turned to M8, and using the hydrogen-alpha filter, I got a reading of 650-odd after a ten minute exposure (again, dark-calibrated). This leads me to believe that I should be able to go for around an hour per sub-exposure. That's crazy considering guiding, clouds, and whatever else throws itself our way. So maybe 30 minutes might be a compromise.

The 6-12 minute guideline isn't bad, considering the Moon was out. Living on the coast (I've moved up from Canberra/Queanbeyan, where the sky was quite nice for the most part), there's nothing further east of here, so it gets reasonably dark. I'm quite happy with this!

Thanks, again!

H
  #37  
Old 06-06-2014, 01:06 PM
Shiraz (Ray)
Registered User

Join Date: Apr 2010
Location: ardrossan south australia
Posts: 4,918
thanks for the feedback H - glad it seems to be useful.

Ha is scary, isn't it - the basic rule seems to be "as long as you can", regardless of camera and scope.

Minor point: the targetADU is the raw signal including bias (the method was designed so that you could just look at a signal straight off the camera with no processing). If your dark cal removes the bias, you need to add it back in - or maybe not do the dark cal in the first place.

Regards ray
  #38  
Old 06-06-2014, 01:39 PM
Octane (Humayun)
IIS Member #671

Join Date: Dec 2005
Location: Canberra
Posts: 11,159
Oh! I'll try again without full calibration.

Will report back on luminance exposure duration for my setup.

Thank you!

H
  #39  
Old 17-06-2014, 11:09 AM
Octane (Humayun)
IIS Member #671

Join Date: Dec 2005
Location: Canberra
Posts: 11,159
Ray,

I took a 5-second dark and a 10-second dark last night, and my background came out very close to 780 in either image.

Plugging those values in with a gain of 0.81 e-/ADU gives a target of ~2870.

I slewed over to Antares and did a 600-second exposure which yielded a background of 8,200-something. Clearly, way, way too long! The sky was quite dark last night before the Moon came up and Antares was at an altitude of about 47 degrees.

By the way, I'm getting the background ADU by simply applying the Range screen stretch (MaxIm DL), which shows the minimum and maximum values in the image. I hope this is the right way of doing it. I don't trust myself with clicking on various areas in the image and then taking an average reading.

The other way of seeing this value is also using the Information window and using the Area option (which defaults to the full frame) and gives a bunch of information about the image.

If it's clear tonight, I'll give it another go and try to fine tune the exposure.

H
  #40  
Old 17-06-2014, 11:57 AM
Shiraz (Ray)
Registered User

Join Date: Apr 2010
Location: ardrossan south australia
Posts: 4,918
thanks for persevering H - appreciated

Pretty sure your target ADU is right.

Using the cursor, you really only need to take one background sky sample somewhere well away from the glare of very bright stars and where the sky also has no obvious faint stars. Maybe take a few samples to get a general idea of what the average signal is, but even one single sample should be good enough to get the exposures about right, provided the cursor is not sitting directly on a star.

If you had Antares in the image, you would get some very high readings in part of the image. You need to use the minimum value returned by screen stretch, but that will slightly underestimate the sky signal in the centre of the field. I would recommend just sampling under the cursor as above.

edit: just looked up how MaxIm works - suggest that you use the Information window in Aperture mode and take the median from a region of sky with as few stars as possible - I think a single sample taken that way will do the job. It should be quick and easy.
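Ray's median-sampling suggestion amounts to something like the sketch below. This is hypothetical code, not MaxIm's API: it just applies the median-of-a-small-box idea to a raw frame held as a 2-D array, so the odd faint star under the cursor doesn't skew the background estimate the way a mean would.

```python
from statistics import median

def sky_background_adu(pixels, x, y, box=15):
    """Median ADU in a box-by-box region centred on (x, y)."""
    half = box // 2
    region = [pixels[r][c]
              for r in range(y - half, y + half + 1)
              for c in range(x - half, x + half + 1)]
    return median(region)

# 30x30 synthetic frame: flat sky of 1200 ADU with one "star" pixel in the box
frame = [[1200] * 30 for _ in range(30)]
frame[10][10] = 60000
print(sky_background_adu(frame, 10, 10))  # 1200 - the median ignores the star
```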

regards Ray

Last edited by Shiraz; 17-06-2014 at 12:29 PM.