In my experience with ZWO's driver, it remembers the last used settings, so it's very much set it and forget it. If you never go into the Advanced settings in the ASCOM driver, gain/offset/USB speed will not change.
It's pretty fool-proof...I've managed to get by with it for years
Yes, it's different: I take it the download speeds at different gains are not much different. With the FLI cameras the faster download speed is significantly faster, which makes focusing quicker, but for imaging you want the slower speed as it has lower read noise.
I am still working to optimize my systems and of course fires and smoke have kept me closed for weeks also.
But I have been open the past few nights.
These 2 images were taken tonight. They basically demonstrate the potential of the 36cm f2.2 RASA.
Both images are 3 mins in duration, QHY600M, Baader f2 Ha filter. One is binned 2x2, the other unbinned.
Images are also cropped...no flats acquired yet.
Martin
Very impressive for only 3 minutes exposure time. Lovely tight stars.
2x2 binning does not seem to give as much of a gain as CCD 2x2 binning does.
Any evidence of these halos around bright stars on LRGB images yet? (Alnitak doesn't count).
The FSQ106EDX4 images I linked show them being quite bad. Not sure if it's the filters (as I recall the Baaders weren't so great for reflections) or the antireflection coating on the sensor window (if it has one).
Some guys on Cloudy Nights were getting some banding. It does not seem to be much of a problem, and some think it may be from cables or power interference, but have you seen that at all?
I read that in high gain mode the full well capacity drops from 51,000 down to 16,700 electrons, but with lower read noise. I suppose that would limit high gain's use to narrowband, where the dynamic range needed is lower, or to high gain with binning.
The Trius 695 camera has a well depth of around 20,000 electrons and I found that problematic, with fuzziness around the brighter stars. Shorter exposures helped there. But a small well depth is something I have avoided since that experience.
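To put some rough numbers on the dynamic-range trade-off, here is a quick sketch using the full-well figures from the thread. The read-noise values below are illustrative assumptions for the two modes, not measured QHY600 numbers:

```python
import math

def dynamic_range_stops(full_well_e, read_noise_e):
    """Dynamic range in stops (powers of two): log2(full well / read noise)."""
    return math.log2(full_well_e / read_noise_e)

# Full-well figures from the thread; read-noise values are assumptions.
low_gain  = dynamic_range_stops(51000, 3.5)   # standard/low-gain mode
high_gain = dynamic_range_stops(16700, 1.6)   # high-gain mode

print(f"low gain:  {low_gain:.1f} stops")
print(f"high gain: {high_gain:.1f} stops")
```

With those assumed read-noise figures the high-gain mode only gives up about half a stop of dynamic range, because the lower read noise partly compensates for the shallower well.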
I thought there was significantly more signal in the 2x2 image actually, but much more noise too.
Not seeing big halos, but I am trying to track down some other weirdness, which I think is down to residual tilt. The black streaks that you can see in other images certainly appear with the 600C attached to the Sigma lens. I think they are caused by the cables coming off the camera/FW etc as they are obviously front mounted. But I take care of that in a different way.
No banding.
You really have to study the graphs on the QHY website to fully understand what the 3 different readout modes offer. I have yet to decide whether I should just settle on one mode/value.
Naming convention of darks and biases is critical as I build my library across the different readout modes.
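One simple way to keep a calibration library unambiguous is to encode everything that matters in the filename. A minimal sketch (the mode/gain/offset values shown are hypothetical examples, not recommendations):

```python
def calibration_name(frame_type, read_mode, gain, offset, exposure_s, temp_c):
    """Build an unambiguous filename for a dark/bias library entry.

    Encoding read mode, gain, offset, exposure and sensor temperature in
    the name prevents masters from different readout modes being mixed up.
    """
    return (f"{frame_type}_mode{read_mode}_g{gain}_o{offset}"
            f"_{exposure_s}s_{temp_c:+d}C.fits")

print(calibration_name("dark", 1, 56, 50, 180, -10))
# dark_mode1_g56_o50_180s_-10C.fits
```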
One thing I just cannot get my head around is the offset... I just cannot figure out how to use it or what it does, so it is set to a single value across all read modes at present. Even the document written by Dr Q about setting that value makes no sense to me.
One thing I have found useful though is using the camera with SharpCap. I have been experimenting with adjusting readout modes and gain values whilst using a Bahtinov mask. Back to that offset... adjusting it in real time under SharpCap appears to have no impact at all.
Martin
The extra noise in 2x2 would be from the extra readouts. I think the CMOS sensors need to do 4 readouts to get 2x2, and since independent read noise adds in quadrature, summing 4 pixels gives about 2X the single-pixel read noise (not 4X). Apparently you get the same result by software binning the 1x1 data.
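The quadrature behaviour is easy to demonstrate with a quick numpy sketch: software-bin a synthetic read-noise-only frame (the 3 e- sigma is an illustrative value) and compare the standard deviations.

```python
import numpy as np

rng = np.random.default_rng(42)
read_noise = 3.0  # e-, per-pixel read noise (illustrative value)
frame = rng.normal(0.0, read_noise, size=(1000, 1000))  # pure read noise

# Software 2x2 bin: sum each 2x2 block of the 1x1 data.
binned = frame.reshape(500, 2, 500, 2).sum(axis=(1, 3))

# Four independent reads add in quadrature: sigma_binned = sqrt(4) * sigma.
print(frame.std())    # ~3.0
print(binned.std())   # ~6.0, i.e. 2x the per-pixel noise, not 4x
```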
My understanding of offset is that it's like setting a black point. Too low and you clip the blacks; too high and you waste dynamic range without cutting off any noise.
I can see naming would be very important as I have lost some images in the past due to mixing the download speeds from the FLI cameras.
Overall the camera looks very promising and sensitive and quite flexible.
1x1 would suit FSQ style scopes or lenses and 2x2 would suit longer focal lengths I assume.
Tilt would be tough at f2.2. Can it sometimes be confused with collimation? The two may look similar in some cases.
..if you look at the histogram of a bias frame, it should show a nice bell curve, which will have a lower and upper extent, with the peak being the median.
The offset will translate (aka shift in one dimension) the bell curve along the horizontal axis. It's kind of like a pedestal. To maximise precision of your frame calibration, adjust the offset such that the left-hand extent (lowest value) of the bell curve pulls away from origin (zero). If you think about this, then you want your calibration masters to fully, numerically represent what the sensor is outputting, since you will be subtracting this from all your data frames.
Like Greg says, this is like a black point. If you clip on the low (black) end, you can end up with precision errors, or depending on what software you use, unsigned integer rollovers (where negative values can end up represented as high values).
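The rollover problem is easy to show with a toy example. Here a light frame has a couple of pixels that read below the master bias level (which is what happens when the offset is too low), and naive unsigned subtraction wraps them around:

```python
import numpy as np

# Simulated 16-bit data. With the offset too low, some pixels in a
# light frame can read *below* the corresponding master bias pixel.
master_bias = np.array([500, 500, 500], dtype=np.uint16)
light       = np.array([498, 500, 503], dtype=np.uint16)

# Naive unsigned subtraction wraps negative results to huge values.
naive = light - master_bias
print(naive)   # [65534     0     3]  <- 498 - 500 rolled over

# Subtracting in a signed type and clipping avoids the rollover.
safe = np.clip(light.astype(np.int32) - master_bias.astype(np.int32), 0, None)
print(safe)    # [0 0 3]
```

Keeping the offset high enough that no bias pixel sits at zero avoids ever hitting this case in the first place.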
Note that my experience is from my ZWO camera, and it's possible that QHY handles this differently...but the fundamental principle should be the same.
To eliminate a potential source of error, I fix the offset (at 50 on my ASI1600) and vary only the gain. As an aside, I always use the same USB speed value also, since my camera does not have memory buffer, and reading from the camera at different speeds could potentially alter the read noise profile.
Regarding experimentation...I would suggest taking your experimental frames using the ASCOM driver. I have heard some hairy stories from a mate with a QHY camera and the "native" support in Sharpcap. (FWIW, I use Sharpcap a lot for planetary and lunar and it is a great piece of software, but it is still at the mercy of the driver with which it must talk to the camera)
So the image attached is full frame - cropped only to remove the result of alignment.
Yes, you will notice there is weirdness going on lower right, and this is what I am trying to isolate. However, this is the best result I have achieved so far in terms of aligning the optical plane at f2.2 with this full frame sensor.
Details:
Celestron 36cm RASA f2.2
QHY600M with Baader f2 Ha filter.
12 x 3 minute exposures unguided on a 10Micron GM2000HPS mount.
Resolution: 9552 x 6298 pixels
Image scale: 0.98 arcsecs per pixel.
A 32-bit image is 240 MB.
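Those figures check out from the published specs (a 36cm f2.2 RASA is ~792mm focal length; the IMX455 pixel pitch is 3.76 µm):

```python
# Sanity-check the numbers above.
focal_length_mm = 360 * 2.2   # 36 cm aperture at f/2.2 -> 792 mm
pixel_um = 3.76               # IMX455 pixel pitch

# Plate scale: 206.265 arcsec/mm per micron of pixel pitch.
scale = 206.265 * pixel_um / focal_length_mm
print(f"{scale:.2f} arcsec/pixel")   # 0.98

# 32-bit (4-byte) data, 9552 x 6298 pixels:
size_mb = 9552 * 6298 * 4 / 1e6
print(f"{size_mb:.0f} MB")           # 241
```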
I cannot believe I even managed to get this data last night. The optics were very, very hot, and I could only get the camera down to 0°C. So I didn't take any darks for this, or flats.
regards
Martin
Last edited by Martin Pugh; 01-02-2020 at 09:43 PM.
I have examined the bias files from this camera, across all read modes. The value of the offset is currently set at 50. Indeed, the histogram is well shifted to the right, i.e. well away from the 'black point'. So from your description below I need to adjust the offset so that the bell curve shifts back to the left, but is not clipped.
Correct?
thanks
Martin
Quote:
Originally Posted by Camelopardalis
Regarding the offset...
..if you look at the histogram of a bias frame, it should show a nice bell curve, which will have a lower and upper extent, with the peak being the median.
The offset will translate (aka shift in one dimension) the bell curve along the horizontal axis. It's kind of like a pedestal. To maximise precision of your frame calibration, adjust the offset such that the left-hand extent (lowest value) of the bell curve pulls away from origin (zero). If you think about this, then you want your calibration masters to fully, numerically represent what the sensor is outputting, since you will be subtracting this from all your data frames.
Like Greg says, this is like a black point. If you clip on the low (black) end, you can end up with precision errors, or depending on what software you use, unsigned integer rollovers (where negative values can end up represented as high values).
Note that my experience is from my ZWO camera, and it's possible that QHY handles this differently...but the fundamental principle should be the same.
To eliminate a potential source of error, I fix the offset (at 50 on my ASI1600) and vary only the gain. As an aside, I always use the same USB speed value also, since my camera does not have memory buffer, and reading from the camera at different speeds could potentially alter the read noise profile.
Regarding experimentation...I would suggest taking your experimental frames using the ASCOM driver. I have heard some hairy stories from a mate with a QHY camera and the "native" support in Sharpcap. (FWIW, I use Sharpcap a lot for planetary and lunar and it is a great piece of software, but it is still at the mercy of the driver with which it must talk to the camera)
Hi Martin,
Absolutely, just notch down the offset until it’s just shy of the left edge of the histogram.
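A quick numerical way to see how much room there is to notch down: measure how far the lowest pixel of a bias frame sits above zero. A sketch (a real bias frame would be loaded from FITS, e.g. with astropy; the synthetic frame here just stands in for one whose histogram sits well right of zero):

```python
import numpy as np

def offset_headroom(bias_frame, floor=0):
    """Distance (ADU) between the lowest bias pixel and the clip floor.

    A healthy margin means the offset can come down; zero means pixels
    are already clipping at black.
    """
    return int(bias_frame.min()) - floor

# Synthetic stand-in for a bias frame whose histogram is well right of zero.
rng = np.random.default_rng(0)
bias = rng.normal(800, 25, size=(100, 100)).astype(np.uint16)
print(offset_headroom(bias))   # hundreds of ADU of headroom -> offset can drop
```

Re-run this after each offset change; stop lowering once the margin gets small but is still comfortably above zero.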
Don't forget, if you're in the OAG camp, you're likely to need a new one of those as well.
I think if you're saving up for one of these, mono is the way to go. Just save for a bit longer, I guess.
I missed this one, and yes, a new OAG is required. I was replacing the one I have been using anyway due to lack of rigidity, so if I go mono I will look at the full setup: ASI6200 cam with the tilt plate removed, filter wheel, then OAG with the tilt plate fitted to it.
I am hoping to catch Stellarvue before they anodise the body of the field corrector for my new SVX80T, to get them to run the camera-side thread a bit further down the outside and add some backfocus distance. The cam/wheel/OAG/tilt plate combo is in theory bang on the 55mm normally required, but I will need a thread adapter that consumes 2mm, and I would prefer to have another couple of mm to fine-tune with spacers.
The SVX80T has an image circle that covers a full frame and I bought the one with the 3" focuser to avoid vignetting, just in case I can manage a 6200.
From what I have read on Cloudynights some are reporting an offset of 50.
Greg.
Yep, I had that too, but the histogram was way to the right.
Are you both talking about the ASI6200MM? I thought Martin had the QHY600MM, and on CN I've seen references to 50 for the ASI6200. Would the two have the same ideal offset, both being IMX455s, or would the implementation around the sensor make a difference?
Anyhow, here's how my 300-sec darks at -10°C with gain 0 look with my ASI6200MM.