Quote:
What is the "ideal" number of photos from a Canon?
Very good question!
For DSS, ten photos is a good number, and I suggest you use Kappa-Sigma stacking. I read from the DSS team that more than 30 photos will not improve the quality of the result, at least regarding noise.
The number of photos with a Canon 1100D (or T3) depends on the signal level.
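For anyone curious what Kappa-Sigma does, here is a minimal sketch of the idea (my own illustration, not DSS's actual code): for each pixel position, values further than kappa times the standard deviation from the mean are rejected, and the surviving frames are averaged. This is how a satellite trail or hot pixel in one frame disappears from the stack.

```python
# Minimal Kappa-Sigma clipping sketch (one pixel position across frames).
# This is an illustration of the general technique, not DSS's implementation.
import statistics

def kappa_sigma_stack(values, kappa=2.0, iterations=3):
    """Average one pixel across frames, rejecting outliers iteratively."""
    vals = list(values)
    for _ in range(iterations):
        if len(vals) < 2:
            break
        mean = statistics.mean(vals)
        sigma = statistics.pstdev(vals)
        kept = [v for v in vals if abs(v - mean) <= kappa * sigma]
        if not kept or len(kept) == len(vals):
            break  # nothing rejected this pass, we are done
        vals = kept
    return statistics.mean(vals)

# Six frames; one has a bright spike (e.g. a satellite trail).
frames = [100, 102, 98, 101, 99, 400]
print(kappa_sigma_stack(frames))  # 100.0 -- the 400 outlier is clipped
```

A plain average of those six frames would give 150 and leave the trail visible; the clipped average recovers the true background.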
The problem is saturation of the RGB values when you stack.
With CCD cameras you can do hours of photos. My experience says that is not true for a Canon.
Each case is particular. I did captures of 1 to 2 hours, 20 to 40 photos, and in some cases the result was better with fewer than the total number of photos. I lost time, because I didn't use all the photos.
With my set of equipment and my Canon, my experience says that for clusters a total time between 20 and 40 minutes is good. The ISO and the individual exposure time depend on the histogram, as I said.
For galaxies and faint nebulas, 1 to 1.5 hours; for bright nebulas, 40 to 60 minutes.
Normally I try to keep the distance from the left edge of the histogram to the beginning of the curve at 1/8 to 1/4 of the histogram width.
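That rule is easy to check numerically on a frame's histogram. The sketch below (my own illustration; the function name and the 256-bin histogram are assumptions) finds the first populated bin and compares its offset to the full histogram width:

```python
# Check the "left gap" rule: the empty space before the curve starts
# should be roughly 1/8 to 1/4 of the histogram width.
# This is my own illustration of the rule in the post, not any real tool.

def histogram_gap_fraction(hist):
    """Fraction of the histogram width left of the first populated bin."""
    first = next(i for i, count in enumerate(hist) if count > 0)
    return first / len(hist)

# A 256-bin histogram whose curve starts at bin 48:
hist = [0] * 48 + [10] * 208
frac = histogram_gap_fraction(hist)
print(frac)                    # 0.1875
print(0.125 <= frac <= 0.25)   # True -- inside the recommended window
```

If the gap comes out well below 1/8, the exposure is too short (or the ISO too low) for that sky; well above 1/4, you are wasting dynamic range and risking saturation.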
The levels (heights) of the red, green and blue curves should be as similar as possible. This changes with different ISO.
I don't worry about width differences between the red, green and blue curves. The width depends on the target; a nebula and a cluster have different shapes.
For nebulas and galaxies I sometimes use more than one set of exposures. For example: 20 to 40 photos at ISO 400 and 4 minutes, plus 2 to 5 photos at ISO 800 (or 1600) and 2 or 4 minutes. I stack them together, as groups, in DSS. Or I stack them separately and mix them with the aid of a layer mask in Photoshop.
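The layer-mask mixing can be sketched in a few lines. This is my own one-channel illustration (function name, threshold and pixel values are all assumptions): wherever the long-exposure stack is near saturation, the pixel is taken from the shorter stack instead, which is roughly what a luminosity layer mask does in Photoshop.

```python
# Sketch of mixing two stacks with a mask (one channel, values in 0..1).
# Hard threshold for brevity; a real layer mask would feather the edge.

def blend_with_mask(long_stack, short_stack, threshold=0.9):
    """Per pixel: use the short exposure where the long one is too bright."""
    out = []
    for long_px, short_px in zip(long_stack, short_stack):
        weight = 1.0 if long_px >= threshold else 0.0
        out.append(weight * short_px + (1.0 - weight) * long_px)
    return out

# Faint halo comes from the long stack, the blown core from the short one.
long_stack  = [0.10, 0.40, 1.00, 1.00, 0.40]
short_stack = [0.02, 0.10, 0.60, 0.65, 0.10]
print(blend_with_mask(long_stack, short_stack))
# [0.1, 0.4, 0.6, 0.65, 0.4]
```

The two middle pixels (the saturated core) now carry real detail from the short exposures, while the outer pixels keep the deeper signal of the long ones.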
The problem with nebulas and galaxies is to catch the faint information without saturating the bright core.
As you can see, there isn't an absolute answer to the question. The quality of the sky, light pollution and seeing have a lot of influence on the result.
Note:
When you work with L, R, G and B filters and a mono camera, you can balance the amount of information during processing. But with a color camera, mainly a Canon, you can't do it.
A mono camera with a filter uses all of its pixels to catch one specific color.
A color camera has more green pixels than red and blue, arranged in a pattern. For our Canon (1100D or T3) it is RGBG.
It means the red and blue samples are separated by 3 pixels. You can see it like red - nothing - nothing - nothing - red. In other words, there are three "black holes" between the red (and blue) samples. You can think of the red (and blue) pixel as having 4 times the pixel size.
You can't simply think of 4272 pixels divided by 3. For me, it works like 3 different cameras assembled as one. For example: the red camera has 1068 pixels of 20.72 microns each, but without the real sensitivity of a 20.72 micron pixel. This giant virtual pixel still behaves like a 5.18 micron pixel.
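The arithmetic behind that "virtual camera" picture is simple enough to write down (my own sketch, using the 4272-pixel width and 5.18 micron pitch quoted above for the 1100D/T3):

```python
# Reproducing the post's "virtual red camera" arithmetic.
# 4272 px width and 5.18 um pitch are the 1100D/T3 figures quoted above.
sensor_width_px = 4272
pixel_pitch_um = 5.18

# The post treats same-color photosites as repeating every 4 pixels in a
# row, so the "virtual" red (or blue) camera has:
virtual_width_px = sensor_width_px // 4   # 1068 samples across
virtual_pitch_um = pixel_pitch_um * 4     # 20.72 um spacing

print(virtual_width_px, round(virtual_pitch_um, 2))  # 1068 20.72
```

So the red channel is sampled on a much coarser grid than the sensor's nominal resolution, yet each sample still collects light only over a single 5.18 micron photosite.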
That seems to me the great problem of color cameras regarding saturation of the RGB information. What the internal electronic amplifier (ISO) does with this information... I don't know.
Maybe I am talking nonsense... but I know that it is easy to saturate the RGB information with a Canon... that, I know!