Hi guys, this is probably a really dumb question, but why not just make copies of a sub and stack them? Instead of taking x amount of subs, just take x amount of copies of the original.
(make copies of the original each time not copies of copies)
Or does making a copy reduce the quality of the image too much?
Basically.....
The first image you take of an object is full of random holes that lack any detail. You want all the holes covered to create the smoothest most detailed image possible.
The second image you take has random holes in it too, but as they are in different places, when you lay it on top of the first image, it will cover some of the holes in the first.
So the more images added, the more the first image will be improved.
I think it has been generally explained.
Each image of the object will have the same main features. Noise pulses will be recorded at random. Fuzzy edges, mostly caused by air movement, will be random. Errors on the chip will be corrected by darks and flats.
The result, then, is that the main features add up on every frame, while the items that are random do not add on every frame; in fact, some will even cancel. The net result is the sharpening of edges and accentuation of the main features.
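Not part of the original thread, but the add-up/cancel argument above is easy to sketch with NumPy (the signal value of 100 and noise sigma of 10 are made-up numbers, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0                 # the "main feature": identical in every frame
sigma = 10.0                   # per-frame random noise
n_frames = 16

# Each sub = the same signal plus fresh random noise
subs = signal + rng.normal(0.0, sigma, size=n_frames)

stacked = subs.mean()          # averaging the frames

print(abs(stacked - signal))       # typically of order sigma / sqrt(n_frames)
print(sigma / np.sqrt(n_frames))   # 2.5 -- the expected noise of the stack
```

The random part shrinks roughly as 1/sqrt(N) while the common signal stays, which is the "accentuation" described above.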
Because all your subs would be identical, stacking will not increase your signal-to-noise ratio, which is what data-rejection algorithms rely on. The only thing common between subs in a stack is the signal (stars and nebs). The difference is the noise: read-out, background, hot pixels (provided you dither), etc. If you stack the same sub you'll increase the signal and the noise equally, and the algorithm won't reject anything.
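A quick way to see the rejection point (not from the thread; the hot-pixel value and noise levels below are invented for illustration) is to compare averaging ten copies of one sub against median-stacking ten independent subs:

```python
import numpy as np

rng = np.random.default_rng(1)
sky = np.full(1000, 50.0)                 # flat 1-D "image" of 1000 pixels

one_sub = sky + rng.normal(0.0, 5.0, size=1000)
one_sub[123] = 4000.0                     # a hot pixel in this sub

# a) stack ten copies of the same sub: the defect is common to all copies
same_stack = np.mean([one_sub] * 10, axis=0)

# b) stack ten independent subs; the hot pixel lands in only one of them
subs = [sky + rng.normal(0.0, 5.0, size=1000) for _ in range(10)]
subs[0][123] = 4000.0
median_stack = np.median(subs, axis=0)

print(same_stack[123])     # 4000.0 -- identical copies give the algo nothing to reject
print(median_stack[123])   # ~50 -- the outlier is rejected because the other subs disagree
```

With identical copies the defect looks exactly like signal, so no rejection scheme can touch it; with independent subs it is the odd one out and the median throws it away.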
Great answers, and a great question that many people would wonder!
I feel an article would be great about this.
Does anyone want to volunteer to pull those answers into a single article, maybe with a few example images:
a) the same 1 minute sub multiplied 10 times
b) 10 x 1 minute subs
This is actually a great question, and the answer can get relatively complex. The simple answer is that any real image will have significant noise in addition to the actual signal (i.e. the galaxy or planet) that you're trying to capture.
What happens is that the noise is not the same in successive images, whereas the object you are interested in usually is. So when you add the images, the noise tends to cancel out while the signal is reinforced. But if you add the same image repeatedly, the noise is the same each time and gets enhanced as much as the signal, so there is no net gain.
I hope you can see the following attachments, which should show this:
1. Simulated slice across a star or planet, with noise added. This is an animated gif that would represent 4 successive images.
2. The result of adding one of the images from the simulated slice 4 times.
3. The last image is the result of adding the 4 different images from the slice.
In number 3 I hope you can see that the "planet" has much less noise than in number 2. I hope this helps demonstrate why successive images will show more than the same image added multiple times.
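The attachments themselves aren't reproduced here, but the same demonstration can be sketched in NumPy (the Gaussian profile and noise level are assumptions standing in for the simulated slice):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-5.0, 5.0, 200)
profile = np.exp(-x**2)            # idealised cross-section of a star/planet

# Four "successive images" of the slice, each with fresh noise
slices = [profile + rng.normal(0.0, 0.3, size=x.size) for _ in range(4)]

same_x4 = np.mean([slices[0]] * 4, axis=0)   # one image added 4 times
diff_x4 = np.mean(slices, axis=0)            # 4 different images added

# Residual noise in the flat wings, away from the "planet"
print(same_x4[:40].std())   # unchanged, roughly the original 0.3
print(diff_x4[:40].std())   # reduced, roughly 0.3 / sqrt(4)
```

Plotting `same_x4` and `diff_x4` reproduces the effect in attachments 2 and 3: the four-different-images stack has visibly cleaner wings around the peak.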
Just my mathematical understanding using basic statistical theory. Sorry about the accuracy of the colours, as I am working in Excel 2003 and it only has 54 colours. I was hoping the RGB command would work accurately, but I think the result is still effective in simply showing the concept.
Basically only pay attention to the "A,B,C,D" pixels on the top of the C graphic. I think you'll get the idea.
Remember average = (a1+a2+a3+..+aN)/N
Median = the middle number. E.g. 1,4,5,2,7,2,3. To find the median, rearrange in order: 1,2,2,3,4,5,7. The median is the middle number, which in this case is 3. The average of this set is (1+2+2+3+4+5+7)/7 = 24/7 ≈ 3.43, which does not appear anywhere in the original set. So averaging can produce values that never occurred in your data, whereas the median (of an odd-sized set) is always an actual value from the set.
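To make the mean-vs-median point concrete, here is the same worked set in plain Python (not from the thread, just the numbers above):

```python
data = [1, 4, 5, 2, 7, 2, 3]

ordered = sorted(data)               # [1, 2, 2, 3, 4, 5, 7]
median = ordered[len(ordered) // 2]  # middle element of an odd-length set

average = sum(data) / len(data)      # 24 / 7

print(median)             # 3    -- always one of the original values for odd N
print(round(average, 2))  # 3.43 -- not present in the original set
print(median in data)     # True
print(average in data)    # False
```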
Hope I haven't complicated it or goofed it with errors.
Here is an example of some data I have been working on today. I took a single 60-second sub, replicated it 9 times and stacked the 10 files. The result is on the left. The images are four times the original size and stretched exactly the same.
On the right is the result of stacking 10 different subs of the same duration. As the files were different, I could do some data rejection to get rid of the effects of defects in the CCD chip and other random noise.
I had some old film of the Horsehead (a single 15-minute mono exposure). I played with three copies, stretched them with different algorithms, then added them as R, G and B. With a bit of fiddling I got the colours nearly right when compared to a colour shot. Sorry, I no longer have the pic; it got lost in an HDD crash about 12 years ago.