23-08-2012, 10:04 AM
irwjager (Ivo)
Quote:
Originally Posted by Sali
I can just imagine someone in three months considering this same topic, finding this thread and having their research cut short. I'd be more than happy to PM you if that's just how it's done round here (learning heaps already - thanks) but I'd like to make at least one more post about this here first.
There are no rules, Sali, as far as I am concerned. I'll leave it to the moderators of this forum to put this discussion in the correct category. I just don't want to 'scare' true beginners, as we're starting to get quite technical...

I just wanted to invite you to correspond with me so I can help you with your program.
Quote:
Originally Posted by Sali
Followed your links and googled some terms. What a mind-blowing adventure! Not possible without learning the true terminology. What I have so far called "uncompressed rasters" more accurately refers to rasters which have retained their raw bayer matrix data. I don't know what process determines whether rasters are debayered/compressed or not. In my notes I just call this visual effect the "checkerboard matrix" because the red and blue pixels in those rasters pretty much always return no value at all - making the raster look like a green chess board.

The rasters which most likely contain stars all exhibit this appearance. Starsoup finds them based on the checkerboard pattern then examines each pixel: if the pixel fits neatly into the checkerboard pattern (in terms of colour and contrast) then it is probably not a star. If it exhibits huge variations in colour or contrast compared to its neighbouring pixels AND doesn't fit into the pattern, then it is most likely a star and is recorded - even where there are two in one raster.

Some checkerboards (which Starsoup v1.0 cannot recognise) appear to have been filtered horizontally or horizontally+vertically (linear vs bilinear?); hence my 2 or 3 layers of compression observation. I guess that's a blunt observation?
What is really happening is this:
The CCD collects photons for the duration of the exposure time.
Some of this signal is light; some of it is thermal noise (thermally generated electrons, a.k.a. dark current).
Some of the light may be light pollution, some of it Gegenschein, some of it stray light, some of it diffracted light, etc.
Some of the signal may come from heat generated within the phone itself.
Some of the pixels on the CCD may be dead (i.e. will always record 0) or may be 'hot' (i.e. will always record full intensity).

Once the exposure is over, the data is read from the CCD. During readout, noise is introduced by the analog-to-digital conversion, along with a natural bias (some pixels/wells have a different starting value for 'empty').
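To make the steps above concrete, here is a toy numpy model of what a single exposure records. Every number in it (sensor size, sky rate, dark-current rate, bias level, read-noise spread) is invented purely for illustration - it is a sketch of the mechanism, not the 3GS' actual behaviour:

```python
import numpy as np

rng = np.random.default_rng(0)

exposure = 1.0                        # seconds (made-up)
sky = np.full((64, 64), 5.0)          # photons/s from sky glow, stray light, etc.
stars = np.zeros((64, 64))
stars[10, 20] = 300.0                 # one real star
dark_rate = 2.0                       # thermally generated electrons/s ('dark current')
bias = 100.0                          # per-pixel starting offset for 'empty'

# Photon arrival and dark current are both counting processes -> Poisson noise
photons = rng.poisson((sky + stars) * exposure)
dark = rng.poisson(dark_rate * exposure, size=photons.shape)

# Readout adds roughly Gaussian noise during analog-to-digital conversion
read_noise = rng.normal(0.0, 3.0, size=photons.shape)

frame = photons + dark + bias + read_noise

# A 'hot' pixel always reads full intensity; a dead pixel always reads 0
frame[5, 5] = 1023.0
frame[6, 6] = 0.0
```

Even in this simplified model, the star pixel is the only thing in the frame that is actually "signal" - everything else is exactly the mixture of unwanted contributions listed above.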

Once the image has been digitized, the image needs to be debayered.
The 3GS' CCD is a 3 Megapixel sensor with 0.75MP allocated for red, 1.5MP allocated for green and 0.75MP allocated for blue.
From that, 3MP *per channel* needs to be reconstructed, so the gaps in the checkerboard pattern need to be filled in (this is what debayering does). There are many algorithms that do just that, but they all have one thing in common - they use neighbouring pixels to fill the gaps. If the neighbouring pixels have recorded noise, then this noise is propagated into the gaps. The result is that the noise is exacerbated.
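As a rough illustration of that gap-filling, here is a minimal numpy sketch of neighbour-based demosaicing. It assumes a hypothetical RGGB layout and fills each gap with the average of the same-colour samples around it - real camera pipelines use far more sophisticated algorithms, but they all share the neighbour-averaging property that spreads noise:

```python
import numpy as np

def demosaic_bilinear(raw):
    """Crude demosaic sketch for an RGGB Bayer mosaic (hypothetical layout).

    Each colour plane keeps its sampled values and fills the gaps with the
    average of same-colour samples in the surrounding 3x3 neighbourhood,
    so any noise in one sample leaks into the interpolated pixels around it.
    """
    H, W = raw.shape
    r_mask = np.zeros((H, W), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((H, W), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    rgb = np.zeros((H, W, 3))
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        plane = np.where(mask, raw, 0.0)
        weight = mask.astype(float)
        # Sum same-colour samples (and their count) over each 3x3 window;
        # np.roll wraps around at the edges, which is fine for a sketch.
        num = sum(np.roll(plane, (dy, dx), axis=(0, 1))
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        den = sum(np.roll(weight, (dy, dx), axis=(0, 1))
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        interp = num / np.maximum(den, 1.0)
        rgb[..., c] = np.where(mask, raw, interp)
    return rgb
```

Feed this a mosaic where one green sample has recorded noise and you will see that value smeared into the interpolated pixels next to it - exactly the noise propagation described above.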

Once the full image has been debayered (note: the *full* image is debayered, so no checkerboard pattern will remain), there is the 'zazzing-up' and compression stage.
Unfortunately, all camera (and phone) vendors use (undisclosed) color and contrast enhancement algorithms prior to saving to JPEG, as well as non-linear stretching, sharpening techniques and noise reduction. These tend to work well for 'normal use' but can have disastrous effects on astronomical subjects, making noise worse or destroying detail.

Once the auto-image processing routines have had their way with the raw data, the final result is compressed to JPEG. This is unfortunately another source of grave distortion of the recorded data.
JPEG achieves its high compression by throwing away precision when representing high-frequency detail (e.g. stars) and by throwing away color data.
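To get a feel for where that precision goes, here is a toy sketch of the mechanism: an 8x8 block DCT with a single coarse quantisation step. This is not the real JPEG pipeline (which also uses per-frequency quantisation tables, zig-zag ordering and chroma subsampling, and the step size of 40 here is invented), but it shows how a star - a spike of high-frequency detail - gets mangled:

```python
import numpy as np

def dct2_matrix(N):
    # Orthonormal DCT-II basis matrix (rows = frequencies)
    n = np.arange(N)
    M = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    M[0] /= np.sqrt(2.0)
    return M * np.sqrt(2.0 / N)

N = 8
D = dct2_matrix(N)

# An 8x8 block containing a single 'star': one bright pixel on a dark background
block = np.zeros((N, N))
block[3, 4] = 200.0

coeffs = D @ block @ D.T            # forward 2-D DCT
q = 40.0                            # one coarse, made-up quantisation step
coeffs_q = np.round(coeffs / q) * q # precision thrown away here
restored = D.T @ coeffs_q @ D       # inverse 2-D DCT
```

A lone bright pixel spreads its energy across *all* frequencies, so after quantisation the block can no longer be reconstructed exactly - the star's intensity changes and ringing appears around it.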
Quote:
Originally Posted by Sali
My observation is that the 3Gs camera is mostly blind to this "infinite light" because it is simply not sensitive enough to record it. There may indeed be light out there - but the camera can't see it and we only want to record what the camera actually sees. In such cases, pixels which failed to record a value should return "black" when read to a computer screen. But with the 3Gs they don't - they return various noises which appear to have nothing to do with the subject.
This is unfortunately an erroneous observation (see above about the different sources of signal). Noise mitigation is not a black-and-white/binary problem. This is why a simple binary decision (show a pixel or do not show a pixel) is not a viable solution.
Noise is uncertainty in a signal - it's a sliding scale.
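A small sketch (with made-up numbers) shows why a hard threshold cannot work on that sliding scale: the brightness distributions of pure-noise pixels and faint-star pixels overlap, so any cut-off trades false stars against missed stars:

```python
import numpy as np

rng = np.random.default_rng(2)

# Background pixels: pure noise centred on 0 ADU with a spread of 10 (invented)
background = rng.normal(0.0, 10.0, 100_000)
# Faint-star pixels: a real 15 ADU signal buried in the same noise
faint_stars = 15.0 + rng.normal(0.0, 10.0, 100_000)

threshold = 20.0  # a binary 'show a pixel or not' decision
false_positives = np.mean(background > threshold)   # noise mistaken for stars
missed_stars = np.mean(faint_stars < threshold)     # real stars thrown away
```

Raise the threshold and the false positives drop, but the missed stars climb; lower it and the reverse happens. There is no cut-off that separates the two populations cleanly.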
Quote:
Originally Posted by Sali
Fundamentally, Starsoup is based on my observation that the "noise" exhibited in the original picture is common across all images taken in near-no-light conditions with this camera. The only difference between a 3Gs picture of the night sky and a 3Gs picture taken inside a sealed box are those few rasters that have retained their checkerboard pattern (which I guess is the remnant bayer matrix). The noise (even those odd bright points which may "look" like stars at first) is present in both situations. It seems most likely the noise is not due to cosmic light and has no bearing on anything outside the hardware.
That is an excellent observation, and it is the reason why people try to obtain a model of this very pattern noise. These models are called bias frames (a frame taken with zero exposure time) and dark frames (a frame taken with the same exposure time as the light frame - i.e. the 'real picture' - but in complete darkness).
By subtracting the dark frame from the light frame, one can obtain an image that is free from certain types of noises.
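A minimal numpy sketch of that calibration (every rate and level is invented for illustration): averaging several darks beats down their random component, and subtracting the resulting master dark removes the bias, the average dark current, and the hot pixels from the light frame:

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (64, 64)
bias = 100.0
hot = np.zeros(shape); hot[5, 5] = 400.0       # a hot pixel, same in every frame

def dark_frame():
    # Same exposure, lens cap on: bias + dark current + hot pixels + read noise
    return bias + hot + rng.poisson(2.0, shape) + rng.normal(0.0, 3.0, shape)

def light_frame():
    star = np.zeros(shape); star[10, 20] = 300.0
    return dark_frame() + rng.poisson(star)    # the same junk, plus real light

# Average several darks into a 'master dark', then subtract it
master_dark = np.mean([dark_frame() for _ in range(16)], axis=0)
calibrated = light_frame() - master_dark
```

After subtraction the hot pixel and the bias are gone and the star stands on a background near zero - though the *random* part of the noise (photon shot noise, read noise) remains; only the repeatable pattern can be modelled away.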
Quote:
Originally Posted by Sali
Without doubt, that is a cool picture. Thanks for showing such interest! Downside is, the image you posted mimics the results I originally got when adjusting the image by hand. It makes unusually bright clumps of noise look like stars - and I'm pretty sure they're not, based on the observations I mentioned above. I may well be wrong in my observation - but it's pretty weird to see similar noise patterns in two different images and then label that noise based on the image context rather than its pattern.
That's exactly what I'm talking about; we can't be fundamentally sure what is noise and what is not. Therefore we cannot make the decision to allow or remove said pixels.
Now, what we *can* do (and what you seem to be attempting at a basic level) is try to find the characteristics of image features that we can be sure are *not* noise. In order to do that, though, you need to know your enemy and learn about the different types of noise and artifacts - they all have different signatures. Only once you understand these characteristics can you try finding solutions to them. The 3GS image suffers not from one type of noise/problem but from many. Worse, they interact and strengthen each other.
Quote:
Originally Posted by Sali
I've attached another image to show what I mean - though I guess you figure it already. With Starsoup, the top-left raster would obviously record a star: note the checkerboard pattern and the unique traits of the pixels themselves (this example also shows why we preserve the neighbouring pixels - in case of blurring).
The 'checkerboard' pattern that you see should be wholly attributed to the JPEG compression stage, which throws its hands up in the air trying to compress a lot of high-frequency detail (e.g. random noise) alongside real detail, and says 'Sorry, this is the best I can do with the bandwidth I am allowed to consume'.
Your assumptions about how to detect erroneous stars are fundamentally flawed unfortunately. That's why I'm urging you to try to understand how the image came to be from sensor to JPEG at every step of the way.
Quote:
Originally Posted by Sali
Meanwhile, the bright point in the bottom right may indeed be a star - but the current version of Starsoup would not recognise it because its checkerboard has been filtered or compressed (how do we know which is responsible?). It looks linearly filtered to me - the new version should recognise this as a star.
The problem right now is that StarSoup is throwing away real stars, making the sky unrecognisable. I urge you to have a look at any freely available planetarium software to see if you can match stars in the input image. You will see StarSoup's algorithm is discarding real stars at random.
Quote:
Originally Posted by Sali
But all the stuff in between is just soup. It's useless data that's made its way in at some point and which I sincerely doubt has any bearing on the image we tried to capture. It is noise that has been filtered and compressed - that's all. Manual non-biased adjustments cannot account for this.
Again, useless data doesn't exist. Data is always useful but, in the case of noise, merely suffers from uncertainty. It's not a black-or-white domain.
Quote:
Originally Posted by Sali
Will your manually adjusted image compute in KStars? That might be really helpful for me - manually testing the new recognition algorithm.
Try matching your input image to the output of some planetarium software.
To measure StarSoup's accuracy, try plate solving software to see if it can successfully recognise the output image.

Again, feel free to PM me (or just post here if you want) - it's great to see someone tackling astro-specific image processing algorithms.

Cheers,