IceInSpace > Beginners Start Here > Beginners Astrophotography

  #1  
Old 16-08-2012, 02:35 PM
Sali
Registered User

Sali is offline
 
Join Date: Aug 2012
Posts: 3
Starsoup - Naked iPhone Astro Test

This is a photo of the 11/12/2011 total lunar eclipse from 32 13 S, taken with my naked iPhone 3GS. No attachments, lenses or other peripheral hardware - not even a tripod! Sigh. This was the first astrophotograph I ever took, so it has sentimental value more than anything. Obviously the original raw photo was unimpressive, so I wrote a program called Starsoup to fix it. I'll quickly explain what Starsoup does, then lead into my actual question:

The program reverse-engineers the JPG format and sorts out actual objects in the image from the weird green soup you'll find if you capture an iPhone photo in total darkness. Basically, you point the iPhone at the night sky, take a photo, run the photo through Starsoup and the program tells you where all the stars are. Then it spits out a new image with ONLY the stars, cutting out all the useless soup. Chuck the clean image into your nearest favourite image editor and you've got yourself an actual picture of something - rather than murky green noise.
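If you're curious what that cull looks like in code, here's a minimal numpy sketch (the function name and threshold are hypothetical - the real Starsoup works on JPEG blocks, as I explain further down, not a global brightness cut):

```python
import numpy as np

def extract_stars(img, threshold=200):
    """Keep only pixels bright enough to plausibly be stars;
    cull everything else to black.

    A deliberately crude stand-in for Starsoup's output stage:
    the real program uses JPEG block structure rather than a
    global brightness threshold. `img` is a 2D uint8 array.
    """
    clean = np.zeros_like(img)
    mask = img >= threshold
    clean[mask] = img[mask]
    return clean

# A tiny frame: dim green 'soup' everywhere, one bright star pixel.
frame = np.full((4, 4), 30, dtype=np.uint8)
frame[1, 2] = 250
clean = extract_stars(frame)  # only the star survives
```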

So why bother? There are definitely better and more expensive ways to do this - like getting a proper camera! But on a technical level, adjustments in programs like GIMP, Photoshop or, really, any other photo-editing program are not biased enough to recognise the difference between starlight and soupy noise. Some people might actually consider that a good thing. But even if you stick to pure stacking of the image, an unclean capture from a dirty camera can turn out looking VERY different from one cleaned by Starsoup. Often the noise can be mistaken for starlight. Ouch. My dream is for someone to take this idea and make a GIMP plugin out of it for easy application.



But today, I'm simply rewriting the "bias algorithms" in Starsoup to cut through more layers of compression (to find stars even if they have been compressed 2 or 3 times). The attached image is from the ORIGINAL version of Starsoup. It keeps a 3x3 grid of pixels around the star and stacks the opaque image over a blurred layer. The blurred layer is just an arbitrary throw-back to light-leaking in film (for effect only).
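For the curious, the blur-stack step amounts to something like this (numpy-only sketch; the function names are made up, and a 3x3 box blur stands in for whatever blur the release build ends up using):

```python
import numpy as np

def box_blur(img):
    """Cheap 3x3 box blur built from shifted copies (edges wrap)."""
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / 9.0

def add_glow(clean):
    """Stack the crisp star layer over a blurred copy of itself
    using a 'screen' blend, so the sharp star keeps full brightness
    while picking up a soft film-style halo. Values are in [0, 1].
    """
    halo = box_blur(clean)
    return 1.0 - (1.0 - clean) * (1.0 - halo)

star = np.zeros((9, 9))
star[4, 4] = 1.0
glow = add_glow(star)
```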

So... My Question:

Do you consider it "bad form" to have a blur-layer to accentuate the stars? I can take it out of version 2.0 if it's not as cool as I thought.



I feel pretty noobish writing code to fix my cruddy photos, but maybe this program will have a more useful application for the hobby later on. Who knows?
Attached Thumbnails
Click for full-size image (IMG_2599 111211 total lunar eclipse CLEAN.png)
60.5 KB116 views
Click for full-size image (Starsoup Crop.png)
25.3 KB67 views
Click for full-size image (IMG_2599 111211 total lunar eclipse ORIGINAL.JPG)
169.2 KB53 views

Last edited by Sali; 18-08-2012 at 10:43 PM. Reason: added extra images
  #2  
Old 17-08-2012, 10:42 AM
Poita (Peter)
Registered User

Poita is offline
 
Join Date: Jun 2011
Location: NSW Country
Posts: 3,586
It would be great to see a before and after shot.
  #3  
Old 18-08-2012, 09:33 AM
irwjager's Avatar
irwjager (Ivo)
Registered User

irwjager is offline
 
Join Date: Apr 2010
Location: Melbourne
Posts: 532
Hi Sali,

Always nice to see people thinking outside the box and having a go at it themselves. Newcomers are always the ones with the potential to innovate because they approach problems with fresh eyes. Often people come to the same conclusion on how to solve their problems, but now and again someone comes up with a novel idea worth pursuing.

Noise reduction (which is what your problem seems to boil down to) is a field of much research. It all comes down to preserving as much detail as possible, while getting rid of unwanted signal.
To do the best possible job, you need to understand the nature of the noise in your image, as well as the characteristics of the detail you are trying to preserve. If you are able to accurately model the noise, you can subtract it from the signal and be left with the 'clean' signal.

Now, correct me if I'm wrong, but what you seem to be doing is clipping certain parts of the image to black. Aside from space not being pure black, this is quite a blunt way of getting rid of noise. This is because the question of noise is not a black-and-white problem - it is merely a degree of uncertainty in the signal which is quantifiable. For example, depending on your algorithm, a pixel may be deemed to be 25% noise and 75% 'real signal'. There are various ways of representing this in your image (diffusion by means of a Gaussian kernel is a popular one).
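To make that concrete, here's a toy sketch of soft attenuation versus hard clipping (the logistic weighting and its parameters are purely illustrative - not a recommendation of any particular noise model):

```python
import numpy as np

def soft_denoise(img, noise_floor=20.0, scale=15.0):
    """Attenuate pixels by an estimated probability of being signal,
    instead of clipping them to black outright.

    A logistic curve maps brightness to a 0..1 'signal confidence',
    so a pixel judged 25% signal keeps 25% of its value rather than
    being zeroed. noise_floor/scale are illustrative, not calibrated.
    """
    confidence = 1.0 / (1.0 + np.exp(-(img - noise_floor) / scale))
    return img * confidence

frame = np.array([[5.0, 20.0, 200.0]])
softened = soft_denoise(frame)
# Dim pixels are damped but not destroyed; bright ones pass through.
```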

If you really want to understand the origin of the noise (and improve your program), I urge you to read up on the different types of noise you will find in your image in your particular situation. Also, don't forget that the JPEG is gamma corrected, modifying the distribution of your noise between dark, mid tone and bright parts (e.g. the noise distribution is no longer linear).

A 'before-and-after' shot would be nice!

As for your question about 'blurring'. What you would be simulating/restoring is an approximation to diffraction of the starlight by the aperture of your iPhone's camera. That profile (also called the Point Spread Function) is not a simple Gaussian blur, but follows the Airy disc pattern (look it up!) - assuming there are no further obstructions and your aperture is perfectly circular.
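If you want to play with it, the Airy profile can be computed with nothing but Python's standard library (a sketch using the J1 power series; the function names are mine):

```python
import math

def j1(x, terms=30):
    """Bessel function of the first kind, order 1, via its power series:
    J1(x) = sum_m (-1)^m / (m! (m+1)!) * (x/2)^(2m+1)."""
    return sum((-1) ** m / (math.factorial(m) * math.factorial(m + 1))
               * (x / 2) ** (2 * m + 1) for m in range(terms))

def airy_intensity(x):
    """Normalised Airy pattern I(x) = (2 J1(x)/x)^2, with I(0) = 1.

    x is the dimensionless radial coordinate. Unlike a Gaussian,
    the profile has rings: it first reaches zero near x = 3.8317
    (which sets the classic diffraction-limited resolution).
    """
    if x == 0.0:
        return 1.0
    return (2.0 * j1(x) / x) ** 2
```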

Cheers,
  #4  
Old 18-08-2012, 09:42 AM
silv's Avatar
silv (Annette)
Registered User

silv is offline
 
Join Date: Apr 2012
Location: Germany 54°N
Posts: 1,110
wouldn't focusing on the currently known approach obscure the newcomer's creative fresh view on things?
maybe, only without knowing anything about the currently agreed-to-be-best-methods, a new method can be derived.

it takes guts. from both sides: the new broom - and the old brooms, too.
  #5  
Old 18-08-2012, 10:17 AM
irwjager's Avatar
irwjager (Ivo)
Registered User

irwjager is offline
 
Join Date: Apr 2010
Location: Melbourne
Posts: 532
Quote:
Originally Posted by silv View Post
wouldn't focusing on the currently known approach obscure the newcomer's creative fresh view on things?
maybe, only without knowing anything about the currently agreed-to-be-best-methods, a new method can be derived.

it takes guts. from both sides: the new broom - and the old brooms, too.
Too true.
Unfortunately though, it appears that Sali's current algorithm has made 99.2% of his signal unrecoverable (it is clipped to black), which makes further processing and/or detail recovery impossible.
  #6  
Old 18-08-2012, 10:37 AM
graham.hobart's Avatar
graham.hobart (Graham stevens)
DeepSkySlacker

graham.hobart is offline
 
Join Date: Nov 2008
Location: hobart, tasmania
Posts: 2,241
stuff

my daughter's new hand print algorithm has added interesting depths to my TV picture. She's a real newbie!!

Graz lack of sleep Taz
  #7  
Old 18-08-2012, 11:30 AM
silv's Avatar
silv (Annette)
Registered User

silv is offline
 
Join Date: Apr 2012
Location: Germany 54°N
Posts: 1,110
doesn't that encourage you to invent a holographic TV?
you could make use of the parabolic mirror of that old newt in the garage...
  #8  
Old 18-08-2012, 11:37 AM
silv's Avatar
silv (Annette)
Registered User

silv is offline
 
Join Date: Apr 2012
Location: Germany 54°N
Posts: 1,110
yeah, Ivo, that might be so. (having read some of your posts, it IS probably so.)
but : you're an old broom. you are stuck in what you know.
without your first input, Sali might have arrived at the same conclusions or he might have swept us all off our feet and found an unthought-of new way.
that's just to underline my point, of course. superfluous, really. I know.
I'm procrastinating...
  #9  
Old 18-08-2012, 12:24 PM
irwjager's Avatar
irwjager (Ivo)
Registered User

irwjager is offline
 
Join Date: Apr 2010
Location: Melbourne
Posts: 532
Quote:
Originally Posted by silv View Post
yeah, Ivo, that might be so. (having read some of your posts, it IS probably so.)
but : you're an old broom. you are stuck in what you know.
without your first input, Sali might have arrived at the same conclusions or he might have swept us all off our feet and found an unthought-of new way.
that's just to underline my point, of course. superfluous, really. I know.
I'm procrastinating...
Hehehe
Don't get me wrong, I'm trying to encourage him(her?)!
About 24 months ago, my StarSoup was called StarWipe and I had exactly the same reason for writing it and hopping on the IIS forum. StarWipe solved my problem (light pollution) and I was looking for feedback on if/how this would help other people as well.

I'm merely trying to get across that, in order to make progress on new ideas or approaches in signal processing & algorithm development, simply throwing away 99.8% of the signal and replacing it with zeroes (as is evidenced in the histogram) is a dead end. Further processing is not possible because the data is no longer there - it has been deleted, not transformed. So either you're happy with this being the last step of your processing (I doubt this to be the case for any astrophotographers here on IIS), or this algorithm needs refining. I was trying to give pointers on how to refine this algorithm so that others might find it useful as well.

That said, without seeing a 'before' image we can only make educated guesses about the intents and purposes of StarSoup.

Cheers,
  #10  
Old 18-08-2012, 11:40 PM
Sali
Registered User

Sali is offline
 
Join Date: Aug 2012
Posts: 3
(Ah, still noobing out the attachment system. Think I got it now.)


Poita: No worries. I have attached a before and after shot to the original post. Plus the whole original picture - may have accidentally uploaded as a jpg (oops) but you get the idea. Explanation follows:


Top-left in the cropped image is the top-left corner of the "original" image. I've included a manually adjusted version (top-right) to better show exactly what is going on in that image. Basically you have a bunch of compression clouds and a few uncompressed rasters. The uncompressed rasters (8x8 pixel chunks) are stars. Starsoup scans each uncompressed raster for its brightest points and amplifies them along with any neighbouring pixels (3x3 pixel chunk). It then culls everything else to black (bottom-left). The blur (bottom-right) is added in GIMP but should be automated in the new version - apparently needs a rethink (ala Ivo). The new version will also preserve entire rasters (because that data is valuable) but will still only amplify the stars themselves and neighbouring pixels.
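For anyone who wants to tinker (or write that GIMP plugin), the pass above boils down to roughly this numpy sketch - names are hypothetical, and a plain brightness threshold stands in for the real "is this an uncompressed raster?" test:

```python
import numpy as np

def starsoup_pass(img, threshold=128):
    """Sketch of the v1.0 pass: for each 8x8 block containing a
    bright enough peak, keep the 3x3 patch around the block's
    brightest pixel and cull everything else to black.
    (Real Starsoup detects 'uncompressed rasters' first -- here a
    simple peak threshold stands in for that test.)
    """
    h, w = img.shape
    out = np.zeros_like(img)
    for by in range(0, h, 8):
        for bx in range(0, w, 8):
            block = img[by:by + 8, bx:bx + 8]
            if block.max() < threshold:
                continue  # no star candidate in this raster
            py, px = np.unravel_index(np.argmax(block), block.shape)
            cy, cx = by + py, bx + px
            y0, y1 = max(cy - 1, 0), min(cy + 2, h)
            x0, x1 = max(cx - 1, 0), min(cx + 2, w)
            out[y0:y1, x0:x1] = img[y0:y1, x0:x1]
    return out

soup = np.full((16, 16), 10, dtype=np.uint8)  # uniform dim 'soup'
soup[3, 3] = 255                              # one star
stars_only = starsoup_pass(soup)
```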

Why does it cull everything else to black, you ask?

Because pretty much all the so-called "noise" you're seeing in the manually adjusted image (top-right) is actually what the 3Gs camera records as a "no data" reading. Which, in film terms, just means it should be black. But it isn't black - why?! Whenever the camera captures, it will record this weird soupy stuff - which, before compression, is just a bunch of stray pixels fired by mistake. I don't really know the science behind that, but I'm told it's basically just phantom signals which can be received and "dirty" the result anywhere from the original button-tap, through the photosensitive plate, down the wires to the hard-drive itself. Take a photo with this camera inside a sealed light-proof box and you'll still get this soup - or whatever it's actually called. Then the machine compresses the image internally before any other software can touch it - making the problem worse. So you get a bunch of cloudy-looking stuff interlaced with larger green splotches. The point is - those cloudy bits should read as black and it's only a fault of the hardware that they don't...

...So I'm not precious with them. The camera didn't really see that stuff - so we can cull it.

I figure the only element that truly matters is the rasters that have actually recorded something - which you can see in the top-right image. But it gets a little tricky when rasters that have genuinely recorded something become compressed because they are below the compression threshold (their contrast is so low the iPhone automatically compresses them during capture - sometimes two or three times). I don't know the default compression level of the 3Gs camera - but it is obviously bad. It's an added pain because the data from the "switched on" rasters then gets blurred with the ugly soup around them. But I think I can amend Starsoup to recognise these compressed rasters. The patterns they exhibit are quite uniform because, at the pixel level, all stars look pretty much the same (except for color).


OKAY.... Long winded. But basically, the program only culls data which should have been recorded as black in the first place. We can see which parts should be black because the rasters in that part of the image have not been activated - as they would be in the presence of starlight.

As a note: the uncompressed rasters show what "noise" actually looks like in the 3Gs camera before compression. Each uncompressed raster still contains soup and noise - but the contrasts are much higher and the noise and soup are much more obvious due to lack of compression. If it were saved as a bitmap during capture, the whole image would be REALLY SPECKLY. Compression here makes it tricky to apply bias - but that's the point of this program.

As far as I'm aware, this is a problem with any camera when dealing with jpg images on a per-pixel level. But I took the original image on my iPhone, so I've been using it as a workhorse for tests.


Hope the images are what you wanted.

And irwjager: I'm pretty sure you just said all this anyway and far better too, but I really don't know the terminology of this field yet. This program is a labour of love - I'd much rather just have a program that already does this. StarTools?
  #11  
Old 19-08-2012, 12:43 PM
irwjager's Avatar
irwjager (Ivo)
Registered User

irwjager is offline
 
Join Date: Apr 2010
Location: Melbourne
Posts: 532
Hi Sali,

Thanks for your response.
Don't be discouraged by my constructive criticism below - I love your thinking.

Quote:
Originally Posted by Sali View Post
(Ah, still noobing out the attachment system. Think I got it now.)


Poita: No worries. I have attached a before and after shot to the original post. Plus the whole original picture - may have accidentally uploaded as a jpg (oops) but you get the idea. Explanation follows:


Top-left in the cropped image is the top-left corner of the "original" image. I've included a manually adjusted version (top-right) to better show exactly what is going on in that image. Basically you have a bunch of compression clouds and a few uncompressed rasters. The uncompressed rasters (8x8 pixel chunks) are stars. Starsoup scans each uncompressed raster for its brightest points and amplifies them along with any neighbouring pixels (3x3 pixel chunk). It then culls everything else to black (bottom-left). The blur (bottom-right) is added in GIMP but should be automated in the new version - apparently needs a rethink (ala Ivo). The new version will also preserve entire rasters (because that data is valuable) but will still only amplify the stars themselves and neighbouring pixels.
Correct me if I'm wrong, but I thought the JFIF scheme (as used for most JPEG compressed images, including those from the iPhone 3GS) does not have 'uncompressed' blocks. They still have to go through the same quantization stage and, as a matter of fact, high frequency detail (like stars) is compressed more heavily than low frequency detail.

Also, what do you do if there are multiple stars within the same 8x8 pixel block?
Quote:
Originally Posted by Sali View Post
Why does it cull everything else to black, you ask?

Because pretty much all the so-called "noise" you're seeing in the manually adjusted image (top-right) is actually what the 3Gs camera records as a "no data" reading. Which, in film terms, just means it should be black. But it isn't black - why?! Whenever the camera captures, it will record this weird soupy stuff - which, before compression, is just a bunch of stray pixels fired by mistake. I don't really know the science behind that, but I'm told it's basically just phantom signals which can be received and "dirty" the result anywhere from the original button-tap, through the photosensitive plate, down the wires to the hard-drive itself. Take a photo with this camera inside a sealed light-proof box and you'll still get this soup - or whatever it's actually called. Then the machine compresses the image internally before any other software can touch it - making the problem worse. So you get a bunch of cloudy-looking stuff interlaced with larger green splotches. The point is - those cloudy bits should read as black and it's only a fault of the hardware that they don't...

...So I'm not precious with them. The camera didn't really see that stuff - so we can cull it.
There are many different types of noise (for example see http://www.qsimaging.com/ccd_noise.html) that affect the image. Beyond that however it is also important to realise that space is not empty, black nor fundamentally dark. There is an infinite amount of light coming from faint stars, galaxies, nebulae, etc, which will contribute to the light levels recorded by the CCD. Deciding to cut this light out completely is a rather arbitrary decision.
In fact, using the time and latitude you provided and plugging those into KStars (free planetarium software) reveals that StarSoup is arbitrarily rejecting some pretty bright stars, which kind of makes the sky unrecognisable to an astronomer - or to a software plate solver, for that matter. It'd be great to see if your software could be tweaked so that platesolving software can successfully detect the stars!

Also, don't forget to take into account the debayering algorithm, which will likely have exacerbated noise as well.
Quote:
Originally Posted by Sali View Post
I figure the only element that truly matters is the rasters that have actually recorded something - which you can see in the top-right image. But it gets a little tricky when rasters that have genuinely recorded something become compressed because they are below the compression threshold (their contrast is so low the iPhone automatically compresses them during capture - sometimes two or three times). I don't know the default compression level of the 3Gs camera - but it is obviously bad. It's an added pain because the data from the "switched on" rasters then gets blurred with the ugly soup around them. But I think I can amend Starsoup to recognise these compressed rasters. The patterns they exhibit are quite uniform because, at the pixel level, all stars look pretty much the same (except for color).
Can you tell me what you mean by getting compressed 2 or 3 times? JPEG/JFIF is just a single pass compression scheme, no?
Quote:
Originally Posted by Sali View Post
OKAY.... Long winded. But basically, the program only culls data which should have been recorded as black in the first place. We can see which parts should be black because the rasters in that part of the image have not been activated - as they would be in the presence of starlight.

As a note: the uncompressed rasters show what "noise" actually looks like in the 3Gs camera before compression. Each uncompressed raster still contains soup and noise - but the contrasts are much higher and the noise and soup are much more obvious due to lack of compression. If it were saved as a bitmap during capture, the whole image would be REALLY SPECKLY. Compression here makes it tricky to apply bias - but that's the point of this program.

As far as I'm aware, this is a problem with any camera when dealing with jpg images on a per-pixel level. But I took the original image on my iPhone, so I've been using it as a workhorse for tests.


Hope the images are what you wanted.

And irwjager: I'm pretty sure you just said all this anyway and far better too, but I really don't know the terminology of this field yet. This program is a labour of love - I'd much rather just have a program that already does this. StarTools?
I attached a version of your input image showing what it could look like, but as you say, the data is quite noisy. If you had 2 or more frames (4 or more would be perfect) and stacked them, you could get some really nice images, even from very short exposure, highly compressed JPEG data.

Regardless, you've got some really cool ideas, and any labour of love is worthy of fostering and exploring further! Please do send me a PM if you'd like to chat further about what you're doing technically - I don't want to bore everyone in this forum with technical talk...

Cheers,
Attached Thumbnails
Click for full-size image (starsoup_ST13.jpg)
18.5 KB30 views
  #12  
Old 21-08-2012, 01:23 PM
Sali
Registered User

Sali is offline
 
Join Date: Aug 2012
Posts: 3
Quote:
Originally Posted by irwjager View Post
Please do send me a PM if you'd like to chat further about what you're doing technically - I don't want to bore everyone in this forum with technical talk.
I can just imagine someone in three months considering this same topic, finding this thread and having their research cut short. I'd be more than happy to PM you if that's just how it's done round here (learning heaps already - thanks) but I'd like to make at least one more post about this here first.


Quote:
Originally Posted by irwjager View Post
...I thought the JFIF scheme (as used for most JPEG compressed images, including those from the iPhone 3GS) does not have 'uncompressed' blocks .... Also, don't forget to take into account the debayering algorithm, which will likely have exacerbated noise as well .... Can you tell me what you mean by getting compressed 2 or 3 times? JPEG/JFIF is just a single pass compression scheme, no?

Also, what do you do if there are multiple stars within the same 8x8 pixel block?
Followed your links and googled some terms. What a mind-blowing adventure! Not possible without learning the true terminology. What I have so far called "uncompressed rasters" more accurately refers to rasters which have retained their raw bayer matrix data. I don't know what process determines whether rasters are debayered/compressed or not. In my notes I just call this visual effect the "checkerboard matrix" because the red and blue pixels in those rasters pretty much always return no value at all - making the raster look like a green chess board.

The rasters which most likely contain stars all exhibit this appearance. Starsoup finds them based on the checkerboard pattern then examines each pixel: if the pixel fits neatly into the checkerboard pattern (in terms of colour and contrast) then it is probably not a star. If it exhibits huge variations in colour or contrast compared to its neighbouring pixels AND doesn't fit into the pattern, then it is most likely a star and is recorded - even where there are two in one raster.
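In rough numpy terms, that test looks something like this (the function name and the 3x ratio are mine, and it only captures the brightness half of the rule, not the colour check):

```python
import numpy as np

def is_candidate_star(block, y, x, ratio=3.0):
    """Flag (y, x) as a star candidate if it is far brighter than the
    other pixels sharing its checkerboard parity in this 8x8 raster,
    i.e. if it breaks the green-chessboard pattern.

    Captures only the brightness half of the rule described above;
    the ratio threshold is illustrative.
    """
    ys, xs = np.indices(block.shape)
    same_parity = (ys + xs) % 2 == (y + x) % 2
    baseline = np.median(block[same_parity])
    return bool(block[y, x] > ratio * max(baseline, 1.0))

# Green sites at 40, red/blue sites at 0, one pixel blown out to 255.
ys, xs = np.indices((8, 8))
raster = np.where((ys + xs) % 2 == 0, 40, 0)
raster[2, 2] = 255
```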

Some checkerboards (which Starsoup v1.0 cannot recognise) appear to have been filtered horizontally or horizontally+vertically (linear vs bilinear?); hence my 2 or 3 layers of compression observation. I guess that's a blunt observation?


Quote:
Originally Posted by irwjager View Post
...It is also important to realise that space is not empty, black nor fundamentally dark. There is an infinite amount of light coming from faint stars, galaxies, nebulae, etc, which will contribute to the light levels recorded by the CCD. Deciding to cut this light out completely is a rather arbitrary decision.
My observation is that the 3Gs camera is mostly blind to this "infinite light" because it is simply not sensitive enough to record it. There may indeed be light out there - but the camera can't see it and we only want to record what the camera actually sees. In such cases, pixels which failed to record a value should return "black" when read to a computer screen. But with the 3Gs they don't - they return various noises which appear to have nothing to do with the subject.

Fundamentally, Starsoup is based on my observation that the "noise" exhibited in the original picture is common across all images taken in near-no-light conditions with this camera. The only difference between a 3Gs picture of the night sky and a 3Gs picture taken inside a sealed box is those few rasters that have retained their checkerboard pattern (which I guess is the remnant bayer matrix). The noise (even those odd bright points which may "look" like stars at first) is present in both situations. It seems most likely the noise is not due to cosmic light and has no bearing on anything outside the hardware.


Quote:
Originally Posted by irwjager View Post
I attached a version of your input image showing what it could look like, but as you say, the data is quite noisy. If you had 2 or more frames (4 or more would be perfect) and stacked them, you could get some really nice images, even from very short exposure, highly compressed JPEG data.
Without doubt, that is a cool picture. Thanks for showing such interest! Downside is, the image you posted mimics the results I originally got when adjusting the image by hand. It makes unusually bright clumps of noise look like stars - and I'm pretty sure they're not, based on the observations I mentioned above. I may well be wrong in my observation - but it's pretty weird to see similar noise patterns in two different images and then label that noise based on the image context rather than its pattern.

I've attached another image to show what I mean - though I guess you figure it already. With Starsoup, the top-left raster would obviously record a star: note the checkerboard pattern and the unique traits of the pixels themselves (this example also shows why we preserve the neighbouring pixels - in case of blurring).

Meanwhile, the bright point in the bottom right may indeed be a star - but the current version of Starsoup would not recognise it because its checkerboard has been filtered or compressed (how do we know which is responsible?). It looks linearly filtered to me - the new version should recognise this as a star.

But all the stuff inbetween is just soup. It's useless data that's made its way in at some point and which I sincerely doubt has any bearing on the image we tried to capture. It is noise that has been filtered and compressed - that's all. Manual non-biased adjustments cannot account for this.

Will your manually adjusted image compute in KStars? That might be really helpful for me - manually testing the new recognition algorithm.
Attached Thumbnails
Click for full-size image (manual comparisons.png)
37.1 KB13 views
  #13  
Old 23-08-2012, 10:04 AM
irwjager's Avatar
irwjager (Ivo)
Registered User

irwjager is offline
 
Join Date: Apr 2010
Location: Melbourne
Posts: 532
Quote:
Originally Posted by Sali View Post
I can just imagine someone in three months considering this same topic, finding this thread and having their research cut short. I'd be more than happy to PM you if that's just how it's done round here (learning heaps already - thanks) but I'd like to make at least one more post about this here first.
There are no rules Sali, as far as I am concerned. I'll leave it to the moderators of this forum to put this discussion in the correct category. I just don't want to 'scare' true beginners, as we're starting to get quite technical...

I just wanted to invite you to correspond with me and help you with your program.
Quote:
Originally Posted by Sali View Post
Followed your links and googled some terms. What a mind-blowing adventure! Not possible without learning the true terminology. What I have so far called "uncompressed rasters" more accurately refers to rasters which have retained their raw bayer matrix data. I don't know what process determines whether rasters are debayered/compressed or not. In my notes I just call this visual effect the "checkerboard matrix" because the red and blue pixels in those rasters pretty much always return no value at all - making the raster look like a green chess board.

The rasters which most likely contain stars all exhibit this appearance. Starsoup finds them based on the checkerboard pattern then examines each pixel: if the pixel fits neatly into the checkerboard pattern (in terms of colour and contrast) then it is probably not a star. If it exhibits huge variations in colour or contrast compared to its neighbouring pixels AND doesn't fit into the pattern, then it is most likely a star and is recorded - even where there are two in one raster.

Some checkerboards (which Starsoup v1.0 cannot recognise) appear to have been filtered horizontally or horizontally+vertically (linear vs bilinear?); hence my 2 or 3 layers of compression observation. I guess that's a blunt observation?
What is really happening is this;
The CCD collects photons for the duration of the exposure time.
Some of these photons may be light, others may be thermal noise.
Some of the light may be light pollution, some of it Gegenschein, some of it stray light, some of it diffracted light, etc.
Some of the photons may be from heat generated within your phone.
Some of the pixels on the CCD may be dead (i.e. will always record 0), may be 'hot' (i.e. will always record full intensity).

Once the exposure is over, the data is read from the CCD. During reading, noise is introduced during analog to digital conversion as well as addition of a natural bias (some pixels/wells have a different starting value for 'empty').

Once the image has been digitized, the image needs to be debayered.
The 3GS' CCD is a 3 Megapixel sensor with 0.75MP allocated for red, 1.5MP allocated for green and 0.75MP allocated for blue.
From that, a full 3MP *per channel* needs to be reconstructed, so the gaps in the checkerboard pattern need to be filled in (this is what debayering does). There are many algorithms to do just that. However, they all have one thing in common - they use neighbouring pixels to make up for the gaps. If the neighbouring pixels have recorded noise, then this noise is propagated into the gaps. The result is that noise is exacerbated.
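To illustrate, here is a toy bilinear gap-fill for the green channel - the simplest such algorithm, and almost certainly not the exact one the 3GS uses - showing how a noisy neighbour leaks into the reconstructed pixel:

```python
import numpy as np

def interpolate_green(raw, green_mask):
    """Fill the green-channel gaps of a Bayer mosaic by averaging
    each gap's orthogonal neighbours (simple bilinear debayering).
    Any noise recorded by a neighbour is copied straight into the gap.
    """
    h, w = raw.shape
    out = raw.astype(float).copy()
    for y in range(h):
        for x in range(w):
            if green_mask[y, x]:
                continue  # a real green sample; keep it
            neigh = [raw[ny, nx]
                     for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                     if 0 <= ny < h and 0 <= nx < w]
            out[y, x] = sum(neigh) / len(neigh)
    return out

ys, xs = np.indices((4, 4))
green = (ys + xs) % 2 == 0          # checkerboard of green sites
raw = np.where(green, 100, 0)       # clean green signal...
raw[1, 1] = 180                     # ...except one noisy green pixel
full_green = interpolate_green(raw, green)
```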

Once the full image has been debayered (NOTE: the full image is debayered - no checkerboard pattern will remain), there is the 'zazzing-up' and compression stage.
Unfortunately, all camera (and phone) vendors use (undisclosed) color and contrast enhancement algorithms prior to saving to JPEG, as well as non-linear stretching, sharpening techniques and noise reduction. These tend to work well for 'normal use' but can have disastrous effects on astronomical subjects, making noise worse or destroying detail.

Once the auto-image processing routines have had their way with the raw data, the final result is compressed to JPEG. This is unfortunately another source of grave distortion of the recorded data.
JPEG achieves its high compression by throwing away precision when representing high frequency detail data (e.g. stars) and throwing away color data.
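A stdlib-only sketch of why stars fare badly: a lone bright pixel in an 8x8 block spreads its energy across the entire DCT spectrum, and a quantiser whose step grows with frequency (a crude stand-in for a real JPEG quantisation table) rounds most of those coefficients to zero:

```python
import math

def dct2(block):
    """Naive 8x8 2D DCT-II, the transform at the heart of JPEG."""
    n = 8
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[y][x]
                    * math.cos((2 * y + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * x + 1) * v * math.pi / (2 * n))
                    for y in range(n) for x in range(n))
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            out[u][v] = cu * cv * s
    return out

# An 8x8 block that is black except for one bright star pixel.
block = [[0] * 8 for _ in range(8)]
block[3][3] = 200

coeffs = dct2(block)
# Quantise with a step that grows with spatial frequency
# (illustrative only; real JPEG uses tuned 8x8 quantisation tables).
survivors = sum(1 for u in range(8) for v in range(8)
                if round(coeffs[u][v] / (16 * (1 + u + v))) != 0)
total = sum(1 for u in range(8) for v in range(8)
            if abs(coeffs[u][v]) > 1e-9)
```

The star excites every one of the 64 coefficients, but only a handful survive quantisation - the precision loss Ivo describes.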
Quote:
Originally Posted by Sali View Post
My observation is that the 3Gs camera is mostly blind to this "infinite light" because it is simply not sensitive enough to record it. There may indeed be light out there - but the camera can't see it and we only want to record what the camera actually sees. In such cases, pixels which failed to record a value should return "black" when read to a computer screen. But with the 3Gs they don't - they return various noises which appear to have nothing to do with the subject.
This is unfortunately an erroneous observation (see above about the different sources of photons). Noise mitigation is not a black-and-white/binary scenario or problem. This is why a simple bias (show a pixel or not show a pixel) is not a viable solution.
Noise is uncertainty in a signal - it's a sliding scale.
Quote:
Originally Posted by Sali View Post
Fundamentally, Starsoup is based on my observation that the "noise" exhibited in the original picture is common across all images taken in near-no-light conditions with this camera. The only difference between a 3Gs picture of the night sky and a 3Gs picture taken inside a sealed box are those few rasters that have retained their checkerboard pattern (which I guess is the remnant bayer matrix). The noise (even those odd bright points which may "look" like stars at first) is present in both situations. It seems most likely the noise is not due to cosmic light and has no bearing on anything outside the hardware.
That is an excellent observation and is the reason why people try to obtain a model of this very pattern noise. These are called bias frames (a picture/frame taken with zero exposure time) and dark frames (a picture taken with the same exposure time as the light frame - i.e. the 'real picture' - but in complete darkness).
By subtracting the dark frame from the light frame, one can obtain an image that is free from certain types of noises.
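A minimal sketch of that subtraction, with made-up pixel values: the hot pixel recorded in both frames cancels out, while the star, present only in the light frame, survives:

```python
# Minimal sketch of dark-frame subtraction, assuming pixel values as
# plain lists. The light frame holds star signal plus fixed pattern
# noise; the dark frame records the same pattern noise with no sky.

def subtract_dark(light, dark):
    """Subtract a dark frame from a light frame, clamping at zero."""
    return [[max(lp - dp, 0) for lp, dp in zip(lr, dr)]
            for lr, dr in zip(light, dark)]

dark  = [[5, 5, 40],      # 40 = a hot pixel that masquerades as a star
         [5, 5,  5]]
light = [[5, 90, 40],     # 90 = a real star on top of the same pattern
         [5,  5,  5]]

print(subtract_dark(light, dark))
# [[0, 85, 0], [0, 0, 0]] - the hot pixel vanishes, the star survives
```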
Quote:
Originally Posted by Sali View Post
Without doubt, that is a cool picture. Thanks for showing such interest! Downside is, the image you posted mimics the results I originally got when adjusting the image by hand. It makes unusually bright clumps of noise look like stars - and I'm pretty sure they're not, based on the observations I mentioned above. I may well be wrong in my observation - but it's pretty weird to see similar noise patterns in two different images and then label that noise based on the image context rather than its pattern.
That's exactly what I'm talking about; we can't be fundamentally sure what is noise and what is not. Therefore we cannot make the decision to allow or remove said pixels.
Now, what we *can* do, however (and what you seem to be attempting at a basic level), is try to find the characteristics of image features that we can be sure are *not* noise. However, in order to do that, you need to know your enemy and learn about the different types of noise and artifacts - they all have different signatures. Only once you understand these characteristics can you try finding solutions to them. The 3GS image suffers not from one type of noise/problem, but many. Worse, they interact and strengthen each other.
Quote:
Originally Posted by Sali View Post
I've attached another image to show what I mean - though I guess you figure it already. With Starsoup, the top-left raster would obviously record a star: note the checkerboard pattern and the unique traits of the pixels itself (this example also shows why we preserve the neighbouring pixels - in case of blurring).
The 'checkerboard' pattern that you see should be wholly attributed to the JPEG compression stage, which throws its hands up in the air trying to compress a lot of high-frequency detail (e.g. random noise) and says 'Sorry, this is the best I can do with what I am allowed to consume in bandwidth' while also trying to compress real detail.
Your assumptions about how to detect erroneous stars are fundamentally flawed unfortunately. That's why I'm urging you to try to understand how the image came to be from sensor to JPEG at every step of the way.
Quote:
Originally Posted by Sali View Post
Meanwhile, the bright point in the bottom right may indeed be a star - but the current version of Starsoup would not recognise it because its checkerboard has been filtered or compressed (how do we know which is responsible?). It looks linearly filtered to me - the new version should recognise this as a star.
The problem right now is that StarSoup is throwing away real stars making the sky unrecognisable. I urge you to have a look at any freely available planetarium software to see if you can match stars in the input image. You will see StarSoup's algorithm is throwing away real stars at random.
Quote:
Originally Posted by Sali View Post
But all the stuff in between is just soup. It's useless data that's made its way in at some point and which I sincerely doubt has any bearing on the image we tried to capture. It is noise that has been filtered and compressed - that's all. Manual non-biased adjustments cannot account for this.
Again, useless data doesn't exist. Data is always useful but, in the case of noise, merely suffers from uncertainty. It's not a black-or-white domain.
Quote:
Originally Posted by Sali View Post
Will your manually adjusted image compute in KStars? That might be really helpful for me - manually testing the new recognition algorithm.
Try matching your input image to the output of some planetarium software.
To measure StarSoup's accuracy, try plate solving software to see if it can successfully recognise the output image.

Again, feel free to PM me (or just post here if you want) - it's great to see someone tackling astro-specific image processing algorithms.

Cheers,
  #14  
Old 23-08-2012, 11:55 AM
silv (Annette)
Join Date: Apr 2012
Location: Germany 54°N
Posts: 1,110

I was just reading again how I am supposed to take my flats for image processing in DSS.
(Bugger: should've taken them yesterday night...! I knew it! I was just not wearing a white tshirt and was too lazy to go inside to fetch one)

The real problem seems to be the 3Gs camera and its inherent image-processing features and bugs.

I imagine the outcome of StarSoup being an iPhone App for - let's say - $ 14.99.
So the program needs to work with what the iPhone has to offer.
Even if we WANT to have better quality or options for better processing - we won't get it. It's the iPhone Sali is working with.

Let's assume the iPhone always gets taken out of the back pocket to just quickly shoot an image. So the sensor temperature (and with it the thermal noise) will always be roughly the same.
Sali can produce several darks and combine them into a master dark. (In DSS, not by writing your own code.)
This - for the time being - gets built into StarSoup as the default Master Dark.
Flat frames: same.
(Later in the real App, the user can be instructed how to create and store his own master dark. nobody will expect Hubble Image Quality so an approximate is good enough.)
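As a sketch of what DSS does when building that master dark (hypothetical values; DSS's actual combination methods are more sophisticated), a per-pixel median across the dark frames keeps the repeatable pattern noise and rejects one-off spikes:

```python
# Sketch of building a master dark by median-combining several dark
# frames, as suggested above. The median keeps the repeatable pattern
# noise while rejecting outliers that a plain average would keep.

from statistics import median

def master_dark(darks):
    """Per-pixel median across a stack of equally sized dark frames."""
    h, w = len(darks[0]), len(darks[0][0])
    return [[median(d[j][i] for d in darks) for i in range(w)]
            for j in range(h)]

# Three hypothetical 1x3 dark frames; the middle pixel is consistently
# hot, the last pixel spikes in only one frame (e.g. a cosmic ray hit).
darks = [
    [[5, 40,  5]],
    [[6, 41, 99]],
    [[5, 40,  6]],
]
print(master_dark(darks))  # [[5, 40, 6]] - the one-off spike is rejected
```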

So, signal correction for these two factors, as produced specifically by your phone, could be accomplished somehow?
Split up into pixel groups of 3x3 (or whatever you are using at the moment), you could maybe change their value into something approximately nightsky-ish, like dark slate-blue-brown - in areas where your inspection has completely ruled out the possibility of stars AND the master dark + flat confirm errors common to your specific camera.
In areas where there MIGHT be stars, the pixel groups' light/colour value needs to be changed differently.
In case the masters show a common noise pattern there, too, the pixel colours need to reflect that.

"Make it look nice. Fake it." In the best possible way based on real inspection of what the pixel color looks like in real stars up there.
I mean, don't let them all look white-ish. That's boring. Invent a little pattern that spreads some blue glow here and there.

You wrote that the camera really doesn't capture ANY light like the faint light from far away galaxies, nebulae and such.
Is that really the case??
If you take the original and enhance the exposure value in GIMP or some other software, do the dark/noisy pixels not change their color?

If that is the case then my next thought is useless.
You could add a function to StarSoup to artificially enhance exposure time by tweaking the signal... (obviously I have no idea how).
So that you get 1 second worth of light instead of only 1/40secs, for example.
This function not being a slider-like processing GUI button but an estimated average thingy, automatically applied to the image.
A proud fake - based on the knowledge of what it would look like in a RAW image taken with a DSLR.

See, this was the user input what I would like to have as an App on my iPhone 4s. ( Which has a different camera but that's just a minor tweak and a note in the "compatibility section" in the future App page.)

I like your conversation here on IIS. Glad you didn't go hiding in PM.
Hope you don't mind too much that I disturb your adult subject.
Don't bother responding if it's useless. I don't take offense.

I like your labour of love - and Ivo's contribution.
And I can see a monetary advantage for you if you see it through.

Last edited by silv; 23-08-2012 at 12:09 PM.
  #15  
Old 23-08-2012, 12:14 PM
silv (Annette)
Join Date: Apr 2012
Location: Germany 54°N
Posts: 1,110
Quote:
and can see monetary advantage for you if you see it through
on that note, maybe it IS better to go hiding in PM. ...
my German paranoia whispers: someone might steal your idea and publish before you.
  #16  
Old 23-08-2012, 12:18 PM
silv (Annette)
Join Date: Apr 2012
Location: Germany 54°N
Posts: 1,110
Ivo, did you inspect your processed version with some star-naming software?
Maybe the bright blob in the centre (the moon, as I understood) is confusing the software, and by editing it out, recognition could be achieved?
  #17  
Old 23-08-2012, 03:16 PM
irwjager (Ivo)
Join Date: Apr 2010
Location: Melbourne
Posts: 532
Pretty cool idea Silv!
On-board processing would be highly preferable;
the best option would probably be auto-stacking multiple short exposures. This should help a little with pattern noise as well, since different pixels will get used each frame for the same scene (as the earth rotates between frames).
Because you have direct access to the camera, no compression takes place before you can get your hands on the output. From there it is easier to create dark frames, remove hot- and dead pixels and get an overall higher quality output.
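A toy sketch of the stacking idea (simulated frames, no alignment - a real app would register the frames first): averaging N short exposures leaves the star signal in place while random noise shrinks roughly as the square root of N:

```python
# Sketch of naive average-stacking of short exposures: random noise
# averages out while the star signal stays put. Hypothetical simulated
# frames; no frame alignment is done here.

import random

random.seed(1)

def stack(frames):
    """Per-pixel mean across a stack of equally sized frames."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[j][i] for f in frames) / n for i in range(w)]
            for j in range(h)]

# 20 simulated 1x2 frames: pixel 0 is sky (signal 0), pixel 1 is a
# star (signal 50); both get random noise of up to +/-20 per frame.
frames = [[[0 + random.uniform(-20, 20), 50 + random.uniform(-20, 20)]]
          for _ in range(20)]

sky, star = stack(frames)[0]
print(round(sky), round(star))  # sky stays near 0, star stays near 50
```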
I'm actually a mobile developer by trade, but between the daytime job and StarTools I lack the time to do this.
Sali, are you up for the challenge?
  #18  
Old 23-08-2012, 04:08 PM
silv (Annette)
Join Date: Apr 2012
Location: Germany 54°N
Posts: 1,110
"(as the earth rotates between frames)." <--- suggests a tripod is used.
Not a practical assumption. If there were a tripod, then the user would also use a proper cam.
They're the click&consume-on-the-go people posting to Facebook. They wouldn't have a tripod, plus they would sway drunk between shots.
However, the gyroscope thingy could maybe be used to aid in re-positioning the camera between shots into the same angle? Like in the "Panoramatic" App, maybe? (But the light signal is so low there might be nothing to work with for navigation.)
  #19  
Old 24-08-2012, 04:10 PM
irwjager (Ivo)
Join Date: Apr 2010
Location: Melbourne
Posts: 532
Quote:
Originally Posted by silv View Post
"(as the earth rotates between frames)." <--- suggests a tripod is used.
Not a practical assumption. If there were a tripod then the user would also use a proper cam.
I was thinking people would just put the phone screen-down on the roof of their car, or on a table, or even on their leg/lap. It doesn't have to be long either. I don't think exposure time can be controlled on these types of cameras, so the individual exposures are short and keeping very still shouldn't be too much of a problem.
Quote:
Originally Posted by silv View Post
They're the click&consume-on-the-go people posting to Facebook. They wouldn't have a tripod, plus they would sway drunk between shots.