#1 - The_bluester (Paul) - 19-09-2019, 06:33 PM
ASI294 (IMX294) effective pixel size.

So, I have been researching the pants off an eventual move to mono imaging, but one thing it has highlighted is that the sources I read may have gotten a basic fact about the IMX294-based cameras wrong.


I had read and accepted that the "pixel" size of the IMX294 was the "super pixel" size of the whole RGGB block (which is itself misleading, as each colour "pixel" is four sub-pixels of that colour), but reading tonight it looks more like it is the size of the 2x2 binned block of each colour. So, like most OSC cameras, its effective resolution is likely lower than the reported pixel size suggests when compared to a mono sensor of the same dimensions.


Does anyone know definitively?
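To make the two readings concrete, here is a toy sketch (my own illustration in NumPy, not taken from the Sony datasheet) of a standard Bayer mosaic versus the quad-Bayer layout the IMX294 is suspected of using, and how binning the quad layout recovers an ordinary Bayer cell:

```python
import numpy as np

# Hypothetical 4x4 corner of each layout (one colour letter per photosite).
# Standard Bayer: each 2x2 cell is R G / G B.
standard_bayer = np.array([
    ["R", "G", "R", "G"],
    ["G", "B", "G", "B"],
    ["R", "G", "R", "G"],
    ["G", "B", "G", "B"],
])

# Quad-Bayer (the second reading above): each colour covers a 2x2 block
# of sub-pixels, so one full RGGB cell spans 4x4 photosites.
quad_bayer = np.array([
    ["R", "R", "G", "G"],
    ["R", "R", "G", "G"],
    ["G", "G", "B", "B"],
    ["G", "G", "B", "B"],
])

# Binning each 2x2 same-colour block of the quad layout collapses it
# back to an ordinary Bayer cell, matching the 4.63um "pixel" reading.
binned = quad_bayer[::2, ::2]
print(binned)
# [['R' 'G']
#  ['G' 'B']]
```

Under that reading, the camera's 4.63um "pixel" is one already-combined colour block, which is why the output debayers like any other OSC sensor.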
#2 - Camelopardalis (Dunk) - 19-09-2019, 07:11 PM
Regardless, you get 12MP Bayer-arrayed output.

Your effective resolution will depend as much on your sampling and seeing...
#3 - The_bluester (Paul) - 19-09-2019, 08:04 PM
I should put that differently: effective resolution under broadly the same other conditions, such as sampling and seeing.


Per my other thread I still see myself going mono eventually, but this one is more specifically about the actual pixel size of the IMX294. Sony don't really state it categorically. They state that the "unit size" is 4.63um, but my previous reading suggested that the "unit" was the RGGB block, which is actually a matrix of RGGB where each colour is a 2x2 binned block of four sub-pixels of that colour!


What I was trying to confirm, more than anything else, is my current thinking: that the 4.63um "pixel" size refers to the 2x2 binned block of each colour. Something to allow apples to be compared with apples.


The oddball factor of this camera is that you can (software) bin it but retain a colour image, which was one of the reasons I thought the 4.63um might indeed have referred to the RGGB "super pixel".
#4 - billdan (Bill) - 19-09-2019, 11:18 PM
The 4.63 micron pixel is a single square pixel in an array of 4164 * 2796 pixels, or 19.28 * 12.95 mm of sensor area.
A colour cell (the RGGB group) would be 9.26 * 9.26 microns.

Software binning is basically resizing the image to make it smaller in dimensions.

EDIT: I wish they made a mono version of this sensor, as I believe it would be better than the ASI-1600 (14 bit, 65,000e- full well, 75% QE).
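Those dimensions can be cross-checked with a couple of lines of arithmetic (straightforward multiplication, using the figures quoted above):

```python
# Cross-check: 4164 x 2796 photosites at 4.63 um per pixel should give
# roughly the quoted 19.28 x 12.95 mm sensor area.
pixel_um = 4.63
width_mm = 4164 * pixel_um / 1000
height_mm = 2796 * pixel_um / 1000
print(f"{width_mm:.2f} x {height_mm:.2f} mm")  # 19.28 x 12.95 mm

# And a 2x2 RGGB colour cell is twice the pixel pitch on each side.
colour_cell_um = 2 * pixel_um
print(colour_cell_um)  # 9.26
```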

Last edited by billdan; 19-09-2019 at 11:42 PM.
#5 - The_bluester (Paul) - 20-09-2019, 05:50 AM
I agree, I would love a mono version of this sensor, sadly unlikely it seems.

Regarding the pixel size, what you wrote is more or less what I had decided is most likely right, with the exception that each colour "pixel" is actually an already-binned 2x2 block of sub-pixels of that colour. The physical sub-pixel size is about 2.31um.

The Sony literature shows the ability to play tricks with that configuration in its original life as a security camera sensor: making HDR frames in a single exposure by exposing the sub-pixels in pairs rather than as the whole 2x2 "pixel", with each pair exposed for a different length of time but started concurrently, as a way to reduce the motion blur that multi-exposure HDR produces.
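As a toy numeric sketch of the sub-pixel idea (made-up ADU values, and assuming simple averaging; the real on-chip combination may differ), here is how four 2.31um sub-pixels of each colour collapse into one 4.63um value:

```python
import numpy as np

# Toy model: one quad-Bayer RGGB cell as 4x4 sub-pixel values (ADU).
cell = np.array([
    [100, 102,  50,  52],   # R R G G
    [ 98, 104,  48,  54],   # R R G G
    [ 51,  49, 200, 202],   # G G B B
    [ 53,  47, 198, 204],   # G G B B
], dtype=float)

# Average each 2x2 same-colour block: four 2.31um sub-pixels become
# one 4.63um "pixel", leaving an ordinary 2x2 RGGB Bayer cell.
binned = cell.reshape(2, 2, 2, 2).mean(axis=(1, 3))
print(binned)
```

The result is a 2x2 array of one R, two G and one B value, which is exactly what a standard debayer routine expects to see.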
#6 - billdan (Bill) - 20-09-2019, 10:27 AM
Hi Paul,
I had a look at the Sony data sheet and it confuses me as to what is actually happening. Maybe each pixel is physically 2.31 micron but you can only access them as 2x2 binned blocks.
Attached: IMX294 cell crop.jpg
#7 - The_bluester (Paul) - 20-09-2019, 10:39 AM
Agreed. It is not very clearly written. My earlier understanding was that the entire RGGB block of four groups of four sub-pixels was the 4.63um "pixel", but that does not make sense given how it debayers. I am almost as certain as I can be that the binned 2x2 blocks of each colour are the 4.63um "pixels", or the debayer routine would be very different. There has to be something very funky going on in the software to then be able to further bin it and retain a colour image.
#8 - The_bluester (Paul) - 20-09-2019, 07:53 PM
I did an experiment tonight, shooting some flats with coloured light and looking at the undebayered image: it is definitely just an RGGB matrix. With green light the four-pixel cell showed two bright and two dark pixels on the diagonal; with red light it was one bright, two dim and one dark pixel, indicating a bit of spillover between red and green, though this was taken with an uncalibrated light source.
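That experiment can be simulated with a toy model (the sensitivity numbers below are made up for illustration, not measured from the camera):

```python
# Toy flat-field check: illustrative (invented) sensitivities of R, G
# and B pixels to a green and a red light source, including a little
# red/green spillover like the one seen in the real flats.
sensitivity = {
    "R": {"red": 0.90, "green": 0.20},
    "G": {"red": 0.30, "green": 0.90},
    "B": {"red": 0.05, "green": 0.15},
}

bayer_cell = [["R", "G"],
              ["G", "B"]]

def cell_response(light):
    # Undebayered response of one RGGB cell to a single-colour flat.
    return [[sensitivity[p][light] for p in row] for row in bayer_cell]

print(cell_response("green"))  # two bright G pixels on the diagonal
print(cell_response("red"))    # one bright R, a dim G pair, dark B
```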
#9 - ChrisV (Chris) - 20-09-2019, 08:45 PM
I thought with these OSC cameras that the quoted pixel size was each individual pixel in the RGGB group. What you end up with depends on how you debayer: superpixel giving half the resolution, and other methods the full resolution?
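The superpixel option can be sketched in a few lines (a minimal NumPy sketch assuming an RGGB pattern with even dimensions; the function name is my own):

```python
import numpy as np

def superpixel_debayer(raw):
    """Collapse each RGGB cell into one RGB pixel (half resolution).

    The two green photosites in each cell are averaged. Assumes an
    RGGB pattern and even image dimensions."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return np.dstack([r, (g1 + g2) / 2, b])

raw = np.array([[10, 20, 10, 20],
                [20, 30, 20, 30],
                [10, 20, 10, 20],
                [20, 30, 20, 30]], dtype=float)
rgb = superpixel_debayer(raw)
print(rgb.shape)  # (2, 2, 3) - half the linear resolution of the raw frame
```

Interpolating methods (bilinear, VNG and friends) instead keep the full pixel grid and estimate the two missing colours at every photosite.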
#10 - The_bluester (Paul) - 21-09-2019, 09:02 AM
It appears that is the case, but the way the specs on the IMX294 are written it is very unclear, particularly how you can software bin a Bayer matrix and still end up with a colour image.
#11 - Merlin66 (Ken) - 21-09-2019, 10:01 AM
https://www.cambridgeincolour.com/tu...ra-sensors.htm

Not exactly the same issue, but interesting and important all the same.
The Bayer matrix is but one element of the colour imaging process.
By far the most important is the choice (if possible) of the debayer algorithm.
As can be seen in the above article, the debayer algorithm produces a synthetic "coloured" pixel with the same size as a normal pixel and at the same spacing on the chip. Hence the final colour image has a resolution close to the chip pixel size.
(After de-bayering you'll never see an image that, when zoomed in to pixel level, just shows RGB pixels.)
In spectroscopy, when an OSC sensor is used we typically see the resolution about 1.3x that obtained with a mono chip, i.e. 0.1A for a mono, 0.13A for an OSC.
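A minimal sketch of what interpolation does for the green plane (assuming an RGGB pattern; the bilinear weighting and edge handling here are my own simplification, and real implementations are considerably more sophisticated):

```python
import numpy as np

def bilinear_debayer_green(raw):
    """Build a full-resolution synthetic green plane from an RGGB raw.

    Green is kept where it was measured and averaged from the four
    nearest green neighbours at R and B sites. Edges are handled by
    replicating the border pixels."""
    h, w = raw.shape
    padded = np.pad(raw, 1, mode="edge")
    green = raw.astype(float).copy()
    for y in range(h):
        for x in range(w):
            if (y + x) % 2 == 0:  # R or B site in an RGGB layout
                green[y, x] = (padded[y, x + 1] + padded[y + 2, x + 1] +
                               padded[y + 1, x] + padded[y + 1, x + 2]) / 4
    return green

# Synthetic raw frame: green sites read 50, R/B sites read 100.
raw = np.where((np.add.outer(np.arange(4), np.arange(4)) % 2) == 1,
               50.0, 100.0)
g = bilinear_debayer_green(raw)
print(g.shape)  # (4, 4) - one synthetic green value per photosite
```

This is why the final colour image keeps a resolution close to the chip's pixel pitch rather than dropping to the RGGB cell pitch.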
#12 - The_bluester (Paul) - 21-09-2019, 12:05 PM
You can certainly see the effects of debayering. When I shot coloured flats, the undebayered image clearly showed which pixels were most sensitive to the incoming wavelengths, but once debayered, the red flat for instance came up as a mottled orange image when viewed at pixel level. Probably not pure red, as I did not use any sort of colour calibration on the flat panel (it is an old graphics tablet); I just threw a red JPG on the tablet and put it on the scope.


It is easy to see how some ultimate sharpness has to be lost with an OSC, even at the same image scale as an equivalent mono through filters; anything involving interpolation has to come with a cost.


Now, as to being able to bin 2X2 and still have a colour image, anyone got any guesses?


The attached images show the undebayered pixel response to a red light source (with obvious spill into green due to the uncalibrated light) and to a green light source. The diagonal pair of green pixels is obvious in the green image. I did not get around to a blue version.
Attached: Red.JPG, Green.JPG
#13 - Camelopardalis (Dunk) - 22-09-2019, 08:51 AM
There is some inevitable loss of resolution with a Bayer array, since for every four pixels you have two green and only one each of red and blue. Thus for red and blue, the sampling in both x and y is half that of green.

But your results, as I said before, will be very dependent on sampling scale and seeing. Seeing alone can blur the received image sufficiently to make any theoretical loss of resolution insignificant.

Regarding the superpixels...it’s not impossible that they use an arrangement not dissimilar to X-Trans, only for all colours. Alternatively, it could just be that the sub-pixel data is inaccessible at the software end.
#14 - Merlin66 (Ken) - 22-09-2019, 10:32 AM
Dunk,
I think in the real world we always work with images which have been debayered.
The algorithm used can make a big difference to the outcome.
The bilinear method gives a lower resolution when compared with the more sophisticated VNG (variable number of gradients) model.

Dr. Craig Stark's summary:

Thus, your 8 megapixel DSLR has 8 million pixels, but only 4 million green pixels and only 2 million red and 2 million blue pixels. This has led to the belief that the resolution of such images is far below that of a monochrome camera and that the quality is necessarily much lower as a result of this Bayer matrix and the debayering process. While encoding the image into a Bayer matrix and reconstructing it does involve interpolation and can be done in ways that are horribly inaccurate, the loss need not be severe.

Full paper is here: http://www.stark-labs.com/craig/reso...yering_API.pdf
#15 - The_bluester (Paul) - 22-09-2019, 11:58 AM
I think what will be interesting, when budget allows, will be a back-to-back shootout of the same target with the same optics, OSC versus LRGB. Looking at debayer algorithms, it is almost certainly not as simple as saying you have a four-pixel "super pixel" and that is your resolving power compared to a mono, since they obviously interpolate from adjacent colour pixels, and few targets emit light that lands on only one colour of pixel given the broad wavelength band each one passes. I do suspect it is safe to say that, pixel for pixel, you should be able to build a better picture from a mono, as you are not fudging data from adjacent pixels to turn every red, green or blue pixel into an RGB equivalent.