IceInSpace > Equipment > Astrophotography and Imaging Equipment and Discussions

#1  02-08-2010, 09:49 AM
Octane (Humayun)
IIS Member #671
Join Date: Dec 2005 | Location: Canberra | Posts: 11,159
Image Deblurring using Inertial Measurement Sensors

OK, now, this is freaking cool.

http://research.microsoft.com/en-us/...imudeblurring/

H
#2  02-08-2010, 10:05 AM
mithrandir (Andrew)
Registered User
Join Date: Jan 2009 | Location: Glenhaven | Posts: 4,161
Quote:
Originally Posted by Octane View Post
OK, now, this is freaking cool.
The images were giving me motion sickness. Maybe the pest inspector's spraying contributed to that too.
#3  02-08-2010, 10:06 AM
bojan
amateur
Join Date: Jul 2006 | Location: Mt Waverley, VIC | Posts: 6,932
Wouldn't it just be another image stabiliser?
I think the current stabilisers also use accelerometers (a kind of solid-state gyroscope) to compensate for vibrations; the difference is that they do it in hardware (moving a lens element or moving the sensor, depending on the design). That is a better approach in my opinion, because it solves the problem where it occurs...
However, this method might be the cheaper solution in the end (hardware is expensive, while software is not, because its cost is "diluted" by mass production).
#4  02-08-2010, 11:34 AM
rally
Registered User
Join Date: Sep 2007 | Location: Australia | Posts: 896
Bojan,

Image stabilisation removes blur caused by small vibrations by repositioning the sensor or the projected image to compensate for the jitter - usually only in X or Y, or sometimes both.

This one removes, or rather deconvolves, wholesale motion blur in many different axes (pretty much all of them).
It's quite a different process.

Rally
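The distinction Rally draws can be sketched in code: an IMU trace gives you the blur kernel (PSF) itself, something stabilising hardware never computes. A minimal, illustrative sketch - the function name, sample rate, focal length and gyro trace below are all invented, not taken from the paper:

```python
import numpy as np

def psf_from_motion(gyro_rates, dt, focal_px, size=15):
    """Integrate angular rates (N x 2, rad/s, pitch/yaw) into an
    image-plane path, then rasterise it into a normalised PSF."""
    angles = np.cumsum(gyro_rates * dt, axis=0)   # rad, per sample
    path = angles * focal_px                      # small-angle: pixels
    path -= path.mean(axis=0)                     # centre the kernel
    psf = np.zeros((size, size))
    c = size // 2
    for x, y in path:
        ix, iy = int(round(c + x)), int(round(c + y))
        if 0 <= ix < size and 0 <= iy < size:
            psf[iy, ix] += 1.0                    # time spent at this offset
    return psf / psf.sum()

# Fake 50-sample gyro trace at 500 Hz; 40 mm lens ~ 5000 px focal length
rates = np.column_stack([np.linspace(0.0, 0.02, 50), np.full(50, 0.01)])
psf = psf_from_motion(rates, dt=1 / 500, focal_px=5000)
print(psf.shape)
```

Deconvolving with this kernel (e.g. Wiener or Richardson-Lucy) would then undo the recorded shake after the fact; stabilising hardware instead cancels the motion before it ever reaches the sensor.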
#5  02-08-2010, 11:50 AM
bojan
amateur
Join Date: Jul 2006 | Location: Mt Waverley, VIC | Posts: 6,932
I do understand the difference between the two, but I am not convinced this method is really much better than the traditional way.
It is only potentially cheaper (for the consumer) because there is no need for correcting hardware inside the lens, apart from the movement sensors.

The reason is that we start post-processing with an already blurred picture (with a known "history" of how the blurring occurred, i.e. how the camera was moving). The deconvolution software can help, but there are limitations here.

Well, maybe it will be OK for a sufficient number of typical cases...

BTW, there are only two axes to compensate: X and Y.


Quote:
Originally Posted by rally View Post
Bojan,

Image stabilisation removes blur caused by small vibrations by repositioning the sensor or the projected image to compensate for the jitter - usually only in X or Y, or sometimes both.

This one removes, or rather deconvolves, wholesale motion blur in many different axes (pretty much all of them).
It's quite a different process.

Rally

Last edited by bojan; 02-08-2010 at 12:01 PM.
#6  02-08-2010, 12:31 PM
leon
Registered User
Join Date: Apr 2006 | Location: Warrnambool | Posts: 12,430
It is not exactly what Doug and I were talking about a couple of years ago, but it was mentioned then that soon the camera would do all the work for you - no need for after-market software, etc.

"We have not seen anything yet" - that line comes from Robert Reeves, page 392 of Introduction to Digital Astrophotography.

Leon
#7  02-08-2010, 12:33 PM
multiweb (Marc)
ze frogginator
Join Date: Oct 2007 | Location: Sydney | Posts: 22,060
Amazing technology. Now even I can take some decent hand-held DSLR shots after a few...
#8  02-08-2010, 12:38 PM
leon
Registered User
Join Date: Apr 2006 | Location: Warrnambool | Posts: 12,430
You will Marc, but your stuff is OK.

Leon
#9  02-08-2010, 04:57 PM
multiweb (Marc)
ze frogginator
Join Date: Oct 2007 | Location: Sydney | Posts: 22,060
Quote:
Originally Posted by leon View Post
You will Marc, but your stuff is OK.

Leon
Thanks for the vote of confidence, Leon, but my latest shots need another beer.
#10  02-08-2010, 05:42 PM
rally
Registered User
Join Date: Sep 2007 | Location: Australia | Posts: 896
Hi Bojan,

If you read the paper at Octane's link about this "IMU" method, they are correcting for six axes of movement, although it appears they almost ignore Z, because they say the relatively small amount of Z displacement during the image acquisition does not affect focus enough to worry about.

For astronomy AO correction, there are fewer axes to be concerned with.
Wavefront error correction is quite different and requires more specialised detection devices and mechano-optical components, as well as real-time processing.

Rally
#11  02-08-2010, 06:11 PM
bojan
amateur
Join Date: Jul 2006 | Location: Mt Waverley, VIC | Posts: 6,932
Hi Rally,
I have read the paper..
However, I am convinced that in the end the algorithm reduces the compensation to X and Y only, because camera rotation around all three axes plus linear displacement can be closely approximated by linear displacement alone. I might be wrong here, of course. It will also depend on the lens characteristics (the lens is actually part of the equation, and the parameters will have to be "attached" to the particular lens), such as pincushion distortion and so on.
The paper also mentions the loss of higher frequencies from the image due to smearing, and this is why I think the current method (dynamic compensation during exposure, resulting in an image that is not smeared) is possibly better.

Time will tell which method is better and more applicable to commercial production.
Very often absolutely brilliant ideas fade into oblivion because of commercial issues. The classic example was Beta vs VHS (both are now history, of course).

EDIT:
As for astronomy... why would anyone want this in the first place?
If a mount is so shaky and/or prone to PE, it is better to replace it :-)
Astrophotography is not only about pretty pictures; it is about gathering scientifically accurate data (resolution is paramount here), and no one wants data to be "doctored" by too much post-processing...
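The rotation-as-translation approximation above is easy to put in numbers: a small rotation theta shifts the whole frame by roughly f*tan(theta) at the sensor. The focal length and pixel pitch below are example values only:

```python
import math

def blur_pixels(theta_deg, focal_mm, pixel_um):
    """Image-plane shift, in pixels, caused by a small camera rotation."""
    shift_mm = focal_mm * math.tan(math.radians(theta_deg))  # ~ f * theta
    return shift_mm * 1000.0 / pixel_um                      # mm -> pixels

# 0.1 degrees of shake on a 40 mm lens with 6 micron pixels
print(round(blur_pixels(0.1, 40.0, 6.0), 1))   # roughly a dozen pixels
```

Even a tenth of a degree of pitch or yaw therefore smears the image by several pixels, which is why treating rotations as plain X/Y shifts is a workable first-order model.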

Last edited by bojan; 02-08-2010 at 07:01 PM.
#12  02-08-2010, 06:43 PM
rally
Registered User
Join Date: Sep 2007 | Location: Australia | Posts: 896
Bojan,

I guess the resultant motion aberrations only end up in X and Y on an image, but the point spread functions needed to resolve all the possible camera motions - X/Y/Z shift plus roll, pitch and yaw - are derived from the motion sensors for each of these axes.

E.g. if the camera yaws, then the left side of the image might be forward of focus while the right side is backward of focus; likewise for roll and pitch, plus the simpler motions in X and Y and, to a lesser extent, Z.
Each area of the image has a different PSF.
The algorithm needs to work differently on different parts of the image, hence the spatially dynamic aspect of their work, in both detection and deconvolution.

Rally
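Because each region of the frame sees a different PSF, the deconvolver has to vary spatially. A crude way to illustrate this is per-tile Wiener filtering with a per-tile kernel; the `psf_for_tile` callback and the identity kernel here are stand-ins, not the paper's method:

```python
import numpy as np

def wiener(tile, psf, k=0.01):
    """Wiener deconvolution of one tile with its local PSF."""
    H = np.fft.fft2(psf, s=tile.shape)
    return np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(tile)
                                / (np.abs(H) ** 2 + k)))

def deconv_tiled(img, psf_for_tile, tile=64):
    """Deconvolve each tile with the PSF valid for that image region."""
    out = np.zeros_like(img, dtype=float)
    for y in range(0, img.shape[0], tile):
        for x in range(0, img.shape[1], tile):
            t = img[y:y + tile, x:x + tile]
            out[y:y + tile, x:x + tile] = wiener(t, psf_for_tile(y, x))
    return out

img = np.random.rand(128, 128)
ident = np.zeros((5, 5)); ident[2, 2] = 1.0    # trivial demo PSF
res = deconv_tiled(img, lambda y, x: ident)
print(res.shape)
```

A real implementation would blend overlapping tiles (or interpolate kernels per pixel) to avoid visible seams at tile boundaries.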
#13  02-08-2010, 06:54 PM
bojan
amateur
Join Date: Jul 2006 | Location: Mt Waverley, VIC | Posts: 6,932
I agree with this..
Also, I would like to stress that the reason why they are "not concerned with the Z-axis" is that it is VERY hard to deconvolve out-of-focus images.
I can't imagine what kind of algorithm could distinguish between a defocussed image (especially if the subject is terrestrial, meaning no point-like sources such as stars) and naturally fuzzy objects..
Also, the depth of field, especially for short-focal-length lenses (they mention a 40mm lens), is very good - meaning a couple of degrees of camera rotation will have no effect on focus at all (or the effect will be of sub-pixel size, which can be ignored for all practical purposes). The only effect will be due to lens focal-plane distortions.
#14  02-08-2010, 08:06 PM
rally
Registered User
Join Date: Sep 2007 | Location: Australia | Posts: 896
Bojan,

But they are deconvolving a defocussed image - that is the whole idea. The PSF they can derive from knowing the camera's movement, and therefore quantifying which areas of the image are out of focus or otherwise motion-blurred and in what way, is the key to it all.

Have a look at Focus Magic's software to see what can be achieved with off-the-shelf software.
http://www.focusmagic.com/

If the PSF of a given optical system is known absolutely, it is conceivably possible to reconstruct a focussed image from a completely blurred one.
That would mean the PSF of that lens in that particular focal position, at that aperture, etc.
So obviously more difficult to do for the average Joe Blow!

I fear we have hijacked H's good thread!

Cheers

Rally
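A toy check of that claim (nothing to do with Focus Magic's internals): blur a synthetic image with a perfectly known PSF, then invert it in the Fourier domain. In the noise-free case the recovery is essentially exact; with noise, plain inversion blows up, which is why practical tools regularise:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))                    # synthetic "scene"

psf = np.ones((3, 3)) / 9.0                   # the known blur kernel
H = np.fft.fft2(psf, s=img.shape)             # its transfer function

blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) / H))  # exact inverse

print(np.allclose(restored, img))
```

The moment sensor noise enters, dividing by near-zero frequencies of H amplifies it enormously; Wiener or iterative methods trade a little residual blur for stability.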
#15  02-08-2010, 08:15 PM
ballaratdragons (Ken)
The 'DRAGON MAN'
Join Date: Jan 2005 | Location: In the Dark at Snake Valley, Victoria | Posts: 14,412
Quote:
Originally Posted by bojan View Post
why would anyone want this in the first place?

Astrophotography is not only about pretty pictures; it is about gathering scientifically accurate data (resolution is paramount here), and no one wants data to be "doctored" by too much post-processing...
Wrong attitude.
Some people ONLY want to make pretty pictures, so for them it is perfect.

Others (like yourself, by the sound of it) may not like the idea of this product and other post-processing techniques which reduce the scientific value of the data.

Both parties are in it for different reasons (data vs pretty), so allow each their own processes.
#16  02-08-2010, 08:45 PM
Octane (Humayun)
IIS Member #671
Join Date: Dec 2005 | Location: Canberra | Posts: 11,159
Rally,

No hijacking here. I've enjoyed the responses and reading smart people's views.

More please!

H
#17  02-08-2010, 10:06 PM
rally
Registered User
Join Date: Sep 2007 | Location: Australia | Posts: 896
Hi H,

Smart people - well, that wouldn't be me.

Whilst I don't agree with Bojan's general statement about astrophotography, I think he is correct that astrophotography doesn't need this - it is a tool much better suited to standard, non-rigidly-supported cameras with short exposures.
If your camera were swinging around that much, you'd be unable to capture anything worthwhile in a long astro exposure.

I think the technology is so cheap to implement that we could expect to see this stuff in a DSLR near you sometime soon!
I am assuming the MEMS devices they are using come as surface-mount parts - I'm just not sure about the gyroscopes.

The beauty is that all the sensor data can be recorded in the image's EXIF data and either processed in the camera during its slack time, if the user so desires, or used to deconvolve later on your PC with more processing power.

Keep em coming !

Cheers

Rally
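Rally's record-now-deconvolve-later idea, sketched: standard EXIF has no IMU tag, so this hypothetical helper drops the shake trace into a sidecar file keyed to the frame. Every name here is invented for illustration:

```python
import json
import time

def save_imu_sidecar(image_name, gyro_samples, rate_hz):
    """Write the per-exposure IMU trace next to the raw frame."""
    record = {
        "image": image_name,
        "sample_rate_hz": rate_hz,
        "gyro_rad_s": gyro_samples,           # [[pitch, yaw, roll], ...]
        "written_unix": time.time(),
    }
    path = image_name + ".imu.json"
    with open(path, "w") as f:
        json.dump(record, f)
    return path

path = save_imu_sidecar("IMG_0001.CR2", [[0.01, 0.0, 0.0]] * 4, 500)
print(path)
```

A camera vendor would more likely pack the same data into a maker-note tag inside the file itself, but the principle is identical: the shake history travels with the image until someone has the processing power to use it.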
#18  03-08-2010, 08:59 AM
bojan
amateur
Join Date: Jul 2006 | Location: Mt Waverley, VIC | Posts: 6,932
Yep, this technology will certainly be relatively cheap to implement, provided the accelerometers used are sufficiently low-noise (yes, they are SMD MEMS devices). One commercial example can be found here:
http://www.analog.com/en/sensors/ine...s/product.html
Similar (or the same) parts are used in applications from electronic levels to game consoles and robotics.
The main problem with this particular one (and other similar sensors) is noise vs response time: for lower noise, the response time must be longer due to more averaging.
Gyros are much better, but (being mechanical assemblies) they are bulky and expensive.


Quote:
Originally Posted by rally View Post
If the PSF of a given optical system is known absolutely, it is conceivably possible to reconstruct a focussed image from a completely blurred one.
However, I have one question re deconvolution... why did the NASA people bother to install a corrective optical assembly in the optical path of the Hubble telescope to compensate for the flawed primary? The exact cause of the manufacturing error was known soon after it was realised something was wrong with the mirror. From there, it was possible to determine the actual shape of the mirror exactly (the twin mirror is still on Earth, BTW). So a good reconstruction of the image should have been possible.
While the software you mentioned earlier is amazing (downloaded; I will try it today), there must be some fundamental limitations to post-processing, as I tried to point out in my responses...
It is known that before the installation of the corrector optics, post-processing of Hubble images based on the actual mirror shape was used, but obviously this was not good enough. I don't think it was only a PR exercise...
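The noise-versus-response-time trade-off mentioned above can be put in numbers: averaging N samples of white sensor noise cuts sigma by about 1/sqrt(N), but costs N/rate seconds of lag. The 500 Hz sample rate is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
rate_hz = 500.0                               # assumed sensor sample rate
noise = rng.normal(0.0, 1.0, 100_000)         # unit-sigma sensor noise

for n in (1, 16, 64):
    # Average the stream in non-overlapping blocks of n samples
    sigma = noise[: len(noise) // n * n].reshape(-1, n).mean(axis=1).std()
    print(f"N={n:>2}  sigma={sigma:.3f}  lag~{n / rate_hz * 1e3:.0f} ms")
```

Averaging 64 samples drops the noise by a factor of eight but delays the reading by over a tenth of a second, which is a long time against a hand-shake waveform.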

Last edited by bojan; 03-08-2010 at 09:27 AM.
#19  03-08-2010, 09:42 AM
rally
Registered User
Join Date: Sep 2007 | Location: Australia | Posts: 896
Bojan,

Interesting point re Hubble.

I wonder if they had developed the ability to do that sort of deconvolution processing when Hubble was first launched?

I guess at the end of the day, having a "perfect" image to start with, which can be worked even further, is much better than a deconvolved imperfect image. Deconvolution still adds noise.

A friend put me on to the MCS deconvolution method. It uses a slightly different approach in that, unlike other methods, it assumes (correctly) that the optical system isn't perfect in the first instance, and does a different type of deconvolution, giving superior results to the existing mainstream processes.
The only trouble is it doesn't seem to be available in anything but Fortran, for a very old system!
#20  03-08-2010, 09:49 AM
bojan
amateur
Join Date: Jul 2006 | Location: Mt Waverley, VIC | Posts: 6,932
Hmm..
Translation from Fortran into C or whatever should not be a problem :-)
Especially if the algorithm is figured out properly (and this shouldn't be too hard, Fortran being a sufficiently high-level language).

BTW, this is interesting (on the MCS deconvolution method):
http://wela.astro.ulg.ac.be/themes/d.../deconv_e.html
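For context, the mainstream iterative method such comparisons are usually made against is Richardson-Lucy deconvolution. A minimal numpy version on a synthetic star - illustrative only, and not the MCS algorithm itself:

```python
import numpy as np

def conv(a, H):
    """Circular convolution via FFT; H is the kernel's 2-D transform."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * H))

def richardson_lucy(blurred, psf, iters=50, eps=1e-12):
    """Classic multiplicative RL update: est *= corr(blurred/conv(est), psf)."""
    H = np.fft.fft2(psf, s=blurred.shape)
    Hc = np.conj(H)                    # correlation = convolution with flipped PSF
    est = np.full_like(blurred, blurred.mean())
    for _ in range(iters):
        est = est * conv(blurred / (conv(est, H) + eps), Hc)
    return est

psf = np.ones((3, 3)) / 9.0
truth = np.zeros((32, 32)); truth[16, 16] = 1.0    # one synthetic "star"
blurred = conv(truth, np.fft.fft2(psf, s=truth.shape))
restored = richardson_lucy(blurred, psf)
print(np.unravel_index(restored.argmax(), restored.shape))
```

RL preserves non-negativity and total flux, which is why it remains popular for astronomical images; methods like MCS differ mainly in what they assume about the instrument and how far they try to sharpen.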

Last edited by bojan; 03-08-2010 at 10:16 AM.