Quote:
Originally Posted by rmuhlack
A good result, Martin; however, your statement that "This image is presented as documentary astrophotography and has not used any AI-enhanced or AI-assisted technology during post processing" has got my attention, especially as you've also used SV Decon (with default settings).
After reading the description of SV Decon on the StarTools website (https://www.startools.org/modules/sv-decon), I'm struggling to see how it differs in a meaningful way from BlurX (https://www.rc-astro.com/software/bxt/). Both restore an image that has been compromised by optical and other aberrations, and both account for a changing PSF across the image (rather than assuming a fixed PSF applies everywhere). Both then employ deconvolution (as opposed to simple "sharpening"). And yet (and correct me if I'm wrong) you would view an image processed with SV Decon as documentary astrophotography, while the same image processed with BlurX is not...?
Hi Richard,
Thanks for your comments.
My statement was merely to advise that the image is indeed of a documentary nature and that it certainly did not use any AI.
Hopefully the comments below from Ivo Jager (the creator of StarTools) will shed some light on the differences between StarTools SV Deconvolution and the AI deconvolution in BlurX:
“The issue is with this assertion:
Both account for a changing PSF across the image (rather than assuming a fixed PSF applies everywhere). Both then employ deconvolution (as opposed to simple "sharpening"). And yet (and correct me if I'm wrong) you would view an image processed with SV Decon as documentary astrophotography, while the same image processed with BlurX is not...?”
This is simply not the case. One says it employs deconvolution, and it can easily be proven that it does; the other says it employs deconvolution, but relies fundamentally on a black box (in the form of millions of weights and biases which - mind you - keep changing with every version), and it cannot be proven that it performs deconvolution (in fact, the way stars are not properly deconvolved/coalesced shows that it doesn't; true deconvolution touches all pixels in an image).
When applied incorrectly (pushed too hard beyond what the data can bear), one starts generating predictable, well-understood ringing artefacts that cannot be mistaken for detail. The other starts hallucinating plausible (yet non-existent) detail. The latter should tell you all you need to know about what it is trained to do: make detail at all costs. True deconvolution does not "make detail" - it is just a mathematical operation. It is not concerned with what is in your image, nor should it be. It is physics expressed through mathematics. The two approaches could not be further apart.
StarTools is transparent. It fundamentally employs the well-understood Richardson-Lucy (iterative) deconvolution algorithm - the mathematics have been known and well understood since the 1970s. For example, you can provide a basic, image-wide PSF, such as a Gaussian profile, and obtain an expected, textbook result based on that PSF. You can also re-convolve the result with that PSF and get back something close to the original blurred image. You (the user) choose the PSF, and you do so based on evidence that you documented (namely stellar profile samples, or modelling of the known seeing / atmospheric turbulence severity).
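For what it's worth, here is a minimal sketch of the textbook Richardson-Lucy iteration Ivo refers to (illustrative Python only, not StarTools' actual implementation; the Gaussian PSF, function names and iteration counts are assumptions made for the demo):

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=25, sigma=2.0):
    # Image-wide Gaussian PSF, normalised to unit energy - a stand-in
    # for the evidence-based PSF the user would supply.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def richardson_lucy(observed, psf, n_iter=30):
    # Classic Richardson-Lucy multiplicative update (Richardson 1972, Lucy 1974).
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]  # adjoint of convolution with the PSF
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(reblurred, 1e-12)  # guard against divide-by-zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

rng = np.random.default_rng(0)
truth = rng.random((128, 128))
psf = gaussian_psf()
blurred = fftconvolve(truth, psf, mode="same")

# The round-trip check described above: re-convolving the estimate with
# the same PSF should land close to the original blurred data.
estimate = richardson_lucy(blurred, psf, n_iter=50)
print(np.abs(fftconvolve(estimate, psf, mode="same") - blurred).max())

# And the failure mode from the previous paragraph: over-iterating on noisy
# data amplifies noise into the well-understood ringing, not plausible "detail".
noisy = np.clip(blurred + rng.normal(0.0, 0.05, blurred.shape), 0.0, None)
overcooked = richardson_lucy(noisy, psf, n_iter=500)
print(overcooked.max())  # extremes grow far beyond the data range as noise is amplified
```

Every step here is inspectable: the PSF is explicit, the update rule is published mathematics, and nothing in the loop knows or cares what the image depicts.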
There is no concept of a PSF in a neural net. It's input -> output. It does not articulate what PSF it used (or why). That's because it cannot be proven that it is even doing that (spoiler: it is not).
It's not sufficient to hide behind the fact that deconvolution is an ill-posed problem, as if "anything goes". Indeed, there is no single perfect solution to the deconvolved version of a convolved image (due to noise and destabilisation). But that does not mean it is acceptable for detail to start springing up out of nowhere. With true deconvolution, the aforementioned ringing artefacts and destabilisation noise start to creep in, and these will never be "accidentally" interpreted by your audience as real detail. With true deconvolution there is no free lunch; you cannot go beyond the signal you recorded. Deconvolution isn't magic. All it does is reallocate energy in your image that was scattered (spread) across the entire image (and beyond) back into a point. It has nothing to do with local detail interpretation or synthesis.
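To make the "reallocating energy" point concrete, a tiny sketch (again just an illustration, using the same assumed unit-energy Gaussian PSF as above): convolving with a normalised PSF spreads flux across the frame (and beyond its edges) but never creates any, and deconvolution merely gathers it back.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(1)
image = rng.random((128, 128))

# Unit-energy Gaussian PSF, as in the earlier sketch.
ax = np.arange(25) - 12
xx, yy = np.meshgrid(ax, ax)
psf = np.exp(-(xx**2 + yy**2) / (2.0 * 2.0**2))
psf /= psf.sum()

# mode="full" keeps the flux that the blur pushes past the frame edges.
blurred = fftconvolve(image, psf, mode="full")
print(image.sum(), blurred.sum())  # totals agree: blurring only moves energy around
```

The total flux is the same before and after; no step in that pipeline can add signal that was never recorded.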
People who claim to practice documentary photography should try to understand what happens to their data on a fundamental level, as much as possible. If you are not "allowed to" (because it is handled by a black-box neural net on a "trust me, it's magic" basis), alarm bells should start ringing - loudly. A "trust me bro" attitude is the antithesis of documentary... anything. To be able to practice a documentary approach, you yourself need to be convinced that what you are representing is the truth. If you cannot vouch for this, then you should rectify it to the best of your abilities (by learning more, asking questions, reading documentation, Wikipedia, etc.). Having an AI do plausible detail synthesis for you - often poorly - without knowing why or how it got there, or whether it is even real, is not only lazy; it is insulting to an audience that expects you to be able to vouch for the documentary value of your image (if you claim it is, in fact, a documentary image, of course).
It's 2025 now, and it's well documented (sometimes hilariously) how neural nets hallucinate. Everyone now complains about how Google's AI results are often useless, with information that is completely made up. It blows my mind that there are *still* people who think neural nets are some miracle invention that can somehow bring out detail better than true, physics-based algorithms.
Ivo Jager
StarTools creator and astronomy enthusiast”
Hope this explains the differences from a StarTools standpoint.
Cheers
Martin