
AI sharpening in Astro, is it fair? - Your opinion?


Stefannebula
21-10-2020, 11:59 AM
After seeing some impossibly sharp images on APOD and Astrobin IOTD recently, I asked around and found out that the trick behind them is Topaz DeNoise AI and Sharpen AI (see attached images for reference).

I am interested in your thoughts about the use of these algorithms in astrophotography.

At what point do our image processing algorithms change from manipulating information that is already within the data, to generating information that was never there?

Should people explicitly disclose when these algorithms are used, if submitting them to APOD or competitions? Should there be separate categories for AI sharpened data?

I understand the general debate about whether astrophotography is more science or art, but I think this is a specific case where the distinction matters. How is it fair, in a competition setting, to compare data taken with a 24" Planewave to an 80mm refractor that is 'sharper' purely because of AI?

I know this has probably been discussed ad nauseam, but I couldn't seem to find any specific discussion, and I just want to hear what other people think :)

bojan
21-10-2020, 12:16 PM
To me, AP is technical photography, nothing to do with art.
So, if enhancement brings out details (which are REALLY there), no problems here...
As far as competition is concerned... who is competing with whom here? My answer is "Big optical and software companies", and we (well, some of us) are paying for it :D

jahnpahwa
21-10-2020, 12:42 PM
Definitely fair :) especially when the stakes are very high

glend
21-10-2020, 12:57 PM
Imho, and this will not surprise some, there is far too much interference processing going on lately. When astrophotography becomes just a series of programs and scripts, then it is not photography any longer. Yes it is art, and art usually involves personal interpretation; when you have a bunch of people all running the same apps and scripts with the goal of processing to their view of nirvana, then the hobby has lost its attraction.

rustigsmed
21-10-2020, 01:54 PM
Like any denoise or sharpening tool, if used in moderation it is fine and it is enhancing real details. With Topaz AI denoise it can definitely be pushed too far pretty easily. Once you've used the program a few times, however, it is pretty easy to spot images which have pushed the envelope too far, creating fake detail. If you're talking about competitions, you'd hope the judges were well enough informed about Topaz AI denoise.

cheers

multiweb
21-10-2020, 02:23 PM
I don't understand what fair means in the context of processing. Sharpening, noise reduction, color balance, layering and masking are all processes used to enhance a photo, whether you want to show true details or factored compositions to give a perception of something else. It's up to the user to decide. AI is a marketing thing thrown everywhere nowadays. AI this, AI that. Maybe it stands for "as if..." I have the Topaz Labs AI suite. It sort of does its own thing, at times good, at others spectacularly wrong.

Robert_T
21-10-2020, 08:04 PM
Absolutely fair and fine in my mind. There is no "real"; it's all simply perception and sensory interpretation. Even with quite technical approaches you can see the endless variety of ways the same objects are represented: colours are just what we attribute to them, and we have palettes that are invented or refined. I can understand why others feel differently. For me it's about what looks best to my eyes or to my perceived audience.

peter_4059
21-10-2020, 08:09 PM
In or out?
- selective masking
- noise reduction
- deconvolution
- changing colour balance
- increasing saturation

Still trying to understand what the problem with a script is since it is just executing a series of steps you would have done manually.

The hobby is already challenging. Anything that makes life easier and makes you happy is fair game.

Peter Ward
21-10-2020, 11:30 PM
There is also a good deal of plain and simple plagiarism that goes on with images posted on the web.

Amazing how the object of interest has Hubble-like details while the rest of the field is filled with low-resolution blobs, and the poster is too dim to realize their dodgy image is shouting out: "this is a fake".

One would hope if entered into a contest the judges would spot such clumsy efforts and cull them accordingly.

David Malin is a master at this, but even he admitted the "fakes" are getting better and more difficult to spot. He even suggested having a new category: best fake.

Rest assured if there is a dollar to be had, there will also be those who will cheat to get it.

I have yet to see any 80mm telescope, even with the best AI sharpening, out-resolve a 0.5 metre class instrument.

While there are no prohibitions against over-processing, in a contest environment my hope is that the judges will spot the difference without much fuss and consider other works more deserving.

jahnpahwa
22-10-2020, 08:34 AM
This is a good point. These scripts are something that a human could, and often does, perform manually.
How does this compare to adaptive optics? No human could ever perform that job quickly enough to aid capture, right? At least these tools are accessible to most, if not all, which is probably good for competitions, no?

atalas
22-10-2020, 10:24 AM
Agreed......this question makes no sense.

Andy01
22-10-2020, 11:16 AM
Spot on Peter. I'm one of the IOTD judges on Astrobin, and there was a lengthy discussion about this image behind the scenes. Initially the image was called out, but some have rallied to the photographer's defence, suggesting this is the result of extensive overuse of Topaz in processing. Personally, as an experienced Topaz user for the past year, I'm in your camp - regardless of it being awarded APOD & other accolades, I'm firmly convinced that there is a mix of professional and amateur data blended here and that an attempt has been made to mask this in processing.

My overall thought here though is... Why bother? :shrug:

Atmos
22-10-2020, 12:35 PM
I've had a bit of a play around with the Topaz Labs DeNoise and Sharpening. I agree with Marc, AI does get thrown around a lot as it is the current buzzword. If you write a deconvolution algorithm that measures the average PSF and then runs the deconvolution a number of times looking for specific artefacts, tuning the variables until the best-fit (least artefacts) outcome happens, that is now called AI.
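For what it's worth, here is a rough Python sketch of that sort of tuned-deconvolution loop, using scikit-image's Richardson-Lucy routine. The Gaussian PSF, the crude artefact score and the file name are purely illustrative assumptions, not a claim about how Topaz actually works.

import numpy as np
from scipy.ndimage import laplace
from scipy.signal import fftconvolve
from skimage import io, img_as_float
from skimage.restoration import richardson_lucy

def gaussian_psf(sigma, size=15):
    # Stand-in for the "measured average PSF": a simple normalised Gaussian kernel.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def score(restored, original, psf):
    # Crude "least artefacts" metric: data fidelity plus a ringing penalty.
    fidelity = np.mean((fftconvolve(restored, psf, mode="same") - original) ** 2)
    ringing = np.mean(np.abs(laplace(restored)))
    return fidelity + 0.01 * ringing

image = img_as_float(io.imread("luminance.tif", as_gray=True))  # hypothetical file
psf = gaussian_psf(sigma=2.0)

best = None
for n_iter in (5, 10, 20, 40, 80):  # candidate deconvolution strengths
    restored = richardson_lucy(image, psf, n_iter)
    s = score(restored, image, psf)
    if best is None or s < best[0]:
        best = (s, n_iter, restored)

print("least-artefact result used", best[1], "iterations")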

I personally have never been a fan of the appearance of the noise reduction from Topaz DeNoise. Even when run at low levels it has a very artificial and unique look that I personally don't like in astro.

I gave the Sharpening a go last night and found that on some images it does improve sharpness a bit, but on others it did a lot more harm than good. One thing I did notice is that it makes small stars look like they were taken under 0.2" seeing! Larger ones not so much, and the bigger they are the less "shrinkage" they get.

Peter Ward
22-10-2020, 02:48 PM
Humm.. I found this little ditty about online fakes..

Fake it till you make it
Show off to the world
What you are not

Why is the need so strong
Something that is wrong
Is it worth the S***
Only to fit

The show is on
Innocence is gone
Falseness is accepted
Originality is rejected

Benjamin
23-10-2020, 10:10 AM
FWIW...

I compared a couple of images of the “Soap Bubble”. Both are close-up and cropped, and the resolutions vary, so nothing definitive can be said, but they seem to make a few of the points mentioned already in this thread.

One is an APOD with a 2.5m Newt, the other an APOD with a 127mm triplet refractor. The 127mm APOD is certainly gorgeous from a certain distance...

More a comment on sharpening than AI processing. Not sure AI sharpening has been used, but it does seem likely.

Tulloch
23-10-2020, 11:13 AM
Here's Damian Peach's view on the matter when it comes to planetary imaging:
https://www.patreon.com/posts/40809286

There's no doubt that the process can produce images that look incredibly sharp; however, is the actual object that sharp to begin with? Have a look at these two images of Jupiter: which one do you prefer? Which one "accurately" records the planet best?
https://www.cloudynights.com/topic/718826-topaz-ai-sharpening/

Andrew

Benjamin
23-10-2020, 02:28 PM
A good take on it that could apply to any object, given you see yourself as a photographic observer who is interested in seeing and/or sharing what might actually be there.

tim.anderson
23-10-2020, 03:59 PM
I doubt that I know enough about astrophotography techniques to be able to recognise plagiarism of the type to which you refer.

Could you possibly provide a link to an example?

multiweb
23-10-2020, 05:00 PM
Very cool video. :thumbsup: Also applies to deep sky. Less is more.

Peter Ward
23-10-2020, 05:37 PM
What? and do a roll-over comparison as well? I'll pass.

That might not be prudent :)

tim.anderson
23-10-2020, 05:49 PM
Heh. Yes, probably.

gregbradley
23-10-2020, 06:00 PM
Well they must have improved their products because I have found them to be very hit or miss with little control available to the user.

Greg.

tim.anderson
23-10-2020, 06:33 PM
Same question. Can you provide an example?

Tim

Benjamin
25-10-2020, 10:58 AM
Another close-up 127mm triplet image (an incredibly detailed 11 hours) compared to a CDK14 image (36 hours or so). The 127mm looks pretty cool on Facebook, but even then, how could you feel okay doing this to the detail in your data (assuming this is the result of excessive AI sharpening)? I guess to give an ‘impression’ of sharpness or detail at a certain scale? Is that a thing??! Maybe I’m just a bit too far into the realist camp at present...

gregbradley
25-10-2020, 04:34 PM
I don't have any saved images, just what I noticed during the processing of some past images that I then rejected.

Greg.

gregbradley
25-10-2020, 04:36 PM
I don't see anything wrong with sharpening if it works. When it failed for me it usually accentuated noise or wrecked the stars.
The CDK data must have been taken under terrible seeing, as there is no way a 127mm low-end refractor is going to compete with a properly collimated CDK14 in decent seeing.

Unless it was lots of short-sub lucky-imaging shots using a CMOS camera versus long-exposure CCD.

Longer focal length has the Achilles heel of requiring better seeing.

Greg.

Benjamin
25-10-2020, 07:27 PM
Thanks for your thoughts on this Greg, as it helps clarify some ideas :-) The object shown in my example is CTB-1 (very faint), and no image, whether 36 hours with a CDK14 or 52 hours with a 30cm Newt, has anything like the 127mm detail, which I believe is largely generated by some kind of AI sharpening (could be wrong). The stars in the image, I believe (again, could be wrong), aren’t affected by the sharpening, as the sharpening is probably done on a starless image with the stars added back in later. However, even a CDK14 or 30cm Newt can be poor no matter how many hours, I guess.

Also, to be clear, I’m not having a go at anyone: the Facebook version on my iPhone (which is how some would consume their images) is pretty cool, and the examples I used are from full-res screenshots on Astrobin, so the artifacts aren’t really visible once various compression algorithms have been applied. Also there are many other images turning up that are very similarly sharpened, although it’s easier to detect the artifacts when a Hubble comparison is available.
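As an aside, that starless-sharpening workflow is easy to sketch in Python: take a starless version of the image (however it was made, e.g. StarNet or manual removal), sharpen only that layer, then add the untouched stars back. The file names and the unsharp-mask stand-in for the AI sharpener below are assumptions for illustration only.

import numpy as np
from skimage import io, img_as_float
from skimage.filters import unsharp_mask

# Hypothetical inputs: the original stretched image and a starless version of it
# produced separately (e.g. by StarNet or by manual star removal).
original = img_as_float(io.imread("original.tif", as_gray=True))
starless = img_as_float(io.imread("starless.tif", as_gray=True))

stars = np.clip(original - starless, 0.0, 1.0)               # stars-only layer
sharpened = unsharp_mask(starless, radius=3.0, amount=2.0)    # stand-in for the AI sharpener

result = np.clip(sharpened + stars, 0.0, 1.0)                 # add the untouched stars back
io.imsave("sharpened_with_stars.tif", (result * 65535).astype(np.uint16))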

gregbradley
25-10-2020, 09:55 PM
That dotty grain in the 127 image does remind me of oversharpening artifacts or excessive decon artifacts.

Something is odd about the image. Perhaps it is as Peter suggested and it's a bit of Hubble imagery pasted in. Who knows.

Greg.

Benjamin
25-10-2020, 11:22 PM
It was more that sharp dark ‘pillar’ and the other ‘features’ in the 127mm image (the first one), surrounded by the smoothed-out background, that suggested some kind of selective AI texturing to me. The noise profile of the CDK image (the second one) seems more natural and doesn’t feel like the sharpening is interpolating additional detail. In all the images I’ve seen, the ‘pillar’ or dark bump never has a sharply defined edge, let alone that jagged top with the bright linear features next to it. Again, at a lower resolution the 127mm pillar is a sharp ‘something’, but close up it looks like a jagged tower rising through the mist against a blood-red sky.

And perhaps that’s the dilemma here: what I think is oversharpened and interpolated detail could be more dramatically suggestive, even if it isn’t the actual product of a star going supernova (which at a micro level on a very faint object could be quite regular and not very suggestive of anything). Not sure there is a Hubble version of this object to check against, which is why I thought a deep CDK14 version might suffice, although as you point out Greg, these bigger scopes do have their issues. Anyway, enough from me. Thinking about it too much when I should be trying to solve my own issues!:) Clear skies!

The_bluester
27-10-2020, 09:12 AM
The CDK stars look much more natural to me than the stars in that 127mm image; those look very much like what I saw when I gave the AI sharpening routine a try. At a very low level of sharpening they lost the gradual fading that comes out of normal seeing and became hard, flat blobs, which I found a very unattractive look. It also produced artifacts very quickly, and not just sharpening worms in the background areas: a few small patterns of stars suddenly looked like someone had dropped little text X's on the image, not nice at all. I also do not find the sharpening result on the nebulosity very natural looking. It would be interesting to do an integration with significantly more or less data and run the AI sharpen on both to see if it produces the same "detail" both times.

Similarly, the AI denoise just did not produce a natural-looking image with my data sets, so I uninstalled both in short order.

Sunfish
27-10-2020, 10:19 AM
Becomes a little less like photography and a little more like painting. That image is not so much sharpened as enhanced with detail which may be assumed from other examples but may not really be present.

It is always the unexpected details which are more interesting than the expected assumptions.

The_bluester
27-10-2020, 11:41 AM
I would actually take the statement a step further than that. It is not really even assumed detail or expected detail; I found that when the sharpening was pushed beyond a very small amount it produced detail that really does not exist. It is more "predicted" detail than anything else.

I tried the AI sharpen on a scanned picture of a car I had many years ago and I have to say it failed horribly. It is an older car with a rubber seal fitting the windscreen rather than the bonded glass common since the early 80's. With any amount of "sharpening" at all, the paint on the A-pillar was extended onto the black rubber of the windscreen seal, and other body styling lines suddenly appeared (I am pretty sure I know what a 240Z is supposed to look like, and it ain't that!).

Benjamin
27-10-2020, 01:53 PM
Interesting to read your results Paul. I don’t have or use Photoshop, so I have only suspected (with good reason!) that these ‘super details’ in some recent astro photos are artifacts added to images, drawn from whatever has trained the AI processing routine. Isn’t the end point then that you could take a picture of M8 (say) and the AI, trained on a Hubble M8, could just whack in details that might be correct but are barely related to the data that has been captured? I could imagine it’d be easy to write some program that could scour the web for every pic of M8 and interpolate as required. No point to this of course, but such a scenario would blur the line between plagiarizing and processing: you’d be plagiarizing every image, not just one or two, perhaps?

Stonius
27-10-2020, 02:26 PM
I figure that as soon as you photograph anything, it's not how it 'really looks', so any technique to make it look better is 'fair'. Unless it's for scientific purposes, of course.


Cheers


Markus

multiweb
27-10-2020, 02:43 PM
That sharpened close-up is typical of Topaz Labs DeNoise AI and Sharpen AI. It creates and adds these stringy details. Dead giveaway.

I've noticed that on many planetary shots, including mine when I used the Topaz AI suite. It's even harder to tell on a Jupiter shot because the clouds are so busy and dynamic.
But what gave it away is when I started doing animations.
The AI thingy would process each frame differently, because the pattern slightly changes as the planet rotates, and new details pop in and out of existence. :lol:

As a test, rotate the same deep sky shot, say 30 degrees clockwise or anticlockwise, and apply the same filter to the same crop. I guarantee the result will be different.
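For anyone who wants to try that rotation test, here is a minimal Python sketch. The crop file name is hypothetical, and since the Topaz tools have no scripting interface a plain unsharp mask stands in for "the same filter"; a deterministic filter like this should come back near-identical apart from interpolation error, which is exactly what the AI tools reportedly fail to do.

import numpy as np
from scipy.ndimage import rotate
from skimage import io, img_as_float
from skimage.filters import unsharp_mask

def sharpen(img):
    # Stand-in filter; with the AI tools you would run the same preset by hand.
    return unsharp_mask(img, radius=2.0, amount=1.5)

crop = img_as_float(io.imread("crop.tif", as_gray=True))   # hypothetical crop

a = sharpen(crop)                                          # sharpen the original
rotated = rotate(crop, 30, reshape=False, order=3)         # rotate 30 degrees
b = rotate(sharpen(rotated), -30, reshape=False, order=3)  # sharpen, rotate back

diff = np.abs(a - b)
print("mean |difference| =", float(diff.mean()), "max =", float(diff.max()))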

So AI could be good (barely) for enhancing terrestrial or people shots, but it doesn't in any way reflect reality. Even having the sliders right down at zero will, in some cases, leave residual processing in your photos depending on the image scale.

There are no such issues with the previous Topaz Labs 2 or 1, where the sliders are purely manual and you can save a configuration as a preset. Because you have control over every aspect of noise reduction and sharpening, you can stop before it gets silly. Also, applying the same settings to different pictures and/or orientations of the same one will yield similar and consistent results.

Benjamin
27-10-2020, 03:18 PM
In regard to ‘fairness’ (if you think of getting an APOD as a competitive process), I was a little surprised that what seems to be an AI-sharpened image received a NASA gong recently.

https://apod.nasa.gov/apod/image/2010/NGC6888WissamAyoub.jpg

The overall image is still stunning I think (the blue OIII rim stands out so well against those rich reds), but to let what I think are regions of generated (possibly AI-generated?) detail fly seemed to me a bit of an oversight. I would have thought some somewhat scientifically grounded processing would be necessary just to get past go. Anyway, I really know nothing about APODs and I’m probably being a geekish prude :lol:

The_bluester
27-10-2020, 03:43 PM
I don't think, at least in the current versions, that the AI is actually trained in that way (as in trained on the detail of a specific image or object); the training is more in terms of what sorts of patterns are "real" and what are not.

Benjamin
27-10-2020, 03:50 PM
Thanks for clarifying :-)

multiweb
27-10-2020, 04:16 PM
It's an AIPOD. They're launching a new trend. :P

Benjamin
27-10-2020, 04:17 PM
:lol::lol::lol:

ChrisV
28-10-2020, 09:58 AM
I'm now using a trained neural network to remove stars (starnet, now a PI process).

What's the difference?

The_bluester
28-10-2020, 10:16 AM
The difference that I see is that one is used to remove something that IS in an image to allow for processing without it (either for later re-addition or not), and is automating something you can do manually, whereas the other is very capable of extrapolating "data" where none previously existed.

Benjamin
28-10-2020, 10:45 AM
I find starnet++ slightly destructive in regard to detail at times and there is little control. I also use a Newt so diffraction spikes aren’t so great for this process. I much prefer to manually remove stars so I can make sure no detail is lost or obscured (especially details that can be confused with a star or detail very near a star). It’s time consuming but also quite an education.